Under Construction AI | https://underconstruction.ai | AI News, Entertainment, Domain Names | Updated Tue, 23 Jul 2024

DHS “Swamp Dog” Disables Internet Of Things
https://underconstruction.ai/dhs-swamp-dog-disables-internet-of-things/ | Tue, 23 Jul 2024
DHS’s NEO robot targets compromised IoT devices to prevent large-scale cyberattacks, enhancing cybersecurity for households and critical infrastructure.

The post DHS “Swamp Dog” Disables Internet Of Things first appeared on Under Construction AI.


The article discusses a new initiative by the U.S. Department of Homeland Security (DHS) to combat cybersecurity threats posed by Internet of Things (IoT) devices within homes. DHS has developed a robot designed to execute Distributed Denial of Service (DDoS) attacks against malicious IoT devices that have been compromised and are being used as part of a botnet to perform large-scale cyberattacks. These compromised devices, referred to as “booby traps,” can be everyday household items like smart thermostats, cameras, and fridges that hackers exploit to launch attacks on a larger scale.

The robot, named NEO, is engineered to identify and neutralize these threats by isolating the compromised devices and disrupting their ability to communicate with the botnet. This innovative approach is part of DHS’s broader strategy to enhance national cybersecurity by proactively targeting the growing threat from IoT devices, which are notoriously difficult to secure due to their varied nature and often weak built-in security measures.
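The article does not describe NEO’s detection logic, but a common first-pass signal for botnet participation is a device whose outbound traffic rate spikes far above its own baseline. A minimal sketch of that heuristic (device names, rates, and the 10x threshold are all illustrative assumptions, not details from DHS):

```python
# Hypothetical sketch: flag IoT devices whose outbound connection rate
# spikes far above their historical baseline, one common botnet signal.

def flag_compromised(baseline: dict, observed: dict, factor: float = 10.0) -> list:
    """Return device names whose observed outbound connections per minute
    exceed `factor` times their recorded baseline rate."""
    flagged = []
    for device, rate in observed.items():
        normal = baseline.get(device, 0.0)
        if normal > 0 and rate > factor * normal:
            flagged.append(device)
    return flagged

baseline = {"thermostat": 0.5, "camera": 2.0, "fridge": 0.2}
observed = {"thermostat": 0.6, "camera": 450.0, "fridge": 0.3}

print(flag_compromised(baseline, observed))  # → ['camera']
```

A real system would combine many such signals (destination reputation, packet patterns) before isolating a device, but the rate heuristic conveys the idea.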

DHS’s deployment of NEO represents a significant advancement in the ongoing battle against cyber threats. The use of a DDoS robot to fight DDoS attacks showcases a novel method of turning the tables on cybercriminals, leveraging the same techniques used by attackers to defend against them. This move is expected to provide a valuable tool in the protection of both individual households and the broader internet infrastructure.

The development of NEO is in response to the increasing number of cyberattacks involving IoT devices, which have become a popular target for hackers due to their widespread use and often lax security. By disrupting these booby traps before they can be used in large-scale attacks, DHS aims to reduce the risk of significant damage to critical infrastructure and personal data.

In summary, the introduction of DHS’s NEO robot marks a proactive step in cybersecurity, specifically targeting the vulnerabilities within IoT devices. This initiative underscores the importance of developing new technologies to protect against evolving cyber threats. 

Summary by ChatGPT.com
See Original Article Here

FBI Hacks Into Trump Shooter’s Phone
https://underconstruction.ai/fbi-hacks-into-trump-shooters-phone/ | Tue, 16 Jul 2024
FBI quickly cracks shooter’s phone, highlighting advanced phone-hacking tools used by law enforcement and rekindling debates on privacy vs. security.


The FBI rapidly gained access to the phone of Thomas Matthew Crooks, who attempted to assassinate former President Donald Trump at a rally in Pennsylvania. This quick access highlights the increasing effectiveness of phone-hacking tools available to law enforcement agencies.

Many police departments use mobile device forensic tools (MDTFs) such as those made by Cellebrite, an Israeli company that sells phone data-extraction products. A 2020 investigation found that over 2,000 law enforcement agencies across the US have access to such tools, which range from widely used Cellebrite devices to advanced and expensive options like GrayKey.

The article contrasts this quick access with previous high-profile cases where the FBI struggled to access encrypted phones. Notable examples include:

1. The 2015 San Bernardino shooting case, where Apple refused to help the FBI break into the shooter’s iPhone, citing concerns about creating a backdoor in their encryption. The FBI eventually gained access through a third party, reportedly spending around $1 million.

2. The 2019 Pensacola Naval Air Station shooting, where Apple again refused to unlock the shooter’s phones, leading to criticism from the FBI and then-Attorney General William Barr.

These cases highlight the ongoing tension between law enforcement’s need to access evidence and tech companies’ commitment to user privacy and security. While Apple has consistently refused to create backdoors in their encryption, the increasing sophistication of third-party MDTFs appears to be providing law enforcement with alternative means of access.

The article also touches on the potential risks associated with these tools, noting that they could be misused by undemocratic governments to violate human rights. Security experts quoted in the article explain that these tools often work by exploiting software vulnerabilities or using brute force methods to guess passwords.
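The brute-force route those experts describe is easy to quantify: the worst case is simply the size of the passcode space divided by the guess rate. A back-of-the-envelope sketch (the guess rates are illustrative assumptions, not figures from the article):

```python
# Back-of-the-envelope: how long exhaustive passcode guessing takes.
# Rates are illustrative; real devices throttle attempts in hardware,
# which is exactly the barrier that software exploits try to bypass.

def worst_case_seconds(charset: int, length: int, guesses_per_sec: float) -> float:
    """Worst-case time to exhaust every passcode of the given length."""
    return (charset ** length) / guesses_per_sec

# 4-digit PIN: hardware-throttled (~12 guesses/minute) vs. unthrottled guessing
throttled = worst_case_seconds(10, 4, 12 / 60)
unthrottled = worst_case_seconds(10, 4, 1_000)

print(f"throttled: {throttled / 3600:.1f} h")  # prints "throttled: 13.9 h"
print(f"unthrottled: {unthrottled:.0f} s")     # prints "unthrottled: 10 s"
```

The gap between those two numbers is why bypassing rate limits, rather than raw guessing speed, is the valuable part of these tools.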

Overall, the piece underscores the evolving landscape of digital privacy, encryption, and law enforcement capabilities in accessing locked devices.

Original article appears here.

Summary by Claude.Ai

NC State Develops Exoskeleton
https://underconstruction.ai/nc-state-develops-exoskelaton/ | Wed, 03 Jul 2024


The Biomechatronics and Intelligent Robotics Lab at North Carolina State University has developed an AI-powered exoskeleton to assist both disabled and non-disabled individuals with movement. Key points include:

  1. The exoskeleton consists of a fanny pack, thigh sensors, and buckles, allowing users to control it within 10-20 seconds of putting it on.
  2. It uses AI to interpret joint angles and adapt to surroundings, helping users move in their intended direction.
  3. The device learns through virtual simulation in about 8 hours, eliminating the need for lengthy human-robot coordination training.
  4. It can assist with walking, running, and stair climbing, reducing energy expenditure by 13-24% compared to unassisted movement.
  5. Researchers aim to adapt the technology for elderly people and children with mobility impairments like cerebral palsy.
  6. An upper body exoskeleton is also being developed for stroke recovery and ALS patients.
  7. The current cost of materials is around $10,000, which is lower than commercially available exoskeletons, but researchers aim to make it more affordable and accessible.
  8. The project is funded by the National Science Foundation and the National Institutes of Health.

The researchers are working on improving comfort, human-centered design, and affordability to make the technology more widely available.
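For context on point 4 above, the 13-24% figure is a metabolic-cost reduction, conventionally computed by comparing energy expenditure with and without the device. A quick sketch with made-up wattages (not measurements from the NC State study):

```python
# Percent reduction in metabolic cost, as reported in exoskeleton studies:
# compare energy expenditure (watts) with and without assistance.

def percent_reduction(unassisted_w: float, assisted_w: float) -> float:
    """Savings as a percentage of the unassisted metabolic rate."""
    return 100.0 * (unassisted_w - assisted_w) / unassisted_w

# Illustrative numbers only, not data from the study.
print(percent_reduction(300.0, 250.0))  # walking: ~16.7% savings
print(percent_reduction(800.0, 610.0))  # running: ~23.8% savings
```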

Summary by Claude | Republished with permission from Discovr.Ai

Keeping Pace with Text-To-Video Ai
https://underconstruction.ai/keeping-pace-with-text-to-video-ai/ | Tue, 02 Jul 2024


Since the rollout of ChatGPT in 2022, AI has revolutionized content creation, starting with text and expanding into image, audio, and now video. The latest innovation, text-to-video AI, is transforming how narratives are visually conveyed, making visual content more accessible and customizable. This technology, still in its infancy, is rapidly evolving with new tools emerging weekly. Here, we explore six notable advancements in this field and their implications.

Six Technological Advancements in Text-to-Video AI

  1. OpenAI’s Sora: Launched in early 2024, Sora is a powerful text-to-video generator that converts written narratives into high-quality, minute-long videos. It integrates AI, machine learning, and natural language processing to create detailed scenes with lifelike characters. Currently available to select testers, Sora aims to extend video length, improve prompt understanding, and reduce visual inconsistencies. Toys ‘R’ Us recently used Sora for advertising, and its wider release is anticipated to revolutionize video creation across industries.
  2. LTX Studio by Lightricks: Known for products like Videoleap and Facetune, Lightricks’ LTX Studio converts text prompts into rich storyboards and videos. It offers extensive editing capabilities, allowing creators to fine-tune characters, settings, and narratives. The recent “Visions” update enhances pre-production features, enabling rapid transformation of ideas into pitch decks. LTX Studio empowers creators to maintain high-quality standards and pushes the boundaries of AI in video workflows.
  3. Kling by Kuaishou: Kling is the first publicly available text-to-video AI model from the Chinese company Kuaishou. It uses diffusion models and transformer architectures for efficient video generation, leveraging vast amounts of user-generated content for training. Although its videos are limited to five seconds at 720p resolution, Kling generates highly realistic motion that respects physical dynamics.
  4. Dream Machine by Luma AI: Dream Machine generates high-quality videos from simple text prompts and is integrated with major creative software like Adobe. Available to everyone, it aims to foster a community of developers and creators through an open-source approach. However, it struggles with recreating natural movements, morphing effects, and text.
  5. Runway’s Gen-3: Runway’s Gen-3 Alpha offers improved video fidelity, consistency, and motion control. Developed for large-scale multimodal training, it supports tools like Motion Brush and Director Mode, offering fine-grained control over video structure and style. It’s noted for handling complex cinematic terms and producing photorealistic human characters, broadening its applicability in filmmaking and media production.
  6. Google’s Veo: Unveiled at Google’s I/O conference, Veo produces high-resolution 1080p videos in various cinematic styles. Initially available in a private preview, it builds on Google’s research in video generation, combining multiple technologies to enhance quality and resolution. Google plans to integrate Veo’s capabilities into YouTube Shorts and other Google products.
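Several of the models above (Kling and Sora among them) are diffusion-based: generation starts from pure noise that is removed step by step. A toy illustration of that reverse process, with an oracle noise predictor standing in for the trained network (this is a deliberate simplification, not any vendor’s actual sampler):

```python
# Toy diffusion "reverse process": start from a noised sample and peel
# off predicted noise over T steps. A real model *learns* the noise
# predictor; here an oracle supplies it so the arithmetic is verifiable.

T = 10
clean = [1.0, -2.0, 0.5]                   # stand-in for a video frame
noise = [0.3, -0.7, 1.1]                   # fixed "noise" for the demo

x = [c + n for c, n in zip(clean, noise)]  # fully noised sample
for t in range(T):
    # Oracle: the exact remaining noise (a trained network only estimates it).
    eps_pred = [xi - ci for xi, ci in zip(x, clean)]
    # Remove a fraction of the predicted noise at each step.
    x = [xi - e / (T - t) for xi, e in zip(x, eps_pred)]

print([round(v, 6) for v in x])            # converges back to `clean`
```

Real samplers add noise schedules and stochastic terms, but the core loop, iteratively subtracting predicted noise, is the same shape.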

Challenges and Ethical Considerations

As text-to-video AI technologies advance, the potential for misuse, such as creating deepfakes, increases. These tools can spread misinformation, manipulate public opinion, and pose threats to personal reputations and democratic processes. Ethical guidelines, regulatory frameworks, and technological safeguards are essential to mitigate these risks. The industry needs transparent practices and ongoing dialogue to develop technologies that detect and flag AI-generated content to protect against malicious uses.

The mainstream adoption of text-to-video AI also raises complex legal questions, particularly concerning copyright and intellectual property rights. As these products create content based on vast public datasets, often including copyrighted material, determining ownership of AI-generated works becomes ambiguous. Clear guidelines are needed to ensure fair use, proper attribution, and protection against infringement.

Impact on the Film Industry

Generative AI is poised to disrupt the film industry significantly. A study by the Animation Guild suggests that by 2026, over 100,000 media and entertainment jobs in the U.S. will be affected by generative AI tools. Hollywood’s unions are concerned about job impacts, creative control, and the authenticity of cinematic arts. AI-generated content is gaining mainstream acceptance, democratizing access to expensive locations and special effects. However, widespread adoption depends on addressing ethical considerations and ensuring AI complements rather than replaces human creativity.

Conclusion

The future of text-to-video AI is promising but requires a balanced approach to innovation and responsibility. Collaboration among technology developers, content creators, and policymakers is crucial to ensure these tools are used responsibly. Establishing robust frameworks for rights management, enhancing transparency, and innovating within ethical boundaries will enable the full potential of text-to-video AI, benefiting various applications without compromising societal values or creative integrity.

Summary by Chat GPT | Republished with permission from AiShortFilm.com

Divid – New Tool for Detecting Ai-Generated Videos
https://underconstruction.ai/divid-new-tool-for-detecting-ai-generated-videos/ | Mon, 01 Jul 2024
Columbia Engineering researchers developed DIVID, a tool to detect AI-generated videos. This new technology addresses the rising issue of realistic AI videos used in scams by analyzing diffusion-generated video frames for inconsistencies.


The article discusses the development of DIVID, a tool created by Columbia Engineering researchers to detect AI-generated videos. This innovation addresses the growing problem of highly realistic AI videos being used for scams. DIVID, short for DIffusion Video Detection, examines frames from diffusion-generated videos for inconsistencies that indicate AI manipulation. It builds on previous research involving Raidar, a tool for detecting AI-generated texts. The core method, DIRE (DIffusion Reconstruction Error), compares original frames to reconstructed ones to identify discrepancies, boasting a detection accuracy of up to 93.7%. This technology could potentially be integrated into platforms like Zoom to enhance real-time deepfake detection, offering a significant step forward in combating digital fraud and misinformation.
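The DIRE idea can be sketched compactly: reconstruct each frame with a diffusion model, measure the reconstruction error, and threshold it, since diffusion-generated frames tend to reconstruct with far lower error than real footage. In the sketch below a stub stands in for the actual model, and the threshold and frame values are hypothetical:

```python
# Sketch of DIRE-style detection: frames that came from a diffusion model
# are reconstructed by that model almost perfectly; real frames carry
# off-manifold detail and reconstruct worse. `reconstruct` is a stub.

def dire_score(frame: list, reconstruct) -> float:
    """Mean absolute reconstruction error over the frame's values."""
    recon = reconstruct(frame)
    return sum(abs(a - b) for a, b in zip(frame, recon)) / len(frame)

def classify(frame: list, reconstruct, threshold: float = 0.02) -> str:
    return "ai-generated" if dire_score(frame, reconstruct) < threshold else "real"

# Stub model: snaps values to one decimal, mimicking a model that can only
# reproduce content on its own "manifold".
def stub_reconstruct(frame):
    return [round(v, 1) for v in frame]

ai_frame = [0.1, 0.2, 0.3, 0.4]        # lies on the stub's manifold
real_frame = [0.13, 0.27, 0.31, 0.46]  # carries extra detail

print(classify(ai_frame, stub_reconstruct))    # prints "ai-generated"
print(classify(real_frame, stub_reconstruct))  # prints "real"
```

The published system works on pixels with a real diffusion reconstruction and a learned classifier rather than a fixed threshold, but the error-gap principle is the same.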

For further details, visit the full article here.

Summary by Chat GPT

Toys R Us Under Fire For Making New Commercial Solely by Ai
https://underconstruction.ai/toys-r-us-under-fire-for-making-new-commercial-solely-by-ai/ | Sun, 30 Jun 2024
As first reported on the website AiShortFilm.com (June 27, 2024), we appear not to be the only ones caught off guard to learn the company is still in business.


Video Property of WTHR
Full Original Sora Video Posted Below

The outrage stems from an AI-generated commercial featuring a likeness (?) of founder Charles Lazarus. The video was first spotted by AiShortFilm.com on June 27, 2024. ~Admin

Reprinted with Permission from AiShortFilm.com

Ai – Heal Thyself
https://underconstruction.ai/ai-heal-thyself/ | Sat, 29 Jun 2024

OpenAI, the maker of ChatGPT, released a new tool aimed at making ChatGPT’s answers more reliable. Hello, CriticGPT.

With this seemingly becoming “self-aware”, I can only imagine that this is what AGI’s (Artificial General Intelligence) early development looks like.

-Admin

 

CriticGPT, developed by OpenAI, is an advanced AI model designed to enhance the reliability of AI-generated content by assisting human reviewers in detecting and critiquing errors in code produced by ChatGPT. This model, part of the GPT-4 family, aims to address the increasing complexity of evaluating sophisticated AI outputs as large language models evolve.

Training and Performance

CriticGPT’s training involved a dataset with intentionally inserted bugs, allowing the model to learn to recognize and flag various coding errors. This approach led to remarkable results, with CriticGPT catching about 85% of bugs compared with the 25% identified by human reviewers. Additionally, its feedback was preferred over human critiques in 63% of cases involving large language model (LLM) errors, showcasing its superior performance in error detection. To further enhance its capabilities, researchers developed the Force Sampling Beam Search (FSBS) technique, which improved CriticGPT’s ability to provide detailed code reviews while minimizing false positives.
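The 85%-versus-25% comparison is essentially recall against the set of intentionally seeded bugs. A sketch of how such an evaluation harness tallies it (the bug labels are hypothetical):

```python
# Evaluating a critic against intentionally seeded bugs, in the spirit of
# the CriticGPT setup: recall = seeded bugs caught / seeded bugs total.

def recall(seeded: set, flagged: set) -> float:
    """Fraction of seeded bugs that the reviewer's flags actually caught."""
    return len(seeded & flagged) / len(seeded)

seeded_bugs = {"off-by-one", "null-deref", "race", "leak"}
model_flags = {"off-by-one", "null-deref", "race", "style-nit"}  # one miss, one extra
human_flags = {"leak"}

print(recall(seeded_bugs, model_flags))  # prints 0.75
print(recall(seeded_bugs, human_flags))  # prints 0.25
```

Note the extra "style-nit" flag does not hurt recall; that is why the FSBS work on minimizing false positives matters as a separate axis.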

Applications and Limitations

While CriticGPT is primarily focused on code review, it has also shown potential in identifying errors in non-code tasks, highlighting its versatility in improving AI outputs. However, the model’s effectiveness diminishes with longer and more complex tasks, as it was trained on relatively short responses. Despite its impressive performance, CriticGPT still produces some false positives and requires human oversight to ensure accuracy. Additionally, the model struggles with detecting errors spread across multiple code strings, making it difficult to identify the source of certain AI hallucinations.

Future Integration Plans

OpenAI plans to integrate CriticGPT into its Reinforcement Learning from Human Feedback (RLHF) pipeline, providing human trainers with an AI assistant to help review and refine generative AI outputs. This integration aims to enhance the overall quality and alignment of AI systems with human expectations. By leveraging CriticGPT’s capabilities, OpenAI anticipates improving the efficiency and accuracy of their AI training processes, potentially leading to more reliable and sophisticated AI models in the future.

Overall, CriticGPT represents a significant advancement in AI error detection and quality assurance, offering valuable support in the continuous improvement of AI-generated content.

Reprinted with Permission from Discovr.Ai

Creepy Robot Smiles – With Human Skin!
https://underconstruction.ai/creepy-robot-smiles-with-human-skin/ | Sat, 29 Jun 2024
A recent experiment with a bot sporting human skin actually smiled when stimulated. The addition of eyes staring back at you made this even more creepy - and I hope I am not the only one who feels this way.

- Admin


The integration of living human skin cells into robots represents a groundbreaking advancement in the field of robotics, aiming to transform human-robot interactions by enabling machines to display emotions and communicate in a more human-like manner. This technology promises to bridge the gap between artificial and biological entities, making robots more relatable and easier to interact with across various settings.

One of the most significant implications of this development is in the healthcare industry. Human-like robots could provide essential support and comfort to patients, especially those requiring companionship or assistance in medical environments. These robots, equipped with the ability to emote and respond to human expressions, can create a more empathetic and supportive atmosphere, potentially improving patient outcomes and overall well-being.

Beyond healthcare, the cosmetics industry stands to benefit from this technology as well. The ability to recreate wrinkle formation on a small scale using living human skin cells allows for more accurate testing of skincare products. This advancement can lead to the development of more effective treatments for preventing or improving wrinkles, enhancing the efficacy of cosmetic products and providing better results for consumers (Popular Science; Laughing Squid).

The technology involves using advanced bioengineering techniques to grow and maintain living human skin cells on robotic structures. This process includes creating a suitable environment for the cells to thrive and ensuring that the robotic system can mimic the mechanical properties of human skin. By integrating these living cells, robots can exhibit more natural and nuanced facial expressions, making interactions with humans more seamless and intuitive.

Moreover, the potential applications of this technology extend beyond healthcare and cosmetics. In educational and customer service settings, human-like robots can improve engagement and communication by providing a more lifelike and responsive presence. This can enhance the learning experience for students and create a more satisfactory customer service experience in various industries.

In summary, the development of robots with living human skin cells marks a significant step forward in human-robot interaction. By enabling robots to emote and communicate more naturally, this technology can improve their relatability and effectiveness across multiple sectors, including healthcare, cosmetics, education, and customer service. The ability to closely mimic human expressions and responses opens up new possibilities for the integration of robots into everyday life, enhancing their utility and acceptance (Popular Science; Laughing Squid).

Reprinted with Permission from Discovr.Ai
