# Is Apple integrating ChatGPT?
needtricks-blog · 5 days ago
Text
Understanding OpenAI Data Usage in Apple ChatGPT Integrations
In 2024, Apple and OpenAI partnered to integrate ChatGPT into Apple’s ecosystem, enhancing user experiences across devices. This collaboration brings advanced AI capabilities to applications like Siri, iMessage, and Safari. However, it also raises questions about data privacy and usage. Continue reading Understanding OpenAI Data Usage…
0 notes
bitcoinversus · 1 month ago
Text
Apple in Talks to Invest in OpenAI Amidst AI Expansion
Reports suggest that Apple may soon become a significant investor in OpenAI.
Apple is reportedly in negotiations to invest in OpenAI as part of a new funding round that could value the AI company above $100 billion. The Wall Street Journal reported that Apple is joining other major investors, including Microsoft and venture capital firm Thrive Capital, in this round of funding. Apple $AAPL will get a board observer seat at OpenAI later this year as part of its…
0 notes
jcmarchi · 3 months ago
Text
Liquid AI Launches Liquid Foundation Models: A Game-Changer in Generative AI
New Post has been published on https://thedigitalinsider.com/liquid-ai-launches-liquid-foundation-models-a-game-changer-in-generative-ai/
Liquid AI Launches Liquid Foundation Models: A Game-Changer in Generative AI
In a groundbreaking announcement, Liquid AI, an MIT spin-off, has introduced its first series of Liquid Foundation Models (LFMs). These models, designed from first principles, set a new benchmark in the generative AI space, offering unmatched performance across various scales. LFMs, with their innovative architecture and advanced capabilities, are poised to challenge industry-leading AI models, including ChatGPT.
Liquid AI was founded by a team of MIT researchers, including Ramin Hasani, Mathias Lechner, Alexander Amini, and Daniela Rus. Headquartered in Boston, Massachusetts, the company’s mission is to create capable and efficient general-purpose AI systems for enterprises of all sizes. The team originally pioneered liquid neural networks, a class of AI models inspired by brain dynamics, and now aims to expand the capabilities of AI systems at every scale, from edge devices to enterprise-grade deployments.
What Are Liquid Foundation Models (LFMs)?
Liquid Foundation Models represent a new generation of AI systems that are highly efficient in both memory usage and computational power. Built with a foundation in dynamical systems, signal processing, and numerical linear algebra, these models are designed to handle various types of sequential data—such as text, video, audio, and signals—with remarkable accuracy.
Liquid AI has developed three primary language models as part of this launch:
LFM-1B: A dense model with 1.3 billion parameters, optimized for resource-constrained environments.
LFM-3B: A 3.1 billion-parameter model, ideal for edge deployment scenarios, such as mobile applications.
LFM-40B: A 40.3 billion-parameter Mixture of Experts (MoE) model designed to handle complex tasks with exceptional performance.
These models have already demonstrated state-of-the-art results across key AI benchmarks, making them a formidable competitor to existing generative AI models.
State-of-the-Art Performance
Liquid AI’s LFMs deliver best-in-class performance across various benchmarks. For example, LFM-1B outperforms transformer-based models in its size category, while LFM-3B competes with larger models like Microsoft’s Phi-3.5 and Meta’s Llama series. The LFM-40B model, despite its size, is efficient enough to rival models with even larger parameter counts, offering a unique balance between performance and resource efficiency.
Some highlights of LFM performance include:
LFM-1B: Dominates benchmarks such as MMLU and ARC-C, setting a new standard for 1B-parameter models.
LFM-3B: Surpasses models like Phi-3.5 and Google’s Gemma 2 in efficiency, while maintaining a small memory footprint, making it ideal for mobile and edge AI applications.
LFM-40B: The MoE architecture of this model offers comparable performance to larger models, with 12 billion active parameters at any given time.
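Liquid AI hasn’t published the internals of LFM-40B, but the “12 billion active parameters” figure is the hallmark of Mixture-of-Experts routing: a gate picks a few experts per input, so only a fraction of the total weights run. Here is a minimal, generic top-k MoE sketch; the expert count, gate, and dimensions are illustrative assumptions, not the actual LFM architecture:

```python
import numpy as np

def moe_forward(x, experts, gate_w, k=2):
    """Route input x to the top-k experts by gate score and mix their outputs."""
    scores = x @ gate_w                       # one score per expert
    top = np.argsort(scores)[-k:]             # indices of the k best-scoring experts
    weights = np.exp(scores[top])
    weights /= weights.sum()                  # softmax over only the selected experts
    # Only the selected experts execute, which is why "active" parameters
    # (e.g. ~12B of 40B) are far fewer than total parameters.
    return sum(w * experts[i](x) for w, i in zip(weights, top))

rng = np.random.default_rng(0)
d, n_experts = 8, 4
# Toy "experts": each is just a random linear map for illustration.
experts = [lambda x, W=rng.normal(size=(d, d)): x @ W for _ in range(n_experts)]
gate_w = rng.normal(size=(d, n_experts))
x = rng.normal(size=d)
y = moe_forward(x, experts, gate_w)
```

The efficiency claim follows directly: compute scales with active parameters, memory with total parameters, so an MoE trades RAM for speed.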
A New Era in AI Efficiency
A significant challenge in modern AI is managing memory and computation, particularly when working with long-context tasks like document summarization or chatbot interactions. LFMs excel in this area by efficiently compressing input data, resulting in reduced memory consumption during inference. This allows the models to process longer sequences without requiring expensive hardware upgrades.
For example, LFM-3B offers a 32k token context length—making it one of the most efficient models for tasks requiring large amounts of data to be processed simultaneously.
A Revolutionary Architecture
LFMs are built on a unique architectural framework, deviating from traditional transformer models. The architecture is centered around adaptive linear operators, which modulate computation based on the input data. This approach allows Liquid AI to significantly optimize performance across various hardware platforms, including NVIDIA, AMD, Cerebras, and Apple hardware.
The design space for LFMs involves a novel blend of token-mixing and channel-mixing structures that improve how the model processes data. This leads to superior generalization and reasoning capabilities, particularly in long-context tasks and multimodal applications.
Expanding the AI Frontier
Liquid AI has grand ambitions for LFMs. Beyond language models, the company is working on expanding its foundation models to support various data modalities, including video, audio, and time series data. These advancements will enable LFMs to scale across multiple industries, such as financial services, biotechnology, and consumer electronics.
The company is also focused on contributing to the open science community. While the models themselves are not open-sourced at this time, Liquid AI plans to release relevant research findings, methods, and data sets to the broader AI community, encouraging collaboration and innovation.
Early Access and Adoption
Liquid AI is currently offering early access to its LFMs through various platforms, including Liquid Playground, Lambda (Chat UI and API), and Perplexity Labs. Enterprises looking to integrate cutting-edge AI systems into their operations can explore the potential of LFMs across different deployment environments, from edge devices to on-premise solutions.
Liquid AI’s open-science approach encourages early adopters to share their experiences and insights. The company is actively seeking feedback to refine and optimize its models for real-world applications. Developers and organizations interested in becoming part of this journey can contribute to red-teaming efforts and help Liquid AI improve its AI systems.
Conclusion
The release of Liquid Foundation Models marks a significant advancement in the AI landscape. With a focus on efficiency, adaptability, and performance, LFMs stand poised to reshape the way enterprises approach AI integration. As more organizations adopt these models, Liquid AI’s vision of scalable, general-purpose AI systems will likely become a cornerstone of the next era of artificial intelligence.
If you’re interested in exploring the potential of LFMs for your organization, Liquid AI invites you to get in touch and join the growing community of early adopters shaping the future of AI.
For more information, visit Liquid AI’s official website and start experimenting with LFMs today.
0 notes
theleadersglobe · 7 months ago
Text
Report: Apple Intelligence May Integrate Google’s Gemini in the Future
On 10 June, during its Worldwide Developers Conference (WWDC), Apple unveiled “Apple Intelligence,” a suite of AI-powered tools embedded in the next-generation platforms: iOS 18 for iPhone, iPadOS 18 for iPad, and macOS Sequoia for Macs. 
However, the subsequent headlines focused on Apple’s partnership with OpenAI for integrating ChatGPT into its intelligence suite. This development has sparked curiosity among technology enthusiasts about how the Apple-OpenAI partnership will function, given Apple’s emphasis on privacy in its intelligence systems. 
Additionally, there is speculation about potential collaborations with other technology companies like Google to enhance AI capabilities further.
Read More: (https://theleadersglobe.com/science-technology/report-apple-intelligence-may-integrate-googles-gemini-in-the-future/)
0 notes
thebigshoutout · 7 months ago
Text
Apple WWDC 2024: Tim Cook Unveils iOS 18, Siri Enhancements, and ChatGPT Integration
At WWDC 2024, Apple made groundbreaking announcements that set the tech world abuzz. CEO Tim Cook took the stage to unveil iOS 18, featuring significant upgrades designed to enhance user experience and productivity. Key improvements include a revamped Siri with advanced natural language processing capabilities and deeper integration with third-party apps, promising more fluid and intelligent…
1 note · View note
kde-plasma-official · 5 days ago
Note
whats the status of like. using linux on a phone. it feels like there are two parallel universes, one that kde lives in where people use linux on phones, and one where if you google linux phones you discover theyre almost usable but they can barely make phone calls or send texts and they only run on like 4 models of phone
don't have much experience with linux on phone so anyone please correct me if i'm wrong but
one of the problems with phones is that every vendor and manufacturer adds their own proprietary driver blob to it and these have to be extracted and integrated into the kernel in order for the hardware to function.
as companies don't like to share their magic of "how does plastic slab make light", reverse engineering all your hardware is quite a difficult task. Sometimes there just isn't a driver for the camera of a phone model yet because no one was able to make it work.
So naturally, this takes a lot of time, and tech is evolving fast, so by the time a phone is completely compatible, the next generations are already out and your model is obsolete.
Also important to note: most of this work is made by volunteers, people with a love for programming who put a lot of their own time into these things, most of them after their daytime jobs as a hobby.
Of course, there are companies and associations out there that build Linux phones for a living. But the consumer hardware providers, like Pine64 (makers of the PinePhone), Fairphone, and others, aren't as big and don't have much of a lobby behind them, so they can't get their prices down. Also, manufacturers are actively working against our right to repair, so we need more activism.
To keep the phones affordable (and because of the driver issues mentioned above), they have to use older hardware, sometimes even used phones from other manufacturers that they fix up, so you can't really expect a modern experience. At least you can revive some older phones. As with everything Linux.
Then there are the software providers, many of which are non-profits. KDE has Plasma Mobile, Canonical works on Ubuntu Touch, Debian has the Mobian project, and among some others there's also the Arch Linux ARM project.
That's right baby, ARM. We're not talking about your fancy PC or ThinkPad with their sometimes even up to 64-bit processors. No no no, this is the future, fucking chrome jellyfishes and everything.
This is the stuff Apple just started building their fancy line of over-priced and over-engineered Fisher-Price laptop-desktops on, and that Microsoft started (Windows 10X), discontinued, and beat into the smush of ChatGPT Nano Bing OpenAI chips in all your new Surface, HP, Dell, and Asus laptops.
What I was trying to say is that program support even for the market-dominating monopolies out there is still limited and... (from my own experience at the workplace) buggy. Which, in these times of enshittification, is bad news. And the good projects you gotta emulate afterwards anyway, so yay, extra steps!
Speaking of extra steps: in order to turn their phone into a true freedom phone, users need to give up their phone's warranty, lose the shackles of locked-down root access, install a custom recovery (like TWRP, for example), and have more technical know-how than the typical user, which doesn't quite sound commercial-ready to me.
So is there no hope at all?
Fret not, my friend!
If we can't put the Linux into the phone, why don't we put the phone around the Linux? You know... Like a container?
Thanks to EU regulations-
(US consumers, please buy the European versions of your phones! They are sometimes a bit more expensive, but used models of the same generation or one below usually still have warranty, are around the same price as over there in Freedom Valley, and (another side tangent incoming - because of better European consumer protection laws) sometimes have other advantages, such as faster charging and data transfer (USB-C vs lightning ports) or less bloated systems)
- it is made easier now to virtualize Linux on your phone.
You can download a terminal emulator, create a headless Linux VM, and get a VNC client running. This comes with a performance limit though, as an app with standard user permissions is containerized inside Android itself, so it can't use the whole hardware.
If you have root access on your phone, you can assign more RAM and CPU to your VM.
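The recipe above (terminal emulator → headless VM → VNC client) boils down to launching an emulator with its display served over VNC instead of a window. This sketch only assembles an illustrative invocation; the QEMU binary name, flags, and the debian.qcow2 disk image are assumptions, not a tested phone setup:

```python
def qemu_vnc_command(disk_image, ram_mb=1024, vnc_display=1):
    """Build an argument list for a headless aarch64 VM viewable over VNC."""
    return [
        "qemu-system-aarch64",
        "-machine", "virt",
        "-cpu", "cortex-a72",
        "-m", str(ram_mb),             # without root, usable RAM stays limited
        "-drive", f"file={disk_image},format=qcow2",
        "-display", "none",            # headless: no local window
        "-vnc", f":{vnc_display}",     # a VNC client connects on port 5900 + display
    ]

cmd = qemu_vnc_command("debian.qcow2")
```

On a real device you'd run something like this inside a terminal emulator such as Termux, then point a VNC client app at localhost.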
Also things like SDL just released a new version so emulation is getting better.
And didn't you hear the news? You can run other things inside a VM on an iPhone now! Yup, and I got Debian with Xfce running on my Xiaomi phone. Didn't do much with it tho. Also Windows XP, and played Sims 1 on mobile. Was fun, but battery-draining. Maybe something more for tablets for now.
Things will get interesting now that Google has officially been ruled a monopoly. It funds a lot of that stuff.
I really want a Steam Deck.
Steam phones would be cool.
12 notes · View notes
meret118 · 3 months ago
Text
Chatbots Are Primed to Warp Reality
A growing body of research shows how AI can subtly mislead users—and even implant false memories.
More and more people are learning about the world through chatbots and the software’s kin, whether they mean to or not. Google has rolled out generative AI to users of its search engine on at least four continents, placing AI-written responses above the usual list of links; as many as 1 billion people may encounter this feature by the end of the year. Meta’s AI assistant has been integrated into Facebook, Messenger, WhatsApp, and Instagram, and is sometimes the default option when a user taps the search bar. And Apple is expected to integrate generative AI into Siri, Mail, Notes, and other apps this fall. Less than two years after ChatGPT’s launch, bots are quickly becoming the default filters for the web.
Yet AI chatbots and assistants, no matter how wonderfully they appear to answer even complex queries, are prone to confidently spouting falsehoods—and the problem is likely more pernicious than many people realize. A sizable body of research, alongside conversations I’ve recently had with several experts, suggests that the solicitous, authoritative tone that AI models take—combined with them being legitimately helpful and correct in many cases—could lead people to place too much trust in the technology. That credulity, in turn, could make chatbots a particularly effective tool for anyone seeking to manipulate the public through the subtle spread of misleading or slanted information. No one person, or even government, can tamper with every link displayed by Google or Bing. Engineering a chatbot to present a tweaked version of reality is a different story.
. . .
As the election approaches, some people will use AI assistants, search engines, and chatbots to learn about current events and candidates’ positions. Indeed, generative-AI products are being marketed as a replacement for typical search engines—and risk distorting the news or a policy proposal in ways big and small. Others might even depend on AI to learn how to vote. Research on AI-generated misinformation about election procedures published this February found that five well-known large language models provided incorrect answers roughly half the time—for instance, by misstating voter-identification requirements, which could lead to someone’s ballot being refused. 
. . .
The idea was to see if a witness could be led to say a number of false things about the video, such as that the robbers had tattoos and arrived by car, even though they did not. The resulting paper, which was published earlier this month and has not yet been peer-reviewed, found that the generative AI successfully induced false memories and misled more than a third of participants—a higher rate than both a misleading questionnaire and another, simpler chatbot interface that used only the same fixed survey questions.
More at the link.
----
Very interesting article, but warning for a gif that I think will bother photosensitive readers at the top.
6 notes · View notes
netscapenavigator-official · 7 months ago
Text
Apple dropping support for the A10X iPad Pro, but keeping support for the A10 iPad base model, is exactly why I wouldn’t be caught dead buying another Apple product.
Also, announcing ChatGPT integration and AI image generation baked into iOS 18, but then artificially making it exclusive to the latest iPhone 15 models, is just sad when a smart toaster could connect to your servers and run these ““A.I.”” features.
Like, tell me you’re desperately trying to convince people to buy your newer, enshittified hardware without telling me you’re desperately trying to convince people to buy your newer, enshittified hardware.
9 notes · View notes
noticiassincensura · 2 months ago
Text
Zuckerberg is now taking on Google: Meta prepares its own AI-powered search engine
Mark Zuckerberg with the Google icon in a modified image
Meta is reportedly developing an AI-based search engine to be used with its ‘Meta AI’ chatbot.
October 29, 2024–08:41 AM
After decades of dominating the Internet, Google now faces multiple challenges, with little time to react. The launch of ChatGPT dealt a significant blow to Google, which had already been researching generative AI for years but didn’t yet have a commercial product. The rapid development of Gemini was its response, but now another formidable competitor has emerged, one that may make things even harder for Google.
We’re talking about Meta, the company that even changed its name to bet on the metaverse, but which, in recent months, has shifted its focus toward artificial intelligence. And far from being at a disadvantage, the company led by Mark Zuckerberg has managed to catch up quickly by integrating Meta AI into WhatsApp, Instagram, and the rest of its apps.
Now, according to The Information, Meta is preparing a game-changing move that could mean we no longer need to rely on Google: its own search engine. According to internal sources, the company has been using ‘crawlers’ — programs that index the Internet, much like those used by Google’s search engine to discover new pages for user searches.
Until now, Meta has been content to use Google’s search engine or Microsoft Bing when users asked Meta AI questions about news or recent events; however, with its own search engine, Meta would no longer rely on third parties, allowing it to deliver a more tailored experience with greater control over the results. In addition to web indexing, Meta will use direct access to Reuters, thanks to an agreement signed last week allowing its AI to use news published by the agency for its responses.
Until recently, no one wanted to directly challenge Google with their own search engine — even Apple abandoned such an idea despite wanting to separate from its major competitor. But new AI-powered tools have opened the door for other Internet giants to offer their own search engines. However, Meta’s search engine would only be available within Meta AI, the chatbot that lets users start conversations with AI through any of the apps on its platform.
Meta isn’t the only one going after Google. The creators of ChatGPT have already announced their search engine, SearchGPT, and Microsoft used Copilot to boost traffic to Bing. In other words, Google suddenly has a lot of competition in its original product, and the question now is how it will respond.
3 notes · View notes
dpfocu · 3 days ago
Text
OpenAI’s 12 Days of “Shipmas”: Summary and Reflections
Over 12 days, from December 5 to December 16, OpenAI hosted its “12 Days of Shipmas” event, revealing a series of innovations and updates across its AI ecosystem. Here’s a summary of the key announcements and their implications:
Day 1: Full Launch of o1 Model and ChatGPT Pro
OpenAI officially launched the o1 model in its full version, offering significant improvements in accuracy (34% fewer errors) and performance. The introduction of ChatGPT Pro, priced at $200/month, gives users access to these advanced features without usage caps.
Commentary: The Pro tier targets professionals who rely heavily on AI for business-critical tasks, though the price point might limit access for smaller enterprises.
Day 2: Reinforced Fine-Tuning
OpenAI showcased its reinforced fine-tuning technique, leveraging user feedback to improve model precision. This approach promises enhanced adaptability to specific user needs.
Day 3: Sora - Text-to-Video
Sora, OpenAI’s text-to-video generator, debuted as a tool for creators. Users can input textual descriptions to generate videos, opening new doors in multimedia content production.
Commentary: While innovative, Sora’s real-world application hinges on its ability to handle complex scenes effectively.
Day 4: Canvas - Enhanced Writing and Coding Tool
Canvas emerged as an all-in-one environment for coding and content creation, offering superior editing and code-generation capabilities.
Day 5: Deep Integration with Apple Ecosystem
OpenAI announced seamless integration with Apple’s ecosystem, enhancing accessibility and user experience for iOS/macOS users.
Day 6: Improved Voice and Vision Features
Enhanced voice recognition and visual processing capabilities were unveiled, making AI interactions more intuitive and efficient.
Day 7: Projects Feature
The new “Projects” feature allows users to manage AI-powered initiatives collaboratively, streamlining workflows.
Day 8: ChatGPT with Built-in Search
Search functionality within ChatGPT enables real-time access to the latest web information, enriching its knowledge base.
Day 9: Voice Calling with ChatGPT
Voice capabilities now allow users to interact with ChatGPT via phone, providing a conversational edge to AI usage.
Day 10: WhatsApp Integration
ChatGPT’s integration with WhatsApp broadens its accessibility, making AI assistance readily available on one of the most popular messaging platforms.
Day 11: Release of o3 Model
OpenAI launched the o3 model, featuring groundbreaking reasoning capabilities. It excels in areas such as mathematics, coding, and physics, sometimes outperforming human experts.
Commentary: This leap in reasoning could redefine problem-solving across industries, though ethical and operational concerns about dependency on AI remain.
Day 12: Wrap-Up and Future Vision
The final day summarized achievements and hinted at OpenAI’s roadmap, emphasizing the dual goals of refining user experience and expanding market reach.
Reflections
OpenAI’s 12-day spree showcased impressive advancements, from multimodal AI capabilities to practical integrations. However, challenges remain. High subscription costs and potential data privacy concerns could limit adoption, especially among individual users and smaller businesses.
Additionally, as the competition in AI shifts from technical superiority to holistic user experience and ecosystem integration, OpenAI must navigate a crowded field where user satisfaction and practical usability are critical for sustained growth.
Final Thoughts: OpenAI has demonstrated its commitment to innovation, but the journey ahead will require balancing cutting-edge technology with user-centric strategies. The next phase will likely focus on scalability, affordability, and real-world problem-solving to maintain its leadership in AI.
What are your thoughts on OpenAI’s recent developments? Share in the comments!
2 notes · View notes
mariacallous · 6 months ago
Text
Apple has become the first big tech company to be charged with breaking the European Union’s new digital markets rules, three days after the tech giant said it would not release artificial intelligence in the bloc due to regulation.
On Monday, the European Commission said that Apple’s App Store was preventing developers from communicating with their users and promoting offers to them directly, a practice known as anti-steering.
“Our preliminary position is that Apple does not fully allow steering. Steering is key to ensure that app developers are less dependent on gatekeepers’ app stores and for consumers to be aware of better offers,” Margrethe Vestager, the EU’s competition chief, said in a statement.
On X, the European commissioner for the internal market, Thierry Breton, gave a more damning assessment. “For too long Apple has been squeezing out innovative companies—denying consumers new opportunities and choices,” he said.
The EU referred to its Monday charges as “preliminary findings.” Apple now has the opportunity to respond to the charges and, if an agreement is not reached, the bloc has the power to levy fines—which can reach up to 10 percent of the company’s global turnover—before March 2025.
Tensions between Apple and the EU have been rising for months. Brussels opened an investigation into the smartphone maker in March over failure to comply with the bloc’s competition rules. Although investigations were also opened into Meta and Google-parent Alphabet, it is Apple’s relationship with European developers that has long been the focus in Brussels.
Back in March, one of the MEPs who negotiated the Digital Markets Act told WIRED that Apple was the logical first target for the new rules, describing the company as “low-hanging fruit.” Under the DMA it is illegal for big tech companies to preference their own services over rivals’.
Developers have seethed against the new business terms imposed on them by Apple, describing the company’s policies as “abusive,” “extortion,” and “ludicrously punitive.”
Apple spokesperson Rob Saunders said on Monday he was confident the company was in compliance with the law. “All developers doing business in the EU on the App Store have the opportunity to utilize the capabilities that we have introduced, including the ability to direct app users to the web to complete purchases at a very competitive rate,” he says.
On Friday, Apple said it would not release its artificial intelligence features in the EU this year due to what the company described as “regulatory uncertainties”. “Specifically, we are concerned that the interoperability requirements of the DMA could force us to compromise the integrity of our products in ways that risk user privacy and data security,” said Saunders in a statement. The features affected are iPhone Mirroring, SharePlay Screen Sharing enhancements, and Apple’s first foray into generative AI, Apple Intelligence.
Apple is not the only company to blame new EU rules for its decision to delay the roll out of new features. Last year, Google delayed the EU roll out of its ChatGPT rival Bard, and earlier in June Meta paused plans to train its AI on Europeans’ personal Facebook and Instagram data following discussions with privacy regulators. “This is a step backwards for European innovation, competition in AI development and further delays bringing the benefits of AI to people in Europe,” the company said at the time.
6 notes · View notes
aionlinemoney · 3 months ago
Text
How Apple Integration with ChatGPT AI is Transforming AI-Powered User Experiences
Tumblr media
In today’s fast-changing technology landscape of 2024, artificial intelligence (AI) is playing an ever bigger role in the everyday digital tools people rely on. One of the most exciting developments is the integration of ChatGPT with Apple devices. The partnership between OpenAI and Apple has the potential to make everyday interactions with these devices smoother and more efficient, offering smarter and more natural ways to engage with technology.
AI meets daily technology in 2024: a new era of integration
Artificial intelligence has moved from theoretical concepts to practical tools that many people use daily, from smart home devices to virtual assistants and mobile apps. ChatGPT, developed by OpenAI, can understand and generate human-like text from the prompts we give it, whether that’s answering questions, holding detailed conversations, or drafting emails. Apple, for its part, is famous for innovative, user-friendly devices, from the iPhone to the Mac, and from Siri to high-quality cameras. With ChatGPT, users can expect more responsive and intelligent interactions: more natural conversations with Siri, smarter suggestions tailored to their needs, and better help with problem-solving tasks. This integration could make the technology more approachable than ever before.
Benefits of integrating ChatGPT with Apple devices:
Integrating ChatGPT with Apple devices opens up new possibilities for both personal and professional use, making our devices more helpful and smarter in daily life. Here are some key areas where this integration could make a big difference:
Enhanced virtual assistant: Siri is the iPhone’s voice-activated personal assistant, and integrating ChatGPT could make it far more capable. Siri handles basic commands well, but ChatGPT can understand and respond to more complex requests, so users could ask complicated questions and get detailed, natural explanations.
Seamless cross-device experience: Apple devices are known for tight integration across the ecosystem, and adding ChatGPT would deepen that connected experience. Imagine starting a conversation with Siri on your iPhone, continuing it on your MacBook, and getting updates on your Apple Watch, with your work and data following you easily between devices.
Enhanced content creation : The integration of ChatGPT with Apple devices could greatly benefit content creation. Writers, marketers, and creators could use OpenAI to come up with ideas or edit text directly on their Apple devices. Being able to produce high-quality and relevant content easily would be an advantage for those who depend on content for their work.
Education and learning: Integrating ChatGPT into Apple devices could greatly improve educational tools and learning experiences. ChatGPT could act as a personal tutor, helping students grasp difficult subjects, explaining topics in different ways, or providing interactive study guides. For people learning new languages, it could offer practice conversations, grammar correction, and instant feedback.
Conclusion: Integrating ChatGPT with the Apple iPhone 16 would be a big step forward for AI technology in 2024. By combining Apple's focus on innovation and user experience with OpenAI's advanced language capabilities, users could enjoy a richer digital experience. However, privacy, accuracy, and user trust need to be addressed to fully realize the benefits of this powerful combination. As Apple keeps pushing forward with new ideas in 2024, the future of AI interactions looks promising, with the potential to change how we use our devices and connect with the world.
3 notes · View notes
boreal-sea · 3 months ago
Text
So Apple is doing AI too (because of course it is, sigh) but I don't hate it.
Things I appreciate / don't hate:
Some aspects of the AI operate directly on your phone, and therefore aren't sending your information to anyone, including Apple.
For tasks that may require third-party AI such as ChatGPT, users will be prompted if they want to connect to that service, and they can reject that connection.
The editing functions seem genuinely helpful, like tone adjustments and conciseness/summaries. Plus, spellcheck and other general grammar tools. We've been using these for ages.
It seems like Siri's abilities are going to be more helpful, like remembering conversational context, so you can actually utilize Siri more effectively to schedule things, set timers, etc.
Things that are "hmm"
There are some tasks that will require a connection to Apple's custom-built AI cloud, rather than being performed on the phone. It's unclear if users will be notified when this is happening.
Some of the AI functions require integration with ChatGPT and OpenAI. While Apple says they anonymize your data before it reaches these companies, there's no real way to know. However, see above - users will have a choice to reject this integration before it occurs, which is why this is in the "hmm" category and not the "do not like" category.
Things I don't like:
It has some generative-AI aspects to it. It can make up new emoji, generate images from word prompts or sketches, that kind of thing. The emoji isn't so bad, people have been writing programs online that mash up emoji already and I really don't see how this is much different. But obviously generating art is something I'm against.
I don't actually know their source for what they trained this AI on. What were the training sets? How does it know how to generate images and text? Apple says it was "internally trained". Was the art stolen? Who knows.
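The on-device / Apple-cloud / third-party split described above amounts to a routing policy with a consent gate in front of the third-party tier. Here's a minimal Python sketch of that idea; the tier names, task fields, and consent callback are all invented for illustration, not Apple's actual implementation.

```python
# Hypothetical sketch of a three-tier routing policy: on-device by default,
# a first-party private cloud for heavier tasks, and an explicit consent
# prompt before any third-party (e.g. ChatGPT) request. All names invented.

def route_request(task, ask_consent):
    """Return the tier that handles the task, or None if the user declines."""
    if task["complexity"] == "low":
        return "on-device"        # processed locally; data never leaves the phone
    if not task["needs_third_party"]:
        return "private-cloud"    # first-party cloud compute
    # Third-party models require explicit, per-request user consent.
    if ask_consent(task["description"]):
        return "third-party"
    return None                   # user declined; the request is dropped

# A user who always declines third-party processing:
decline_all = lambda description: False
print(route_request({"complexity": "low", "needs_third_party": False,
                     "description": "set a timer"}, decline_all))      # on-device
print(route_request({"complexity": "high", "needs_third_party": True,
                     "description": "draft an essay"}, decline_all))   # None
```

The key design point the post praises is that the consent check happens per request, before any data leaves the device, rather than as a one-time blanket opt-in.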
4 notes · View notes
jcmarchi · 7 months ago
Text
AI Set To Take Center Stage at Today’s Apple WWDC Conference
New Post has been published on https://thedigitalinsider.com/ai-set-to-take-center-stage-at-todays-apple-wwdc-conference/
AI Set To Take Center Stage at Today’s Apple WWDC Conference
Apple’s annual Worldwide Developers Conference (WWDC) is set to take center stage today with AI expected to be the main focus. The WWDC event serves as a platform for Apple to showcase its latest software innovations and features, making it a highly anticipated gathering for developers and tech enthusiasts alike. This year’s conference promises to be particularly groundbreaking, as Apple is poised to unveil its ambitious AI initiative and its integration across the company’s ecosystem, including iOS, iPadOS, watchOS, and macOS.
Apple’s AI Initiative
At WWDC 2024, Apple is expected to reveal its comprehensive AI strategy, which aims to integrate artificial intelligence seamlessly across its software suite. This move will signal Apple’s increasing focus on leveraging AI to enhance user experience and streamline interactions with its devices. Rumors suggest that Apple may collaborate with OpenAI, the creators of the popular ChatGPT language model, to bring cutting-edge AI capabilities to its products.
Apple’s AI features are likely to be branded under the moniker ‘Apple Intelligence,’ emphasizing the company’s commitment to delivering intelligent and intuitive solutions. By leveraging large language models (LLMs) and advanced machine learning techniques, Apple aims to further improve the way users interact with their iPhones, iPads, Apple Watches, and Macs. This AI initiative is expected to position Apple as a strong competitor to tech giants like Google and Microsoft, who have already made significant strides in the AI space.
Enhancements to Siri
One of the most anticipated announcements at WWDC 2024 is the revamped Siri voice assistant, powered by generative AI. Apple is expected to showcase a more intelligent and conversational Siri that can better understand user queries and take actions within Apple’s own apps. By integrating advanced natural language processing (NLP) algorithms, Siri will be able to provide more accurate and contextually relevant responses, making it a more reliable and efficient virtual assistant.
The enhanced Siri is rumored to rival the capabilities of other AI assistants, such as Google’s Astra and Microsoft’s AI-powered offerings. With the ability to engage in more natural conversations and perform complex tasks, Siri is set to become a central part of the Apple ecosystem, seamlessly connecting users to their favorite apps and services. As Apple continues to prioritize user privacy, it will be interesting to see how the company balances the integration of powerful AI features with its commitment to data security.
iOS 18 and iPadOS 18 Updates
At the heart of Apple’s WWDC event lies the unveiling of iOS 18, the next-generation operating system for iPhones. This year’s update is expected to bring significant new capabilities and designs centered around AI integration. iOS 18 will likely incorporate AI-powered features that enhance photos, music, texting, and even emoji creation. Imagine a smarter Photos app that can automatically organize and categorize your images, or a Music app that crafts personalized playlists based on your listening habits.
Similarly, iPadOS 18 is set to receive many of the same AI-driven enhancements as its iPhone counterpart. Apple’s tablets will benefit from improved multitasking capabilities, intelligent app suggestions, and seamless integration with other Apple devices. As with any major update, privacy concerns may arise, but Apple is known for its strong stance on data protection. The company is expected to address these concerns by leveraging on-device processing and secure data handling techniques.
watchOS 11 and Other Operating System Updates
While the spotlight may be on iOS and iPadOS, Apple’s other operating systems won’t be left behind. watchOS 11, the software that powers the Apple Watch, is rumored to introduce new workout types and watch faces. Although it may not be a major overhaul, the update will likely bring refinements and optimizations to the wearable platform. Apple may also showcase how AI can enhance the fitness tracking and health monitoring capabilities of the Apple Watch.
Beyond watchOS, Apple is expected to provide updates on macOS, tvOS, and its other operating systems. These updates will likely focus on performance improvements, bug fixes, and tighter integration with the company’s AI initiatives. Developers will be keen to learn about any new APIs or tools that can help them create more intelligent and engaging apps across Apple’s ecosystem.
Hardware Announcements
While WWDC is primarily a software-focused event, Apple may surprise attendees with hardware announcements related to its AI push. There is speculation that the company may introduce new AI-focused chips in future devices, building upon the success of its A-series and M-series processors. By developing chips specifically designed for AI workloads, Apple can ensure optimal performance and efficiency while maintaining tight control over privacy and security.
In addition to AI chips, Apple may provide updates on its in-house chip development efforts. The company has been investing heavily in its own silicon, which has already proven successful in its Mac lineup. By leveraging its expertise in chip design, Apple can create a seamless integration between hardware and software, enabling more advanced AI capabilities across its devices.
Lastly, Apple may take a moment to discuss the Apple Vision Pro, its highly anticipated mixed-reality headset. While the focus will likely remain on the headset’s role in spatial computing and immersive experiences, there may be mention of how AI can enhance the functionality and user experience of this cutting-edge device.
Developer Tools and Platforms
WWDC is also a crucial event for developers. Apple is expected to introduce new tools and frameworks that will allow developers to integrate AI capabilities into their apps more easily. These tools may include APIs for natural language processing, computer vision, and machine learning, enabling developers to create more intelligent and contextually aware applications.
Moreover, Apple may announce enhancements to its existing developer platforms, such as Xcode and SwiftUI. These improvements will likely focus on streamlining the app development process and providing developers with more powerful tools to create engaging user experiences. With the growing importance of AI, Apple may also provide guidance and best practices for implementing AI features in a responsible and privacy-preserving manner.
Another area of interest for developers is Apple’s cloud computing capabilities. As AI workloads become more demanding, developers will be looking for efficient and scalable ways to process data and train models. Apple may provide updates on its cloud infrastructure and services, highlighting how developers can leverage these resources to build AI-powered apps.
The Future of Apple is AI
As WWDC 2024 approaches, excitement is building for what promises to be a landmark event in Apple’s history. With AI set to take center stage, the conference will provide a glimpse into the company’s vision for the future of computing. By integrating artificial intelligence across its ecosystem, Apple aims to redefine the way we interact with our devices and unlock new possibilities for innovation.
The announcements made at WWDC 2024 will have far-reaching implications for developers and consumers alike. Developers will gain access to powerful new tools and platforms that will enable them to create more intelligent and engaging apps, while consumers will benefit from a more personalized and efficient user experience across their Apple devices.
As Apple embarks on this AI-driven journey, it will be crucial for the company to balance innovation with its commitment to privacy and security. By leveraging its expertise in hardware and software integration, Apple can deliver cutting-edge AI capabilities while maintaining the trust and confidence of its users.
0 notes
fundgruber · 7 months ago
Text
Apple WWDC 2024, screenshot of the segment on the "semantic index". The semantic index is just knowledge graph PR on their Siri development, setting the stage for AI cloud computing of personal data being their new feature, as well as a connection to ChatGPT. Worth noting that all the big players (Apple, Google, Microsoft) are doing the same thing simultaneously, introducing tighter surveillance ("on-screen awareness") directly on the operating system, in order to extract personal data for AI assistants ("intelligence that understands you"). Apple is now pushing ahead in this field in exactly one direction: using privacy scandals and the outrage against the AI economy to promise a closed, walled system in their cloud. The fact that they need to build a bridge to the businesses they so theatrically distance themselves from (OpenAI/ChatGPT) shows how flimsy their own system is.
The WWDC presentation produced an image of this style of network culture. Its core selling point being a 'personal' index (how often do we hear the word "mum" in this segment...), and a personal cloud. This personal cloud then will process all your friends and family, as the presentation shows in a charming way, integrating algorithmic production in every step of personal communication (custom emojis, text rewriting, photo remixing). Everyone and everyone's communication and image is part of this "personal" computing.
"Contemporary production includes linguistic competence, knowledge, imagination, social interaction as its core sources of added value. So, the new modes of production and contemporary wealth are built not on labour power understood in classic Marxist terms, but on the appropriation of the entirety of human productive power. Terranova applies the concepts to today’s network cultures: “These are moments which turn qualitative, intensive differences into quantitative relations of exchange and equivalence; which enclose the open and dissipative potential of cultural production into differential hierarchies; which accumulate the rewards or work carried out by larger social assemblages… “ [Terranova 2006, 28] The logic of capital subsumes the potential of many platforms, and encloses it within the chain of valorization of creativity and subjectivity."
Goriunova, Olga (2007) Towards a new critique of network cultures: creativity, autonomy and late capitalism in the constitution of cultural forms on the Internet, Network Cultures
3 notes · View notes
globsynbusinessschool · 8 months ago
Text
ChatGPT vs. Gemini vs. Copilot
The rise of AI chatbots has been fast, with more options becoming available to users. These bots are becoming a regular part of the software and devices we use every day.
Just like choosing an email provider or music app, you can now pick your favorite AI chatbot too. We’ve tested three of the most popular ones to help you decide which might be right for you.
Aside from these, there are others like Perplexity and Claude, but our focus here is on the biggest names: OpenAI's ChatGPT, Google's Gemini, and Microsoft’s Copilot.
We’ve tested each bot and included three standard challenges for evaluation. We asked for "a fun game idea for a 5-year-old’s birthday party," "a new smartphone app concept," and "instructions for resetting macOS."
In this blog, we're comparing the free versions of these chatbots available at the time of writing.
Which One Is Best for Regular Users? ChatGPT or Gemini or Copilot
ChatGPT powered by OpenAI
ChatGPT, developed by OpenAI, has been a leader in generative AI. It's widely accessible through web browsers on computers and mobile apps for Android and iOS. The platform has made headlines recently with announcements from OpenAI, including updates on their latest models and features.
There's a significant difference between the free and $20-per-month Plus versions of ChatGPT. The Plus version offers extra features like image generation and document scanning, and subscribers can also create their own GPTs with custom prompts and data. OpenAI's CEO, Sam Altman, has mentioned that these enhancements are part of the company's strategy to democratize AI.
ChatGPT Plus provides access to the latest GPT-4 models, while the free GPT-3.5 tier handles basic AI interactions well. It's quick and versatile, but unlike Copilot it doesn't provide web links for fact-checking. OpenAI's search engine work, one of its key initiatives, aims to improve the platform's ability to surface information.
ChatGPT is ideal for those who want to follow cutting-edge AI development, though it's far more capable with a paid subscription than on the free tier. Apple's involvement with OpenAI has also fueled further interest in the platform.
In testing, ChatGPT performed reasonably well. It suggested a themed musical statues game for kids and a health-focused smartphone app named FitTrack.
Gemini powered by Google
Formerly known as Google Bard, Gemini is available as a web app and on Android and iOS. There are free and paid ($20 per month) plans.
Paying for Gemini gets you access to newer, smarter models. The interface resembles ChatGPT, and it integrates well with other Google services.
Gemini is suited for Google product users. It provided sensible responses to our challenges and suggested a neighborhood item-sharing app and a twist on the classic party game.
Copilot powered by Microsoft
Copilot is integrated into many Microsoft products like Bing and Windows. It’s available as a web app and mobile app.
Copilot uses Microsoft’s Bing search engine and often provides web links with citations. It's conversational and offers various text output settings.
The AI behind Copilot is OpenAI’s GPT-4, with different settings for text output: More Creative, More Balanced, and More Precise.
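Output settings like these typically map to a sampling temperature under the hood: a low temperature concentrates probability on the model's top-ranked words ("More Precise"), while a high temperature flattens the distribution ("More Creative"). The sketch below shows generic temperature-scaled softmax in Python; it illustrates the mechanism, not Microsoft's actual implementation.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw model scores into probabilities, scaled by temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                       # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]                  # scores for three candidate next words

precise = softmax_with_temperature(logits, 0.2)   # "More Precise": sharp distribution
creative = softmax_with_temperature(logits, 2.0)  # "More Creative": flatter distribution

# A lower temperature puts more probability mass on the top-ranked word.
print(precise[0] > creative[0])  # True
```

In practice the chatbot then samples the next word from this distribution, so a flatter distribution means more varied (and riskier) word choices.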
Copilot suggested "What’s the Time, Mr. Wolf?" for the kids' game and a virtual interior design app for smartphones. Its macOS reset instructions were accurate and cited from Apple’s support site.
If you use Microsoft products heavily, Copilot is a natural choice. It excels at referencing web information and providing clear citations.
In conclusion, all three chatbots (ChatGPT, Gemini, and Copilot) can be used for free, allowing you to choose based on your preferences. Copilot offers the most AI features without payment, ChatGPT is highly competent with a subscription, and Gemini is ideal for Google fans.
Frequently Asked Questions (FAQs)
How Do Chatbots Understand Language Differently Than a Programming Language?
Chatbots and programming languages are different in how they understand language.
Programming languages like Python or Java are structured and strict. They need exact commands and follow clear rules to work. If you make a mistake, the program won't function correctly.
Chatbots, on the other hand, are designed to interpret human language. They use techniques like Natural Language Processing (NLP) to understand words, phrases, and even context. This allows them to grasp the meaning behind what people say, even if the words are not in a set pattern.
A chatbot can recognize synonyms (different words with similar meanings), understand the intent behind a sentence, and learn from the interactions it has with users. This flexibility is what sets chatbots apart from programming languages, which rely on strict instructions to perform tasks.
What Does the Generative AI Ecosystem Refer to?
The term "generative AI ecosystem" refers to a network of technologies, tools, and methodologies that use artificial intelligence (AI) to create or generate content autonomously. This ecosystem encompasses various AI models and algorithms designed to produce new and unique outputs based on learned patterns and data.
In simpler terms, generative AI involves systems that can generate things like text, images, music, or even video without direct human input for each specific output. These systems learn from large datasets and then use that knowledge to create new content that resembles what they've been trained on.
This ecosystem includes a range of technologies such as language models (like GPT), image generators (like DALL-E), and music composers that are able to produce content that is novel and, in many cases, convincingly human-like. The ultimate goal of the generative AI ecosystem is to automate and enhance creative processes across various domains, potentially transforming how we create and interact with digital content.
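The learn-then-generate loop at the heart of this ecosystem can be demonstrated with a toy bigram (Markov-chain) text model: it counts which word follows which in a small training corpus, then samples new sequences from those counts. Modern generative models use neural networks at a vastly larger scale, but the sketch below illustrates the same basic idea of learning patterns from data and producing new content.

```python
import random
from collections import defaultdict

def train_bigrams(corpus):
    """Count word -> next-word transitions in the training text."""
    model = defaultdict(list)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def generate(model, start, length, rng):
    """Sample a new word sequence from the learned transitions."""
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:
            break                     # dead end: no observed continuation
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat the cat ran on the grass"
model = train_bigrams(corpus)
print(generate(model, "the", 6, random.Random(0)))
```

The generated sentence is new (it need not appear verbatim in the corpus) yet resembles the training data, which is exactly the "novel but familiar" property the generative AI ecosystem scales up.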
2 notes · View notes