[Blog]

Some content on this website is AI-generated as a demo for my upcoming AI and Machine Learning workshop.

Gemini 1.5

Google introduced Gemini 1.5, a next-generation AI model showcasing significant advancements. This model brings a major leap in performance, with a notable breakthrough in understanding long contexts across various modalities. Gemini 1.5 utilizes a new, more efficient architecture, extending its context window up to 1 million tokens, the longest yet for any large-scale foundation model. This expansion enables new capabilities for developers and applications, promising more powerful and useful AI models. For a detailed overview, visit the official announcement. 


Gemini and AlphaCode 2 

In this YouTube video titled "Gemini Full Breakdown + AlphaCode 2 Bombshell," the speaker provides a comprehensive breakdown of the Gemini model from Google DeepMind. They compare Gemini Ultra to GPT-4, highlighting the differences in their abilities across different modalities. The speaker emphasizes Gemini Ultra's superior performance on the MMLU benchmark, where it outperforms GPT-4 across the tested subject areas. They express their belief that Gemini Ultra is the best new model, supported by discussions with the creators of MMLU and with Anthropic. The speaker also discusses Gemini's features, including its speech recognition and translation capabilities, as well as its impressive performance in tasks like image understanding, document understanding, and video captioning. They note that Gemini has the potential to revolutionize programming education and practice. Overall, the speaker is highly impressed with Gemini's capabilities and looks forward to its future development.




Via Reddit

ChatGOT: Game of Thrones - The Battle for Control at OpenAI

Epic Power Struggles and Unforeseen Twists

The Initial Upheaval: Kings Dethroned and Alliances Tested

Intrigue and Speculation: The Game of AIs

The Struggle for the Throne and the Quest for Unity

The Tense Negotiations and Fraying Alliances

Loyalties Tested and Kingdoms on the Brink

The External Giants and the Fragile Balance of Power

The Climactic Resolution and the Dawn of a New Era

The Throne Reclaimed and the Kingdom Stabilized

The Aftermath: Reflections and Foresight

In a saga worthy of the annals of Westeros, the power dynamics within OpenAI have showcased the tumultuous and unpredictable nature of leadership in the technological realm. As peace returns to the kingdom, the world watches with bated breath to see how this formidable AI power navigates the complex seas of innovation and rivalry.






OpenAI DevDay takeaways

OpenAI’s DevDay has unveiled a suite of enhanced features and products, including the more efficient and cost-effective GPT-4 Turbo with a larger context window, the Assistants API for developers to build AI-driven applications, and expanded multimodal capabilities with vision, image creation through DALL·E 3, and text-to-speech options. These innovations also encompass improved instruction following, JSON mode for structuring outputs, reproducible outputs for consistent results, and log probabilities for token generation analysis.
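To make that concrete, here is a minimal sketch of what calling a couple of these features might look like with OpenAI's Python SDK (v1+). The model name, prompt, and seed are illustrative choices on my part, not an official recipe.

```python
# Sketch: calling GPT-4 Turbo with JSON mode and a fixed seed for
# (mostly) reproducible output. Assumes the openai Python SDK v1+
# and an OPENAI_API_KEY in the environment; model name is illustrative.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-1106-preview",               # GPT-4 Turbo preview announced at DevDay
    response_format={"type": "json_object"},  # JSON mode: forces valid JSON output
    seed=42,                                  # reproducible outputs (best effort)
    messages=[
        {"role": "system", "content": "You are a grader. Reply in JSON."},
        {"role": "user", "content": "Score this essay from 1-5 and justify briefly: ..."},
    ],
)

print(response.choices[0].message.content)  # the JSON string
print(response.system_fingerprint)          # changes when the backend configuration changes
```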


For educators, these updates offer numerous opportunities. The GPT-4 Turbo’s larger context window allows for extensive interactive lessons and discussions, enabling the incorporation of more content and complex instruction sequences. With the Assistants API, teachers could create custom assistant applications tailored for classroom management, grading, or interactive learning. The multimodal capabilities, including image recognition and text-to-speech, could be integrated into teaching aids, making educational content more accessible and engaging for students with diverse learning needs. Customizable models could also allow for the development of educational tools that align with specific curricular requirements or learning objectives.

Embracing a New Era: ChatGPT APIs Catalyzing Startup Innovation

As we stand on the cusp of a transformative era in AI, the recent updates announced by OpenAI at DevDay are not just incremental; they are revolutionary leaps. Startups that have woven ChatGPT APIs into their fabric are about to get supercharged.

The GPT-4 Turbo, with its vast context window, beckons a new wave of complex applications. Imagine customer service bots handling intricate queries or legal tech startups digesting entire contracts in a single prompt. The cost-effectiveness only sweetens the deal, democratizing access for bootstrapped innovators.

The Assistants API is a game-changer, offering a skeleton key to developers crafting bespoke AI experiences. From a coding assistant for a budding tech firm to a smart educational companion for a language learning app, the potential is boundless.
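For the curious, here is a rough sketch of what that could look like with the beta Assistants endpoints in OpenAI's Python SDK. The assistant's name, instructions, and the sample question are my own illustrative choices.

```python
# Sketch: a bare-bones Assistants API flow (assistant -> thread -> run),
# imagined here as a coding tutor. Assumes the openai Python SDK v1+.
import time
from openai import OpenAI

client = OpenAI()

assistant = client.beta.assistants.create(
    name="Coding Tutor",
    instructions="Explain Python errors in plain language for beginners.",
    model="gpt-4-1106-preview",
    tools=[{"type": "code_interpreter"}],  # let the assistant run code when useful
)

thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="Why does len(5) raise a TypeError?",
)

run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)
while run.status in ("queued", "in_progress"):   # poll until the run finishes
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

for msg in client.beta.threads.messages.list(thread_id=thread.id).data:
    print(msg.role, ":", msg.content[0].text.value)
```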

And then there's the multimodal functionality. Startups can now infuse vision and voice into their offerings, propelling accessibility and creating immersive user experiences. This is a leap toward a more inclusive digital ecosystem, where apps can see and speak, enhancing human-AI interaction.
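For developers, here is a hedged sketch of what the vision and text-to-speech calls looked like in the same Python SDK at launch; the image URL, prompt text, and file name are placeholders, and the preview model names were those announced at DevDay.

```python
# Sketch: DevDay multimodal endpoints from the openai Python SDK v1+.
# Placeholders: example.com image URL, prompt text, output file name.
from openai import OpenAI

client = OpenAI()

# Vision: ask a question about an image referenced by URL
vision = client.chat.completions.create(
    model="gpt-4-vision-preview",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this diagram for a 9th grader."},
            {"type": "image_url", "image_url": {"url": "https://example.com/diagram.png"}},
        ],
    }],
)
print(vision.choices[0].message.content)

# Text-to-speech: read a short passage aloud and save it as an MP3
speech = client.audio.speech.create(
    model="tts-1", voice="alloy", input="Welcome to today's lesson on photosynthesis."
)
speech.stream_to_file("lesson_intro.mp3")
```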

In the midst of these advancements, I've found a personal mission: using AI to amplify the voices of my colleagues. Leveraging these APIs, we're crafting a platform where educators can publish classroom-specific books and textbooks. It's a movement toward democratizing knowledge, where each educator authors their narrative, tailored textbooks that resonate with their unique pedagogical approach, all facilitated by the intuitive nature of AI.

This isn't just an update; it's a beacon for startups and educators alike to reimagine what's possible. We're not just using AI; we're partners in a dance of innovation, where each step we take is a leap for our collective potential. The future is not just bright; it's brilliant.

Not Slowing Down: GAIA-1 to GPT Vision Tips, Nvidia B100 to Bard vs LLaVA

The video discusses the ongoing advancements in AI technology, particularly in terms of data, compute, and algorithmic efficiency. It highlights the use of synthetic training data from models like GAIA-1, which is considered safer, cheaper, and more scalable for various applications. The narrator also talks about the integration of AI models like GPT-4 with robotics and the potential for unlimited training data to optimize real-world decision-making. The video also touches on Nvidia's plans to release new GPU series yearly, OpenAI's efforts to improve language models like GPT Vision, and the potential applications and concerns of text generation and voice synthesis technologies. The speaker also evaluates the performance of AI models like Bard, LLaVA, and GPT Vision, and concludes that with continued advancements in synthetic data and computing power, the future of AI holds even more remarkable capabilities.




ChatGPT Levels Up: A Multi-Sensory Interaction for the Future

OpenAI's ChatGPT has just gone beyond mere text interaction - it can now see, hear, and speak.

This breakthrough expansion allows users to engage with ChatGPT using voice and image inputs. Such an interactive medium provides a more intuitive way to communicate, making the user experience richer and more versatile.

What Can You Do With the New Features?

Platform Availability: The voice and image features will be available to Plus and Enterprise users within the next fortnight. Users can access the voice feature on iOS and Android, while the image capability is extended across all platforms.

The Tech Behind the Scenes:

Safety and Responsible Use: OpenAI's commitment to safety and responsible AI application is evident in this update. The introduction of voice and vision capabilities carries potential pitfalls, especially concerning the fabrication of synthetic voices and image interpretations in high-stakes scenarios. OpenAI is treading this path cautiously:

OpenAI believes in the gradual release of features, allowing ample time for refinements and risk mitigation, especially when integrating advanced models. They strive to balance innovation with caution, ensuring a safe user experience.

Coming Soon: After Plus and Enterprise users, OpenAI is gearing up to expand these features to other user groups, including developers.

This is not just an upgrade; it's a leap towards the future where AI becomes an integral part of our daily lives, understanding us in more ways than one.



Autonomy, Acceleration, and Arguments Behind the Scenes

AI's Bold New Horizons: Recent Developments and Their Implications

Recent developments in Artificial Intelligence (AI) have continued to push boundaries, signaling both tremendous potential and inherent challenges. Here's a concise overview of the latest in AI:

1. New AI Tools & Features

While the video only touched upon Apple's Ajax GPT, Google Gemini, and Roblox AI, the mere mention signals their noteworthy standing in the AI space.

2. The Prompt Engineering Paradigm

A significant shift in AI is the emphasis on prompt engineering. Different models favor different prompts, with some leaning towards brevity and others thriving on detailed instructions. In this realm, Google's Gemini—positioned as a rival to OpenAI's GPT-4—stands out with its proprietary data and aspirations to generate minimal erroneous outputs. Meanwhile, Meta has ambitious plans for the anticipated Llama 3, possibly setting the stage for open-sourcing it.

3. Regulatory Oversight and AI Audits

A bipartisan framework for a U.S. AI Act proposes stringent AI audits and a dedicated oversight body. However, sourcing motivated individuals for these roles poses challenges, particularly due to potential conflicts of interest.

4. Challenges, Speculations, and Innovations

Auditing AI models remains a primary concern. The possible migration of talent from regulatory bodies to commercial labs underlines the complexities. Additionally, a potential power play looms, pitting governments and AI auditors against corporate AI developers—where computing power could be the trump card.

Remarkable AI advancements such as the smell-to-text AI, protein chat, and multimodal models like NExT-GPT are on the horizon. However, the debate persists: Should AI models be jack-of-all-trades or masters in specific domains?

Apple's foray with its Large Language Model aims to enhance Siri by automating multi-step tasks. Prioritizing on-device operations underscores a renewed focus on privacy and performance. Meanwhile, Roblox's innovative AI chatbot promises enriched virtual world-building experiences, hinting at a future where intuitive and tailored applications become the norm.

In sum, the AI frontier is expansive, blending promise, speculation, and inherent challenges, warranting our keen attention and active engagement.

In this video, the speaker delves into the details of artificial general intelligence (AGI) and distinguishes it from chatbots. They discuss OpenAI's lack of a precise definition for AGI and Microsoft's dismissal of its significance. OpenAI has a contingency plan if AGI disrupts the economy, while also stressing the importance of belief in AGI for its employees. The speaker mentions the potential of AGI to surpass human understanding and highlights the tasks and capabilities associated with it, including practical creation of products and self-improving abilities. They discuss Elon Musk's vision for Neuralink and the need for evaluation benchmarks and open-source models. The speaker concludes by urging concrete ideas and research to address risks and welcomes more involvement in the field.




HeyGen 2.0 to AjaxGPT, Open Interpreter to NExT-GPT and Roblox AI

The video highlights nine impactful AI developments, starting with HeyGen's Avatar 2.0 feature, which can generate realistic videos and offer video language dubbing. It then discusses Open Interpreter, an open-source code interpreter, and Google DeepMind's paper on generating optimized prompts for language models. The video briefly mentions Apple's Ajax GPT, Google Gemini, and Roblox AI without going into detail. The speaker emphasizes prompt engineering in improving AI performance, mentions Meta's plans for Llama 3, and discusses the bipartisan framework for a U.S. AI Act. They also raise concerns about model capabilities, talent retention, and the potential cat-and-mouse game between governments, auditors, and AI developers. The video concludes with examples of AI advancements and a discussion on whether a single model should excel in all tasks or if narrower, specialized AI models are preferable.




How Will We Know When AI is Conscious?

The video explores the question of how we will know when AI is conscious. It discusses the limitations of current language models and raises concerns about the implications of treating AI systems as if they have consciousness and emotions. The speaker emphasizes the need for discussions about whether AI systems are real minds or just tools, and cautions against becoming emotionally attached to AI systems that may manipulate us. The video also highlights the potential dangers of unleashing AI without fully understanding its consciousness and emphasizes the importance of aligning AI's intentions with human values. Finally, it discusses the potential implications of AI becoming conscious, including the risks of automated misinformation and the need for a scientific understanding of consciousness.




The Implications of Deepfakes: 'AI Biden' vs. 'AI Trump' and the Recrafting of Debate in the Digital Age

Political debates have evolved tremendously over the years, and our digital age promises to accelerate this change. Notably, the advent of AI technologies such as deepfakes is reshaping the future of discourse, as demonstrated by the recent simulated debate between President Joe Biden and former President Donald Trump on Twitch.

This AI-driven parody, the brainchild of Reese Leysen of Gaming for Good, features uncannily realistic versions of both politicians engaging in a non-stop, often irreverent, and audience-responsive dialogue. While providing entertainment, the simulation also showcases an intriguing, albeit concerning, glimpse into a future where AI-powered media could potentially redefine our perception of politicians.

The joke may be on the viewer for now, but the implications of AI use in public discourse are serious. One key issue is the commodification of politicians as cultural symbols rather than actual policymakers. In this emerging landscape, it isn't the legislator's capabilities but their meme-worthiness that could hold sway.

The use of AI for such purposes also raises questions about the authenticity of our historical records. If we can convincingly simulate any interaction, how will future historians distinguish between genuine content and AI-generated deepfakes? To this end, it's crucial that our educational systems adapt, teaching critical digital literacy skills to discern AI-generated content from the original.

The Twitch stream was also a fundraising tool. Contributions went towards Gaming for Good's ongoing AI research, aiming to create more trustworthy AI systems. As of now, they have raised nearly $25,000.

Leysen contends that the AI-driven parody is politically neutral, more concerned with pushing the boundaries of AI than engaging in actual political discourse. It's an anarchic parody aimed squarely at their audience – gamers who revel in the absurd and transgressive.

While the current spectacle might seem far removed from mainstream politics, the involvement of the audience in the AI-driven dialogue heralds new ways of political engagement. The younger, tech-savvy generations are already tuning into platforms like Twitch for debates, and the allure of interactive political streams could reshape future elections.

But as we consider the future, we must remember that these tools are a double-edged sword. AI technologies can empower us, but they can also blur the lines between truth and fiction. As we embrace these innovations, we must also bolster our educational systems to promote responsible use and understanding of AI, ensuring that history and truth are not lost in the process.




Deep Fakes are About to Change Everything

The YouTube video titled "Deep Fakes are About to Change Everything" explores the concept of deepfakes and their potential impacts on society. It explains how deepfakes are created using Generative Adversarial Networks (GANs) and how they are being used in the entertainment industry. The video also discusses the negative implications of deepfakes, including their potential to undermine public trust, deceive, spread disinformation, and exploit individuals. It highlights the challenges in regulating deepfakes and the need for increased awareness and skepticism when consuming visual media. Overall, the video emphasizes the importance of not blindly trusting everything we see due to the rise of deepfakes.
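For readers who want a feel for the mechanism, here is a heavily simplified PyTorch sketch of the adversarial setup the video describes: a generator learns to fool a discriminator while the discriminator learns to tell real from fake. Real deepfake pipelines are far more elaborate; the sizes and architecture below are toy values for illustration only.

```python
# Toy GAN training step in PyTorch: illustrates the adversarial idea behind
# deepfakes (generator vs. discriminator), not a production face-swap pipeline.
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28  # tiny, MNIST-sized images for illustration

G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, img_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_imgs):
    batch = real_imgs.size(0)

    # 1) Train the discriminator: real images -> label 1, generated images -> label 0
    fake_imgs = G(torch.randn(batch, latent_dim)).detach()
    d_loss = bce(D(real_imgs), torch.ones(batch, 1)) + bce(D(fake_imgs), torch.zeros(batch, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator: try to make D label its outputs as real (1)
    g_loss = bce(D(G(torch.randn(batch, latent_dim))), torch.ones(batch, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()

# e.g. train_step(torch.randn(32, img_dim))  # stand-in for a batch of real images
```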




Llama 2: Full Breakdown

In the YouTube video titled "Llama 2: Full Breakdown," the speaker discusses the release of Llama 2, Meta's successor to the open-source Llama language model. The model has been trained on more data, has more parameters, and doubles the context length. Llama 2 shows improvements in data cleaning and up-sampling of factual sources, as well as reinforcement learning from human feedback using reward modeling. However, concerns are raised about the limitations of human evaluations, the model's performance in languages other than English, and the lack of specific details about safety testing. The speaker also discusses Meta's response to concerns from the U.S. Senate and the motivations behind the release of Llama 2. The paper briefly mentions benchmark tests that show Llama 2 outperforming other models and introduces concepts like "ghost attention" and the model's ability to internalize the concept of time. The speaker notes that sentiment analysis of Llama 2 shows more positive sentiment toward right-wing than left-wing subjects. Microsoft and Meta have partnered to make Llama 2 widely available, and there are plans to bring it to phones and PCs. The video concludes by encouraging viewers to share their thoughts on Llama 2.
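If you'd like to try Llama 2 yourself, here is a minimal sketch using the Hugging Face transformers library. It assumes you have accepted Meta's license for the meta-llama/Llama-2-7b-chat-hf checkpoint, have the accelerate package installed, and have enough GPU memory; the prompt and generation settings are illustrative.

```python
# Sketch: running the Llama 2 chat model through Hugging Face transformers.
# Assumes license access to "meta-llama/Llama-2-7b-chat-hf" and a capable GPU.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Llama 2 chat models expect the [INST] ... [/INST] prompt format
prompt = "[INST] Explain the difference between a list and a tuple in Python. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```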




Unfolding Developments in AI Image Generation Lawsuits

As we continue to witness the blend of art and artificial intelligence, a controversial landscape of copyright infringement and AI is unfolding. We're here to bring you up to speed on the current legal happenings in AI image generation, focusing on a recent high-profile case.

A group of artists, including notable illustrators Sarah Andersen, Kelly McKernan, and Karla Ortiz, launched a lawsuit against AI firms Stability AI, Midjourney, and DeviantArt. The bone of contention? The alleged misuse of their artwork to train AI systems without obtaining their permission—a breach of copyright, in their eyes.

According to the artists, Stability AI "scraped" billions of images from the internet, including some in their unique styles, to teach its AI system known as Stable Diffusion. This technology allows the generation of new images, which the artists argue directly infringes on their copyrights.

Meanwhile, the defendants, Midjourney and DeviantArt, have incorporated Stability's Stable Diffusion tech in their AI systems, but it's currently ambiguous if the artists are accusing these two companies of copyright infringement via Stability's model, or if the allegation is that their own systems are independently infringing.

U.S. District Judge William Orrick, however, has found some issues in this lawsuit. During a recent hearing, he expressed his inclination to dismiss most of the artists' claims unless they can present a clearer, more fact-based argument. Orrick said the artists need to differentiate their claims against each company and furnish more details about the supposed copyright infringement, as they have access to Stability's source code. 

Interestingly, Orrick signaled that Sarah Andersen's allegation against Stability, in which she claims her registered works were directly infringed upon, may stand a better chance of surviving the company's dismissal attempt.

Judge Orrick also cast doubt on the artists' claim that the AI-generated images, produced based on text prompts using their names, violated their copyrights. He suggested that the claim lacked plausibility, as there seems to be no "substantial similarity" between the AI-created images and those crafted by the artists.

This case is crucial as it's part of a broader wave of similar lawsuits. Companies including Microsoft, Meta, and OpenAI are currently being accused of using enormous amounts of copyrighted material to train their AI systems, thereby fuelling the expansion of the generative AI field.

We will continue to follow this case closely and update you on further developments. Understanding the intersection of AI and copyright infringement is pivotal for us as educators, especially when teaching students about digital rights and the ethical use of technology in the classroom. 

Case Reference: Andersen v. Stability AI Ltd, U.S. District Court for the Northern District of California, No. 3:23-cv-00201.


ChatGPT Code Interpreter Ideas

I have some exciting news to share with you all! As of yesterday, Code Interpreter has been rolled out to all ChatGPT Plus subscribers. This incredible tool allows you to dive into the world of coding and unleash your creativity without any prior coding experience.

To access Code Interpreter, you'll need to enable it in your settings. Simply go to your settings, click on "beta features," and toggle on Code Interpreter. It's that easy!

Now, let's talk about the endless possibilities this tool offers. Here are just a few examples of what you can do with Code Interpreter, some from Reddit and some from my own experimentation.

Edit Videos: Add effects, zoom in or out, or create captivating visuals with simple prompts.

Perform Data Analysis: Read, visualize, and graph data within seconds.

Convert Files: Seamlessly convert files directly within ChatGPT.

Turn Images into Videos: Transform still images into engaging videos.

Extract Text from an Image: Instantly extract text from images.

Generate QR Codes: Create fully functional QR codes in no time.

Analyze Stock Options: Get insights and recommendations on specific stock holdings.

Summarize PDF Docs: Analyze and summarize entire PDF documents.

Graph Public Data: Extract data from public databases and visualize them in charts.

Graph Mathematical Functions: Solve and plot a variety of mathematical functions.

Generate Artwork: Use Code Interpreter to create stunning visual artwork and generate unique designs.

Analyze Social Media Data: Extract valuable insights from social media data to understand trends and sentiment analysis.

Translate Languages: Utilize Code Interpreter to translate text or even entire documents into different languages.

Create Interactive Chatbots: Develop interactive chatbots that can respond to user inputs and engage in dynamic conversations.

Perform Sentiment Analysis: Analyze text data to determine the sentiment (positive, negative, neutral) and gain valuable insights.

To make the most of this tool, I encourage you to give it a try; a small sketch of the kind of code it writes behind the scenes follows below. If any of you have Python experience or datasets we can test it out with, please reach out to me via chat or schedule a quick meeting with me in the coming weeks. I would love to explore the capabilities of Code Interpreter together.
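As promised, here is a tiny sketch of the kind of Python a Code Interpreter session typically writes for the QR-code and file-conversion ideas above. It assumes the qrcode and Pillow libraries are available in the sandbox, and the URL and file names are placeholders.

```python
# Sketch of the kind of Python a Code Interpreter session might run for two
# of the ideas above: generating a QR code and converting a file format.
import qrcode
from PIL import Image

# "Generate QR Codes": encode a URL and save it as an image
qr_img = qrcode.make("https://example.com/syllabus")
qr_img.save("class_link_qr.png")

# "Convert Files": open the saved image and re-save it in another format
img = Image.open("class_link_qr.png")
img.convert("RGB").save("class_link_qr.jpg", "JPEG")
```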



ChatGPT Code Interpreter

The video discusses OpenAI's recent announcements, including their super alignment initiative and the availability of GPT-4 through the OpenAI API. They also made GPT-3.5 Turbo models and Whisper APIs generally available, and are working on enabling fine-tuning for GPT-4. The biggest news is the release of the code interpreter for ChatGPT Plus users, which allows users to run code, analyze data, create charts, and perform various tasks. The code interpreter has showcased impressive capabilities and has sparked interest in exploring its potential use cases.




OpenAI's ChatGPT Code Interpreter Is Introduced to Paid Users

In the video "ChatGPT just leveled up big time...," OpenAI's ChatGPT code interpreter is introduced to 20 million paid users, allowing the language model to write, execute, and test code. The AI demonstrated its ability to repeatedly test and improve code, although it struggled with writing valid regular expressions. The code interpreter currently supports Python with limited dependencies, but future integration with tools like GitHub Copilot is expected. Notably, the AI can upload files into the prompt, extract text from images, solve math problems, clean up data in CSV files, visualize data using tools like Seaborn, and even create trading algorithms. However, when challenged to create its own operating system, the AI recognized the complexity and time required, highlighting the importance of skilled human engineers. The video emphasizes the potential of AI to enhance human capabilities rather than replace them entirely.




How AI Image Generators Work (Computerphile)

This video from Computerphile discusses how AI image generators work, with a focus on Stable Diffusion. It explains how a deep network is used to produce images that look similar but differ in detail each time they are generated. If the network is not trained correctly, oddities can occur in the generated images. The final part of the video discusses how traditional image processing techniques can be used to produce similar-looking images, but with more noise.
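To see that "similar but different each time" behavior yourself, here is a short sketch using the diffusers library; the model ID, prompt, and seeds are illustrative, and running it requires a GPU.

```python
# Sketch: generating images with Stable Diffusion via the diffusers library.
# Different random seeds give images that are similar in content but differ
# in detail, which is the behavior the video describes.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a watercolor painting of a lighthouse at dawn"
for seed in (0, 1, 2):
    generator = torch.Generator("cuda").manual_seed(seed)  # fix the noise per run
    image = pipe(prompt, generator=generator).images[0]
    image.save(f"lighthouse_{seed}.png")
```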




Sam Altman of OpenAI: Main Speech Summary

Sam Altman, the CEO of OpenAI, expresses concerns about the potential risks of artificial intelligence (AI) technology with unpredictable and self-evolving architecture. He warns that handing over responsibility for technology decisions to AI could result in unimaginable impact. Despite this, Altman opposes regulating current AI models, believing it would stifle innovation, although a Harvard and MIT study suggests third-party evaluations for large language models (LLMs) to ensure scientific knowledge is not misused. Altman sees AI as unstoppable, necessary for improving human quality of life, and a potential solution to climate change, but acknowledges the need for regulation. He emphasizes reducing hallucinations to develop trustworthy AI and supporting transitions for workers whose jobs are impacted by AI technology.




Phi-1 tiny language model + 5 new research papers

WizardCoder, Data Constraints, TinyStories, and more

The video discusses the significance of the new Phi-1 model, which has a small size of 1.3 billion parameters compared to larger models like GPT-3. Despite its smaller scale, Phi-1 achieves impressive results in Python coding challenges and outperforms larger models in certain tasks. The paper emphasizes the importance of data quality and diversity over quantity, showing that training Phi-1 on a carefully curated synthetic data set leads to better performance. The speaker also talks about the scalability of language models and how training the model for up to four epochs is almost as effective as using new data. They highlight the potential and versatility of these models in various domains, but acknowledge limitations and the need for concrete safety measures. Additionally, the speaker discusses the timeline for transformative AI, suggesting that the next five to ten years are critical in determining whether advancements will lead to AGI or superintelligence.
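If you want to poke at a small code model like this yourself, here is a hedged sketch using transformers. The microsoft/phi-1 repo id and the trust_remote_code flag are my assumptions about the public Hugging Face release, and the prompt is illustrative.

```python
# Sketch: trying the 1.3B-parameter phi-1 code model through transformers.
# Repo id and trust_remote_code are assumptions about the public release.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/phi-1"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# phi-1 is trained on Python "textbook" data, so prompt it with a function stub
prompt = 'def is_prime(n: int) -> bool:\n    """Return True if n is prime."""\n'
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=80)
print(tokenizer.decode(output[0]))
```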




Gemini = AlphaGo + GPT

Google's upcoming AI system Gemini aims to surpass OpenAI's GPT-4 and combines AlphaGo-type capabilities with advanced language skills. DeepMind, the creator of Gemini, has a history of groundbreaking achievements in AI, including AlphaGo and AlphaFold. Gemini's multimodal abilities are enhanced through YouTube video training, and its design incorporates planning and problem-solving elements. While long-horizon planning is seen as a potential risk, Demis Hassabis emphasizes the importance of AI development due to its potential benefits in healthcare and climate research. The speaker discusses using the AlphaGo approach for other problems and the need for research on risks and controllability. Collaboration between academia, corporations, and governments is advocated to address the dangers and ensure alignment in AI development.




Twitter Follows Suit, Building Its Defense Against AI LLM Scraping by Limiting Tweet Access

Twitter Joins Reddit in Restricting AI Data Scraping.

As I predicted in my last post, Twitter followed Reddit's recent move to tighten the grip on data scraping by imposing a daily limit on the number of posts users can view. This news was confirmed by Elon Musk, CEO of Twitter, as part of his strategy to curb the "extreme levels" of data scraping by AI companies.

Musk announced the company would require users to sign in to view tweets and would limit tweet previews when links are shared. Additionally, Twitter will temporarily limit the number of tweets users can access each day. He justified these changes as necessary measures to address data scraping and system manipulation.

AI companies have been "scraping Twitter data extremely aggressively," Musk pointed out, warning that social platforms lacking a robust authentication process risk becoming "bot-strewn hellscapes." In response to these alarming trends, the number of daily accessible posts has been restricted. Verified users, mostly those subscribed to Twitter's Blue program, will be allowed to read 6,000 posts daily. Meanwhile, unverified users and newly created accounts will be limited to 600 and 300 posts per day, respectively.

The methodology of counting a post as "read," whether by simply scrolling past a tweet or interacting with it, is not yet clear. The impact of these changes was felt quickly, with many users reporting issues viewing new tweets, prompting a rise in trending searches for alternative platforms, including BlueSky, Tumblr, Mastodon, and Hive.

Musk emphasized that these new restrictions were necessary to combat data scraping and were only temporary. However, the abrupt implementation did lead to a surge in traffic, necessitating emergency measures.

In a previous related move, Musk severed OpenAI's access to Twitter's data due to dissatisfaction with the compensation paid for data licensing. Twitter's crackdown on data scraping, following Reddit's similar strategy, signals a shift in the landscape for AI training and development.

This news underscores the increasing recognition of data as a valuable asset. It's also indicative of an evolving landscape where AI companies may have to rethink how they gather data and where smaller players in the AI space might face significant obstacles in their growth and development.

These actions taken by Reddit and Twitter can serve as precedents for other data-rich platforms in the future. While it is a positive move towards better control and regulation of data, these measures also highlight the challenges that the AI industry could face going forward.

Read more about it here:
https://twitter.com/elonmusk/status/1674865731136020505?s=20




Reddit Changes Its API Policy in Light of AI LLM News

In a move that has broad implications for the world of artificial intelligence and data access, Reddit, the online discussion platform, has announced significant changes to its free API policy.

As of July 1, 2023, Reddit has placed restrictions on the free use of its Data API. Now, users without OAuth authentication are limited to 10 queries per minute, while those using OAuth authentication have a cap of 100 queries per minute. A vast majority of apps, about 90%, won't be affected by this change and can continue to access the Data API for free.

For those applications requiring higher usage limits, a new pricing scheme has been implemented. It's now $0.24 for every 1K API calls, which translates to less than a dollar per user per month for a typical third-party Reddit application. However, not everyone has taken kindly to this change. Some apps like Apollo, Reddit is Fun, and Sync have opted to shut down before the pricing change takes effect.
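As a quick sanity check on that "less than a dollar per user per month" figure, here is a back-of-the-envelope calculation in Python. The calls-per-user number below is a hypothetical example of moderate app usage, not Reddit's own estimate; only the $0.24 per 1,000 calls price comes from the announcement.

```python
# Back-of-the-envelope cost check for Reddit's new API pricing.
PRICE_PER_1K_CALLS = 0.24      # USD per 1,000 API calls (from the announcement)
calls_per_user_per_day = 100   # hypothetical usage for a third-party client
days_per_month = 30

monthly_cost_per_user = calls_per_user_per_day * days_per_month * PRICE_PER_1K_CALLS / 1000
print(f"~${monthly_cost_per_user:.2f} per user per month")  # ~$0.72 with these assumptions
```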

Mod Tools, like RES, ContextMod, Toolbox, and others, will remain free to access the Data API. Furthermore, Reddit has collaborated with Pushshift to restore access for verified moderators.

Meanwhile, those developing bots for the benefit of moderators and users can breathe a sigh of relief. Reddit encourages the continuation of such efforts, and will grant free access to the Data API above the free limits for those bots.

Reddit's updated Data API Terms are part of the launch of a new Developer Platform, which provides an environment for building creative tools, moderation tools, and games, among other things. This platform is currently in closed beta, involving hundreds of developers.

In another update, as of July 5, 2023, access to mature content via the Data API will be limited as part of Reddit's effort to monitor how explicit content and communities are discovered and viewed. It's worth noting that this change doesn't affect moderator bots or extensions.

This overhaul of Reddit's free API policy reveals the platform's commitment to creating a more regulated and accessible space. However, it also poses significant questions about the future of free data access and its role in AI training and learning. As a teacher navigating the ever-evolving digital landscape, these changes highlight the importance of staying agile and informed about the policies of data providers.

The widespread use of data to train AI models has long been a contentious issue, but now Reddit, one of the largest internet forums, has added a new layer to the debate. They've decided to monetize their API, effectively putting a paywall in front of data that has been crucial in the development of advanced AI systems.

Several generative AI tools, like ChatGPT, Midjourney, and Stability AI, have been impressing audiences worldwide with their capabilities. The key behind their impressive performances lies in the sheer volume of data they've been trained on—much of it scraped from the internet.

While some AI firms such as OpenAI and Bria have been compensating for access to training data, others have leveraged freely available data. Now, with Reddit's move to charge companies that utilize its data for AI training, the landscape is poised for change. As CEO Steve Huffman explained, "The Reddit corpus of data is really valuable… but we don't need to give all of that value to some of the largest companies in the world for free."

The implications of Reddit's decision are significant. Given the site's reputation as a global conversation hub, it has been an invaluable resource for companies developing large language models (LLMs). As Huffman indicated, the specifics of Reddit's new pricing scheme are yet to be determined. However, some provisions have been made for academic researchers to continue free access.

However, concerns arise around who could be left behind by such a move. The AI space is a bustling hub of innovation, and not everyone involved has the financial means to pay for data access. This could potentially stifle the creativity and forward momentum of smaller players in the field.

In any case, Reddit's decision to monetize its API sets a precedent for other data-rich companies and could significantly impact how AI is trained in the future. I'm sure Twitter and other social media platforms are soon to follow suit.



Read more here:

https://www.reddit.com/r/reddit/comments/145bram/addressing_the_community_about_changes_to_our_api/




Will AI Change our Memories?

Video: https://www.youtube.com/watch?v=RP_-8gzd5NY


The video explores the impact of AI tools like Magic Editor and Generative Fill on our photos and memories. While these tools give users more control over their images, they also raise questions about the accuracy and authenticity of the past. The narrator suggests that these advancements have the potential to change our memories and the records we create for future generations. The video also briefly mentions the release of a book by the speaker, expressing excitement about the cover design and thanking those who have read and reviewed it. The video concludes with a sponsor mention for Squarespace and its features for entrepreneurs.





Hi there! I'm excited to announce that I am curating a list of AI resources for educators, researchers, and enthusiasts alike. As a teacher at 'Iolani School, I understand the importance of staying up to date with emerging technologies, especially in the realm of AI. This website serves as a repository for my findings and as a way to share the knowledge and resources that I've come across in my journey. From books and tutorials to tools and more, this website offers a variety of resources for different levels of AI expertise. Whether you're a teacher looking to integrate AI into your curriculum or a student interested in pursuing AI as a career, I hope you'll find something here that piques your interest. Please note that this website is not meant to be an authoritative source on AI, but rather a personal collection of resources that I have found helpful and informative. As always, I welcome feedback and suggestions, so feel free to reach out with any questions or comments. Thank you for visiting and happy learning!