
Cultivating Knowledge: Using ChatGPT, Claude, Copilot, Gemini, and Perplexity for Gardening and Everything Else

    The focus of this article is how to find information using five major AI tools. This article is NOT about specialized informational AI tools for gardening. To become familiar with this topic, you can begin by sampling ExtensionBot, Penn State Extension’s Tilva, and MasterGardener.ai. I will not discuss using an AI to write computer code, create an app, create images, or compose music, although all these things are possible. I will also not describe specific tools in detail. Throughout this article, I will use AIs and tools interchangeably. There are guides and “helps” for each of the resources listed in this article’s final section, and you should refer to them for mechanics. I offer some exercises you should do to develop a sense of what’s possible. Look for the bolded text. Using a desktop or laptop computer, you can copy and paste it into the search boxes for the AI tools listed at the end of this article.

    Some background should take a little of the mystery – if not the awe – out of what happens when you ask a question or pursue a conversation with an AI such as Gemini or Copilot through your phone or computer. Remember the proverb: “Give a man a fish, and you feed him for a day. Teach a man to fish, and you feed him for a lifetime.”

    What is Generative AI?

    Each tile uses 15,000 watts of power and is packed with Tesla’s custom-built AI chips arranged in an organized grid. 8 December 2023. Photo: Steve Jurvetson. Wikimedia Commons. CC BY 2.0 
    Cooling towers and backup generators on the roof of a data center in Mesa, Arizona. 23 November 2025. For a larger and more revealing image, click on the Wikimedia Commons link that follows. Photo: Rsparks3. Wikimedia Commons. CC0 1.0 Universal Deed

    You can communicate with an AI tool by typing, speaking, or uploading a diversity of data. These tools go beyond scripted responses by creating unique answers, handling complex tasks, maintaining context, learning from interactions, and often integrating with other digital resources. If you only ask a single question and never follow up, you’re missing much of an assistant’s power. It thrives on context and, frankly, it loves to be challenged or corrected. Conversation brings out the “best” in the tool and is certainly a lot of fun. Think of an AI assistant as a superpowered prediction machine or — to anthropomorphize — a genie. It doesn’t know facts the way a human does. Instead, it acts like a professional gambler, using massive amounts of math to bet on which words are most likely to come next, both its own and yours, based on patterns absorbed from trillions of sentences. It is a better and faster pattern matcher than any human. Accuracy does not come from true understanding, but from the sheer scale of memorized data. However, describing its workings as statistics is like saying a symphony is “merely” sound waves—while technically true, it overlooks the layered complexity these models can capture. Trained on large collections of preserved human ideas and insights, AIs generate responses that are often plausible and helpful, though not always factually reliable.

    “AI Chatbots Remain Overconfident – Even When They’re Wrong” by Jason Bittel (Carnegie Mellon University, July 22, 2025) quotes researcher Trent Cash: “. . . remember that LLMs [discussed below] are not inherently correct and that it might be a good idea to ask them how confident they are when answering important questions. Of course . . . LLMs might not always be able to accurately judge confidence, but in the event that the chatbot does acknowledge low confidence, it’s a good sign that its answer cannot be trusted.” So ask: How confident are you of this answer? I asked Gemini for a verdict, and it responded: You can generally trust a chatbot to give you a directionally correct answer or a solid first draft. However, if the answer involves a specific date, a legal statute, medical advice, or a complex math calculation, there is roughly a 1 in 10 chance it contains a subtle but significant error. In the world of data, “directionally correct” means not perfectly accurate, but the overall trend or conclusion is right.

    Think Like an AI

    When you prompt an AI, it immediately establishes a context window to weight the “nature” of your request—mathematically anchoring the conversation to relevant patterns, like gardening instead of rocket science. It does this by breaking your text into chunks called tokens, then using an attention mechanism to “spotlight” the most important keywords. By navigating word embeddings—mathematical maps where “daisy” is near “flower” but far from “rocket”—the AI stays within this frame while filtering out irrelevant noise. Because it is probabilistic, the AI doesn’t know the full answer in advance; it simply calculates the most likely next token to ensure the response stays statistically on track with your topic.  The video Large Language Models explained briefly is a fast‑moving, under‑eight‑minute lecture that is both concise and comprehensive. Narration and detailed animated illustrations work in tandem from start to finish, creating a presentation that is clear, cohesive, and genuinely user‑friendly. If the topic intrigues you, it’s well worth a look.
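    If you would like to see what a word-embedding “map” looks like in miniature, the short Python sketch below compares three made-up vectors. The numbers are invented purely for illustration; real models learn vectors with hundreds or thousands of dimensions, and the three words here are simply examples.

        # Toy illustration of word embeddings. Each word becomes a list of
        # numbers, and words with related meanings point in similar directions.
        # These three-number vectors are invented for illustration; real models
        # learn far larger vectors from training data.
        import math

        embeddings = {
            "daisy":  [0.9, 0.8, 0.1],
            "flower": [0.8, 0.9, 0.2],
            "rocket": [0.1, 0.2, 0.9],
        }

        def cosine_similarity(a, b):
            """Close to 1.0 means similar direction (related meaning); lower means less related."""
            dot = sum(x * y for x, y in zip(a, b))
            return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

        print("daisy vs flower:", round(cosine_similarity(embeddings["daisy"], embeddings["flower"]), 2))
        print("daisy vs rocket:", round(cosine_similarity(embeddings["daisy"], embeddings["rocket"]), 2))

    Run it and “daisy” scores far closer to “flower” than to “rocket,” which is the numerical sense in which the model “knows” your question is about gardening rather than rocket science.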

    This article is not concerned with mathematics or coding, but it’s interesting to step aside for a moment to understand why AIs generally perform so well in these fields. This is because an AI writes code or solves math by treating them as languages with very strict patterns. Because it is trained on trillions of examples, it recognizes the logical ‘grammar’ of a formula or a script just as easily as it recognizes the syntax of a sentence. When you ask a mathematical question, the AI uses its predictive web to reconstruct a step‑by‑step solution based on patterns it has learned, mimicking the structured way a programmer or mathematician might solve the problem. By using a technique called Chain of Thought (discussed below), it writes out its reasoning one step at a time, using each previous calculation as an anchor for predicting the next. This structured reasoning significantly improves accuracy on mathematical problems by encouraging consistent, sequential logic. The output becomes more stable and reliable, though it is still probabilistic rather than fully predictable.

    Generators and electric power hardware on the roof of a data center in Mesa, Arizona. 23 November 2025. For a larger and more revealing image, click on the Wikimedia Commons link that follows. Photo: Rsparks3. Wikimedia Commons. CC0 1.0 Universal Deed

    To demonstrate the mathematical ability of AI tools I asked Gemini to suggest a problem focused on the application of a pesticide to a lawn. It responded with this, which you can run across the AIs noted at the end of this article: You need to treat a rectangular lawn that measures 80 ft by 125 ft. The pesticide label requires an application rate of 4 fluid ounces of concentrate per 1,000 square feet. Your hand sprayer is calibrated to deliver 2 gallons of total spray mixture per 1,000 square feet.
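    Whatever answer an AI gives, the arithmetic is simple enough to verify yourself. Here is a minimal sketch, assuming the implied questions are how much concentrate and how much finished spray mixture the job requires; you could just as easily do this on paper.

        # Check the lawn-spraying math by hand or with a few lines of code.
        lawn_area_sqft = 80 * 125              # 10,000 square feet
        concentrate_oz_per_1000 = 4            # fl oz of concentrate per 1,000 sq ft (label rate)
        spray_gal_per_1000 = 2                 # gallons of finished spray per 1,000 sq ft (sprayer calibration)

        blocks_of_1000 = lawn_area_sqft / 1000                     # 10 blocks of 1,000 sq ft
        concentrate_oz = blocks_of_1000 * concentrate_oz_per_1000  # 40 fl oz of concentrate
        spray_mix_gal = blocks_of_1000 * spray_gal_per_1000        # 20 gallons of finished spray

        print(f"Lawn area: {lawn_area_sqft:,.0f} sq ft")
        print(f"Concentrate needed: {concentrate_oz:.0f} fl oz")
        print(f"Total spray mixture: {spray_mix_gal:.0f} gallons")

    The lawn works out to 10,000 square feet, which calls for 40 fluid ounces of concentrate delivered in 20 gallons of finished spray mixture. If an AI’s answer strays far from those figures, that is your cue to ask a follow-up question.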

    The Education of an AI 

    To become an expert gambler, an AI endures a “training” marathon—like a student reading every book in a giant library at lightning speed to internalize statistical relationships rather than meaning. Large Language Models (LLMs) are built from the internet, and without this vast archive of digitized text they simply couldn’t exist. Developers gather enormous amounts of publicly available material from online sources, then clean and filter it using automated tools to remove irrelevant content. This becomes the dataset on which the LLM trains. The scale is far beyond anything a human can readily comprehend. I asked Copilot how long it might take one person to read all the training material of a typical LLM, and the answer was: If one person tried to read the amount of text a model like Copilot is trained on—at a normal pace, eight hours a day—it would take on the order of a hundred thousand years. The model learns by trial and error — it predicts the next word based on all the words that came before it, then checks its guess against the actual word in the original training text. Whenever it gets it wrong, it adjusts its internal math slightly to do better next time. Repeated billions of times, these tiny adjustments are what training is. This probability‑based learning happens only during training. LLMs are rarely retrained from scratch—it costs too much and takes too much time—though they can be updated more cheaply through a process called fine-tuning. Some AIs use live web search to retrieve new information; but this is reading something new, not memorizing it or adding it to the model’s internal knowledge. Without web search, a model simply has a knowledge cutoff—a point in time beyond which it knows nothing new. Of course, its knowledge is encoded as word placement probabilities, not stored as a library of facts.
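    To make the “predict, check, adjust” idea concrete, here is a deliberately tiny sketch that “trains” by counting which word follows which in a few sentences, then predicts the most likely next word. This is not how an LLM is actually built (real models adjust billions of numerical weights rather than keeping a tally), but the spirit of learning patterns from text and then predicting the next token is the same.

        # A toy next-word predictor. "Training" here just counts which word
        # follows which in a tiny corpus; prediction picks the most common follower.
        # Real LLMs adjust billions of weights instead, but the idea of learning
        # patterns from text and then predicting the next token is the same.
        from collections import Counter, defaultdict

        corpus = (
            "water the tomato plants every morning . "
            "water the pepper plants every evening . "
            "feed the tomato plants every week ."
        ).split()

        next_word_counts = defaultdict(Counter)
        for current, following in zip(corpus, corpus[1:]):
            next_word_counts[current][following] += 1   # the "learning" step

        def predict_next(word):
            """Return the most frequently seen follower of `word` (ties go to the first seen)."""
            followers = next_word_counts[word]
            return followers.most_common(1)[0][0] if followers else None

        print(predict_next("water"))   # "the"
        print(predict_next("the"))     # "tomato" (seen twice; "pepper" only once)
        print(predict_next("every"))   # "morning" (a tie broken by first occurrence)

    An LLM does the same kind of bookkeeping, but over trillions of words and with far subtler statistics than a simple tally.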

    Training an AI on specialized data like medical journals provides facts and encyclopedic knowledge, but it does not teach the model to think like a doctor — that requires deeper reasoning and contextual judgment that no single dataset can supply. General-purpose models like Gemini or Claude develop that broader capacity differently: they are trained on a massive cross-section of human knowledge, then refined through a process called Reinforcement Learning from Human Feedback (RLHF), in which tens of thousands of human reviewers rank the model’s responses, steering it away from merely plausible answers toward ones that are helpful, safe, and accurate. Do it enough times across enough domains, and the model learns to handle ambiguity, connect ideas across fields, and stay flexible when problems lack clean boundaries. These are capabilities that narrowly trained specialist models tend to sacrifice for precision. Neither approach alone is ideal for complex tasks: specialists offer depth; generalists offer adaptable reasoning. The increasingly common solution is to combine them — starting with a powerful foundation model and layering in domain-specific fine-tuning, targeted behind-the-scenes prompting, or retrieval of authoritative external sources. This preserves broad reasoning while adding focused expertise.

    Inference

    Inference is the “active” phase that makes AI tools so powerful. Every query triggers billions of calculations, consuming significantly more electricity than a standard search. To handle this, NVIDIA GPUs—specialized chips built to perform massive mathematical operations simultaneously—have long been the gold standard. However, the industry is now shifting toward specialized inference chips, such as LPUs (Language Processing Units), which are designed specifically for speed and energy efficiency rather than just “learning.”

    “ChatGPT 4 generated this image based on the following prompt: Generate an image you feel represents yourself; ChatGPT.” 20 April 2024. You must be logged in to try this yourself. To try this in Gemini, select Create Image under the search box and ask: Create an image that ChatGPT thinks represents it. Wikimedia Commons. Image: ChatGPT. [Links to Michael C. Wright] CC0 1.0
    Increasingly, these specialized chips enable a trained model to take new, unseen data and use learned patterns to make an immediate prediction. This is inference. It occurs when an AI identifies a plant disease from a photo, a smart system decides to water a garden based on soil sensors, or a greenhouse camera spots a pest outbreak. In these cases, the system isn’t following hard-coded rules; it is applying what it learned during its training phase to a fresh situation. New, more efficient computer chips are critical because they allow these models to run faster and cheaper.

    Hallucination: Confident, Authoritative – but Wrong

    Looking for a gardening joke on AI hallucinations, I asked an AI: Give me a gardening joke on artificial intelligence and hallucination. You can ask for some too. It responded: Why did the AI get kicked out of the botanical garden? Because it kept pointing at a common dandelion and insisting—with absolute confidence and three fake footnotes—that it was a rare “Golden-Petaled Solar-Vacuum Plant” discovered by a secret university extension program in 1924. When the Master Gardener asked for the source, the AI just smiled and said, “I don’t have the link, but I’m sure it’s a growing trend!” When I asked Gemini for clarification, it responded: In the context of the joke, “growing” doesn’t just refer to plants or popularity—it refers to the AI “growing” (fabricating) its own facts and citations out of thin air. It highlights the irony that while the AI claims the plant is part of a trend, the only thing actually expanding is the fabrication itself.

    Modern AI tools minimize hallucinations by shifting from “frozen” memory toward dynamic verification. To ground answers in reality, Retrieval-Augmented Generation (RAG) is triggered when a model needs to consult external, live databases for facts it wasn’t originally trained on. For complex reasoning, Chain of Thought (CoT) allows the model to verify logical links step-by-step, though this is often reserved for difficult problems to conserve energy. Internal integrity is shaped by Reinforcement Learning from Human Feedback (RLHF), previously discussed, where human rankings teach the model to prioritize honesty over guessing, and Constitutional AI (Claude), which provides a “conscience” by requiring the model to critique and revise its own drafts against a set of written principles. By using these mechanisms, AI moves beyond simple pattern matching. Of course, errors do not always come from hallucination. Errors in logic and reasoning, forgetfulness, bias, misinterpretation of intent, and even software failures can be at fault.
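    In spirit, Retrieval-Augmented Generation is simply “look it up first, then answer from what you found.” The sketch below illustrates that flow with a made-up three-document library and a stand-in ask_model function; the function names and documents are hypothetical illustrations, not the interface of any particular AI product.

        # Conceptual sketch of Retrieval-Augmented Generation (RAG):
        # 1) retrieve passages relevant to the question,
        # 2) paste them into the prompt,
        # 3) ask the model to answer using only those passages.
        # The document list and ask_model() below are illustrative stand-ins.

        documents = [
            "Southern blight is caused by the soilborne fungus Athelia rolfsii.",
            "Plum curculio is a weevil that damages stone and pome fruit.",
            "Kaolin clay forms a physical barrier film on fruit surfaces.",
        ]

        def retrieve(question, docs, top_k=1):
            """Crude keyword-overlap scoring; real systems use embeddings instead."""
            q_words = set(question.lower().replace("?", "").split())
            return sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)[:top_k]

        def ask_model(prompt):
            """Hypothetical stand-in for an actual LLM call; here it just echoes the prompt."""
            return "[the model would answer using only these sources]\n" + prompt

        question = "What causes southern blight?"
        sources = "\n".join(retrieve(question, documents))
        print(ask_model(f"Sources:\n{sources}\n\nQuestion: {question}"))

    The payoff is that the answer is anchored to text the system just looked up, rather than to whatever the model happens to “remember” from training, which is why citation-forward tools lean on this pattern.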

    A “hallucination” happens when the model prioritizes plausibility over truth, confidently filling gaps with what sounds right. Ask about a nonexistent person, and it may generate a biography that feels real. This same “creative guessing,” though, can be a strength. Unexpected answers can mirror human imagination, offering fresh angles and novel connections. Such behavior ignites criticism and caution, but remember that, as with many technologies, early criticism focuses on flaws while improvements follow—think airplanes or computers. Still, as previously suggested, when accuracy matters, verification is essential. Treat AI outputs with healthy skepticism and confirm important details using reliable sources, whether online or in print. On the other hand, neither traditional sources nor people are always reliable.

    Obviously, no one wants incorrect answers—but can it be useful when a model makes things up? AI hallucinations are a bit like creativity with the training wheels off. Loosen the rules, and the model wanders into imaginative territory, inventing “zucchini gremlins” or describing soil microbes as “tiny librarians reshelving nutrients.” You can encourage this mode by asking for metaphors, fictional scenarios, exaggerated explanations, or “what-ifs.” Ideas will pop like popcorn; you can prune, shape, or compost them into something useful. Often, the AI will outpace your own creativity.

    You can do some neat things with this creative mode. Try it. Think of similes and metaphors: What are seeds like? or Give me ten metaphors to explain a seed. Or While there are no known quotes from Thomas Jefferson about spaghetti squash, what might they be if they existed? And you can ask for any number of them. How about some archaeology: Imagine a newly discovered diary from the master gardener of the Hanging Gardens of Babylon—what does it reveal? Or give vegetables a voice and do some time travel: What did a potato, traveling from the Andes to central Mexico in 1400, tell a tomato about soil conditions in Tenochtitlan (capital of the Aztec Empire)? How about a story for young readers? Tell me a short story explaining seeds suitable for a third grader. These aren’t really hallucinations—but they’re all expressions of the same spark of possibility.

    The Query: A Conversation Started and Pursued

    Prompts and hallucinations are closely intertwined—a key point before exploring the art of prompting. Poor prompting heightens risks by creating ambiguity that leads models to invent plausible but inaccurate details, such as a vague “Tell me about plant diseases” yielding fabricated symptoms. Hallucination is a serious and perplexing problem the industry is very aware of. As we’ve seen previously, AIs are increasingly equipped with processes to defend against such errors. If you’d like to explore this further, “A Concise Review of Hallucination in LLMs and their Mitigation” (arXiv, 2 December 2025) is useful.

    The key to successfully using an AI is often a contextual prompt. While all questions are prompts, not all prompts are questions. A prompt is any input you provide to an LLM to trigger or shape its response. A bad prompt will produce less-than-optimal results. Any serious conversational string or thread can begin with an initial contextual prompt. Good prompts are especially important when working with multipurpose models. Often we are told to be concise, but the more context you can provide, the more helpful your results are likely to be. The goal of this contextual prompt is to define your identity, including your relevant expertise, and the overall intent of your inquiry. You can ask for responses in a specific format and specify the kinds of sources you want the information to come from. Write it once and save it, pasting it in at the beginning of any query. You may come up with something much better, but here’s a suggestion: As an Extension Master Gardener (EMG) in Central Virginia (Zone 7b), I’m looking for research-based, peer-reviewed horticultural data and advice tailored to our local climate, soils, and pests. Please provide scientifically accurate yet accessible information suitable for an intelligent lay audience, prioritizing cooperative extension and academic sources.

    To maximize AI performance for Master Gardener tasks, use a structured, formulaic approach. Start by assigning the AI an expert horticultural role and defining your preferred output format (e.g., bullet points or a timeline). Clearly provide your environmental context—including USDA zone, light levels, and soil conditions—alongside specific constraints like organic practices or low-maintenance goals. Finally, treat the process as a collaboration. Use back-and-forth feedback to refine the results; be explicit about what is helpful and what is not. AI thrives on both positive and negative reinforcement to sharpen its accuracy. I asked Gemini for an example, and it responded: Act as an Extension Specialist using Virginia Cooperative Extension and peer-reviewed sources to determine the most effective organic methods for managing Plum Curculio in a small home orchard. This research is for a Zone 7b Central Virginia site with six peach and apple trees located near a wooded edge in well-drained loam. Please provide a seasonal maintenance timeline suitable for home gardeners that prioritizes a strictly organic/IPM management style. Specifically, evaluate the use of physical barriers like kaolin clay (Surround WP) while avoiding broad-spectrum organic sprays where possible. Run this across the AIs I’ve listed at the end of this article and verify the response with respected sources (for instance the Virginia Cooperative Extension  2026 Pest Management Guide).

    You might try a different approach: ask your AI collaborator to pose questions that push your thinking. Your answers can then help the AI refine and tailor its response. After you receive that response, consider whether any important factors were overlooked—and ask about those specifically. For example, you might bring up deer predation. The AI’s follow-up can then incorporate that additional context. Think of it as a collaborative process rather than trying to outsmart the model. I asked ChatGPT how this would work, bouncing off the Plum Curculio problem with this prompt: Please ask me questions that I can answer to help you respond to a Plum Curculio problem in an orchard. A few of the questions awaiting my answers: What specific varieties of peaches and apples do you have (if you know)? About how old and tall are the trees (young, semi-dwarf, full-size)? How close are they to the wooded edge (e.g., 10 ft, 50 ft)? Are all trees in one block, or spread out? This approach explicitly invites collaboration. After 15 questions it concludes: Answer these, and I’ll turn it into a tight, Virginia-specific organic IPM plan that fits your exact situation.

    Below are sample prompts, created with some AI collaboration, to get you started. All the AIs will suggest follow-ups (prompts) of varying types after your prompt — although Claude requires a paid subscription for this feature. Remember to paste in the contextual prompt before inserting a prompt given below. (To complete the exercises, you can use the contextual prompt by either typing it directly into the AI’s search box or pasting it in. For the latter, save it as a Word file so you can open and copy it easily. When you’re ready, place the contextual prompt in the search box first, then paste the exercise prompt underneath it before submitting.) Also, insert additional information you think is useful. Of course, there is some repetition between the two prompts. You can always just ask a question and not bother with the contextual prompt, but the quality of the responses you receive will be uncertain. Also, if you enter the same prompt in the same platform more than once, you will not get precisely the same answer. After you receive a response, be sure to examine references when provided and request a summary if appropriate. Finally, Gemini recommends: Even with a perfect prompt, always perform a quick sanity check. If Gemini [substitute whatever tool you’re using] suggests a planting date or specific herbicide, ask it: Is this recommendation consistent with the current year’s [State Name] Cooperative Extension pest management guide? This often triggers the model to self-correct if it provided a generic, non-local answer.

    1. You are a Master Gardener who gives clear, research-based advice. We’re collaborators. We have a client in Crozet, Virginia (Zone 7b) with a small sunny backyard. She’s a beginner gardener and new to the area and wants to grow tomatoes, peppers, and herbs in raised beds. Ask me any questions you need answered, then give me a simple layout, planting dates, soil mix, watering schedule, and basic pest control tips using mostly organic methods.
    2. As a Master Gardener specializing in low-maintenance landscapes, let’s collaborate on a front yard design for a client in Zone 7b. Since the site receives full sun and the client wants an attractive, pollinator-friendly garden that is easy to manage, please provide specific recommendations for shrubs and perennials, along with spacing guidelines and a practical seasonal care calendar. Keep your advice simple and direct, and feel free to ask me any questions needed to refine the plan.
    3. As a Master Gardener, let’s collaborate on a high-stakes plan to eradicate a severe Southern Blight outbreak in a rose garden in Zone 7b. Because this infection threatens a nearby, frequently visited ornamental garden and risks significant financial loss (the ornamental garden relies on donations from visitors), we need a swift, professional resolution to prevent further spread. Please ask any necessary questions to help me stabilize the site and protect the investment in the ornamental garden.

    Conversations Can Also Be Started with Images or a File Upload

    The associated exercise will provide identification and more. Photo: Charles D’Aniello

    Most major tools—Perplexity, Gemini, ChatGPT, Claude, and Copilot—let you upload a flower photo (any image) for identification, though each handles it differently. Copy and paste the image on the left into each of the AIs and decide which response you find most useful. Ask: What is this? Do they agree?

    AI assistants like ChatGPT, Claude, Gemini, Perplexity, and Copilot—and now even Alexa+—allow users to upload files for summarization and analysis. Some tools sync with cloud storage; others limit files to a single session, with retention varying by tool and account type. When you upload a file, the AI doesn’t permanently memorize it. Instead, it holds the content in working memory and uses it to filter its broader knowledge. Think of it as giving the AI a temporary guidebook: it already has a vast internal encyclopedia from training, but relies on your file for the specifics. If you upload a photo of a diseased plant, for instance, it draws on its general knowledge of botany and biology while using your photo to assess the actual symptoms.

    Selected General AI Assistants

    Tackle complex problems using multiple tools; since each AI is trained on unique datasets and utilizes different reasoning models, using them in tandem serves as a cross-verification method. Should you sign up—or even pay? That’s ultimately your call, but the pattern across Perplexity, Gemini, Claude, Copilot, and ChatGPT is predictable: free plans provide solid but limited models, tighter usage caps, and fewer features. Paid plans open the door to each platform’s flagship Large Language Models, higher usage limits, and specialized tools that can handle long documents, images, etc.

    If you want to sample these tools without registration, you can try basic ChatGPT or use Copilot for text questions without signing up. Perplexity also offers limited unregistered usage as a guest. However, Claude requires a login even for its free tier, and in the past a Google account was necessary to use Gemini, though a standard Gmail account would suffice. This is no longer necessary, but the guest account without sign-in comes with limitations.

    Think of AI assistants like a specialized toolbox where each tool has a unique “brain” and personality. While some platforms like Perplexity or Copilot act as hubs that let you talk to multiple AI models in one place, this is often a watered-down experience. You get what you pay for: a larger context window is a primary feature of premium subscription tiers, allowing the AI to maintain coherence over long documents and complex discussions without losing track of key details.

    Understanding the underlying tech helps here: an AI’s context window functions as its short-term memory. It represents the specific “chunk” of text—including your prompt, recent history, and uploaded data—that the model can process simultaneously. This limit is measured in tokens, or pieces of words. If a conversation exceeds this limit, the model must drop or compress older information to make room for the new. You are unlikely to bump into this limitation.
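    To get a feel for what a “token” budget means in practice, the sketch below uses the common rule of thumb that one token is roughly four characters of English text. The numbers it prints are ballpark estimates only; every tool counts tokens with its own tokenizer, and the sample text is simply the contextual prompt suggested earlier.

        # Rough token estimates using the "about four characters per token"
        # rule of thumb. Real tokenizers split text differently, so treat
        # these numbers as ballpark figures, not exact counts.
        def estimate_tokens(text, chars_per_token=4):
            return max(1, round(len(text) / chars_per_token))

        contextual_prompt = (
            "As an Extension Master Gardener (EMG) in Central Virginia (Zone 7b), "
            "I'm looking for research-based, peer-reviewed horticultural data and "
            "advice tailored to our local climate, soils, and pests."
        )

        print("Contextual prompt:", estimate_tokens(contextual_prompt), "tokens (roughly)")
        print("A 300-page book (~600,000 characters):",
              estimate_tokens("x" * 600_000), "tokens (roughly)")

    By this rough measure, the contextual prompt costs only a few dozen tokens, while a whole book runs to six figures, which is why very long documents and marathon conversations are where context-window limits, and the larger windows of paid tiers, start to matter.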

    I do not go into detail about each platform, but I do provide links to instructional videos. Do the exercises given previously and you may be enticed to learn more.

    Click here to try Perplexity

    Perplexity is named for its goal: to lower your “perplexity” (uncertainty). It describes itself as a “research‑first” assistant, where every answer is backed by a verifiable citation. It synthesizes information from multiple websites to provide coherent, summarized answers rather than a list of links. In addition to searching the web, Perplexity uses a blend of its own internal LLMs and external models to deliver fast, citation‑grounded responses. While it does use LLMs, the system is built around a search‑first design in which the model interprets queries and synthesizes retrieved information rather than generating answers from internal knowledge alone. In its higher tiers, it can run a Model Council, comparing outputs from multiple LLMs  to improve accuracy. Perplexity can be used on the web, through browser extensions, or via its mobile apps for a more chatbot‑like conversational experience.

    To learn about Perplexity, visit Learn 80% of Perplexity in under 10 minutes! by Jeff Su,  Getting Started (official page), and Practical Tips for Using Perplexity (Perplexity Help Center).

    Click here to try ChatGPT.

    ChatGPT gets its name from its design: “Chat” for conversation and “GPT” for Generative Pre‑trained Transformer, a model trained on massive datasets to predict and generate human-like logic and responses. While it became iconic as the first interactive alternative to search, it has evolved beyond a conversational partner into a multimodal engine that can process sight, sound, and complex reasoning simultaneously. By integrating features like Voice Mode and Advanced Reasoning (in some of its models), it no longer merely summarizes information—it plans and acts, while using saved preferences and past interactions.

    To learn more about ChatGPT, visit 25 ChatGPT Tips for Beginners (Become an EXPERT) by Skillademia, ChatGPT Tutorial – A Crash Course on Chat GPT for Beginners by Adrian Twarog, and  ChatGPT Get Step-by-Step Help. You can begin by asking: What can you help me with?

    Click here to try Claude.

    Claude is named in tribute to Claude Shannon, the “father of information theory,” and is built by Anthropic for depth and reasoning. It excels at reading and synthesizing very large amounts of text into polished, human‑sounding writing, and is known for long‑context analysis, careful reasoning, and measured, considered answers. Guided by its “Constitutional AI” framework, Claude aspires to be principled, honest, and helpful. It is also strong at coding and debugging—especially in its Opus‑level models—and focuses more on high‑quality text, analysis, and professional‑grade writing than on image or multimedia generation.

    To learn more about Claude, visit FULL Claude Tutorial for Beginners in 2026! (Become a PRO!) by AI Foundations, Claude In 29 Minutes by Tina Huang, and Anthropic Academy. See especially AI Fluency: Framework and Foundations, “. . . developed by Anthropic, Prof. Rick Dakan (Ringling College of Art and Design) and Prof. Joseph Feller (University College Cork). Learn about this free course at https://www.anthropic.com/ai-fluency.” See also  CLAUDE AI Tutorial for Beginners – How to Use Claude AI.

    Click here to try Copilot.

    Copilot is Microsoft’s AI assistant, designed to work alongside you as a collaborative partner — your copilot. It emphasizes accuracy, real‑time web grounding, and transparent citations, and it presents information in clear, structured formats that turn complex topics into digestible steps, summaries, or comparisons. It can draw from extension sites, academic sources, and—when connected—your Microsoft 365 content. Its strengths lean toward reliability, reasoning, and workflow integration, making it especially effective for research, synthesis, document work, and detail-heavy problems. Copilot can be used independently of Microsoft 365 applications.

    To learn about Copilot, visit How to Use Microsoft Copilot – 2026 Beginner’s Guide, Microsoft Copilot Tutorial by Kevin Stratvert, and Microsoft 365 Copilot Beginner’s Guide (2025) by Mike Tholfsen.

    Click here to try Gemini.

    Gemini represents a major evolution in Google’s AI efforts, replacing Bard and marking a “moonshot” moment for the company. Its name reflects the unification of Google Brain and DeepMind into a single research powerhouse, while also nodding to NASA’s Project Gemini. It is trained to process text, images, audio, video, and code simultaneously, allowing it to reason fluidly across multiple formats or media types. Gemini is deeply integrated into the Google ecosystem and features prominently on Pixel devices. It leverages Google Search to ground its responses in real‑world data, prioritizing high‑authority sources. For example, a gardener can upload a plant photo for visual analysis, and Gemini can cross‑reference symptoms with botanical and agricultural databases to propose likely diagnoses. Its outputs can be exported directly into Google Docs, Sheets, Slides, or other Workspace apps.  When searching for current information, users can also access Gemini through AI‑powered search mode within the Google Search box. Within the Search box, Gemini behaves like an AI‑powered search engine focused on live web results, while the standalone version offers deeper reasoning, multimodal analysis, and Workspace integration. Traditional Google Search points you to sources, but Gemini synthesizes those sources into an answer.

    To learn more about Gemini, visit The Only Google Gemini Tutorial You Need! Use Gemini Like a Pro in 2025 (Step-by-Step) by Nuno Tavares, Google Gemini PRO: Tutorial for Beginners by Kevin Stratvert, and Google Workspace Learning Center.

    A word on Alexa+

    For many, asking Alexa questions is a way of life, so it needs to be mentioned. Apologies for the blatant commercial nature of this — but it goes with the territory. Open the Amazon homepage on your browser and enter Alexa+ in the search box. The name Alexa was chosen as a dual homage: it references the Library of Alexandria to symbolize a vast repository of knowledge, while the “X” sound was selected because it is a high-frequency sound that smart speakers can easily recognize. While the classic Alexa was designed as a “task-oriented” assistant for smart homes, the new Alexa+ has significantly enhanced its research abilities. If you have access, give it a try with some of the exercises. You can enter a prompt by speaking or typing into the Alexa app on either your phone or computer. For instance, with newer Android phones you can type to Alexa by tapping the search box at the top of the app. This opens the keyboard even though a microphone icon is shown. With either the original or enhanced Alexa you can have the answer sent to your email. Just say: “Send this to my email.” Overall, you will find it very useful in a diversity of roles. Amazon uses Claude integration to handle more complex conversations where its own models might struggle, and it can route to experts in such areas as news, math, or science. Think of it as a useful starting point — a way to surface ideas or orient your thinking. To learn more about Alexa+, visit How To Get Started with Alexa+ | Full Setup PLUS Dad Joke Battle!

    Bring in the Specialists

    Now that you’ve gained familiarity with broad, multipurpose AI tools, return to my earlier sentence. This article is NOT about specialized informational AI tools for gardening. To become familiar with this topic, you can begin by sampling ExtensionBot, Penn State Extension’s Tilva, and MasterGardener.ai. Using the same exercises you’ve already practiced, try each tool and compare their responses with one another and with the multipurpose AIs. What distinguishes these specialist tools from multipurpose tools? Read the descriptions on each tool’s website, then use the general AIs to help you analyze the specialized ones. Ask the multipurpose AIs: What distinguishes ExtensionBot, Tilva, and MasterGardener.ai from one another? Does the nature and quality of the multipurpose tools’ responses to this question vary? Can specialized and general tools complement one another?

    Featured Image: This clay tablet was a breakthrough in its day—an early attempt to tame information—but still thousands of years from the functional intelligence we now call AI. I found the image on Wikimedia Commons but identified it by pasting it into Gemini. I asked Gemini, “What is this?” and it responded: “This image shows an ancient cuneiform tablet, specifically a fragment of an Assyrian hymn catalogue. Dating back to roughly 800–700 BCE (though the filename suggests an even earlier origin for the source material), these tablets served as organized “bibliographies” or indexes for priests and scribes in ancient Mesopotamia.” Further on in its response: “Rather than containing the full text of the hymns, this tablet acts as a catalogue. It typically lists the “incipits” (the opening lines) of various songs, prayers, and rituals dedicated to specific deities.” From Wikimedia Commons: “Assyrian hymn catalogue; Assur (Iraq), middle of 2nd millennium BC; Special exhibition ‘Beyond the Horizon – Space and Knowledge in the Old World Cultures’ at the Pergamon Museum (22.06.–30.09.2012).” Wikimedia Commons. Photo: Anagoria. CC BY 3.0

    Note: AI tools assisted in the research and preparation of this article. Their outputs were verified by comparing them with other AIs or by corroborating them through other sources.

    Sources

    Master Gardeners especially should review Virginia Cooperative Extension Guidance for the Use of Generative AI. VCE-1213NP. It is concise and provides access to useful practical resources, including guidance for effective prompting.

    For background, Wikipedia and Encyclopedia Britannica are helpful. See “Artificial Intelligence” in Wikipedia and “Artificial Intelligence” in Britannica. You can also use these sources to research associated topics.

    Achieve More with GenAI. MasterClass is a subscription service. I found this brief and fast-moving course very helpful and suggestive of the diversity of applications for AI.

    For a critique of AI see The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want. By Emily M. Bender and Alex Hanna. Harper Business, 2025. Argues that LLMs are statistical synthetic text extruding machines, not intelligent. Especially useful for explaining how AI systems work.

    Effective prompting techniques (Deep Dive) | AI Fluency: Framework & Foundations Course. A lesson in the course AI Fluency: Framework and Foundations. “. . . developed by Anthropic, Prof. Rick Dakan (Ringling College of Art and Design) and Prof. Joseph Feller (University College Cork). Learn about the full free course at https://www.anthropic.com/ai-fluency.” Clear and excellent video presentations.

    Five Things You Need to Know About AI. (2024) BBC.

    Generative AI at UVA (University of Virginia). Includes links to very helpful video presentations.

    Harnessing Generative AI’s Potential in Extension while Ensuring Ethical Use. Mississippi State University Extension.

    Large Language Models explained briefly. Lesson by Grant Sanderson; text adaptation by Justin Sun. (Originally published November 19, 2024; updated March 22, 2026.) An excellent video that explains how LLMs work.

    Plugged In: For UMaine Extension Staff and Volunteers. University of Maine Cooperative Extension.