#Software Development Virtual Assistant
pankhconsultancy · 9 months
Our Software Development Virtual Assistant is experienced in project management, software testing, documentation, and related areas. The skilled software and mobile application developers at Pankh Consultancy Pvt. Ltd. build professional business apps for iOS, Windows, and Android, designing each application to the client's specifications. Contact now: +1 (646-795-6661)
vishalpanchal · 1 month
Transform Healthcare Delivery with Our Tailored Software Solutions.
hvirtuals · 2 months
Hvirtuals - How Virtual Assistants Are Revolutionizing Small Businesses
In today's fast-paced digital world, small businesses are constantly looking for ways to stay competitive and efficient. One solution that's making waves is the use of virtual assistants. Hvirtuals is at the forefront of this revolution, offering a wide range of virtual assistant services that are transforming how small businesses operate.
Virtual assistants are remote professionals who provide various administrative, technical, and creative services to businesses. They're not just glorified secretaries; they're skilled experts who can handle everything from digital marketing to software development. Let's explore how virtual assistants are changing the game for small businesses.
Boosting Productivity with Virtual Assistant Services
One of the biggest advantages of hiring a virtual assistant is the dramatic increase in productivity. Small business owners often wear many hats, juggling multiple tasks at once. By delegating time-consuming tasks to a virtual assistant, business owners can focus on core activities that drive growth.
Hvirtuals offers a range of virtual assistant services tailored to small businesses. These include managing emails, scheduling appointments, handling customer inquiries, and even managing social media accounts. By offloading these tasks, business owners can reclaim valuable time and energy.
Expanding Reach with Digital Marketing Services
In the digital age, having a strong online presence is crucial for small businesses. However, many lack the expertise or resources to implement effective digital marketing strategies. This is where virtual assistants specializing in digital marketing come in.
As a digital marketing agency, Hvirtuals provides comprehensive digital marketing services. Their virtual assistants can help with everything from content creation and social media management to email marketing and search engine optimization (SEO). By leveraging these services, small businesses can expand their reach and attract more customers without the need for an in-house marketing team.
Embracing Mobile Technology
With the increasing use of smartphones, having a mobile app can give small businesses a significant edge. However, mobile app development can be costly and time-consuming. This is where virtual assistants with expertise in app development come into play.
Hvirtuals, as a mobile app development company, offers services to create custom mobile applications for small businesses. Their remote full-time software developers can build apps that enhance customer engagement, streamline operations, or even open new revenue streams. By hiring software developers from India through Hvirtuals, small businesses can access top-notch talent at competitive rates.
Enhancing Online Presence with Web Development
In today's digital landscape, a well-designed website is essential for any business. Virtual assistants specializing in web development can help small businesses create professional, user-friendly websites that attract and retain customers.
Hvirtuals offers full stack web development services, covering everything from front-end design to back-end functionality. Their web development agency approach ensures that small businesses get a comprehensive solution tailored to their specific needs. Whether it's a simple informational site or a complex e-commerce platform, their virtual assistants can deliver high-quality web development services.
Streamlining Operations with Custom Software
Every business has unique processes and challenges. Off-the-shelf software solutions don't always fit the bill. This is where custom software development comes in handy. As a custom software development company, Hvirtuals can provide virtual assistants who specialize in creating bespoke software solutions.
These virtual software developers can create tools that automate repetitive tasks, manage inventory, track sales, or handle any other specific need a small business might have. By streamlining operations with custom software, small businesses can significantly improve efficiency and reduce costs.
Finding the Right Talent with Recruitment Support
Growing businesses often need to expand their team, but finding the right talent can be challenging and time-consuming. Virtual assistants can help with recruitment support services, handling everything from writing job descriptions to screening candidates.
Hvirtuals offers recruitment support services that can streamline the hiring process for small businesses. Their virtual assistants can manage job postings, review resumes, conduct initial interviews, and even help with onboarding new employees. This allows business owners to focus on making final decisions while ensuring a thorough hiring process.
Generating Leads and Boosting Sales
Lead generation is crucial for business growth, but it can be a complex and time-consuming process. Virtual assistants specializing in lead generation can help small businesses attract and nurture potential customers.
As a lead generation agency, Hvirtuals offers services to help businesses identify and engage with potential customers. Their virtual assistants use various lead generation tools and strategies to create a steady stream of qualified leads. By hiring a lead generation expert through Hvirtuals, small businesses can boost their sales pipeline without the need for an in-house sales team.
Improving Online Visibility with SEO Services
In the digital age, being visible in search engine results is crucial for attracting customers. However, search engine optimization (SEO) can be complex and time-consuming. This is where virtual assistants with SEO expertise come in handy.
Hvirtuals offers some of the best SEO services in the industry. Their virtual assistants can optimize website content, improve site structure, build quality backlinks, and implement other SEO strategies to improve a business's search engine rankings. For those searching for "SEO services near me," Hvirtuals provides remote services that deliver local results.
In conclusion, virtual assistants are indeed revolutionizing how small businesses operate. From handling day-to-day tasks to providing specialized services like software development and digital marketing, virtual assistants are helping small businesses compete in the digital age. Companies like Hvirtuals are leading this revolution, offering a wide range of digital solutions that empower small businesses to grow and thrive. By leveraging these services, small businesses can access expertise and capabilities that were once only available to larger corporations, leveling the playing field and opening up new opportunities for growth and success.
dieterziegler159 · 7 months
How AI Writing Steers Conversational Evolution
The Marvels of Large Language Models Exposed. Within artificial intelligence, it is Large Language Models (LLMs) that are upending Natural Language Processing (NLP), pushing it into uncharted territory. These emerging AI models have a remarkable capacity to understand and generate human-like text, marking a transition from traditional human-machine communication to something…
stelleninfotechpvt · 1 year
bruceblog-7766 · 1 year
Grow your business with a remote workforce.
REMOTE TEAM SERVICES IN INDIA
bliow · 2 months
AGARTHA Aİ - DEVASA+ (4)
In an era where technology and creativity intertwine, AI design is revolutionizing the way we conceptualize and create across various industries. From the runway to retail, 3D fashion design is pushing boundaries, enabling designers to craft intricate garments with unparalleled precision. Likewise, 3D product design is transforming everything from gadgets to furniture, allowing for rapid prototyping and innovation. As we explore these exciting advancements, platforms like Agartha.ai are leading the charge in harnessing artificial intelligence to streamline the design process and inspire new ideas. 
AI design
Artificial intelligence (AI) has revolutionized numerous industries, and the realm of design is no exception. By leveraging the power of machine learning and advanced algorithms, AI is transforming the way designers create, innovate, and deliver their products. AI-driven tools enable designers to harness vast amounts of data, allowing for more informed decision-making and streamlined workflows.
In the context of graphic design, AI can assist artists in generating ideas, creating unique visuals, and even automating repetitive tasks. For instance, programs powered by AI design can analyze trends and consumer preferences, producing designs that resonate with target audiences more effectively than traditional methods. This shift not only enhances creativity but also enables designers to focus on strategic thinking and ideation.
Moreover, AI is facilitating personalized design experiences. With the help of algorithms that analyze user behavior, products can be tailored to meet the specific needs and tastes of individuals. This level of customization fosters deeper connections between brands and consumers, ultimately driving customer satisfaction and loyalty in an increasingly competitive market.
3D fashion design
In recent years, 3D fashion design has revolutionized the way we create and visualize clothing. Using advanced software and tools, designers can create lifelike virtual garments that allow for innovative experimentation without the need for physical fabric. This trend has not only streamlined the design process but has also significantly reduced waste in the fashion industry.
Moreover, 3D fashion design enables designers to showcase their creations in a more interactive manner. By utilizing 3D modeling and rendering technologies, designers can present their collections in virtual environments, making it easier for clients and consumers to appreciate the nuances of each piece. This immersive experience also helps in gathering valuable feedback before producing the final product.
Furthermore, the integration of 3D fashion design with augmented reality (AR) and virtual reality (VR) technologies is bringing a fresh perspective to the industry. Consumers can virtually try on clothes from the comfort of their homes, thereby enhancing the shopping experience. As this field continues to evolve, it promises to bridge the gap between creativity and technology, paving the way for a sustainable and forward-thinking fashion future.
3D product design
3D product design has revolutionized the way we conceptualize and create products. With advanced software tools and technologies, designers can now create highly detailed and realistic prototypes that are not only visually appealing but also functional. This process allows for a quicker iteration of ideas, enabling designers to experiment with various styles and functionalities before arriving at the final design.
One of the significant advantages of 3D product design is the ability to visualize products in a virtual environment. Designers can see how their creations would look in real life, which is essential for understanding aesthetics and usability. Additionally, this technology enables manufacturers to identify potential issues in the design phase, reducing costs associated with prototype development and rework.
Moreover, the rise of 3D printing has further enhanced the significance of 3D product design. Designers can swiftly turn their digital models into tangible products, allowing for rapid prototyping and small-batch manufacturing. This agility not only speeds up the time-to-market for new products but also paves the way for more innovative designs that were previously impossible to execute.
Agartha.ai
Agartha.ai is a revolutionary platform that merges artificial intelligence with innovative design, creating a new avenue for designers and creators alike. With the rapid advancements in technology, Agartha.ai leverages AI to streamline various design processes, enabling users to produce unique and captivating designs with ease.
The platform provides tools that empower both emerging and established designers to explore the possibilities of AI design. By utilizing intelligent algorithms, Agartha.ai can assist in generating design options, ensuring that creativity is not hindered but enhanced. This results in a more efficient workflow and allows designers to focus on the conceptual aspects of their projects.
One of the standout features of Agartha.ai is its ability to adapt to different design disciplines, such as 3D fashion design and 3D product design. By supporting a broad spectrum of design fields, it positions itself as a versatile tool that meets the evolving needs of today's creative professionals. Whether it's crafting intricate fashion pieces or developing innovative product designs, Agartha.ai is at the forefront of the design revolution.
logicpng · 1 year
i Believe i am finally done making references
edit: pasting the image descriptions out of the alt text. since they're refs they're really long I am so sorry
[First image ID:
Digital artwork of Aldebaran Aster - a humanoid being in a suit, four arms, and a star shaped head - standing next to a large program window titled: "Aster: Info". He is holding up the mouse pointer in one of his hands and laughing, with a smug smile.
The text in the window reads:
"Current module:
[Selected radio button] Rigel [Selected radio button] Vega [Glitchy text box] Name: Aldebaran :)
Pronouns:
[Checked tick box] He/him [Unchecked tick box] She/her [Checked tick box] They/them
Other: [Long empty text field]
Module description:
[Following text in a large text box:]
The result of an undocumented bug, never caught during development. Aldebaran, as he dubs himself, combines Rigel's love for putting on a show and Vega's scripting skills, and it shows in the artistic ways he bends the OS to his will.
Being a fusion of two AIs unable to cooperate, though, has the side effect of giving him a bad temper and lack of patience. Combining that with the rush of being in control of everything from GUI to the very kernel? Doesn't seem like a smart idea. Especially not when the laptop has experimental technology baked into it.
But hey, you've backed up before this, right...?
[Text box ends]
[Large lavender OK button]"
First Image ID end]
[Second Image ID:
Digital artwork of The User - a human with a gray-green skin, dark green hair with a white t-shirt, track suit shorts and green socks - standing next to a large program window titled: "User information". They are standing with a laptop bearing the CaelOS logo on its back, and scratching their head, looking a little nervous.
The text in the window reads:
"Base info:
Name: Urs Norma; Pronunciation: OO-rs NOR-mah; Age: 25
Pronouns:
[Checked tick box] He/him [Checked tick box] She/her [Checked tick box] They/them
Other: Any/All
Personality profile:
[Following text in a large text box:]
Young adult figuring out... being an adult.
After hastily finding a used tech store, they found a replacement for their busted laptop. As it turns out, the machine hosts an OS that never saw the light of day, featuring experimental technology. At least, it's compatible with most software they need...
Despite the world being cruel and unforgiving, the spark of optimism remains bright. Just like the AI the laptop hides, all they can do is perpetually learn from their mistakes, and maybe even relay some of that knowledge to the little virtual assistants they find themself talking to every day.
[Text box ends]
[Large green OK button]"
Second Image ID]
[Third Image ID:
Reference image of Urs Norma - an androgynous person with gray-green skin and dark green hair. A large program window titled "The User: Outfits" shows them standing neutrally, facing the camera, in three different outfits:
Home: Plain white shirt, track suit shorts, green socks.
Work (casual): Green hoodie that says "gorf" with a cat face on it in white, gray-purple pants, and black dress shoes. Left and right hand feature black and white rings on their respective middle fingers.
Work (formal): Pink dress shirt, slightly unbuttoned at the top, same pants, same dress shoes, and same rings.
Behind them is an outline of Aster, that has text in it saying "height comparison aster :)". At the top of their rays, they're noticeably shorter than Urs.
A window titled "Head", slightly overlapping the large window, shows lineart of the user's head in profile and from the back.
Third Image ID end]
A few years ago, during one of California’s steadily worsening wildfire seasons, Nat Friedman’s family home burned down. A few months after that, Friedman was in Covid-19 lockdown in the Bay Area, both freaked out and bored. Like many a middle-aged dad, he turned for healing and guidance to ancient Rome. While some of us were watching Tiger King and playing with our kids’ Legos, he read books about the empire and helped his daughter make paper models of Roman villas. Instead of sourdough, he learned to bake Panis Quadratus, a Roman loaf pictured in some of the frescoes found in Pompeii. During sleepless pandemic nights, he spent hours trawling the internet for more Rome stuff. That’s how he arrived at the Herculaneum papyri, a fork in the road that led him toward further obsession. He recalls exclaiming: “How the hell has no one ever told me about this?”
The Herculaneum papyri are a collection of scrolls whose status among classicists approaches the mythical. The scrolls were buried inside an Italian countryside villa by the same volcanic eruption in 79 A.D. that froze Pompeii in time. To date, only about 800 have been recovered from the small portion of the villa that’s been excavated. But it’s thought that the villa, which historians believe belonged to Julius Caesar’s prosperous father-in-law, had a huge library that could contain thousands or even tens of thousands more. Such a haul would represent the largest collection of ancient texts ever discovered, and the conventional wisdom among scholars is that it would multiply our supply of ancient Greek and Roman poetry, plays and philosophy by manyfold. High on their wish lists are works by the likes of Aeschylus, Sappho and Sophocles, but some say it’s easy to imagine fresh revelations about the earliest years of Christianity.
“Some of these texts could completely rewrite the history of key periods of the ancient world,” says Robert Fowler, a classicist and the chair of the Herculaneum Society, a charity that tries to raise awareness of the scrolls and the villa site. “This is the society from which the modern Western world is descended.”
The reason we don’t know exactly what’s in the Herculaneum papyri is, y’know, volcano. The scrolls were preserved by the voluminous amount of superhot mud and debris that surrounded them, but the knock-on effects of Mount Vesuvius charred them beyond recognition. The ones that have been excavated look like leftover logs in a doused campfire. People have spent hundreds of years trying to unroll them—sometimes carefully, sometimes not. And the scrolls are brittle. Even the most meticulous attempts at unrolling have tended to end badly, with them crumbling into ashy pieces.
In recent years, efforts have been made to create high-resolution, 3D scans of the scrolls’ interiors, the idea being to unspool them virtually. This work, though, has often been more tantalizing than revelatory. Scholars have been able to glimpse only snippets of the scrolls’ innards and hints of ink on the papyrus. Some experts have sworn they could see letters in the scans, but consensus proved elusive, and scanning the entire cache is logistically difficult and prohibitively expensive for all but the deepest-pocketed patrons. Anything on the order of words or paragraphs has long remained a mystery.
But Friedman wasn’t your average Rome-loving dad. He was the chief executive officer of GitHub Inc., the massive software development platform that Microsoft Corp. acquired in 2018. Within GitHub, Friedman had been developing one of the first coding assistants powered by artificial intelligence, and he’d seen the rising power of AI firsthand. He had a hunch that AI algorithms might be able to find patterns in the scroll images that humans had missed.
After studying the problem for some time and ingratiating himself with the classics community, Friedman, who’s left GitHub to become an AI-focused investor, decided to start a contest. Last year he launched the Vesuvius Challenge, offering $1 million in prizes to people who could develop AI software capable of reading four passages from a single scroll. “Maybe there was obvious stuff no one had tried,” he recalls thinking. “My life has validated this notion again and again.”
As the months ticked by, it became clear that Friedman’s hunch was a good one. Contestants from around the world, many of them twentysomethings with computer science backgrounds, developed new techniques for taking the 3D scans and flattening them into more readable sheets. Some appeared to find letters, then words. They swapped messages about their work and progress on a Discord chat, as the often much older classicists sometimes looked on in hopeful awe and sometimes slagged off the amateur historians.
On Feb. 5, Friedman and his academic partner Brent Seales, a computer science professor and scroll expert, plan to reveal that a group of contestants has delivered transcriptions of many more than four passages from one of the scrolls. While it’s early to draw any sweeping conclusions from this bit of work, Friedman says he’s confident that the same techniques will deliver far more of the scrolls’ contents. “My goal,” he says, “is to unlock all of them.”
Before Mount Vesuvius erupted, the town of Herculaneum sat at the edge of the Gulf of Naples, the sort of getaway wealthy Romans used to relax and think. Unlike Pompeii, which took a direct hit from the Vesuvian lava flow, Herculaneum was buried gradually by waves of ash, pumice and gases. Although the process was anything but gentle, most inhabitants had time to escape, and much of the town was left intact under the hardening igneous rock. Farmers first rediscovered the town in the 18th century, when some well-diggers found marble statues in the ground. In 1750 one of them collided with the marble floor of the villa thought to belong to Caesar’s father-in-law, Senator Lucius Calpurnius Piso Caesoninus, known to historians today as Piso.
During this time, the first excavators who dug tunnels into the villa to map it were mostly after more obviously valuable artifacts, like the statues, paintings and recognizable household objects. Initially, people who ran across the scrolls, some of which were scattered across the colorful floor mosaics, thought they were just logs and threw them on a fire. Eventually, though, somebody noticed the logs were often found in what appeared to be libraries or reading rooms, and realized they were burnt papyrus. Anyone who tried to open one, however, found it crumbling in their hands.
Terrible things happened to the scrolls in the many decades that followed. The scientif-ish attempts to loosen the pages included pouring mercury on them (don’t do that) and wafting a combination of gases over them (ditto). Some of the scrolls have been sliced in half, scooped out and generally abused in ways that still make historians weep. The person who came the closest in this period was Antonio Piaggio, a priest. In the late 1700s he built a wooden rack that pulled silken threads attached to the edge of the scrolls and could be adjusted with a simple mechanism to unfurl the document ever so gently, at a rate of 1 inch per day. Improbably, it sort of worked; the contraption opened some scrolls, though it tended to damage them or outright tear them into pieces. In later centuries, teams organized by other European powers, including one assembled by Napoleon, pieced together torn bits of mostly illegible text here and there.
Today the villa remains mostly buried, unexcavated and off-limits even to the experts. Most of what’s been found there and proven legible has been attributed to Philodemus, an Epicurean philosopher and poet, leading historians to hope there’s a much bigger main library buried elsewhere on-site. A wealthy, educated man like Piso would have had the classics of the day along with more modern works of history, law and philosophy, the thinking goes. “I do believe there’s a much bigger library there,” says Richard Janko, a University of Michigan classical studies professor who’s spent painstaking hours assembling scroll fragments by hand, like a jigsaw puzzle. “I see no reason to think it should not still be there and preserved in the same way.” Even an ordinary citizen from that time could have collections of tens of thousands of scrolls, Janko says. Piso is known to have corresponded often with the Roman statesman Cicero, and the apostle Paul had passed through the region a couple of decades before Vesuvius erupted. There could be writings tied to his visit that comment on Jesus and Christianity. “We have about 800 scrolls from the villa today,” Janko says. “There could be thousands or tens of thousands more.”
In the modern era, the great pioneer of the scrolls is Brent Seales, a computer science professor at the University of Kentucky. For the past 20 years he’s used advanced medical imaging technology designed for CT scans and ultrasounds to analyze unreadable old texts. For most of that time he’s made the Herculaneum papyri his primary quest. “I had to,” he says. “No one else was working on it, and no one really thought it was even possible.”
Progress was slow. Seales built software that could theoretically take the scans of a coiled scroll and unroll it virtually, but it wasn’t prepared to handle a real Herculaneum scroll when he put it to the test in 2009. “The complexity of what we saw broke all of my software,” he says. “The layers inside the scroll were not uniform. They were all tangled and mashed together, and my software could not follow them reliably.”
By 2016 he and his students had managed to read the Ein Gedi scroll, a charred ancient Hebrew text, by programming their specialized software to detect changes in density between the burnt manuscript and the burnt ink layered onto it. The software made the letters light up against a darker background. Seales’ team had high hopes to apply this technique to the Herculaneum papyri, but those were written with a different, carbon-based ink that their imaging gear couldn’t illuminate in the same way.
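That density trick is simple enough to sketch. The snippet below is a toy illustration, not Seales' actual software: it fabricates a flattened scan in which metal-based ink reads slightly denser than the surrounding papyrus, then thresholds against the papyrus baseline so the letters light up — the effect that worked for Ein Gedi and, because carbon ink and papyrus have nearly identical densities, failed for Herculaneum.

```python
import numpy as np

# Toy sketch, not the Ein Gedi pipeline: simulate a flattened CT slice
# where metal-based ink is slightly denser than the papyrus substrate,
# then flag voxels well above the papyrus baseline.
rng = np.random.default_rng(0)

page = rng.normal(loc=0.40, scale=0.02, size=(64, 64))  # papyrus density
ink_mask = np.zeros((64, 64), dtype=bool)
ink_mask[20:44, 30:34] = True        # a fake vertical letter stroke
page[ink_mask] += 0.05               # denser ink voxels

# Threshold against the papyrus baseline: the letters "light up"
# against a darker background.
baseline = np.median(page)
detected = page > baseline + 1.5 * page.std()

recall = (detected & ink_mask).sum() / ink_mask.sum()
print(f"recovered {recall:.0%} of the ink stroke")
```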
Over the past few years, Seales has begun experimenting with AI. He and his team have scanned the scrolls with more powerful imaging machines, examined portions of the papyrus where ink was visible and trained algorithms on what those patterns looked like. The hope was that the AI would start picking up on details that the human eye missed and could apply what it learned to more obfuscated scroll chunks. This approach proved fruitful, though it remained a battle of inches. Seales’ technology uncovered bits and pieces of the scrolls, but they were mostly unreadable. He needed another breakthrough.
Friedman set up Google alerts for Seales and the papyri in 2020, while still early in his Rome obsession. After a year passed with no news, he started watching YouTube videos of Seales discussing the underlying challenges. Among other things, he needed money. By 2022, Friedman was convinced he could help. He invited Seales out to California for an event where Silicon Valley types get together and share big ideas. Seales gave a short presentation on the scrolls to the group, but no one bit. “I felt very, very guilty about this and embarrassed because he’d come out to California, and California had failed him,” Friedman says.
On a whim, Friedman proposed the idea of a contest to Seales. He said he’d put up some of his own money to fund it, and his investing partner Daniel Gross offered to match it.
Seales says he was mindful of the trade-offs. The Herculaneum papyri had turned into his life’s work, and he wanted to be the one to decode them. More than a few of his students had also poured time and energy into the project and planned to publish papers about their efforts. Now, suddenly, a couple of rich guys from Silicon Valley were barging into their territory and suggesting that internet randos could deliver the breakthroughs that had eluded the experts.
More than glory, though, Seales really just hoped the scrolls would be read, and he agreed to hear Friedman out and help design the AI contest. They kicked off the Vesuvius Challenge last year on the Ides of March. Friedman announced the contest on the platform we fondly remember as Twitter, and many of his tech friends agreed to pledge their money toward the effort while a cohort of budding papyrologists began to dig into the task at hand. After a couple of days, Friedman had amassed enough money to offer $1 million in prizes, along with some extra money to throw at some of the more time-intensive basics.
Friedman hired people online to gather the existing scroll imagery, catalog it and create software tools that made it easier to chop the scrolls into segments and to flatten the images out into something that was readable on a computer screen. After finding a handful of people who were particularly good at this, he made them full members of his scroll contest team, paying them $40 an hour. His hobby was turning into a lifestyle.
The initial splash of attention helped open new doors. Seales had lobbied Italian and British collectors for years to scan his first scrolls. Suddenly the Italians were now offering up two new scrolls for scanning to provide more AI training data. With Friedman’s backing, a team set to work building precision-fitting, 3D-printed cases to protect the new scrolls on their private jet flight from Italy to a particle accelerator in England. There they were scanned for three days straight at a cost of about $70,000.
Seeing the imaging process in action drives home both the magic and difficulty inherent in this quest. One of the scroll remnants placed in the scanner, for example, wasn’t much bigger than a fat finger. It was peppered by high-energy X-rays, much like a human going through a CT scan, except the resulting images were delivered in extremely high resolution. (For the real nerds: about 8 micrometers.) These images were virtually carved into a mass of tiny slices too numerous for a person to count. Along each slice, the scanner picked up infinitesimal changes in density and thickness. Software was then used to unroll and flatten out the slices, and the resulting images looked recognizably like sheets of papyrus, the writing on them hidden.
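At its core, virtual unrolling is a coordinate transform: trace the coiled layer through a slice and resample the densities along its arc length. Here is a deliberately idealized sketch — a clean Archimedean spiral standing in for the scrolls' tangled, non-uniform layers, which is precisely the simplification that broke Seales' early software:

```python
import numpy as np

# Toy illustration of "virtual unrolling": paint a rolled-up page
# (seen in cross-section as a spiral) into a synthetic slice, then
# flatten it by resampling along the spiral's arc length.
N = 200
cx = cy = N // 2
a, b = 10.0, 1.2                      # spiral: r = a + b * theta
turns = 6

# Paint the coiled page, with one short arc of brighter "ink".
theta = np.arange(0.0, turns * 2 * np.pi, 0.002)
r = a + b * theta
xs = np.round(cx + r * np.cos(theta)).astype(int)
ys = np.round(cy + r * np.sin(theta)).astype(int)
slice_img = np.zeros((N, N))
ink = (theta > 4 * np.pi) & (theta < 4 * np.pi + 0.5)
slice_img[ys, xs] = np.where(ink, 2.0, 1.0)

def unwrap(img, ds=1.0):
    """Resample the slice along the spiral at uniform arc length,
    producing the flattened 1D page."""
    samples, t = [], 0.0
    while t < turns * 2 * np.pi:
        rr = a + b * t
        x = int(round(cx + rr * np.cos(t)))
        y = int(round(cy + rr * np.sin(t)))
        samples.append(img[y, x])
        t += ds / rr                  # equal steps in arc length
    return np.array(samples)

flat = unwrap(slice_img)
print(f"flattened page: {flat.size} samples, brightest value {flat.max()}")
```

In the real pipeline the layer's path has to be segmented from noisy 3D data rather than given by a formula, which is where most of the contest's tooling effort went.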
The files generated by this process are so large and difficult to deal with on a regular computer that Friedman couldn’t throw a whole scroll at most would-be contest winners. To be eligible for the $700,000 grand prize, contestants would have until the end of 2023 to read just four passages of at least 140 characters of contiguous text. Along the way, smaller prizes ranging from $1,000 to $100,000 would be awarded for various milestones, such as the first to read letters in a scroll or to build software tools capable of smoothing the image processing. With a nod to his open-source roots, Friedman insisted these prizes could be won only if the contestants agreed to show the world how they did it.
Luke Farritor was hooked from the start. Farritor—a bouncy 22-year-old Nebraskan undergraduate who often exclaims, “Oh, my goodness!”—heard Friedman describe the contest on a podcast in March. “I think there’s a 50% chance that someone will encounter this opportunity, get the data and get nerd-sniped by it, and we’ll solve it this year,” Friedman said on the show. Farritor thought, “That could be me.”
The early months were a slog of splotchy images. Then Casey Handmer, an Australian mathematician, physicist and polymath, scored a point for humankind by beating the computers to the first major breakthrough. Handmer took a few stabs at writing scroll-reading code, but he soon concluded he might have better luck if he just stared at the images for a really long time. Eventually he began to notice what he and the other contestants have come to call “crackle,” a faint pattern of cracks and lines on the page that resembles what you might see in the mud of a dried-out lakebed. To Handmer’s eyes, the crackle seemed to have the shape of Greek letters and the blobs and strokes that accompany handwritten ink. He says he believes it to be dried-out ink that’s lifted up from the surface of the page.
The crackle discovery led Handmer to try identifying clips of letters in one scroll image. In the spirit of the contest, he posted his findings to the Vesuvius Challenge’s Discord channel in June. At the time, Farritor was a summer intern at SpaceX. He was in the break room sipping a Diet Coke when he saw the post, and his initial disbelief didn’t last long. Over the next month he began hunting for crackle in the other image files: one letter here, another couple there. Most of the letters were invisible to the human eye, but 1% or 2% had the crackle. Armed with those few letters, he trained a model to recognize hidden ink, revealing a few more letters. Then Farritor added those letters to the model’s training data and ran it again and again and again. The model starts with something only a human can see—the crackle pattern—then learns to see ink we can’t.
Unlike today’s large-language AI models, which gobble up data, Farritor’s model was able to get by with crumbs. For each 64-pixel-by-64-pixel square of the image, it was merely asking, is there ink here or not? And it helped that the output was known: Greek letters, squared along the right angles of the cross-hatched papyrus fibers.
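The train-predict-relabel loop described above is essentially self-training, sometimes called pseudo-labeling. Here is a minimal sketch of that loop, with a single "brightness" number per patch and a nearest-centroid model standing in for Farritor's 64-by-64-pixel network (the function names and the toy feature are illustrative assumptions, not the contest code):

```python
def train(labeled):
    """Fit a toy nearest-centroid 'ink detector' from (feature, label) pairs."""
    ink = [x for x, y in labeled if y == 1]
    bg = [x for x, y in labeled if y == 0]
    return sum(ink) / len(ink), sum(bg) / len(bg)

def predict(model, x):
    """Return (label, confidence): closer to the ink centroid means ink."""
    ink_c, bg_c = model
    score = abs(x - bg_c) - abs(x - ink_c)  # positive means nearer ink centroid
    return (1 if score > 0 else 0), abs(score)

def self_train(labeled, unlabeled, rounds=3, keep=2):
    """Repeatedly promote the most confident predictions into training data."""
    labeled, pool = list(labeled), list(unlabeled)
    for _ in range(rounds):
        if not pool:
            break
        model = train(labeled)
        scored = sorted(((predict(model, x), x) for x in pool),
                        key=lambda t: -t[0][1])  # most confident first
        for (label, _), x in scored[:keep]:
            labeled.append((x, label))           # adopt as a pseudo-label
        pool = [x for _, x in scored[keep:]]
    return train(labeled)
```

Seeded with a couple of hand-labeled patches, each pass labels a few more, the "ran it again and again" loop in the story. The known risk of this technique, which the contest's human review step guarded against, is that an early mistake can be amplified with every round.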
In early August, Farritor received an opportunity to put his software to the test. He’d returned to Nebraska to finish out the summer and found himself at a house party with friends when a new, crackle-rich image popped up in the contest’s Discord channel. As the people around him danced and drank, Farritor hopped on his phone, connected remotely to his dorm computer, threw the image into his machine-learning system, then put his phone away. “An hour later, I drive all my drunk friends home, and then I’m walking out of the parking garage, and I take my phone out not expecting to see anything,” he says. “But when I open it up, there’s three Greek letters on the screen.”
Around 2 a.m., Farritor texted his mom and then Friedman and the other contestants about what he’d found, fighting back tears of joy. “That was the moment where I was like, ‘Oh, my goodness, this is actually going to work. We’re going to read the scrolls.’”
Soon enough, Farritor found 10 letters and won $40,000 for one of the contest’s progress prizes. The classicists reviewed his work and said he’d found the Greek word for “purple.”
Farritor continued to train his machine-learning model on crackle data and to post his progress on Discord and Twitter. The discoveries he and Handmer made also set off a new wave of enthusiasm among contestants, and some began to employ similar techniques. In the latter part of 2023, Farritor formed an alliance with two other contestants, Youssef Nader and Julian Schilliger, in which they agreed to combine their technology and share any prize money.
In the end, the Vesuvius Challenge received 18 entries for its grand prize. Some submissions were ho-hum, but a handful showed that Friedman’s gamble had paid off. The scroll images that were once ambiguous blobs now had entire paragraphs of letters lighting up across them. The AI systems had brought the past to life. “It’s a situation that you practically never encounter as a classicist,” says Tobias Reinhardt, a professor of ancient philosophy and Latin literature at the University of Oxford. “You mostly look at texts that have been looked at by someone before. The idea that you are reading a text that was last unrolled on someone’s desk 1,900 years ago is unbelievable.”
A group of classicists reviewed all the entries and did, in fact, deem Farritor’s team the winners. They were able to stitch together more than a dozen columns of text, with entire paragraphs legible across their entry. The scholars are still translating, but they believe the text to be another work by Philodemus, one centered on the pleasures of music and food and their effects on the senses. “Peering at and beginning to transcribe the first reasonably legible scans of this brand-new ancient book was an extraordinarily emotional experience,” says Janko, one of the reviewers. While these passages aren’t particularly revelatory about ancient Rome, most classics scholars have their hopes for what might be next.
There’s a chance that the villa is tapped out—that there are no more libraries of thousands of scrolls waiting to be discovered—or that the rest have nothing mind-blowing to offer. Then again, there’s the chance they contain valuable lessons for the modern world.
That world, of course, includes Ercolano, the modern town of about 50,000 built on top of ancient Herculaneum. More than a few residents own property and buildings atop the villa site. “They would have to kick people out of Ercolano and destroy everything to uncover the ancient city,” says Federica Nicolardi, a papyrologist at the University of Naples Federico II.
Barring a mass relocation, Friedman is working to refine what he’s got. There’s plenty left to do; the first contest yielded about 5% of one scroll. A new set of contestants, he says, might be able to reach 85%. He also wants to fund the creation of more automated systems that can speed the processes of scanning and digital smoothing. He’s now one of the few living souls who’s roamed the villa tunnels, and he says he’s also contemplating buying scanners that can be placed right at the villa and used in parallel to scan tons of scrolls per day. “Even if there’s just one dialogue of Aristotle or a beautiful lost Homeric poem or a dispatch from a Roman general about this Jesus Christ guy who’s roaming around,” he says, “all you need is one of those for the whole thing to be more than worth it.”
pankhconsultancy · 9 months
Text
Pankh Consultancy Pvt. Ltd. offers a Software Development Virtual Assistant, the ideal support for bringing your ideas to life. Our virtual assistants provide technical, creative, and administrative support to software developers, helping your business build more software. Hire a virtual assistant with extensive software development experience at Pankh Consultancy Pvt. Ltd. Reach out to us today! +1 (646-895-6661)
sabakos · 7 months
Note
🔥 Artificial intelligence
Unlike pretty much everyone, either haters or fans, I think that AI is going to enter a dark age soon.
As we all know, there's a ton of money being spent on it and it's being pushed into every corner of the internet, everyone's promoting "virtual assistants" and "chatbots" and editing your search terms behind the scenes.
And absolutely none of it fucking works. All these bots are trash, they'll always be trash, and the fact that they've all been pushed to production environments is an indication of just how little anyone in software bothers to test anything anymore.
I know that absurd amounts of investor funding distorts what we'd usually expect from a "free" market but I think the end result of this is going to be a permanent loss of goodwill with the public, to the extent that companies advertising the use of AI in their products are going to see business driven to their competitors. Even if I'm wrong about the fundamental nature of human language and they manage to make generative AI that actually "works" some time in the next decade, it will flop because everyone will associate it with "broken tech pushed on us by idiot silicon valley techbros" for a whole generation.
I never would have predicted specifically this. Despite whatever he's saying now on Twitter, I don't think Big Yud would have ever predicted it either. We're too stupid as a society to build superintelligent AI. We deployed unintelligent AI and it's going to fail in a bunch of tiny, mediocre, death-by-a-thousand-cuts type ways that are just going to annoy everyone, and so no one is going to want to see it through to developing anything that can actually endanger us.
hahaalaine · 3 months
Text
i was in a meeting with product yesterday and they were pitching the idea of using GenAI as a virtual assistant for our software. everyone else on my team was getting so excited and wanted to implement something that could do simple yet tedious tasks for us. i was the only one in that room that brought up the fact that AI will create shit out of thin air and how many problems that would cause for us. not to mention how much more time it would take to develop a tool like that off the bat. i was just bewildered that everyone was salivating about the idea, especially the most senior member of our team who uses ChatGPT the most, because they don't think about consequences??? i got them to pare it down to an advanced search engine for our help doc archive since it's overwhelming and hard to learn how to search but i felt so alone in that room. Like people are so seduced by the IDEA of AI that they don't stop to think that the tech isn't fucking there yet!!!
tanadrin · 1 year
Text
The invention of the basic BCI was revolutionary, though it did not seem so at the time. Developing implantable electronics that could detect impulses from, and provide feedback to, the body's motor and sensory neurons was a natural outgrowth of assistive technologies in the 21st century. The Collapse slowed the development of this technology, but did not stall it completely; the first full BCI suite capable of routing around serious spinal cord damage, and even reducing the symptoms of some kinds of brain injury, was developed in the 2070s. By the middle of the 22nd century, this technology was widely available. By the end, it was commonplace.
But we must distinguish, as more careful technologists did even then, between simpler BCI--brain-computer interfaces--and the subtler MMI, the mind-machine interface. BCI technology, especially in the form of assistive devices, was a terrific accomplishment. But the human sensory and motor systems, at least as accessed by that technology, are comparatively straightforward. Despite the name, a 22nd century BCI barely intrudes into the brain at all, with most of its physical connections being in the spine or peripheral nervous system. It does communicate *with* the brain, and it does so much faster and more reliably than normal sensory input or neuronal output, but there nevertheless still existed in that period a kind of technological barrier between more central cognitive functions, like memory, language, and attention, and the peripheral functions that the BCI was capable of augmenting or replacing.
*That* breakthrough came in the first decades of the 23rd century, again primarily from the medical field: the subarachnoid lace or neural lace, which could be grown from a seed created from the patient's own stem cells, and which found its first use in helping stroke patients recover cognitive function and suppressing seizures. The lace is a delicate web of sensors and chemical-electrical signalling terminals that spreads out over, and carefully penetrates, certain parts of the brain; in its modern form, its function and design can be altered even after it is implanted. Most humans raised in an area with access to modern medical facilities have at least a diagnostic lace in place; and, in most contexts, these laces are regarded as little more than a medical tool.
But of course some of the scientists who developed the lace were interested in pushing the applications of the device further, and in this, they were inspired by the long history of attempts to develop immersive virtual reality that had bedevilled futurists since the 20th century. Since we have had computers capable of manipulating symbolic metaphors for space, we have dreamed of creating a virtual space we can shape to our hearts' content: worlds to escape to, in which we are freed from the tyranny of physical limitations that we labor under in this one. The earliest fiction on this subject imagined a kind of alternate dimension, which we could forsake our mundane existence for entirely, but outside of large multiplayer games that acted rather like amusement parks, the 21st century could only offer a hollow ghost of the Web, bogged down by a cumbersome 3D metaphor users could only crudely manipulate.
The BCI did little to improve the latter--for better or worse, the public Web as we created it in the 20th century is in its essential format (if not its scale) the public Web we have today, a vast library of linked documents we traverse for the most part in two dimensions. It feeds into and draws from the larger Internet, including more specialized software and communications systems that span the whole Solar System (and which, at its margins, interfaces with the Internet of other stars via slow tightbeam and packet ships), but the metaphor of physical space was always going to be insufficient for so complex and sprawling a medium.
What BCI really revolutionized was the massively multiplayer online game. By overriding sensory input and capturing motor output before it can reach the limbs, a BCI allows a player to totally inhabit a virtual world, limited only by the fidelity of the experience the software can offer. Some setups nowadays even forgo overriding the motor output, having the player instead stand in a haptic feedback enclosure where their body can be scanned in real time, with only audio and visual information being channeled through the BCI--this is a popular way to combine physical exercise and entertainment, especially in environments like space stations without a great deal of extra space.
Ultra-immersive games led directly, I argue, to the rise of the Sodalities, which were, if you recall, originally MMO guilds with persistent legal identities. They also influenced the development of the Moon, not just by inspiring the Sodalities, but by providing a channel, through virtual worlds, for socialization and competition that kept the Moon's political fragmentation from devolving into relentless zero-sum competition or war. And for most people, even for the most ardent players of these games, the BCI of the late 22nd century was sufficient. There would always be improvements in sensory fidelity to be made, and new innovations in the games themselves eagerly anticipated every few years, but it seemed, even for those who spent virtually all their waking hours in these spaces, that there was little more that could be accomplished.
But some dreamers are never satisfied; and, occasionally, such dreamers carry us forward and show us new possibilities. The Mogadishu Group began experimenting with pushing the boundaries of MMI and the ways in which MMI could augment and alter virtual spaces in the 2370s. Mare Moscoviensis Industries (the name is not a coincidence) allied with them in the 2380s to release a new kind of VR interface that was meant to revolutionize science and industry by allowing for more intuitive traversal of higher-dimensional spaces, to overcome some of the limits of three-dimensional VR. Their device, the Manifold, was a commercial disaster, with users generally reporting horrible and heretofore unimagined kinds of motion-sickness. MMI went bankrupt in 2387, and was bought by a group of former Mogadishu developers, who added to their number a handful of neuroscientists and transhumanists. They relocated to Plato City, and languished in obscurity for about twenty years.
The next anyone heard of the Plato Group (as they were then called), they had bought an old interplanetary freighter and headed for the Outer Solar System. They converted their freighter into a cramped-but-serviceable station around Jupiter, and despite occasionally submitting papers to various neuroscience journals and MMI working groups, little was heard from them. This prompted, in 2410, a reporter from the Lunar News Service to hire a private craft to visit the Jupiter outpost; she returned four years later to describe what she found, to general astonishment.
The Plato Group had taken their name more seriously, perhaps, than anyone expected: they had come to regard the mundane, real, three-dimensional world as a second-rate illusion, as shadows on cave walls. But rather than believing there already existed a true realm of forms which they might access by reason, they aspired to create one. MMI was to be the basis, allowing them to free themselves not only of the constraints of the real world (as generations of game-players had already done), but to free themselves of the constraints imposed on those worlds by the evolutionary legacy of the structures of their mind.
They decided early on, for instance, that the human visual cortex was of little use to them. It was constrained to apprehending three-dimensional space, and the reliance of the mind on sight as a primary sense made higher-dimensional spaces difficult or impossible to navigate. Thus, their interface used visual cues only for secondary information--as weak and nondirectional a sense as smell. They focused on using the neural lace to control the firing patterns of the parts of the brain concerned with spatial perception: the place cells, neurons which periodically fire to map spaces to fractal grids of familiar places, and the grid cells, which help construct a two-dimensional sense of location. Via external manipulation, they found they could quickly accommodate these systems to much more complex spaces--not just higher dimensions, but non-Euclidean geometries, and vast hierarchies of scale from the Planck length to many times the size of the observable universe.
The goal of the Plato Group was not simply to make a virtual space to inhabit, however transcendent; into that space they mapped as much information they could, from the Web, the publicly available internet, and any other database they could access, or library that would send them scans of its collection. They reveled in the possibilities of their invented environment, creating new kinds of incomprehensible spatial and sensory art. When asked what the purpose of all this was--were they evangelists for this new mode of being, were they a new kind of Sodality, were they secessionists protesting the limits of the rest of the Solar System's imagination?--they simply replied, "We are happy."
I do not think anyone, on the Moon or elsewhere, really knew what to make of that. Perhaps it is simply that the world they inhabit, however pleasant, is so incomprehensible to us that we cannot appreciate it. Perhaps we do not want to admit there are other modes of being as real and moving to those who inhabit them as our own. Perhaps we simply have a touch of chauvinism about the mundane. If you wish to try to understand it yourself, you may--unlike many other utopian endeavors, the Plato Group is still there. Their station--sometimes called the Academy by outsiders, though they simply call it "home"--has expanded considerably over the years. It hangs in the flux tube between Jupiter and Io, drawing its power from Jupiter's magnetic field, and is, I am told, quite impressive if a bit cramped. You can glimpse a little of what they have built using an ordinary BCI-based VR interface; a little more if your neural lace is up to spec. But of course to really understand, to really see their world as they see it, you must be willing to move beyond those things, to forsake--if only temporarily--the world you have been bound to for your entire life, and the shape of the mind you have thus inherited. That is perhaps quite daunting to some. But if we desire to look upon new worlds, must we not always risk that we shall be transformed?
--Tjungdiawain’s Historical Reader, 3rd edition
nunuslab24 · 4 months
Text
What are AI, AGI, and ASI? And the positive impact of AI
Understanding artificial intelligence (AI) involves more than just recognizing lines of code or scripts; it encompasses developing algorithms and models capable of learning from data and making predictions or decisions based on what they’ve learned. To truly grasp the distinctions between the different types of AI, we must look at their capabilities and potential impact on society.
To simplify, we can categorize these types of AI by assigning a power level from 1 to 3, with 1 being the least powerful and 3 being the most powerful. Let’s explore these categories:
1. Artificial Narrow Intelligence (ANI)
Also known as Narrow AI or Weak AI, ANI is the most common form of AI we encounter today. It is designed to perform a specific task or a narrow range of tasks. Examples include virtual assistants like Siri and Alexa, recommendation systems on Netflix, and image recognition software. ANI operates under a limited set of constraints and can’t perform tasks outside its specific domain. Despite its limitations, ANI has proven to be incredibly useful in automating repetitive tasks, providing insights through data analysis, and enhancing user experiences across various applications.
2. Artificial General Intelligence (AGI)
Referred to as Strong AI, AGI represents the next level of AI development. Unlike ANI, AGI can understand, learn, and apply knowledge across a wide range of tasks, similar to human intelligence. It can reason, plan, solve problems, think abstractly, and learn from experiences. While AGI remains a theoretical concept as of now, achieving it would mean creating machines capable of performing any intellectual task that a human can. This breakthrough could revolutionize numerous fields, including healthcare, education, and science, by providing more adaptive and comprehensive solutions.
3. Artificial Super Intelligence (ASI)
ASI surpasses human intelligence and capabilities in all aspects. It represents a level of intelligence far beyond our current understanding, where machines could outthink, outperform, and outmaneuver humans. ASI could lead to unprecedented advancements in technology and society. However, it also raises significant ethical and safety concerns. Ensuring ASI is developed and used responsibly is crucial to preventing unintended consequences that could arise from such a powerful form of intelligence.
The Positive Impact of AI
When regulated and guided by ethical principles, AI has the potential to benefit humanity significantly. Here are a few ways AI can help us become better:
• Healthcare: AI can assist in diagnosing diseases, personalizing treatment plans, and even predicting health issues before they become severe. This can lead to improved patient outcomes and more efficient healthcare systems.
• Education: Personalized learning experiences powered by AI can cater to individual student needs, helping them learn at their own pace and in ways that suit their unique styles.
• Environment: AI can play a crucial role in monitoring and managing environmental changes, optimizing energy use, and developing sustainable practices to combat climate change.
• Economy: AI can drive innovation, create new industries, and enhance productivity by automating mundane tasks and providing data-driven insights for better decision-making.
In conclusion, while AI, AGI, and ASI represent different levels of technological advancement, their potential to transform our world is immense. By understanding their distinctions and ensuring proper regulation, we can harness the power of AI to create a brighter future for all.
muzzleroars · 1 year
Note
THANK YOU FOR YOUR V1 THOUGHTS I AM EATING THEM! how do you feel v2's creators improved on v1?
aaaaa thank you!!! i always love getting a chance to go on and on about these little bugs :]
v2's mind is really interesting due to its development history - i have to imagine that v1 had been in the works for an exceptionally long time as there was no way it would be created toward the end of the war, and so it must also have been extraordinarily expensive. however, it was no longer needed when the new peace was established so, as hakita mentioned, v2 was quickly conceived in order to recoup the losses of such a massive project. and if i were to take a guess, these major expenses fell into three broad categories: the brand new blood-absorbing plates, the custom pieces used throughout v1's construction, and the computer built to house its mind. unfortunately, the plates couldn't be salvaged for a new, peace-time machine where blood would be far more scarce and durability was far more important, but those were easily replaced with standard armor. additionally, v2 could make full use of the other parts of the project that had taken up an immense amount of time and resources, including that proprietary computer.
the hardware itself isn't an issue - a computer this powerful could be set to virtually any purpose and its inherent intelligence would be massively beneficial to working with humans closely. the software, however, presents obvious problems right away, but it's likely they believed the naive programming could be trained as a peacekeeper rather than a warmachine due to its incredible learning capacity. yes, it's based in violence, combat is its foundation, but v2 would be needed for security and it was certainly meant to make peace if it couldn't be kept....so the base code wasn't changed in their haste. it was modified with add-ons, most importantly the limitations that v1 didn't have, as these became much more expedient if it was going to work in the public sphere. essentially, these additional pieces of code would keep v2 from attempting to learn about EVERY little thing around it since much of its environment would be far less relevant to it than v1's would have - v2 could work anywhere from office spaces to parks to train stations, where a vast majority of the stimuli present would be useless and clog up its queue, whereas v1 would largely be reserved for the battlefield and warzones, places it would need to be aware of almost everything in its vicinity. plus, v2 was given many more modifiers on situational assessment and hostile engagement where it considers a vast array of factors before it attacks compared to v1's much more basic measures - v2 assumes peaceful unless proven otherwise, v1 assumes hostile unless proven otherwise. finally, conflict resolution and non-violent tactics were packaged together to slap on to the end of v2's code...but they were sloppy and poorly optimized, so v2's method of choice remained violence.
after this came learning, which its engineers and programmers HEAVILY relied on as opposed to its coding - it was socialized much more intensely than v1, meeting a variety of people and learning to interact with them through basic greetings, administering verbal assistance, and responding to people in distress. v2 was taught extensively to understand facial expressions, was given a vocalizer so it could easily speak to humans (as well as a TON more language packs than v1 so it could communicate easily), and it learned basic first aid (and to AVOID blood harvesting!!!) v2 was additionally trained intensely on human thought models, allowing it a much higher capacity to empathize and intuitively understand emotion as well as make it much better at predicting human behavior. and in some ways, this worked. v2 wanted to be helpful, it developed a much more sophisticated personality and sense of self than v1, and it obviously wanted to be the best it could be. but. it was all too expensive. the amount of training it needed alone was a nightmare in terms of scaling up production, especially on a mass scale - it was never going to be implemented the way drones had been.
and even worse, all that training, all the hours put into it, didn't even fully take. v2 was unpredictable, it often resorted to violence when it should have implemented its conflict resolution and it regularly harvested blood from its victims. it was given scenarios in which it was meant to apprehend a criminal to save civilians, and it would simply end up killing everyone involved instead. over and over its mind defaulted to cruelty, the legacy of its predecessor haunting it, overtaking it, reminding everyone that its core was still war even when they tried to bury it under peace. and this was extremely confusing for v2. it knew what its job was and it followed its protocols, but constantly it was told it had been wrong. it did everything it could to learn what they wanted it to, it absorbed every detail into its mind, but it continued to assess situations poorly according to its teachers. it worked hard until the project was finally shutdown, v2 considered a failure and logistically unlikely to take in the market anyway. so it was shelved beside v1, the old prototype that was only woken up every now and then to run diagnostics and keep in some kind of shape, just in case. now they were both just in case.
but truly, v2 was alone in it. v1, at this point, didn't have nearly the emotional capacity that v2 did, so it didn't really care about being indefinitely put to sleep. i am warming up more to the idea of playing with v1 and v2 having some pre-canon contact, and this is when it would largely take place. i like to think v2 may find workarounds to waking itself up sometimes and then waking up v1 too...but v1 just isn't quite there yet. v2 tries to talk to it, to make it understand and to connect with it in some way, but v1 doesn't get it. it just tries to ask about the war, if it's finally going to be deployed, and v2 has to tell it no, has to watch as it can see v1 ignore it trying to engage with it, until it puts it back to sleep and goes along with it. but, every now and then, something new is added to the storage room they're kept in and v1 becomes interested in it, programming forever attuned to any change in its environment. so v2 can tell it all about whatever it is (sometimes it makes things up if it doesn't know, v1 100% carries some bullshit information that v2 fed it to this day lol) and they can have a moment where it feels like they're not totally alone