GPT-3

GPT-3
Part of the OpenAI API
General information
Type: LLM
Initial release: May 28, 2020
Series: OpenAI API
Preceded by: GPT-2
Followed by: ChatGPT and GPT-4

Generative Pre-trained Transformer 3, known by its acronym GPT-3, is an autoregressive language model that uses deep learning to produce text that simulates human writing. It is the third generation of the language-prediction models in the GPT series, created by OpenAI, an artificial-intelligence research laboratory based in San Francisco.[1] The full version of GPT-3 has a capacity of 175 billion machine-learning parameters, far exceeding that of its predecessor, GPT-2. GPT-3 was introduced in May 2020 and was in beta testing as of July 2020.[2] It is part of a trend in natural language processing (NLP) systems based on "pre-trained language representations".[3] Before the release of GPT-3, the largest language model was Microsoft's Turing NLG, introduced in February 2020, with one tenth the capacity of GPT-3.

GPT-3 was officially introduced on May 28, 2020, through the publication of a paper co-written by 31 researchers and engineers from OpenAI and Johns Hopkins University,[note 1] titled Language Models are Few-Shot Learners.[3]

The quality of the text generated by GPT-3 is so high that it is difficult to distinguish from text written by humans, which has prompted discussion of both the benefits and the risks this entails. In the May 28, 2020 paper, the creators warn of GPT-3's potential dangers and call for help in mitigating those risks. The Australian philosopher David Chalmers described GPT-3 as "one of the most interesting and important AI systems ever produced".[4]

On the other hand, some generated texts have been noted to lack coherence, because GPT-3's processing of words is purely syntactic, without attending to the semantics of the text.[5]

Background

According to The Economist, improved algorithms, more powerful computers, and an increase in the digitization of data have fueled a revolution in machine learning, with new techniques in the 2010s resulting in "rapid improvements in tasks" that include manipulating language. Software models are trained to learn using thousands or millions of examples in a "structure ... loosely based on the neural architecture of the brain". One architecture used in natural language processing (NLP) is a neural network based on a deep-learning model first introduced in 2017: the transformer architecture. There are a number of NLP systems capable of processing, mining, organizing, connecting and contrasting textual input, as well as correctly answering questions.

On June 11, 2018, OpenAI researchers and engineers published their original paper introducing the first generative pre-trained transformer (GPT), a type of generative language model that is pre-trained on an enormous and diverse text corpus, followed by discriminative fine-tuning to focus on a specific task. GPT models are transformer-based deep-learning neural network architectures. Until then, the best-performing neural NLP models commonly employed supervised learning on large amounts of manually labeled data, which made training extremely large language models prohibitively expensive and time-consuming. The first GPT model is known as "GPT-1" and was followed by "GPT-2" in February 2019. GPT-2 was created as a direct scale-up of GPT-1, with a tenfold increase in both parameter count and dataset size. It had 1.5 billion parameters and was trained on a dataset of 8 million web pages.

In February 2020, Microsoft introduced its Turing Natural Language Generation (T-NLG) model, which it claimed was the "largest language model ever published at 17 billion parameters". It performed better than any other language model at a variety of tasks, including summarizing texts and answering questions.

Capabilities and training

A sample student essay about pedagogy written by GPT-3:

The notion of "learning styles" is problematic because it does not take into account the processes through which learning styles are shaped. Some students might develop a particular learning style because of specific experiences. Others might develop a particular learning style by trying to adapt to a learning environment that was not well suited to their learning needs. Ultimately, we need to understand the interactions between learning styles and environmental and personal factors, and how these shape how we learn and the kinds of learning we experience.

– Text generated by Mike Sharples[6]

On May 28, 2020, an arXiv preprint by a group of 31 OpenAI engineers and researchers described the development of GPT-3, a third-generation "state-of-the-art" language model. The team increased the capacity of GPT-3 by more than two orders of magnitude over its predecessor, GPT-2, making GPT-3 the largest non-sparse language model to date. Because GPT-3 is structurally similar to its predecessors, its greater accuracy is attributed to its increased capacity and larger number of parameters. GPT-3's capacity is ten times that of Microsoft's Turing NLG, the largest NLP model known at the time.

Lambdalabs estimated a hypothetical cost of around 4.6 million US dollars and 355 years to train GPT-3 on a single GPU in 2020, with a lower actual training time achieved by using more GPUs in parallel. Sixty percent of GPT-3's weighted pre-training dataset comes from a filtered version of Common Crawl consisting of 410 billion byte-pair-encoded tokens. Other sources are 19 billion tokens from WebText2, representing 22% of the weighted total; 12 billion tokens from Books1, representing 8%; 55 billion tokens from Books2, representing 8%; and 3 billion tokens from Wikipedia, representing 3%. GPT-3 was trained on hundreds of billions of words and is also capable of coding in CSS, JSX, and Python, among other languages.

GPT-3 training data[3]: 9
Dataset       # tokens      Proportion within training
Common Crawl  410 billion   60%
WebText2      19 billion    22%
Books1        12 billion    8%
Books2        55 billion    8%
Wikipedia     3 billion     3%
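
The proportions above are sampling weights, not raw token shares: Common Crawl contributes far more raw tokens than its 60% weight suggests, while small sources such as Wikipedia are over-sampled. A minimal Python sketch of the passes (epochs) over each dataset that these weights imply, assuming the roughly 300 billion total training tokens reported in the GPT-3 paper:

# Back-of-the-envelope sketch; the 300B total is an assumption taken
# from the GPT-3 paper, and results are rounded.
TOTAL_TRAINING_TOKENS = 300e9

datasets = {  # name: (tokens in dataset, sampling weight)
    "Common Crawl": (410e9, 0.60),
    "WebText2":     (19e9,  0.22),
    "Books1":       (12e9,  0.08),
    "Books2":       (55e9,  0.08),
    "Wikipedia":    (3e9,   0.03),
}

for name, (size, weight) in datasets.items():
    seen = TOTAL_TRAINING_TOKENS * weight  # tokens drawn from this source
    print(f"{name:13s} ~{seen / 1e9:4.0f}B tokens seen, ~{seen / size:.1f} epochs")

# Common Crawl and Books2 are seen for less than one epoch,
# while Wikipedia is cycled through roughly three times.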

Since GPT-3's training data is all-encompassing, it does not require further training for distinct language tasks. However, the training data occasionally contains toxic language, and GPT-3 occasionally generates toxic language as a result of mimicking it. A study from the University of Washington found that GPT-3 produced toxic language at a toxicity level comparable to the similar natural-language-processing models GPT-2 and CTRL. OpenAI has implemented several strategies to limit the amount of toxic language generated by GPT-3. As a result, GPT-3 produced less toxic language than its predecessor model, GPT-1, although it produced both more generations of toxic language and a higher toxicity than CTRL Wiki, a language model trained entirely on Wikipedia data.

On June 11, 2020, OpenAI announced that users could request access to its user-friendly GPT-3 API, a "machine learning toolset", to help OpenAI "explore the strengths and limits" of this new technology. The invitation described how the API had a general-purpose "text in, text out" interface that can complete almost "any English language task", instead of the usual single use case. According to one user with access to a private early release of the OpenAI GPT-3 API, GPT-3 was "eerily good" at writing "amazingly coherent text" with only a few simple prompts. In an initial experiment, 80 US subjects were asked to judge whether short articles of roughly 200 words had been written by humans or by GPT-3. The participants judged correctly 52% of the time, only slightly better than random guessing.

On November 18, 2021, OpenAI announced that enough safeguards had been implemented for access to its API to become unrestricted. OpenAI provided developers with a content moderation tool to help them comply with OpenAI's content policy. On January 27, 2022, OpenAI announced that its newest GPT-3 language models, collectively referred to as InstructGPT, were now the default language models used in its API. According to OpenAI, InstructGPT produced content better aligned with user intentions, following instructions more closely, generating fewer made-up facts, and producing somewhat less toxic content.

Because GPT-3 can "generate news articles which human evaluators have difficulty distinguishing from articles written by humans", GPT-3 has the "potential to advance both the beneficial and harmful applications of language models". In their May 28, 2020 paper, the researchers described in detail the potential "harmful effects of GPT-3", which include "misinformation, spam, phishing, abuse of legal and governmental processes, fraudulent academic essay writing and social engineering pretexting". The authors draw attention to these dangers to call for research on risk mitigation.

GPT-3 is capable of performing zero-shot and few-shot learning (including one-shot). In June 2022, Almira Osmanovic Thunström wrote that GPT-3 was the primary author of a paper about itself, that it had been submitted for publication, and that it had been pre-published while awaiting completion of its review.

InstructGPT

InstructGPT is a fine-tuned version of GPT-3. It has been trained on a dataset of human-written instructions. This training allows InstructGPT to better understand what it is being asked to do and to generate more accurate and relevant output, as the sketch after the list below illustrates.

  • InstructGPT can follow instructions given in natural language.
  • InstructGPT can answer questions posed in natural language.
  • InstructGPT is more accurate and relevant than GPT-3 when following instructions and answering questions.
  • InstructGPT can be used in a variety of applications, such as customer service, education, and automation.
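
As a sketch of that difference, the snippet below sends the same instruction to a base GPT-3 model and an InstructGPT-series model, assuming the pre-1.0 openai Python package (the 0.x Completions interface these models shipped with); the API key is a placeholder and the exact outputs will vary:

import openai

openai.api_key = "sk-..."  # placeholder; requires a real key

# Base models tend to continue the prompt as plain text; instruct
# models tend to carry out the instruction it contains.
for model in ("davinci", "text-davinci-001"):  # base GPT-3 vs. InstructGPT
    response = openai.Completion.create(
        model=model,
        prompt="Write a haiku about autumn leaves.",
        max_tokens=40,
        temperature=0.7,
    )
    print(model, "->", response.choices[0].text.strip())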

GPT-3 models

There are many models in the GPT-3 family, some serving different purposes than others. In the initial research paper published by OpenAI, eight different sizes of the main GPT-3 model were mentioned:

Model name Parameters API name
GPT-3 Small 125 M n/a
GPT-3 Medium 350 M ada
GPT-3 Large 760 M n/a
GPT-3 XL 1.3 B babbage
GPT-3 2.7B 2.7 B n/a
GPT-3 6.7B 6.7 B curie
GPT-3 13B 13B n/a
GPT-3 175B 175B davinci

Half of the models are accessible through the API, namely GPT-3-medium, GPT-3-xl, GPT-3-6.7B and GPT-3-175B, which are referred to as ada, babbage, curie and davinci, respectively.

Model Parameters Description Series
ada 350 M Capable of very simple tasks, usually the fastest model in the GPT-3 series, and lowest cost. Base GPT-3
babbage 1.3 B Capable of straightforward tasks, very fast, and lower cost. Base GPT-3
curie 6.7 B Very capable, but faster and lower cost than Davinci. Base GPT-3
davinci 175 B The most capable GPT-3 model. Can do any task the other models can do, often with higher quality. Base GPT-3
text-ada 350 M Capable of very simple tasks, usually the fastest model in the GPT-3 series, and lowest cost. InstructGPT-3
text-babbage 1.3 B Capable of straightforward tasks, very fast, and lower cost. InstructGPT-3
text-curie 6.7 B Very capable, faster and lower cost than Davinci. InstructGPT-3
text-davinci-001 175 B Older version of the most capable model in the GPT-3 series. Can perform any task the other GPT-3 models can, often with less context. InstructGPT-3
text-davinci-002 Undisclosed Similar capabilities to text-davinci-003, but trained with supervised fine-tuning instead of reinforcement learning. GPT-3.5
text-davinci-003 Undisclosed Can do any language task with better quality, longer output, and more consistent instruction-following than the curie, babbage, or ada models. Also supports inserting completions within text. GPT-3.5
gpt-3.5-turbo Undisclosed The most capable GPT-3.5 model, optimized for chat, at 1/10th the cost of text-davinci-003. GPT-3.5

GPT-3.5

Generative Pre-trained Transformer 3.5 (GPT-3.5)
Part of the OpenAI API
General information
Type: LLM
Developer: OpenAI
Initial release: March 15, 2022
License: proprietary
Series: OpenAI API
Preceded by: GPT-3
Followed by: ChatGPT and GPT-4

Generative Pre-trained Transformer 3.5 (GPT-3.5) is a subclass of the GPT-3 models created by OpenAI in 2022.

On March 15, 2022, OpenAI made new versions of GPT-3 and Codex available in its API with edit and insert capabilities, under the names "text-davinci-002" and "code-davinci-002". These models were described as more capable than previous versions and were trained on data up to June 2021. On November 28, 2022, OpenAI introduced text-davinci-003. On November 30, 2022, OpenAI began referring to these models as belonging to the "GPT-3.5" series and released ChatGPT, which was fine-tuned from a model in the GPT-3.5 series. OpenAI does not include GPT-3.5 in GPT-3.

Models

There are three models,[7] listed below; a usage sketch follows the list:

  • Chat
    • gpt-3.5-turbo
  • Text completion
    • text-davinci-003
    • text-davinci-002
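
A sketch of how the two families above are called, assuming the pre-1.0 openai Python package: chat models take a list of messages, while completion models take a raw prompt. The API key is a placeholder:

import openai

openai.api_key = "sk-..."  # placeholder; requires a real key

# Chat model: structured messages in, assistant message out.
chat = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Summarize GPT-3 in one sentence."}],
)
print(chat.choices[0].message["content"])

# Text-completion model: raw prompt in, continuation out.
completion = openai.Completion.create(
    model="text-davinci-003",
    prompt="Summarize GPT-3 in one sentence:\n",
    max_tokens=60,
)
print(completion.choices[0].text.strip())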

GPT-3.5 with browsing

On April 10, 2023, OpenAI introduced a new variant of its GPT-3.5 series model, known as GPT-3.5 with Browsing (ALPHA).[8] This updated model was described as building upon the capabilities of its predecessors "text-davinci-002" and "code-davinci-002".[9] The GPT-3.5 with Browsing (ALPHA) model incorporates the ability to access and browse online information, which has led to more accurate and up-to-date responses to user queries.

The GPT-3.5 with Browsing (ALPHA) model was trained on data up to September 2021, giving it more information than previous GPT-3.5 models, which were trained on data up to June 2021. The model aims to provide developers and users with an advanced natural-language-processing tool that can effectively retrieve and synthesize online information.

To enable browsing capabilities, OpenAI implemented a new API that allows the GPT-3.5 with Browsing (ALPHA) model to access selected online resources during operation.[10] This feature lets users ask questions or request information with the expectation that the model will deliver updated, accurate, and relevant answers based on the latest online sources available to it.

On April 27, 2023, OpenAI made the GPT-3.5 with Browsing (ALPHA) model available to GPT Plus users, allowing more people to access its new features.[10]

Reviews and criticism

On July 29, 2020, The New York Times published a review by Farhad Manjoo, who said that GPT-3 is not only "amazing", "spooky", and "humbling", but also "more than a little terrifying".[11]

Wired wrote that GPT-3 was "provoking chills across Silicon Valley".[12]

An article in the MIT Technology Review stated that GPT-3 lacks "understanding of the world", so one "can never really trust what it says",[5] referring to the fact that models like GPT-3 only analyze relationships between words (syntax) without analyzing the meaning of those words (semantics).

See also

Notes

  1. Brown, Tom B.; Mann, Benjamin; Ryder, Nick; Subbiah, Melanie; Kaplan, Jared; Dhariwal, Prafulla; Neelakantan, Arvind; Shyam, Pranav; Sastry, Girish; Askell, Amanda; Agarwal, Sandhini; Herbert-Voss, Ariel; Krueger, Gretchen; Henighan, Tom; Child, Rewon; Ramesh, Aditya; Ziegler, Daniel M.; Wu, Jeffrey; Winter, Clemens; Hesse, Christopher; Chen, Mark; Sigler, Eric; Litwin, Mateusz; Gray, Scott; Chess, Benjamin; Clark, Jack; Berner, Christopher; McCandlish, Sam; Radford, Alec; Sutskever, Ilya; Amodei, Dario

References

  1. Shead, Sam (July 23, 2020). «Why everyone is talking about the A.I. text generator released by an Elon Musk-backed lab». CNBC. Retrieved September 4, 2020. Four preprints were released between May 28 and July 22, 2020.
  2. Bussler, Frederik (July 21, 2020). «Will GPT-3 Kill Coding?». Towards Data Science. Retrieved September 3, 2020.
  3. a b Brown, Tom B.; Mann, Benjamin; Ryder, Nick; Subbiah, Melanie; Kaplan, Jared; Dhariwal, Prafulla; Neelakantan, Arvind; Shyam, Pranav; Sastry, Girish; Askell, Amanda; Agarwal, Sandhini; Herbert-Voss, Ariel; Krueger, Gretchen; Henighan, Tom; Child, Rewon; Ramesh, Aditya; Ziegler, Daniel M.; Wu, Jeffrey; Winter, Clemens; Hesse, Christopher; Chen, Mark; Sigler, Eric; Litwin, Mateusz; Gray, Scott; Chess, Benjamin; Clark, Jack; Berner, Christopher; McCandlish, Sam; Radford, Alec; Sutskever, Ilya; Amodei, Dario (July 22, 2020). Language Models are Few-Shot Learners. arXiv:2005.14165.
  4. Chalmers, David (July 20, 2020). «GPT-3 and General Intelligence». In Weinberg, Justin (ed.). Daily Nous. Retrieved September 3, 2020.
  5. a b Marcus, Gary; Davis, Ernest (August 22, 2020). «GPT-3, Bloviator: OpenAI's language generator has no idea what it's talking about». MIT Technology Review. Retrieved September 3, 2020.
  6. Marche, Stephen (December 6, 2022). «The College Essay Is Dead». The Atlantic. Archived from the original on January 24, 2023. Retrieved December 8, 2022.
  7. «OpenAI API». Retrieved May 6, 2023.
  8. tingetici (April 10, 2023). «Default (GPT-3.5) with browsing ALPHA -- NEW Model showed up just now». r/OpenAI. Archived from the original on April 27, 2023. Retrieved April 27, 2023.
  9. «Introducing GPT-3.5 Series: text-davinci-002 and code-davinci-002 Models». OpenAI. March 15, 2022. Archived from the original on March 20, 2023. Retrieved April 27, 2023.
  10. a b «GPT-3.5 with Browsing (ALPHA) Now Available for GPT Plus Users». OpenAI. April 27, 2023. Archived from the original on March 20, 2023. Retrieved April 27, 2023.
  11. Manjoo, Farhad (July 29, 2020). «How Do You Know a Human Wrote This?». The New York Times. ISSN 0362-4331. Retrieved August 4, 2020.
  12. Simonite, Tom (July 22, 2020). «Did a Person Write This Headline, or a Machine?». Wired. ISSN 1059-1028. Retrieved July 31, 2020.

Generative Pre-trained Transformer 3 (GPT-3)
Original author(s): OpenAI[1]
Initial release: May 28, 2020 (publication); June 11, 2020 (OpenAI API beta)
Predecessor: GPT-2
Successors: GPT-3.5, GPT-4
Type: Large language model
License: proprietary
Website: openai.com/blog/openai-api

Generative Pre-trained Transformer 3 (GPT-3) is a large language model released by OpenAI in 2020.

Like its predecessor, GPT-2, it is a decoder-only[2] transformer model, a deep neural network architecture that supersedes recurrence- and convolution-based architectures with a technique known as "attention".[3] This attention mechanism allows the model to focus selectively on the segments of input text it predicts to be most relevant.[4] GPT-3 has 175 billion parameters, each stored with 16-bit precision, so the model requires 350 GB of storage (2 bytes per parameter). It has a context window of 2048 tokens, and has demonstrated strong "zero-shot" and "few-shot" learning abilities on many tasks.[2]
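
A minimal NumPy sketch of the scaled dot-product attention at the core of such a decoder-only transformer; the dimensions, inputs, and causal mask below are illustrative and unrelated to GPT-3's actual weights:

import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # subtract max for stability
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)        # similarity of each query to each key
    # A decoder-only model applies a causal mask: position i sees only <= i.
    mask = np.triu(np.ones(scores.shape, dtype=bool), k=1)
    scores = np.where(mask, -1e9, scores)
    weights = softmax(scores)            # how strongly each token attends to others
    return weights @ V                   # weighted mix of value vectors

seq_len, d_model = 4, 8                  # tiny toy dimensions
x = np.random.randn(seq_len, d_model)    # stand-in for token embeddings
out = attention(x, x, x)                 # self-attention over the sequence
print(out.shape)                         # (4, 8)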

On September 22, 2020, Microsoft announced that it had licensed GPT-3 exclusively. Others can still receive output from its public API, but only Microsoft has access to the underlying model.[5]

Background

According to The Economist, improved algorithms, more powerful computers, and a recent increase in the amount of digitized material have fueled a revolution in machine learning. New techniques in the 2010s resulted in "rapid improvements in tasks", including manipulating language.[6]

Software models are trained to learn by using thousands or millions of examples in a "structure ... loosely based on the neural architecture of the brain".[6] One architecture used in natural language processing (NLP) is a neural network based on a deep learning model that was introduced in 2017—the transformer architecture.[7] There are a number of NLP systems capable of processing, mining, organizing, connecting and contrasting textual input, as well as correctly answering questions.[8]

On June 11, 2018, OpenAI researchers and engineers published a paper introducing the first generative pre-trained transformer (GPT)—a type of generative large language model that is pre-trained with an enormous and diverse text corpus in datasets, followed by discriminative fine-tuning to focus on a specific task. GPT models are transformer-based deep-learning neural network architectures. Previously, the best-performing neural NLP models commonly employed supervised learning from large amounts of manually-labeled data, which made it prohibitively expensive and time-consuming to train extremely large language models.[2] The first GPT model was known as "GPT-1," and it was followed by "GPT-2" in February 2019. Created as a direct scale-up of its predecessor, GPT-2 had both its parameter count and dataset size increased by a factor of 10. It had 1.5 billion parameters, and was trained on a dataset of 8 million web pages.[9]

In February 2020, Microsoft introduced its Turing Natural Language Generation (T-NLG), which they claimed was "largest language model ever published at 17 billion parameters."[10] It performed better than any other language model at a variety of tasks, including summarizing texts and answering questions.

Training and capabilities

A sample student essay about pedagogy written by GPT-3

The construct of "learning styles" is problematic because it fails to account for the processes through which learning styles are shaped. Some students might develop a particular learning style because they have had particular experiences. Others might develop a particular learning style by trying to accommodate to a learning environment that was not well suited to their learning needs. Ultimately, we need to understand the interactions among learning styles and environmental and personal factors, and how these shape how we learn and the kinds of learning we experience.

– Text generated by Mike Sharples[11]

On May 28, 2020, an arXiv preprint by a group of 31 engineers and researchers at OpenAI described the achievement and development of GPT-3, a third-generation "state-of-the-art language model".[1][12] The team increased the capacity of GPT-3 by over two orders of magnitude from that of its predecessor, GPT-2,[13] making GPT-3 the largest non-sparse language model to date.[1]: 14[14] Because GPT-3 is structurally similar to its predecessors,[1] its greater accuracy is attributed to its increased capacity and greater number of parameters.[15] GPT-3's capacity is ten times larger than that of Microsoft's Turing NLG, the next largest NLP model known at the time.[12]

Lambdalabs estimated a hypothetical cost of around $4.6 million US dollars and 355 years to train GPT-3 on a single GPU in 2020,[16] with lower actual training time by using more GPUs in parallel.
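
As a rough consistency check on those figures, 355 GPU-years at one plausible 2020 cloud price for a single V100 GPU (assumed here at $1.50 per hour) lands near the quoted estimate:

gpu_years = 355
gpu_hours = gpu_years * 365 * 24      # ~3.11 million GPU-hours
price_per_hour = 1.50                 # assumed USD per V100-hour
cost = gpu_hours * price_per_hour
print(f"{gpu_hours:,} GPU-hours -> ${cost / 1e6:.1f}M")  # ~$4.7M, near the $4.6M estimate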

Sixty percent of the weighted pre-training dataset for GPT-3 comes from a filtered version of Common Crawl consisting of 410 billion byte-pair-encoded tokens. Fuzzy deduplication used Apache Spark's MinHashLSH.[1]: 9  Other sources are 19 billion tokens from WebText2 representing 22% of the weighted total, 12 billion tokens from Books1 representing 8%, 55 billion tokens from Books2 representing 8%, and 3 billion tokens from Wikipedia representing 3%.[1]: 9  GPT-3 was trained on hundreds of billions of words and is also capable of coding in CSS, JSX, and Python, among others.[citation needed]

GPT-3 training data[1]: 9 
Dataset # tokens Proportion
within training
Common Crawl 410 billion 60%
WebText2 19 billion 22%
Books1 12 billion 8%
Books2 55 billion 8%
Wikipedia 3 billion 3%
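
The token counts above are byte-pair-encoded (BPE) tokens. A small sketch of how text maps to such tokens, assuming the tiktoken library and its "gpt2" encoding (GPT-3 reused GPT-2's 50,257-entry BPE vocabulary):

import tiktoken  # assumed available: pip install tiktoken

enc = tiktoken.get_encoding("gpt2")
text = "Generative Pre-trained Transformer 3"
ids = enc.encode(text)
print(ids)                             # integer token ids
print([enc.decode([i]) for i in ids])  # the subword piece behind each id
print(f"{len(ids)} tokens for {len(text)} characters")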

Since GPT-3's training data was all-encompassing, it does not require further training for distinct language tasks.[citation needed] The training data contains occasional toxic language and GPT-3 occasionally generates toxic language as a result of mimicking its training data. A study from the University of Washington found that GPT-3 produced toxic language at a toxicity level comparable to the similar natural language processing models of GPT-2 and CTRL. OpenAI has implemented several strategies to limit the amount of toxic language generated by GPT-3. As a result, GPT-3 produced less toxic language compared to its predecessor model, GPT-1, although it produced both more generations and a higher toxicity of toxic language compared to CTRL Wiki, a language model trained entirely on Wikipedia data.[17]

On June 11, 2020, OpenAI announced that users could request access to its user-friendly GPT-3 API—a "machine learning toolset"—to help OpenAI "explore the strengths and limits" of this new technology.[18][19] The invitation described how this API had a general-purpose "text in, text out" interface that can complete almost "any English language task", instead of the usual single use-case.[18] According to one user, who had access to a private early release of the OpenAI GPT-3 API, GPT-3 was "eerily good" at writing "amazingly coherent text" with only a few simple prompts.[20] In an initial experiment 80 US subjects were asked to judge if short ~200 word articles were written by humans or GPT-3. The participants judged correctly 52% of the time, doing only slightly better than random guessing.[1]

On November 18, 2021, OpenAI announced that enough safeguards had been implemented that access to its API would be unrestricted.[21] OpenAI provided developers with a content moderation tool that helps them abide by OpenAI's content policy.[22] On January 27, 2022, OpenAI announced that its newest GPT-3 language models (collectively referred to as InstructGPT) were now the default language model used on their API. According to OpenAI, InstructGPT produced content that was better aligned to user intentions by following instructions better, generating fewer made-up facts, and producing somewhat less toxic content.[23]

Because GPT-3 can "generate news articles which human evaluators have difficulty distinguishing from articles written by humans,"[12] GPT-3 has the "potential to advance both the beneficial and harmful applications of language models."[1]: 34  In their May 28, 2020 paper, the researchers described in detail the potential "harmful effects of GPT-3"[12] which include "misinformation, spam, phishing, abuse of legal and governmental processes, fraudulent academic essay writing and social engineering pretexting".[1] The authors draw attention to these dangers to call for research on risk mitigation.[1]: 34 

GPT-3 is capable of performing zero-shot and few-shot learning (including one-shot).[1]
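
The difference between these settings lies purely in the prompt: task demonstrations are supplied as text, with no weight updates. A sketch of the three formats, using the English-to-French example from the GPT-3 paper:

# Zero-shot: task description only.
zero_shot = "Translate English to French:\ncheese =>"

# One-shot: a single demonstration before the query.
one_shot = (
    "Translate English to French:\n"
    "sea otter => loutre de mer\n"
    "cheese =>"
)

# Few-shot: several demonstrations before the query.
few_shot = (
    "Translate English to French:\n"
    "sea otter => loutre de mer\n"
    "peppermint => menthe poivrée\n"
    "plush giraffe => girafe en peluche\n"
    "cheese =>"
)
print(few_shot)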

In June 2022, Almira Osmanovic Thunström wrote that GPT-3 was the primary author on an article on itself, that they had submitted it for publication,[24] and that it had been pre-published while waiting for completion of its review.[25]

GPT-3 models

There are many models in the GPT-3 family, some serving different purposes than others. In the initial research paper published by OpenAI, they mentioned 8 different sizes of the main GPT-3 model:

Model name Parameters API name
GPT-3 Small 125 M n/a
GPT-3 Medium 350 M ada
GPT-3 Large 760 M n/a
GPT-3 XL 1.3 B babbage
GPT-3 2.7B 2.7 B n/a
GPT-3 6.7B 6.7 B curie
GPT-3 13B 13B n/a
GPT-3 175B 175B davinci

Half of the models are accessible through the API, namely GPT-3-medium, GPT-3-xl, GPT-3-6.7B and GPT-3-175b, which are referred to as ada, babbage, curie and davinci respectively. While the size of the API models was not originally disclosed by OpenAI, EleutherAI announced the mapping between model sizes and API names in May 2021.[26] These model sizes were later confirmed by OpenAI,[27] but the sizes of subsequent models have not been disclosed.

Model Parameters Description Series
ada 350 M Capable of very simple tasks, usually the fastest model in the GPT-3 series, and lowest cost. Base GPT-3
babbage / babbage-002 1.3 B Capable of straightforward tasks, very fast, and lower cost. Base GPT-3
curie 6.7 B Very capable, but faster and lower cost than Davinci. Base GPT-3
davinci / davinci-002 175 B Most capable GPT-3 model. Can do any task the other models can do, often with higher quality. Base GPT-3
text-ada-001 350 M Capable of very simple tasks, usually the fastest model in the GPT-3 series, and lowest cost. InstructGPT
text-babbage-001 1.3 B Capable of straightforward tasks, very fast, and lower cost. InstructGPT
text-curie-001 6.7 B Very capable, faster and lower cost than Davinci. InstructGPT
text-davinci-001 175 B Older version of the most capable model in the GPT-3 series. Can perform any task the other GPT-3 models can, often with less context. InstructGPT
text-davinci-002 / code-davinci-002 Undisclosed Similar capabilities to text-davinci-003 but trained with supervised fine-tuning instead of reinforcement learning. GPT-3.5
text-davinci-003 Undisclosed Can do any language task with better quality, longer output, and more consistent instruction-following than the curie, babbage, or ada models. Also supports inserting completions within text. GPT-3.5
gpt-3.5-turbo / gpt-3.5-turbo-instruct / gpt-3.5-turbo-16k Undisclosed Most capable and cost-effective (fastest) GPT-3.5 model, optimized for chat, at 1/10th the cost of text-davinci-003. GPT-3.5

GPT-3.5

Generative Pre-trained Transformer 3.5 (GPT-3.5)
Original author(s): OpenAI[1]
Initial release: March 15, 2022
Predecessor: GPT-3
Successor: GPT-4
Type: Large language model
License: proprietary

Generative Pre-trained Transformer 3.5 (GPT-3.5) is a subclass of the GPT-3 models created by OpenAI in 2022.

On March 15, 2022, OpenAI made available new versions of GPT-3 and Codex in its API with edit and insert capabilities under the names "text-davinci-002" and "code-davinci-002".[28] These models were described as more capable than previous versions and were trained on data up to June 2021.[29] On November 28, 2022, OpenAI introduced text-davinci-003.[30] On November 30, 2022, OpenAI began referring to these models as belonging to the "GPT-3.5" series,[29] and released ChatGPT, which was fine-tuned from a model in the GPT-3.5 series.[31] OpenAI does not include GPT-3.5 in GPT-3.[32]

Models

There are three models:[33]

  • Chat
    • gpt-3.5-turbo
  • Text completion
    • text-davinci-003
    • text-davinci-002

GPT-3.5 with browsing

On April 10, 2023, OpenAI introduced a new variant of its GPT-3.5 series model, known as GPT-3.5 with Browsing (ALPHA).[34] This updated model was described as building upon the capabilities of its predecessors "text-davinci-002" and "code-davinci-002".[35] The GPT-3.5 with Browsing (ALPHA) model incorporated the ability to access and browse online information, which has led to more accurate and up-to-date responses to user queries.[34]

The GPT-3.5 with Browsing (ALPHA) model was trained on data up to September 2021, giving it more information than previous GPT-3.5 models, which were trained on data up until June 2021. The model aims to provide developers and users with an advanced natural language processing tool that can effectively retrieve and synthesize online information.[34]

To enable browsing capabilities, OpenAI implemented a new API that allows the GPT-3.5 with Browsing (ALPHA) model to access selected online resources during operation.[36] This feature allows users to ask questions or request information with the expectation that the model will deliver updated, accurate, and relevant answers based on the latest online sources available to it.

On April 27, 2023, OpenAI made the GPT-3.5 with Browsing (ALPHA) model publicly available to GPT Plus users, allowing more people to access its new features.[36]

InstructGPT

InstructGPT is a fine-tuned version of GPT-3, trained on a dataset of human-written instructions.[37]

Reception

Applications

  • GPT-3, specifically the Codex model, was the basis for GitHub Copilot, a code completion and generation software that can be used in various code editors and IDEs.[38][39]
  • GPT-3 is used in certain Microsoft products to translate conventional language into formal computer code.[40][41]
  • GPT-3 has been used in CodexDB[42] to generate query-specific code for SQL processing.
  • GPT-3 has been used by Jason Rohrer in a retro-themed chatbot project named "Project December", which is accessible online and allows users to converse with several AIs using GPT-3 technology.[43]
  • GPT-3 was used by The Guardian to write an article about AI being harmless to human beings. It was fed some ideas and produced eight different essays, which were ultimately merged into one article.[44]
  • GPT-3 was used in AI Dungeon, which generates text-based adventure games. Later it was replaced by a competing model after OpenAI changed their policy regarding generated content.[45][46]
  • GPT-3 is used to aid in writing copy and other marketing materials.[47]
  • A 2022 study from Drexel University suggested that GPT-3-based systems could be used to screen for early signs of Alzheimer's disease.[48][49]

Reviews

  • In a July 2020 review in The New York Times, Farhad Manjoo said that GPT-3's ability to generate computer code, poetry, and prose is not just "amazing", "spooky", and "humbling", but also "more than a little terrifying".[50]
  • Daily Nous presented a series of articles by nine philosophers on GPT-3.[51] Australian philosopher David Chalmers described GPT-3 as "one of the most interesting and important AI systems ever produced".[52]
  • A review in Wired said that GPT-3 was "provoking chills across Silicon Valley".[53]
  • The National Law Review said that GPT-3 is an "impressive step in the larger process", with OpenAI and others finding "useful applications for all of this power" while continuing to "work toward a more general intelligence".[54]
  • An article in the MIT Technology Review, co-written by Deep Learning critic Gary Marcus,[55] stated that GPT-3's "comprehension of the world is often seriously off, which means you can never really trust what it says."[56] According to the authors, GPT-3 models relationships between words without having an understanding of the meaning behind each word.
  • Jerome Pesenti, head of the Facebook AI lab, said GPT-3 is "unsafe," pointing to the sexist, racist and other biased and negative language generated by the system when it was asked to discuss Jews, women, black people, and the Holocaust.[57]
  • Nabla, a French start-up specializing in healthcare technology, tested GPT-3 as a medical chatbot, though OpenAI itself warned against such use. As expected, GPT-3 showed several limitations. For example, while testing GPT-3 responses about mental health issues, the AI advised a simulated patient to commit suicide.[58]
  • Noam Chomsky expressed his skepticism about GPT-3's scientific value: "It's not a language model. It works just as well for impossible languages as for actual languages. It is therefore refuted, if intended as a language model, by normal scientific criteria. [...] Perhaps it's useful for some purpose, but it seems to tell us nothing about language or cognition generally."[59]
  • Luciano Floridi and Massimo Chiriatti highlighted the risk of "cheap production of good, semantic artefacts".[60]
  • OpenAI's Sam Altman himself criticized what he called "GPT-3 hype", acknowledging GPT-3 "has serious weakness and sometimes makes very silly mistakes... AI is going to change the world, but GPT-3 is just a very early glimpse."[61]

Criticism

GPT-3's builder, OpenAI, was initially founded as a non-profit in 2015.[62] In 2019, OpenAI broke from its usual open-source standards by not publicly releasing GPT-3's predecessor model, citing concerns that the model could facilitate the propagation of fake news. OpenAI eventually released a version of GPT-2 that was 8% of the original model's size.[63] In the same year, OpenAI restructured to be a for-profit company.[64] In 2020, Microsoft announced the company had exclusive licensing of GPT-3 for Microsoft's products and services following a multi-billion dollar investment in OpenAI. The agreement permits OpenAI to offer a public-facing API such that users can send text to GPT-3 to receive the model's output, but only Microsoft will have access to GPT-3's source code.[5]

Large language models, such as GPT-3, have come under criticism from a few of Google's AI ethics researchers for the environmental impact of training and storing the models, detailed in a paper co-authored by Timnit Gebru and Emily M. Bender in 2021.[65]

The growing use of automated writing technologies based on GPT-3 and other language generators has raised concerns regarding academic integrity[66] and raised the stakes of how universities and schools will gauge what constitutes academic misconduct, such as plagiarism.[67]

OpenAI's GPT series was built with data from the Common Crawl dataset,[68] a conglomerate of copyrighted articles, internet posts, web pages, and books scraped from 60 million domains over a period of 12 years. TechCrunch reports this training data includes copyrighted material from the BBC, The New York Times, Reddit, the full text of online books, and more.[69] In its response to a 2019 Request for Comments on Intellectual Property Protection for Artificial Intelligence Innovation from the United States Patent and Trademark Office (USPTO), OpenAI argued that "Under current law, training AI systems [such as its GPT models] constitutes fair use," but that "given the lack of case law on point, OpenAI and other AI developers like us face substantial legal uncertainty and compliance costs."[70]

See also

References

  1. ^ a b c d e f g h i j k l m Brown, Tom B.; Mann, Benjamin; Ryder, Nick; Subbiah, Melanie; Kaplan, Jared; Dhariwal, Prafulla; Neelakantan, Arvind; Shyam, Pranav; Sastry, Girish; Askell, Amanda; Agarwal, Sandhini; Herbert-Voss, Ariel; Krueger, Gretchen; Henighan, Tom; Child, Rewon; Ramesh, Aditya; Ziegler, Daniel M.; Wu, Jeffrey; Winter, Clemens; Hesse, Christopher; Chen, Mark; Sigler, Eric; Litwin, Mateusz; Gray, Scott; Chess, Benjamin; Clark, Jack; Berner, Christopher; McCandlish, Sam; Radford, Alec; Sutskever, Ilya; Amodei, Dario (May 28, 2020). "Language Models are Few-Shot Learners". arXiv:2005.14165 [cs.CL].
  2. ^ a b c Radford, Alec; Narasimhan, Karthik; Salimans, Tim; Sutskever, Ilya (June 11, 2018). "Improving Language Understanding by Generative Pre-Training" (PDF). p. 12. Archived (PDF) from the original on January 26, 2021. Retrieved July 31, 2020.
  3. ^ Vaswani, Ashish; Shazeer, Noam; Parmar, Niki; Uszkoreit, Jakob; Jones, Llion; Gomez, Aidan N; Kaiser, Łukasz; Polosukhin, Illia (2017). "Attention is All you Need" (PDF). Advances in Neural Information Processing Systems. 30. Curran Associates, Inc.
  4. ^ Bahdanau, Dzmitry; Cho, Kyunghyun; Bengio, Yoshua (September 1, 2014). "Neural Machine Translation by Jointly Learning to Align and Translate". arXiv:1409.0473 [cs.CL].
  5. ^ a b Hao, Karen (September 23, 2020). "OpenAI is giving Microsoft exclusive access to its GPT-3 language model". MIT Technology Review. Archived from the original on February 5, 2021. Retrieved September 25, 2020. The companies say OpenAI will continue to offer its public-facing API, which allows chosen users to send text to GPT-3 or OpenAI's other models and receive its output. Only Microsoft, however, will have access to GPT-3's underlying code, allowing it to embed, repurpose, and modify the model as it pleases.
  6. ^ a b "An understanding of AI's limitations is starting to sink in". The Economist. June 11, 2020. ISSN 0013-0613. Archived from the original on July 31, 2020. Retrieved July 31, 2020.
  7. ^ Polosukhin, Illia; Kaiser, Lukasz; Gomez, Aidan N.; Jones, Llion; Uszkoreit, Jakob; Parmar, Niki; Shazeer, Noam; Vaswani, Ashish (June 12, 2017). "Attention Is All You Need". arXiv:1706.03762 [cs.CL].
  8. ^ "Natural Language Processing". Archived from the original on August 22, 2020. Retrieved July 31, 2020.
  9. ^ "Archived copy" (PDF). Archived (PDF) from the original on February 6, 2021. Retrieved April 28, 2023.{{cite web}}: CS1 maint: archived copy as title (link)
  10. ^ Sterling, Bruce (February 13, 2020). "Web Semantics: Microsoft Project Turing introduces Turing Natural Language Generation (T-NLG)". Wired. ISSN 1059-1028. Archived from the original on November 4, 2020. Retrieved July 31, 2020.
  11. ^ Marche, Stephen (December 6, 2022). "The College Essay Is Dead". The Atlantic. Archived from the original on January 24, 2023. Retrieved December 8, 2022.
  12. ^ a b c d Sagar, Ram (June 3, 2020). "OpenAI Releases GPT-3, The Largest Model So Far". Analytics India Magazine. Archived from the original on August 4, 2020. Retrieved July 31, 2020.
  13. ^ "Language Models are Unsupervised Multitask Learners" (PDF). openai.com. Archived (PDF) from the original on December 12, 2019. Retrieved December 4, 2019. GPT-2, is a 1.5B parameter Transformer
  14. ^ Shead, Sam (July 23, 2020). "Why everyone is talking about the A.I. text generator released by an Elon Musk-backed lab". CNBC. Archived from the original on July 30, 2020. Retrieved July 31, 2020. Four preprints were released between May 28 and July 22, 2020.
  15. ^ Ray, Tiernan (June 1, 2020). "OpenAI's gigantic GPT-3 hints at the limits of language models for AI". ZDNet. Archived from the original on June 1, 2020. Retrieved July 31, 2020.
  16. ^ Li, Chuan (June 3, 2020), OpenAI's GPT-3 Language Model: A Technical Overview, archived from the original on March 27, 2023, retrieved March 27, 2023
  17. ^ Gehman, Samuel; Gururangan, Suchin; Sap, Maarten; Choi, Yejin; Smith, Noah A. (November 16–20, 2020), REALTOXICITYPROMPTS: Evaluating Neural Toxic Degeneration in Language Models, Association for Computational Linguistics, pp. 3356–3369, arXiv:2009.11462
  18. ^ a b "OpenAI API". OpenAI. June 11, 2020. Archived from the original on June 11, 2020. Retrieved July 31, 2020.
  19. ^ Coldewey, Devin (June 11, 2020). "OpenAI makes an all-purpose API for its text-based AI capabilities". TechCrunch. Archived from the original on October 27, 2021. Retrieved July 31, 2020. If you've ever wanted to try out OpenAI's vaunted machine learning toolset, it just got a lot easier. The company has released an API that lets developers call its AI tools in on "virtually any English language task."
  20. ^ Arram (July 9, 2020). "GPT-3: An AI that's eerily good at writing almost anything". Arram Sabeti. Archived from the original on July 20, 2020. Retrieved July 31, 2020.
  21. ^ "OpenAI's API Now Available with No Waitlist". OpenAI. November 18, 2021. Archived from the original on November 5, 2022. Retrieved November 5, 2022.
  22. ^ "OpenAI API". beta.openai.com. Archived from the original on December 23, 2022. Retrieved November 5, 2022.
  23. ^ "Aligning Language Models to Follow Instructions". OpenAI. January 27, 2022. Archived from the original on November 5, 2022. Retrieved November 5, 2022.
  24. ^ Thunström, Almira Osmanovic (June 30, 2022). "We Asked GPT-3 to Write an Academic Paper about Itself – Then We Tried to Get It Published". Scientific American. Archived from the original on June 30, 2022. Retrieved June 30, 2022.
  25. ^ Transformer, Gpt Generative Pretrained; Thunström, Almira Osmanovic; Steingrimsson, Steinn (June 21, 2022). "Can GPT-3 write an academic paper on itself, with minimal human input?". Archive ouverte HAL (in French). Archived from the original on June 30, 2022. Retrieved June 30, 2022.
  26. ^ Gao, Leo (May 24, 2021). "On the Sizes of OpenAI API Models". EleutherAI Blog. EleutherAI. Retrieved November 23, 2023.
  27. ^ "Model index for researchers". OpenAI. Retrieved November 23, 2023.
  28. ^ "New GPT-3 Capabilities: Edit & Insert". OpenAI. March 15, 2022. Archived from the original on January 13, 2023. Retrieved January 13, 2023.
  29. ^ a b "OpenAI API". platform.openai.com. Archived from the original on March 20, 2023. Retrieved March 15, 2023.
  30. ^ "Check out OpenAI's new text-davinci-003! Same underlying model as text-davinci-002 but more aligned. Would love to hear feedback about it! / Twitter". Archived from the original on March 15, 2023. Retrieved May 6, 2023.
  31. ^ "ChatGPT: Optimizing Language Models for Dialogue". OpenAI. November 30, 2022. Archived from the original on November 30, 2022. Retrieved January 13, 2023.
  32. ^ "OpenAI API". Archived from the original on March 17, 2023. Retrieved May 6, 2023.
  33. ^ "OpenAI API". Archived from the original on May 6, 2023. Retrieved May 6, 2023.
  34. ^ a b c tingetici (April 10, 2023). "Default (GPT-3.5) with browsing ALPHA -- NEW Model showed up just now". r/OpenAI. Archived from the original on April 27, 2023. Retrieved April 27, 2023.
  35. ^ "Introducing GPT-3.5 Series: text-davinci-002 and code-davinci-002 Models". OPEN AI. March 15, 2022. Archived from the original on March 20, 2023. Retrieved April 27, 2023.
  36. ^ a b "GPT-3.5 with Browsing (ALPHA) Now Available for GPT Plus Users". OPEN AI. April 27, 2023. Archived from the original on March 20, 2023. Retrieved April 27, 2023.
  37. ^ Gilson A, Safranek CW, Huang T, Socrates V, Chi L, Taylor RA, Chartash D (February 2023). "How Does ChatGPT Perform on the United States Medical Licensing Examination (USMLE)? The Implications of Large Language Models for Medical Education and Knowledge Assessment". JMIR Med Educ. 9: e45312. doi:10.2196/45312. PMC 9947764. PMID 36753318.
  38. ^ "OpenAI Codex". OpenAI. August 10, 2021. Archived from the original on February 3, 2023. Retrieved December 23, 2022.
  39. ^ Thompson, Clive (March 15, 2022). "How an AI Became My Code-Writing Genie". Wired. Archived from the original on December 23, 2022. Retrieved December 23, 2022.
  40. ^ "Microsoft announced its first customer product features powered by GPT-3 and @Azure". The AI Blog. May 25, 2021. Archived from the original on May 26, 2021. Retrieved May 26, 2021.
  41. ^ Vincent, James (May 25, 2021). "Microsoft has built an AI-powered autocomplete for code using GPT-3". The Verge. Archived from the original on December 23, 2022. Retrieved December 23, 2022.
  42. ^ "CodexDB - SQL Processing Powered by GPT-3". CodexDB - SQL Processing Powered by GPT-3. Archived from the original on December 7, 2022. Retrieved December 7, 2022.
  43. ^ Fagone, Jason (July 23, 2021). "The Jessica Simulation: Love and loss in the age of A.I." San Francisco Chronicle. Archived from the original on July 28, 2021. Retrieved July 29, 2021.
  44. ^ GPT-3 (September 8, 2020). "A robot wrote this entire article. Are you scared yet, human? | GPT-3". The Guardian. ISSN 0261-3077. Archived from the original on September 8, 2020. Retrieved September 15, 2020.
  45. ^ "Update: Language Models and Dragon". Latitude blog. December 8, 2021. Archived from the original on April 25, 2022. Retrieved March 22, 2022.
  46. ^ "This Mystical Book Was Co-Authored by a Disturbingly Realistic AI". www.vice.com. 2022. Archived from the original on December 23, 2022. Retrieved December 23, 2022.
  47. ^ GPT-3 (February 24, 2023). "38 Prompt Examples in 10 Different Categories | GPT-3". GiPiTi Chat. Archived from the original on April 8, 2023. Retrieved February 24, 2023.
  48. ^ "Can ChatGPT AI chatbot spot early stages of Alzheimer's? - study". The Jerusalem Post. 2022. Archived from the original on February 10, 2023. Retrieved February 10, 2023.
  49. ^ Agbavor, Felix; Liang, Hualou (December 22, 2022). "Predicting dementia from spontaneous speech using large language models". PLOS Digital Health. 1 (12): e0000168. doi:10.1371/journal.pdig.0000168. PMC 9931366. PMID 36812634. S2CID 255029590.
  50. ^ Manjoo, Farhad (July 29, 2020). "How Do You Know a Human Wrote This?". The New York Times. ISSN 0362-4331. Archived from the original on October 29, 2020. Retrieved August 4, 2020.
  51. ^ Weinberg, Justin, ed. (July 30, 2020). "Philosophers On GPT-3 (updated with replies by GPT-3)". Daily Nous. Archived from the original on October 30, 2020. Retrieved July 31, 2020.
  52. ^ Chalmers, David (July 30, 2020). Weinberg, Justin (ed.). "GPT-3 and General Intelligence". Daily Nous. Philosophers On GPT-3 (updated with replies by GPT-3). Archived from the original on August 4, 2020. Retrieved August 4, 2020.
  53. ^ Simonite, Tom (July 22, 2020). "Did a Person Write This Headline, or a Machine?". Wired. ISSN 1059-1028. Archived from the original on November 1, 2020. Retrieved July 31, 2020.
  54. ^ Claypoole, Theodore (July 30, 2020). "New AI Tool GPT-3 Ascends to New Peaks, But Proves How Far We Still Need to Travel". The National Law Review. Archived from the original on October 30, 2020. Retrieved August 4, 2020.
  55. ^ Marcus, Gary (December 1, 2018). "The deepest problem with deep learning". Medium. Archived from the original on August 1, 2019. Retrieved September 29, 2020.
  56. ^ Marcus, Gary; Davis, Ernest (August 22, 2020). "GPT-3, Bloviator: OpenAI's language generator has no idea what it's talking about". MIT Technology Review. Archived from the original on August 23, 2020. Retrieved August 23, 2020.
  57. ^ Metz, Cade (November 24, 2020). "Meet GPT-3. It Has Learned to Code (and Blog and Argue)". The New York Times. ISSN 0362-4331. Archived from the original on December 6, 2020. Retrieved November 24, 2020.
  58. ^ "Medical chatbot using OpenAI's GPT-3 told a fake patient to kill themselves". AI News. October 28, 2020. Archived from the original on January 10, 2021. Retrieved January 8, 2021.
  59. ^ Chomsky on Terence McKenna, Sam Harris, GPT3, Cryptocurrencies, Kierkegaard, Neuralink, & Hofstadter. March 24, 2021. Event occurs at 1:11:44. Archived from the original on April 29, 2021. Retrieved April 29, 2021.
  60. ^ Floridi, Luciano; Chiriatti, Massimo (November 1, 2020). "GPT‑3: Its Nature, Scope, Limits, and Consequences". Minds and Machines. 30 (4): 681–694. doi:10.1007/s11023-020-09548-1. S2CID 228954221.
  61. ^ Vincent, James (July 30, 2020). "OpenAI's latest breakthrough is astonishingly powerful, but still fighting its flaws". The Verge. Archived from the original on July 30, 2020. Retrieved November 9, 2022.
  62. ^ Olanoff, Drew (December 11, 2015). "Artificial Intelligence Nonprofit OpenAI Launches With Backing From Elon Musk And Sam Altman". Tech Crunch. Archived from the original on October 20, 2022. Retrieved May 31, 2021.
  63. ^ Hao, Karen (August 29, 2019). "OpenAI has released the largest version yet of its fake-news-spewing AI". MIT Technology Review. Archived from the original on May 9, 2021. Retrieved May 31, 2021.
  64. ^ Coldewey, Devin (March 11, 2019). "OpenAI shifts from nonprofit to 'capped-profit' to attract capital". Tech Crunch. Archived from the original on January 4, 2023. Retrieved May 31, 2021.
  65. ^ Bender, Emily M.; Gebru, Timnit; McMillan-Major, Angelina; Shmitchell, Shmargaret (March 3, 2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?. FAccT '21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. pp. 610–623. doi:10.1145/3442188.3445922.
  66. ^ Mindzak, Michael; Eaton, Sarah Elaine. "Artificial intelligence is getting better at writing, and universities should worry about plagiarism". The Conversation. Archived from the original on November 7, 2021. Retrieved November 6, 2021.
  67. ^ Rogerson, Ann M.; McCarthy, Grace (December 2017). "Using Internet based paraphrasing tools: Original work, patchwriting or facilitated plagiarism?". International Journal for Educational Integrity. 13 (1): 1–15. doi:10.1007/s40979-016-0013-y. ISSN 1833-2595. S2CID 9473217.
  68. ^ Ver Meer, Dave. "ChatGPT Statistics". NamePepper. Archived from the original on June 5, 2023. Retrieved June 21, 2023.
  69. ^ Here are a few ways GPT-3 can go wrong. TechCrunch. Archived from the original on November 26, 2021. Retrieved November 26, 2021.
  70. ^ Comment Regarding Request for Comments on Intellectual Property Protection for Artificial Intelligence Innovation (PDF). USPTO. Archived (PDF) from the original on October 16, 2021. Retrieved November 30, 2021.