Adapting Open-Source LLMs for Contract Drafting and Analyzing Multi-Role vs. Single-Role Behavior of ChatGPT for Synthetic Data Generation

  • Authors:
  • Jaykumar Kasundra, Thomson Reuters, IN (ORCID: 0009-0000-3405-1592)
  • Shreyans Dhankhar, Thomson Reuters, IN (ORCID: 0009-0007-0543-427X)

AIMLSystems '23: Proceedings of the Third International Conference on AI-ML Systems, October 2023, Article No.: 32, Pages 1–8. https://doi.org/10.1145/3639856.3639888

Published: 17 May 2024


ABSTRACT

Large-scale language models, such as ChatGPT [3] and GPT-4 [32], have demonstrated remarkable capabilities in generating human-like text for various applications. In this paper, we focus on two key aspects: (1) adapting open-source large language models (LLMs) to specific use cases such as contract drafting using instruction tuning and parameter-efficient fine-tuning, and (2) analyzing the difference in ChatGPT’s behavior with single-role prompts compared to multi-role prompts for synthetic data generation tasks. We present a method for aligning open-source LLMs to follow instructions in customized contract drafting scenarios using parameter-efficient fine-tuning on synthetic data. Furthermore, we investigate the quality of the instruction data synthetically generated by ChatGPT with single-role vs. multi-role prompts. Our findings reveal that the model performs better when given single-role prompts, highlighting the importance of strategically designing the prompting strategy to generate higher-quality data using LLMs. By combining the insights from these two aspects, we explore potential implications and opportunities for enhancing generative AI solutions in practical implementations. The contract drafting model 1 and data 2 are released.
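
To make the two aspects described in the abstract concrete, the sketches below illustrate what each step could look like in Python. They are minimal illustrations under stated assumptions, not the authors' released code.

The first sketch contrasts a single-role prompt with a multi-role prompt when asking ChatGPT to generate a synthetic contract-drafting instruction-response pair. The prompt wording, model id, and sampling settings are hypothetical placeholders, not values reported in the paper.

```python
# Sketch (assumed prompts and settings): single-role vs. multi-role prompting
# of ChatGPT for synthetic contract-drafting instruction data.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

TASK = "Write one instruction-response pair for drafting a confidentiality clause."

# Single-role: the model plays exactly one persona for the whole request.
single_role_messages = [
    {"role": "system", "content": "You are a contract-drafting assistant."},
    {"role": "user", "content": TASK},
]

# Multi-role: the same request asks the model to juggle several personas at once.
multi_role_messages = [
    {"role": "system", "content": (
        "You are simultaneously a lawyer who writes the instruction, "
        "a paralegal who drafts the clause, and a reviewer who checks it."
    )},
    {"role": "user", "content": TASK},
]

def generate(messages):
    """Return the model's completion for the given chat messages."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model id
        messages=messages,
        temperature=0.7,
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print("--- single-role ---")
    print(generate(single_role_messages))
    print("--- multi-role ---")
    print(generate(multi_role_messages))
```

The second sketch shows how an open-source LLM could be prepared for parameter-efficient instruction tuning by attaching LoRA adapters [15] with the Hugging Face transformers and peft libraries. The base model id and LoRA hyperparameters are illustrative assumptions, not the paper's configuration.

```python
# Sketch (assumed model id and hyperparameters): LoRA-based parameter-efficient
# fine-tuning setup for an open-source causal LM.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

base_model_id = "huggyllama/llama-7b"  # placeholder open-source LLM

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(base_model_id)

# LoRA trains small low-rank adapter matrices instead of all base weights.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections in LLaMA-style models
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights are trainable

# The wrapped model can then be instruction-tuned on the synthetic
# contract-drafting data with a standard transformers training loop.
```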

References

  1. 2023. bitsandbytes slow inference issue. https://github.com/TimDettmers/bitsandbytes/issues/388.
  2. 2023. GitHub - databrickslabs/dolly: Databricks’ Dolly, a large language model trained on the Databricks Machine Learning Platform. https://github.com/databrickslabs/dolly.
  3. 2023. Introducing ChatGPT. https://openai.com/blog/chatgpt.
  4. 2023. NousResearch/Nous-Hermes-13b. https://huggingface.co/NousResearch/Nous-Hermes-13b.
  5. Vinay Aggarwal, Aparna Garimella, Balaji Vasan Srinivasan, Rajiv Jain. 2021. CLAUSEREC: A Clause Recommendation Framework for AI-aided Contract Authoring. arXiv preprint arXiv:2110.15794 (2021).
  6. Stephen H. Bach, Victor Sanh, Zheng-Xin Yong, Albert Webson, Colin Raffel, Nihal V. Nayak, Abheesht Sharma, Taewoon Kim, M Saiful Bari, Thibault Fevry. 2022. Promptsource: An integrated development environment and repository for natural language prompts. arXiv preprint arXiv:2202.01279 (2022).
  7. Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon. 2022. Constitutional AI: Harmlessness from AI feedback. arXiv preprint arXiv:2212.08073 (2022).
  8. Łukasz Borchmann, Dawid Wiśniewski, Andrzej Gretkowski, Izabela Kosmala, Dawid Jurkiewicz, Łukasz Szałkiewicz, Gabriela Pałka, Karol Kaczmarek, Agnieszka Kaliska, and Filip Graliński. 2019. Contract discovery: Dataset and a few-shot semantic retrieval challenge with competitive baselines. arXiv preprint arXiv:1911.03911 (2019).
  9. Ilias Chalkidis, Manos Fergadiotis, Prodromos Malakasiotis, Nikolaos Aletras, and Ion Androutsopoulos. 2020. LEGAL-BERT: The muppets straight out of law school. arXiv preprint arXiv:2010.02559 (2020).
  10. Paul F. Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. 2017. Deep reinforcement learning from human preferences. Advances in Neural Information Processing Systems 30 (2017).
  11. Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma. 2022. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416 (2022).
  12. Leo Gao, John Schulman, and Jacob Hilton. 2023. Scaling laws for reward model overoptimization. In International Conference on Machine Learning. PMLR, 10835–10866.
  13. Demi Guo, Alexander M. Rush, and Yoon Kim. 2020. Parameter-efficient transfer learning with diff pruning. arXiv preprint arXiv:2012.07463 (2020).
  14. Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin de Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for NLP. In International Conference on Machine Learning. PMLR, 2790–2799.
  15. Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021. LoRA: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685 (2021).
  16. Sagar Joshi, Sumanth Balaji, Aparna Garimella, and Vasudeva Varma. 2022. Graph-based Keyword Planning for Legal Clause Generation from Topics. In Proceedings of the Natural Legal Language Processing Workshop 2022. Association for Computational Linguistics, Abu Dhabi, United Arab Emirates (Hybrid), 276–286. https://aclanthology.org/2022.nllp-1.26
  17. Jaehun Jung, Lianhui Qin, Sean Welleck, Faeze Brahman, Chandra Bhagavatula, Ronan Le Bras, and Yejin Choi. 2022. Maieutic prompting: Logically consistent reasoning with recursive explanations. arXiv preprint arXiv:2205.11822 (2022).
  18. Nitish Shirish Keskar, Bryan McCann, Lav R. Varshney, Caiming Xiong, and Richard Socher. 2019. CTRL: A conditional transformer language model for controllable generation. arXiv preprint arXiv:1909.05858 (2019).
  19. Nikita Kitaev, Steven Cao, and Dan Klein. 2018. Multilingual constituency parsing with self-attention and pre-training. arXiv preprint arXiv:1812.11760 (2018).
  20. Nikita Kitaev and Dan Klein. 2018. Constituency parsing with a self-attentive encoder. arXiv preprint arXiv:1805.01052 (2018).
  21. Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. Advances in Neural Information Processing Systems 35 (2022), 22199–22213.
  22. Spyretta Leivaditi, Julien Rossi, and Evangelos Kanoulas. 2020. A benchmark for lease contract review. arXiv preprint arXiv:2010.10386 (2020).
  23. Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel. 2020. Retrieval-augmented generation for knowledge-intensive NLP tasks. Advances in Neural Information Processing Systems 33 (2020), 9459–9474.
  24. Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning: Optimizing continuous prompts for generation. arXiv preprint arXiv:2101.00190 (2021).
  25. Chin-Yew Lin. 2004. ROUGE: A Package for Automatic Evaluation of Summaries. In Text Summarization Branches Out. Association for Computational Linguistics, Barcelona, Spain, 74–81. https://aclanthology.org/W04-1013
  26. Jiacheng Liu, Alisa Liu, Ximing Lu, Sean Welleck, Peter West, Ronan Le Bras, Yejin Choi, and Hannaneh Hajishirzi. 2021. Generated knowledge prompting for commonsense reasoning. arXiv preprint arXiv:2110.08387 (2021).
  27. Xiao Liu, Yanan Zheng, Zhengxiao Du, Ming Ding, Yujie Qian, Zhilin Yang, and Jie Tang. 2021. GPT understands, too. arXiv preprint arXiv:2103.10385 (2021).
  28. Jieyi Long. 2023. Large Language Model Guided Tree-of-Thought. arXiv preprint arXiv:2305.08291 (2023).
  29. Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, Shashank Gupta, Bodhisattwa Prasad Majumder, Katherine Hermann, Sean Welleck, Amir Yazdanbakhsh, and Peter Clark. 2023. Self-Refine: Iterative Refinement with Self-Feedback. arXiv preprint arXiv:2303.17651 (2023).
  30. Sewon Min, Mike Lewis, Luke Zettlemoyer, and Hannaneh Hajishirzi. 2021. MetaICL: Learning to learn in context. arXiv preprint arXiv:2110.15943 (2021).
  31. Swaroop Mishra, Daniel Khashabi, Chitta Baral, and Hannaneh Hajishirzi. 2021. Cross-task generalization via natural language crowdsourcing instructions. arXiv preprint arXiv:2104.08773 (2021).
  32. OpenAI. 2023. GPT-4 Technical Report. arXiv preprint arXiv:2303.08774 (2023).
  33. Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray. 2022. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35 (2022), 27730–27744.
  34. Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a Method for Automatic Evaluation of Machine Translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Philadelphia, Pennsylvania, USA, 311–318. https://doi.org/10.3115/1073083.1073135
  35. Baolin Peng, Chunyuan Li, Pengcheng He, Michel Galley, and Jianfeng Gao. 2023. Instruction tuning with GPT-4. arXiv preprint arXiv:2304.03277 (2023).
  36. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI blog 1, 8 (2019), 9.
  37. Victor Sanh, Albert Webson, Colin Raffel, Stephen H. Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja. 2021. Multitask prompted training enables zero-shot task generalization. arXiv preprint arXiv:2110.08207 (2021).
  38. John Schulman, Barret Zoph, Christina Kim, Jacob Hilton, Jacob Menick, Jiayi Weng, Juan Felipe Ceron Uribe, Liam Fedus, Luke Metz, Michael Pokorny. 2022. ChatGPT: Optimizing language models for dialogue. OpenAI blog (2022).
  39. Nisan Stiennon, Long Ouyang, Jeffrey Wu, Daniel Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul F. Christiano. 2020. Learning to summarize with human feedback. Advances in Neural Information Processing Systems 33 (2020), 3008–3021.
  40. Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. 2023. Stanford Alpaca: An instruction-following LLaMA model.
  41. Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar. 2023. LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 (2023).
  42. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in Neural Information Processing Systems 30 (2017).
  43. Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, and Hannaneh Hajishirzi. 2022. Self-Instruct: Aligning language model with self generated instructions. arXiv preprint arXiv:2212.10560 (2022).
  44. Yizhong Wang, Swaroop Mishra, Pegah Alipoormolabashi, Yeganeh Kordi, Amirreza Mirzaei, Anjana Arunkumar, Arjun Ashok, Arut Selvan Dhanasekaran, Atharva Naik, David Stap. 2022. Super-NaturalInstructions: Generalization via declarative instructions on 1600+ NLP tasks. arXiv preprint arXiv:2204.07705 (2022).
  45. Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V. Le. 2021. Finetuned language models are zero-shot learners. arXiv preprint arXiv:2109.01652 (2021).
  46. Tongshuang Wu, Ellen Jiang, Aaron Donsbach, Jeff Gray, Alejandra Molina, Michael Terry, and Carrie J. Cai. 2022. PromptChainer: Chaining large language model prompts through visual programming. In CHI Conference on Human Factors in Computing Systems Extended Abstracts. 1–10.
  47. Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, and Karthik Narasimhan. 2023. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:2305.10601 (2023).
  48. Elad Ben Zaken, Shauli Ravfogel, and Yoav Goldberg. 2021. BitFit: Simple parameter-efficient fine-tuning for transformer-based masked language-models. arXiv preprint arXiv:2106.10199 (2021).
  49. Haoxi Zhong, Chaojun Xiao, Cunchao Tu, Tianyang Zhang, Zhiyuan Liu, and Maosong Sun. 2020. How does NLP benefit legal system: A summary of legal artificial intelligence. arXiv preprint arXiv:2004.12158 (2020).
  50. Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Olivier Bousquet, Quoc Le, and Ed Chi. 2022. Least-to-most prompting enables complex reasoning in large language models. arXiv preprint arXiv:2205.10625 (2022).


    Index Terms

      • Computing methodologies
        • Artificial intelligence
          • Natural language processing
            • Natural language generation


      • Published in


        AIMLSystems '23: Proceedings of the Third International Conference on AI-ML Systems

        October 2023

        381 pages

        ISBN: 9798400716492

        DOI: 10.1145/3639856

        Copyright © 2023 ACM



            Publisher

            Association for Computing Machinery

            New York, NY, United States

            Publication History

            • Published: 17 May 2024


            Author Tags

            • Automated Evaluation
            • Generative AI
            • Large Language Models
            • Natural Language Generation

            Qualifiers

            • research-article
            • Research
            • Refereed limited
