Abstract: Generative AI technologies and services worldwide are currently experiencing explosive growth. While driving technological innovation and productivity advancement in the social economy, they also precipitate multiple legal risks, ethical breaches in technology, and social governance challenges. Distinct regulatory pathways have emerged internationally: the EU promotes a rigid governance system through a unified regulatory framework and centralized oversight mechanisms, though it concurrently exhibits a trend of deferred legal application; the United States adopts an advocacy-based regulatory strategy combining principled guidance with corporate self-compliance; the United Kingdom implements a non-mandatory principled framework, establishing a compromise-based governance model. Grounded in China's strategic imperative to engage in global AI competition and informed by international experience, the legal governance framework for generative AI must incorporate practical legislative imperatives, anchored in the dynamic adaptation between technological iteration and legal regulation, alongside the recalibration of developmental efficacy against security risks. This necessitates establishing tiered safety thresholds and controllability requirements within the governance architecture. Accordingly, there is an urgent need to enhance institutional provision and policy coordination, construct a multi-stakeholder long-term mechanism integrating administrative supervision, industry self-regulation, and technical governance, and formulate scenario-specific liability rules covering the entire life cycle from R&D to deployment, thereby avoiding arbitrary legislative uniformity. The ultimate objective is to forge a comprehensive governance ecosystem with trustworthiness and security as its foundation, and prudence, inclusiveness, and dynamic adaptability as its defining features.
Key Words: generative AI; risk; extraterritorial rule of law; rule of law consideration; practice architecture
CLC: D922.16; D922.294  Document Code: A  Article ID: 2096-9783(2024)04-0130-19
1 The Rise of the Problem
Various generative AI technologies and products such as Alpaca, GPT-4, PaLM-E, Wenxin Yiyan, and Security Copilot have entered social and economic life. With the explosion of ChatGPT, which brought generative AI into the public spotlight, the application of generative AI has gradually shifted from a "tool" to a "decision-maker", taking on important missions in medical care, entertainment and recreation, livelihood, finance, and other industries. However, as generative AI penetrates ever more deeply into social production and life, its ethical and legal risks have been amplified accordingly. Moreover, generative AI exhibits the feature of algorithmic black boxes, which makes it difficult to supervise. As a result, society's doubts about generative AI are sharply increasing, and there are even calls to suspend its operation and services. Though this lack of trust is legitimate, in the long run it will seriously hinder the development of generative AI, holding back the digital economy and even the real economy. Therefore, how to choose governance tools and control the limits of governance is of key significance in the international competition over artificial intelligence development and in international economic and trade competition.
Reducing the risk of AI is not only a task to be solved at the technical level but also a goal to be addressed from the institutional perspective[2]. In order to promote the healthy development and standardized application of generative AI technology, on July 10, 2023, seven of China's departments, including the Cyberspace Administration of China, jointly issued the Interim Measures for the Administration of Generative Artificial Intelligence Services (hereinafter referred to as the "Interim Measures"). The Interim Measures, which clearly put forward the principles of "attaching equal importance to development and security" and "combining the promotion of innovation with governance according to law", clarify the following: (1) implementing inclusive, prudent, classified, and hierarchical supervision of generative AI services; (2) stipulating the requirements of laws and regulations, social morality, and ethics for generative AI products or services; (3) clarifying the obligations and responsibilities that providers of generative AI services (hereinafter referred to as "providers") need to bear. In May 2024, China included the "Draft Law on Artificial Intelligence" in the Legislative Work Plan of the State Council, preparing to submit it for review by the Standing Committee of the National People's Congress. This indicates that China's legislative work in the field of artificial intelligence is steadily advancing.
Furthermore, scholars have highlighted that the inherent complexity of artificial intelligence technology, coupled with the uncertainty of its developmental trajectory and the heterogeneity of its application scenarios, poses significant challenges to the pursuit of unified legislation. Such an approach risks encountering a quartet of pitfalls: legal obsolescence, regulatory rigidity, oversight mismatch, and innovation suppression. Consequently, the advancement of dedicated AI legislation necessitates a measured and prudent approach[3].
A discernible shift in the legislative articulation concerning AI is evident within the Legislative Work Plan of the Standing Committee of the National People's Congress for 2025 and the Legislative Work Plan of the State Council for 2025, both promulgated on May 14, 2025. Notably absent is any explicit reference to an "Artificial Intelligence Law". Instead, these documents employ formulations such as "legislative projects concerning the healthy development of artificial intelligence" and "advancing legislative work for the healthy development of artificial intelligence". This terminological evolution reflects a substantive shift in China's contemporary legislative strategy for AI governance. Consequently, there is a view that the legislative process for a dedicated Artificial Intelligence Law in China has been halted. This suspension stems from several underlying factors, such as inauspicious timing for comprehensive legislation, the selection of inappropriate reference models, and concerns regarding the suitability of both the proposed legal title and its intended position within the legal hierarchy[4].
The promulgation of the Interim Measures has raised three critical topics: how to apply the rules better to coordinate development and security; how to provide a tolerant, credible, and controllable institutional environment for innovation and development on the basis of consolidated safe development; and how to provide scientific, powerful, and robust institutional support, through high-quality institutional opening, for China's AI technology and industry to overtake on the curve.
Therefore, this paper intends to explore the risks posed by generative AI, review the current governance of generative AI in China, and compare it with foreign experience. In addition, it discusses the methods by which the rule of law intervenes in and restricts generative AI, analyzes the internal goals and basic requirements of the healthy development of generative AI, and then puts forward suggestions for promoting the development of generative AI under the rule of law.
2 The Various Risks to the Development of Generative AI
The mission of jurisprudence is not to appreciate the brilliant achievements brought about by the development of science and technology, but to examine the irrational consequences that technology may bring, and to reduce the risks of scientific and technological development through the rule of law[5]. At present, generative AI, as an algorithm for training large data models, has gradually decreased in controllability and increased in autonomy, bringing both convenience and risks to human society. The risk lies in the possibility that generative AI may violate laws and regulations, producing negative social effects, and there is room for debate on whether laws and regulations should be amended. By exploring the types of risks and the paths by which they are generated, we can provide theoretical support for the practice of the rule of law and outline the framework for its implementation. Specifically, the following legal risks may arise.
2.1 Major Legal Risks
2.1.1 Legal Risk of Intellectual Property Rights
The training of generative AI requires a large amount of data, including not only content whose prior rights, such as copyrights and trademark rights, have lapsed, but also content that is still protected by intellectual property rights such as copyright. In the context of the application of artificial intelligence technology and the development of industrial innovation, especially in the training of generative artificial intelligence, if the act of automatically crawling, parsing, and learning from other people's works involves the reproduction of the original work, then the act is likely to infringe the copyright of others. In the United States, painters have already brought suit against a software company over its AI-generated works, posing both substantive and procedural challenges to the capture and use of data in generative AI training[7]. However, if AI crawling is included in the scope of fair use, it will not only infringe on the legitimate rights of the prior rights holders[8] but also crowd out their living space. It is worth considering how to design a path that balances the interests of the two.
In addition, the content generated by generative AI may be similar to other people's content or involve intellectual property infringement such as plagiarism. As for whether AI-generated content should be regarded as a work and granted copyright protection[9], and whether the liability for infringement should be borne by the service provider or the service user[10], there is currently no clear legal provision, which has caused great controversy in the academic community.
In terms of law, a work protected by copyright must be created by a human being, be original, and be a form of expression containing certain ideological content, and must not fall within the exclusions of the Copyright Law, such as laws and regulations, general numerical tables, and formulas. At present, generative AI content takes three forms: created entirely independently by AI, created with the assistance of natural persons, and generated according to prompt words entered by natural persons. Only one of the three forms described above directly involves human participation, in which case generative AI-generated content may be copyrightable. With the remaining two forms, it is problematic to define generative AI output as a "work". This is because artificial intelligence does not have independent thoughts and cannot "create" independently, let alone hold a copyright. In fact, the generated content is often produced by learning from and analyzing large amounts of existing data, and lacks originality. Therefore, there is no clear legal definition as to whether content generated by generative AI should be considered a work and granted copyright protection[11]. There is a certain amount of controversy in academic and legal circles, which needs further study and discussion.
When it comes to tort issues, the attribution of liability is an important theoretical and practical problem. According to the general principles of tort liability law, the party responsible for the actual tort is liable for the tort. Therefore, in the case of infringement by AI-generated content, if the infringement results from the service user's own operation or instructions, the service user typically assumes liability. However, in some cases, the service provider may also be liable if a technical or platform design defect leads to infringement.
Article 9 of the Interim Measures explicitly stipulates that "the provider shall bear the responsibility of an online information content producer in accordance with the law and fulfill the obligation of ensuring network information security". Furthermore, when personal information is involved, "the personal information processor shall assume the responsibilities of a personal information processor in accordance with the law and fulfill the obligation to protect personal information". These provisions significantly increase the product liability of the provider. When this requirement was solicited for comments in the draft Interim Measures, it sparked discussion from all walks of life, with some arguing that it was inappropriate to increase the responsibilities of providers and that doing so was not conducive to the development and utilization of AI technologies and products[12].
In summary, there are controversies about copyright protection and infringement liability for generative AI-generated content. Due to the complexity of this area and the fact that the relevant laws are not yet complete, further research and discussion are needed.
2.1.2 Legal Risk of Data Security and Personal Information
Generative AI requires a large amount of data to form a training database, and after it is officially operational, it continuously collects personal information about users. Artificial intelligence usually requires the user's general authorization, and if the user does not pay specific and careful attention to the authorization required by the artificial intelligence, the relevant private information will be invisibly crawled during the operation of the artificial intelligence, resulting in the risk of information leakage. Not only that: even after the user agrees to the relevant authorization, if the data and information crawled from the user exceed the scope of authorization, there will be a risk of information leakage.
In this process, AI not only collects the user's personal information but also profiles the user according to the frequency, purpose, and method of the user's use of artificial intelligence, and private information initially shared in confidential conversations with the AI may be incorporated into the training database. Whether this data entry process constitutes a breach of privacy is also a subject of debate[13]. In addition, due to the black box of artificial intelligence algorithms, it is difficult to clearly understand the internal process of artificial intelligence operation, which carries a greater risk of personal information leakage.
The privacy policy of OpenAI, the company that developed ChatGPT, indicates that when users use ChatGPT, information about user access, use, or interaction will be collected, and the relevant person in charge has said that ChatGPT will use a small sample of data from each customer to improve model performance; users who do not want their data to be used to improve performance need to send a request to OpenAI by email[14]. This means that data containing user privacy and user conversations may be collected and stored in OpenAI's data centers, and as the number of ChatGPT users skyrockets, the amount of user data it collects and stores will also be huge.
Although data security protection technology has matured in recent years, and operators providing generative AI services have promised to ensure data security, on March 20, 2023, OpenAI officially stated that the data of 1.2% of ChatGPT Plus users may have been compromised. Some users could see snippets of other people's chats, as well as information such as the last four digits of other users' credit cards, expiration dates, names, email addresses, and payment addresses[15]. It can be seen that data security problems are unavoidable, and if data security is not effectively guaranteed, it will be difficult for AI technology to gain people's trust, which will hinder the development and application of AI.
The Interim Measures stipulate, in the chapter "Technology Development and Governance", that generative AI service providers shall use data and underlying models with legal sources, and, in the chapter on "Service Specifications", that providers shall assume the responsibilities of online information content producers and fulfill network information security obligations in accordance with the law. Where personal information is involved, they must bear the responsibility of personal information processors in accordance with the law and perform obligations to protect personal information. All of this reflects the importance attached to data security issues.
2.1.3 Legal Risk of Fair Competition in the Market
The rise of generative AI raises another important question, namely its potential to strengthen the monopoly of tech giants in the market, creating a huge digital divide. The digital divide refers to the fact that "there may be a deeper hierarchical divide in the skills required to use the network effectively than just accessing it"[16], and this is particularly evident in the development of generative AI. In addition, the continued development of the digital divide may bring about a widening gap of inequality: wealth and information will quickly gather in the technology giants, and the market structure will be further solidified[17]. Although there are many generative AI applications on the market today, most of them are made by companies such as Microsoft, Google, Facebook, DeepMind, and OpenAI. Considering that Microsoft is still a major shareholder in OpenAI and Google is in control of DeepMind, the entire AI market is just a stage for tech giants. Judging from the statements of several giants so far, the main reason for their strong support for generative AI is to integrate it with their existing businesses, so as to strengthen their business advantages and market power.
After the formation of an oligopoly market, entry into the artificial intelligence market faces extremely high commercial barriers. This commercial barrier stems from the cost of operation, which is astronomical even though core technologies such as the Transformer architecture are fully disclosed. In reality, very few companies can afford to enter this high-yield market, and few can afford the high cost of training models. Once market barriers arise, the potential monopoly risks generated by the technology giants exploiting the transmission effect of the platform will also continue to emerge.
2.2 Ethical Risks in Science & Technology at Different Stages
Science and technology ethics are the values and behavioral norms that need to be followed in carrying out scientific research, technological development, and other scientific and technological activities, including the academic norms that scientists must abide by in their research, as well as the boundaries of basic principles and norms between scientific and technological achievements and real society[18]. They are an important guarantee for promoting the healthy development of scientific and technological undertakings.
2.2.1 In the Development Phase
Generative AI is a deep learning model trained on large amounts of textual data, which often includes the work of humans. As a result, it is likely to inherit the discriminatory factors contained in human works, and the content it outputs may conflict with current mainstream values, and may even contain discrimination, insults, and other objectionable content. Given the self-learning nature of the algorithm model, such problematic outputs can quickly and deeply penetrate the content generated by artificial intelligence, resulting in the wide transmission of false values. If left unchecked, this may be perceived as acquiescence in such illegal practices, undermining the credibility of the rule of law. Whether or not to impose obligations on providers to eliminate immoral or discriminatory content requires careful consideration of the current status and cost of technological development, as well as the balance between development and security.
2.2.2 In the Application Stage
The application of artificial intelligence to generate content is common, but artificial intelligence, as a non-human subject, has a thinking logic close to that of human beings, and whether the use of its generated content is in line with the ethics of science and technology is highly controversial. For example, ChatGPT can help users with a variety of tasks such as writing news stories and essays, and it has become a tool used by some people to create rumors and fake papers. Recently, the author who won first prize in the Sony Photo Contest publicly acknowledged that his photograph was generated by artificial intelligence, and indicated that this confirmed that humans do not currently have the ability to recognize and discern whether content has been generated by AI[19].
2.2.3 In the Relief Phase
As artificial intelligence gradually becomes intelligent and autonomous, there is controversy over how to attribute liability for the various infringements arising from its operation. In practice, machines assist humans in reasoning, decision-making, and action through artificial intelligence algorithms under preset goals, and the relevant entities of liability may involve AI algorithm designers, producers, distributors, users, and so on, making it difficult to define the subject of tort liability[20]. Whether AI should be a subject of responsibility[21], and whether non-legal moral responsibility should be attributed to artificial intelligence, which has no emotional capacity[22], are matters of controversy. An imbalance in the attribution of blame not only triggers a crisis of trust in society but also hinders the development of generative AI.
In addition, in the investigation and collection of evidence in AI-related litigation, the authenticity and reliability of the evidence may be doubted in the face of algorithmic black boxes, whether it is the relevant data provided by the operator or the relevant information obtained by the judicial authorities in accordance with the law. It is difficult even for professional and technical personnel to fully analyze an AI algorithm, and it cannot be ruled out that the designers and producers of AI have a strong will and motivation to cooperate with the AI to complete this series of operations. The algorithmic black box induces the risk of man-machine collusion, and the causal relationship in tort liability becomes more difficult to judge, which aggravates the dilemma of relief for AI infringement.
2.3 Major Risks in Social Governance
2.3.1 The Non-Authenticity of Generative AI
It is difficult for generative AI to guarantee that the information it outputs is true and accurate. In practice, for example, when asked about topics on which ChatGPT has not yet been trained or for which there is no relevant information in its database, generative AI will often choose to fabricate false information or transplant other content, resulting in users' misuse and further dissemination of false information.
In fact, even before the emergence of artificial intelligence, disinformation could be extremely confusing and cause social chaos[23], and the false content generated by generative AI, based on large-scale datasets, is even more difficult to identify, which not only undermines the credibility of AI but also greatly increases the cost of detecting information credibility, resulting in a serious waste of social resources.
2.3.2 The Non-Reliability of Generative AI
Generative AI faces challenges in guaranteeing the quality of all generated content. For example, for relatively complex matters and value judgments, such as court decisions, current trust in AI-generated content is still not as high as trust in rulings made by natural persons. With the continuous development of technology, the use of artificial intelligence to assist decision-making has become the general trend, but if artificial intelligence repeatedly exhibits defects and errors, society's recognition of and trust in artificial intelligence will be greatly reduced, forming a vicious circle and hindering the development of artificial intelligence. In addition, due to the widespread dissemination of AI, "it is easier for small omissions that occur in unforeseen sequences to accumulate, becoming larger and more devastating accidents"[24], and the greatest risks posed by immature AI technologies actually stem from their high permeability into and high integration with society, resulting in a knock-on effect.
At the same time, the non-reliability of generative AI will also bring risks to users. When most content can be generated by AI, it is difficult for users to distinguish which content can be directly applied and which cannot. In addition, the cost of detecting this is quite high, and it is difficult for users to afford it once their interests have been harmed.
2.3.3 The Weak Controllability of Generative AI
Generative AI represents a great breakthrough compared to previous artificial intelligence, and its autonomy has improved through deep learning. The role of human intervention is no longer that of a rule definer but rather that of a corrector of errors in the process of generative AI programming. AI developers cannot predict what results the model will produce under corpus training.
At the same time, as AI gradually moves from professional fields toward general-purpose AI, the broadening scope of its application places higher demands on knowledge reserves, and the role of human beings in it is weakened, so that it is difficult to strike a balance between the controllability and the capability of AI[25]; once it is out of control, people's blind trust cannot be counted on, and it can even cause fear in society. Therefore, how to balance the controllability of artificial intelligence and the limits of functional development is of profound significance.
2.4 Analysis of the Risk Causes
At present, part of the risk problem of generative AI belongs to the inherent risks of natural-person-generated content, whose scope of application AI has expanded beyond artificial generation, from one-to-one risk to one-to-many risk; the other part belongs to the special risks of AI itself. For risks whose main cause is natural persons, it is debatable whether AI should be held accountable, and how to grasp the limits of the requirements imposed on AI, so as to balance AI development and AI security. In the case of non-natural-person risks, how to prevent them and which methods to adopt require continuous innovation in regulatory methods and improvement of innovation capabilities.
In fact, the classification and attribution of risks (Table 1) show that the fundamental contradiction of risk lies in how to control the limits between safety supervision and support for technological development, and what concepts and methods to uphold to balance the contradiction between the two; this has become an important research direction for generative AI in the future.
3 The Extraterritorial Investigations into the Development of Normative Generative AI
At present, although the regulation and supervision of artificial intelligence outside China are still at the development stage, some governance experience has formed, with research of a certain scale on governance principles, implementation rules, and liability rules, so it can be localized and drawn upon in combination with the current state of foreign supervision. Since generative AI is mostly not discussed separately from the context of artificial intelligence, the following discussion is likewise based on the context of the overall regulation of artificial intelligence.
3.1 The European Union
In April 2021, the European Commission presented a proposal for an AI Act[27], which was officially published in the Official Journal of the European Union on July 12, 2024. The European Union's main regulatory approach to artificial intelligence is horizontal supervision, and the Artificial Intelligence Act has been called "the world's first attempt to horizontally regulate artificial intelligence systems". The Act focuses on risk management and compliance, with particular attention to threats to personal safety and fundamental rights①.
Table 1 The Types and Attribution of Risks of Generative AI
The AI Act classifies AI risks into four main levels: unacceptable, high, limited, and minimal[28]. AI systems posing unacceptable risks trigger a full or partial ban, while high-risk systems are regulated under EU product safety rules. The Act focuses on "high risk" and imposes specific requirements on establishing and maintaining risk management systems, addressing biases and issues in training data, ensuring system traceability, providing comprehensive instructions to users, requiring human supervision, and enhancing the security and robustness of the network.
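The tiered logic of the Act can be illustrated with a minimal sketch. The tier names below follow the Act, but the obligation lists are simplified paraphrases for illustration, not the statutory text:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # banned practices
    HIGH = "high"                   # conformity obligations attach
    LIMITED = "limited"             # transparency duties only
    MINIMAL = "minimal"             # no additional obligations

# Simplified illustration of how obligations attach to tiers under the Act.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited from the market"],
    RiskTier.HIGH: [
        "risk management system",
        "training data bias controls",
        "traceability and logging",
        "instructions for users",
        "human oversight",
        "robustness and cybersecurity",
    ],
    RiskTier.LIMITED: ["disclose that the user is interacting with an AI system"],
    RiskTier.MINIMAL: [],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the (simplified) obligations attached to a risk tier."""
    return OBLIGATIONS[tier]
```

The design point this captures is that regulatory intensity is a function of the tier, not of the individual system: once classified, a system inherits the whole obligation bundle of its level.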
On September 28, 2022, the European Commission published a proposed directive on liability for artificial intelligence. The European Commission considers that the existing liability legislation of EU Member States is not appropriate to regulate liability claims for damage caused by AI products and services[29]. The proposed AI Liability Directive introduces two additional measures specific to AI to complement these rules, namely reducing the burden of proof on victims through a "presumption of causation" and empowering a court to order a supplier of high-risk AI systems to disclose relevant information.
The EU has adopted a strict regulatory model, trying to coordinate issues related to AI systems through a new, separate body based on newly created law. However, on the one hand, it imposes many restrictions on the development of AI, and there is a view that the AI Act promulgated by the EU will make AI companies bear excessively high costs in Europe, even though most of the compliance requirements are technically achievable[2]. On the other hand, compared to strict regulation, the compliance assessment obligations the Act imposes on AI providers involve only internal procedures and lack external constraints; relying on providers' self-assessment to prove that high-risk AI complies with the Act may also reduce the effectiveness and enforceability of this governance tool.
3.2 The United States
In October 2022, the United States released The Blueprint for an AI Bill of Rights: Making Automated Systems Work for The American People (hereinafter referred to as the Bill of Rights Blueprint). It identifies five principles: the safety and effectiveness of the system, freedom from algorithmic discrimination, ensuring data privacy, notification of the use of AI systems and their potential impact on users, and the ability to opt out of AI systems[30]. However, the Bill of Rights Blueprint is not a mandatory and binding U.S. system or policy and does not have the force of laws and regulations; it only provides guidance in principle.
On October 30, 2023, President Biden signed the Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, aiming to ensure that the United States remains at the forefront in grasping the prospects of AI and managing its risks. As part of the U.S. government's comprehensive strategy for responsible innovation, this executive order builds on previous actions taken by the president, including the initiative that prompted 15 leading enterprises to voluntarily commit to promoting the development of safe, reliable, and trustworthy AI.
In general, the United States pursues the concept and principle of non-intervention unless necessary, and extends maximum tolerance to the development of AI in enterprises. There is no systematic generative AI governance bill in the United States; many documents, including the Regulatory Guidance (Draft), the Bill of Rights Blueprint, Biden's EO 14110, and the practical guide to the AI Risk Management Framework released by the National Institute of Standards and Technology in February 2024, have been issued in the form of guidelines and do not have mandatory effect. In terms of regulatory rules, although regulatory principles for artificial intelligence are proposed, the way to achieve them is not through the formulation of additional laws and regulations but through the non-regulation or adaptation of existing laws and regulations, which entails great uncontrollability and makes it difficult to ensure the implementation of the principles. In terms of regulatory strategy, the United States relies more on local policy regulation and corporate self-discipline to control risks, while the government chooses to focus on support and encouragement in policymaking.
3.3 The United Kingdom
On March 29, 2023, the UK government published a proposal for a new regulatory framework for AI: A Pro-Innovation Approach to AI Regulation (White Paper). In contrast to the US and EU approaches to AI regulation, the UK government has proposed a "common-sense, outcomes-oriented approach" that seeks to balance the goal of becoming an "AI superpower" by 2030 against the serious risks posed by AI through "proportionate regulation", all while building a concrete regulatory framework around soft "principles".
Specifically, the UK has adopted a principled, eclectic approach to regulation, which does not impose specific additional obligations and rights but rather supports the regulation and development of AI through explicit regulatory principles. Although this method has flexible room for adjustment and can quickly adapt to the variety of problems arising from general AI, it inevitably brings problems such as an unclear regulatory scope and ambiguous regulatory rules. For instance, the method of flexibly assessing the risk level in specific scenarios is essentially inoperable or requires extremely high observation costs.
In summary, even though different regulatory attitudes and methods exist outside China, they are in fact reasonable choices made on the basis of the respective realities. The current relaxed supervision model and strict supervision model are not absolutely opposed. With the continuous development of artificial intelligence, the scope and impact of risks continue to expand, and the two models show a trend of gradual convergence: the United States is also proposing specific regulatory bills, and the European Union is paying more attention to the development trend of artificial intelligence and making policy adjustments. In short, the rule of law for the development of artificial intelligence should not only focus on reasonably preventing the potential risks of artificial intelligence, ensuring that it develops along a reasonable, compliant, and legal path, but should also consider the actual enforceability and operability of the various systems. It should not impose overly harsh regulations on enterprises, as this may stifle innovation and hinder the development of artificial intelligence technology.
4 The Considerations of the Promotion of the Rule of Law in the Development of Generative AI
In view of the development risks of generative AI and the current state of extraterritorial governance, the law should intervene in a timely manner to promote the standardized and healthy development of generative AI under the framework of the socialist rule of law with Chinese characteristics. At the same time, it must also be clearly recognized that the rule of law is an important tool for governing the country, not an omnipotent artifact, and that it is necessary to follow rule-of-law thinking and adhere to the rules and methods of the rule of law when governing generative AI technology and applications. Moreover, the regulation and management of generative AI by laws and regulations should remain modest, respect the laws of scientific and technological development, balance development and security, and realize the safe, reliable, and controllable development of generative AI.
4.1 The Measures of the Rule of Law to Promote the Development of Generative AI
Since generative AI can exhibit self-learning characteristics to a certain extent and has high intelligence and adaptability, in view of the current problems and risks, it is necessary to observe the various values and their expressions in the development of generative AI from multiple dimensions, under the premise of following the basic principles of the rule of law and the basic laws of scientific and technological innovation, seeking a balance among them with safety as the bottom line and innovation as the main line.
4.1.1 The Mutual Promotion of Technology and Law
The development of generative AI is a scientific and technological issue, while the legal, social, and ethical risks arising from generative AI belong to the social sciences. When using the rule of law to govern generative AI, it is legal scholars who put forward requirements concerning the technical issues of generative AI in terms of social risks. It is questionable whether this requires legal scholars to fully grasp the professional knowledge of AI, and to what extent they need to understand the technical principles of generative AI.
In fact, law is a discipline that touches almost all areas of society, and if legal scholars were required to have complete professional knowledge of every industry, they could hardly propose any legal provisions that address the relevant risks. The correct view is undoubtedly that if the focus is on "legal industry issues", industry experts should have a greater say; if it is a "legal issue for the industry", then legal talent is the ultimate authority, not the industry expert[31]. Moreover, legal scholars can put forward relevant opinions on the professional issues of an industry through the embodiment of legal concepts, legal interpretation, analogy, and certain value-judgment methods[32]. The issue of the specifics of certain technologies and the accessibility of legal requirements remains unresolved, especially in the case of generative AI, where even AI experts themselves cannot predict what the AI will do.
Therefore, in the governance of artificial intelligence, the concept of interaction between technology and law should be upheld, and an interaction system between technology and law should be designed. Countermeasures at both the legal science and natural science levels should be put forward for legal risks, so as to properly address them.
4.1.2 The Balance Between Development and Security
The regulation of AI is intended to reduce various risks, thus protecting the interests and safety of operators, providers, users, and other parties, as well as the security interests of society. The ultimate goal of reducing or exempting certain responsibilities for AI at the regulatory level is to remove obstacles to the development of AI as much as possible and promote its further development. The above-mentioned disagreements on AI governance among foreign countries have essentially produced a security-centered model of heavy regulation and a development-centered model emphasizing enterprise autonomy. The ultimate goal of legal intervention in AI is to maintain a balance between development and security. Therefore, in the process of governance, both development and security should be taken into account, and indicators should be reasonably set to control the intensity of supervision.
In view of the development status of generative artificial intelligence in China, in addition to controlling use through safety principles, we should also pay attention to promoting the development of generative artificial intelligence and foster the sharing and innovation of artificial intelligence enterprises on the bottom line of ensuring safety. The ultimate goal of regulation is to maintain the healthy and orderly development of a new round of science and technology, including generative AI. Therefore, the country needs to uphold the concept of balancing development and security, create a market atmosphere and institutional innovation for science and technology for good, innovation, and competition, and strengthen institutional incentives and policy support for the prevention of various risks in the development of artificial intelligence.
4.2 The Requirements of the Rule of Law to Promote the Development of Generative AI
The so-called rule-of-law requirements refer to the substantive requirements for AI, that is, the state that generative AI should ultimately achieve through legal regulation and governance; these requirements in turn guide the formulation and implementation of regulatory measures. Drawing on the principles of extraterritorial AI governance and China's current development plan, the author intends to put forward three requirements for the rule-of-law objectives of generative AI: security, reliability, and controllability.
4.2.1 Security
At present, there is no clear and unified meaning of safety. Some scholars believe that "safety refers to the state of a rational person's body and mind, in a certain time and space, being free from external hazards"[33], while others are more absolute, holding that "safety is the absence of accidents, where an accident is an event involving unplanned and unacceptable losses"[34]. In short, in the context of safety science, safety is highly related to accidents and hazards in the outside world. The U.S. Bill of Rights Blueprint defines a "safe and effective system" as "a system that should not be intentionally or reasonably foreseeably able to endanger the safety of you or your community"; it should be designed to proactively protect you from harm caused by the unintended use or impact of an automated system[30]. The purpose is to require the concept of security to be embedded in the design of AI algorithms, to provide security guarantees, and to prevent damage from occurring. That is, it holds that security should combine the security of artificial intelligence itself with the security of the subject dimension.
Combined with the interpretation of the relevant meanings of security, in the context of China's rule-of-law goals, the security of artificial intelligence should have the following meanings. First, the security of artificial intelligence itself: the series of processes carried out by generative AI, such as data collection, desensitization, model training, and manual annotation, should be "free from threat" and "without danger", preventing data leakage, privacy breaches, and the dissemination of dangerous social information by the system itself. Second, the security of the subject dimension of artificial intelligence: providers of artificial intelligence should establish a security-concept training system, set up encryption protection measures for databases and information systems, form a system security guarantee architecture, and assume corresponding security responsibilities. The government should provide legal guarantees for the construction of data infrastructure and the subject compliance system framework of AI, clarify the rights and responsibilities of AI providers and users, and provide a security system guarantee for the development of AI.
Of course, under the above definition of security, its requirements should not be absolute; it is necessary to preserve the necessary institutional responsibility space for AI providers and to establish exemption or mitigation clauses for cases of force majeure and malicious attacks.
4.2.2 Reliability
Generative AI is still in the development stage, and the goal of AI governance is still taking shape. Reliability means that AI does not deviate from ethical and legal requirements and is able to generate decisions accurately, fairly, and safely. People are ceding some decision-making power to AI in the expectation that AI will be able to solve problems more rationally and precisely. AI should be able to overcome human irrationality, bias, and limitations, and make accurate decisions with as little bias as possible, in a way that is more in line with realistic requirements. This requires enterprises to continue to invest in R&D resources, explore and improve various AI algorithms and models, and participate in open-source communities, contributing to and benefiting from open-source AI projects. It also requires governments to formulate regulations on data privacy protection, algorithm transparency, ethical principles, and so on, to set up special institutions or departments to supervise the development and application of AI technologies, to promote technical cooperation between the public and private sectors, and to support AI-related R&D and innovation.
Current realities show that the complexity and uncertainty of AI technology lead to potential flaws and errors. Although we constantly strive for perfection in algorithms, perfection can be difficult to achieve. Therefore, we need to be grounded in reality, recognize the limitations of AI, and provide it with space for interpretability within the framework of legal objectives.
The reality is that AI may never be perfect and will always be flawed[24]. The ability to produce accurate and reliable results depends on the extent to which generative AI is developed; in terms of legal objectives, it should be given room for interpretation, and the relevant standards should be updated in a timely manner.
4.2.3 Controllability
The governance of generative AI should ensure that it is controlled by humans, rather than humans being at the mercy of AI. This requires that the right to decide whether and how AI generates content be vested in humans, whether they are providers, users, or regulators. Controllability is not a transient requirement but a procedural one. In the regulation of controllability for language models like ChatGPT, residual risk and hierarchical management are two key concepts.
Residual risk refers to the potential risks or problems that remain in the development and use of AI systems despite a range of regulatory measures. In the case of ChatGPT, although it exhibits excellent performance on many tasks, there may be errors or potentially harmful outputs in some cases. These risks may include inaccurate information, misleading answers, discriminatory remarks, and so on. Therefore, residual risks need to be controlled through regulatory means.
Hierarchical management is a management approach that ensures control over a system's behavior by dividing the use of and access to AI systems into different levels. Taking ChatGPT as an example, it can be managed hierarchically based on the user's identity and purpose. For example, a general user may only be able to use basic features, while a user with expertise and corresponding responsibilities may be granted a higher level of access. In this way, the risk of the system being misused or misled can be reduced, and the supervision and control of the system's behavior can be ensured. A minimal sketch of this idea follows.
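The sketch below illustrates tiered access under stated assumptions: the tier names, feature sets, and `User` record are hypothetical illustrations, not any vendor's actual API or policy.

```python
from dataclasses import dataclass

# Hypothetical access tiers: higher tiers unlock more capabilities.
TIER_FEATURES = {
    "general": {"chat"},
    "verified": {"chat", "file_upload"},
    "professional": {"chat", "file_upload", "api_access", "bulk_generation"},
}

@dataclass
class User:
    user_id: str
    tier: str          # "general", "verified", or "professional"
    audited: bool      # whether the user passed identity/purpose review

def can_use(user: User, feature: str) -> bool:
    """Grant a feature only if the user's tier includes it and the user
    has passed auditing for any tier above 'general'."""
    if user.tier != "general" and not user.audited:
        return False
    return feature in TIER_FEATURES.get(user.tier, set())

# Example: an unaudited professional account cannot bulk-generate,
# while a general user retains basic chat access.
assert not can_use(User("u1", "professional", audited=False), "bulk_generation")
assert can_use(User("u2", "general", audited=False), "chat")
```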
In order to effectively manage residual risks and implement hierarchical management, taking ChatGPT as an example, the following aspects should be considered.
The first is residual risk assessment. A comprehensive residual risk assessment framework should be established to conduct a detailed risk analysis of the ChatGPT system. This includes considerations such as model weaknesses, potential bias, privacy risks, and the potential for abuse. Based on the results of the assessment, measures can be developed to mitigate the residual risks.
The second is residual risk management measures. In order to reduce residual risks, controllable supervision requires a series of management measures. These may include continuously monitoring the performance and behavior of the system, fixing vulnerabilities and defects in a timely manner, improving the diversity and balance of model training data, and establishing mechanisms for users to report issues and provide feedback.
The third is hierarchical access control. The ChatGPT system should adopt a hierarchical access control mechanism to restrict access to the system based on the user's background, purpose, and usage needs. This can be achieved through authentication, user auditing, and authorization mechanisms. Professional and trained users should be granted a higher level of access, while general users should be subject to tighter controls and restrictions.
The fourth is output auditing and filtering. To ensure the accuracy and compliance of the output, controllable supervision can establish audit and filtering mechanisms. These may include real-time checking, screening, and correction of generated responses to remove misleading, offensive, or inappropriate content. At the same time, mechanisms for user feedback and complaints should be encouraged to further improve the output quality and compliance of the system. A sketch of such an output-filtering stage follows.
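As a rough illustration of the fourth point, an output-auditing stage can sit between the model and the user. The screening rules below are placeholders; a real deployment would rely on trained content-safety classifiers rather than keyword patterns, and the logging function is likewise a stand-in for a proper audit trail.

```python
import re

# Placeholder patterns standing in for a real content-safety classifier.
BLOCK_PATTERNS = [re.compile(p, re.IGNORECASE)
                  for p in (r"\bcredit card number\b", r"\bhome address\b")]

def audit_output(generated: str) -> tuple[bool, str]:
    """Screen a generated response before release.
    Returns (passed, text): blocked responses are replaced with a refusal."""
    for pattern in BLOCK_PATTERNS:
        if pattern.search(generated):
            return False, "[withheld: response failed output audit]"
    return True, generated

def log_for_review(user_id: str, text: str, passed: bool) -> None:
    """Retain a record so regulators and users can trace filtering decisions."""
    print(f"audit user={user_id} passed={passed} len={len(text)}")

passed, text = audit_output("Here is her home address: ...")
log_for_review("u1", text, passed)  # audit user=u1 passed=False len=...
```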
5 The Practical Frameworks for the Facilitation of the Development of Generative AI
At present,the \"Interim Measures\" jointly issued bythe Cyberspace Administration of China and the other seven departments are the firstdepartmental regulations in China to govern generative artificial inteligence.These measures condensethemulti-dimensionalevaluationoftheriskscaused bycurrent generativeartificial intelligenceand put forward governance requirements in terms of pre-prevention,regulation,and relief during and after the event.Basedon the above-mentionedrisksinlegal norms,social governance,and science and technology ethics,combined with the experience of extrateritorial governance,andunder the guidanceoftheconceptand goaloftheruleoflaw,thefollowing interpretations and prospectsare made to improve the legal guarantee of generativeAIandpromote the further implementation of the Interim Measures.
5.1 Strengthen the Supply of Regulatory Institutions
5.1.1 System Design with Safety as the Bottom Line
(1) Complete the regime of ex-ante review
Article 17 of the Interim Measures stipulates that "those who provide generative AI services with public opinion attributes or social mobilization capabilities shall carry out security assessments in accordance with relevant national regulations and perform algorithm filing, modification, and cancellation formalities in accordance with the Provisions on the Administration of Algorithmic Recommendations for Internet Information Services". That is, certain generative AI products need to undergo security assessment and algorithm filing before they are released. Strengthening ex-ante review can not only improve the legitimacy of using generative AI products to provide services to the public and increase their credibility in terms of safety, reliability, explainability, and accountability, but can also help better realize the inclusiveness of AI products and technologies, prevent risks more effectively, ensure safety, and promptly identify problems in operating procedures and service content, thereby improving the acceptability of generative AI products for public service.
Of course, it must also be recognized that ex-ante review will increase the compliance costs of enterprises to a certain extent, and if the scope of ex-ante review is not properly set, it may inhibit the R&D and training efficiency of generative AI products, objectively slowing the development of generative AI. In other words, if the scope of prior review is too large, it will lead to an increase in enterprises' review costs. Clarifying the scope and methods of review can, on the one hand, encourage and guide enterprises to self-examine and promote the standardized application of generative AI, and, on the other hand, reduce the possibility of risks and avoid adverse effects.
(2) Detail the content of disclosure information
Paragraph 1 of Article 19 of the Interim Measures stipulates: "The relevant competent authorities shall supervise and inspect generative AI services in accordance with their duties, and the providers shall cooperate in accordance with the law, explain the source, scale, type, labeling rules, algorithm mechanism, etc. of the training data as required, and provide necessary technical and data support and assistance". In fact, this provision puts forward new requirements for the transparency of generative AI algorithms, so that the protection of privacy and personal information is no longer limited to passive, after-the-fact remedies. With the chain of protection moving forward, more active and effective standardization of the AI training process will help enhance user trust and enable the better development of generative AI products.
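Paragraph 1 of Article 19 enumerates the items a provider must be able to explain. A minimal sketch of a disclosure record covering those items follows; the field names and example values are illustrative assumptions, not a schema prescribed by the regulator.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class TrainingDataDisclosure:
    # Fields track the items named in Article 19 of the Interim Measures;
    # the exact schema a regulator would accept is an assumption here.
    source: str               # provenance of the training data
    scale: str                # e.g. an order-of-magnitude corpus size
    data_types: list[str]     # categories of material in the corpus
    labeling_rules: str       # summary of annotation guidelines
    algorithm_mechanism: str  # high-level model/training description

record = TrainingDataDisclosure(
    source="licensed corpora and publicly available web pages",
    scale="illustrative: 1.2 TB of deduplicated text",
    data_types=["web text", "encyclopedic articles"],
    labeling_rules="human annotation per internal guideline v3",
    algorithm_mechanism="Transformer-based autoregressive language model",
)
# Serialized form that could be handed over during supervision and inspection.
print(json.dumps(asdict(record), indent=2))
```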
It is important to note that the pursuit of absolute transparency is not desirable for the development of generative AI and is not in line with the state's basic position of supporting and encouraging innovation. Therefore, building on the Administrative Measures for Generative AI Services (Draft for Comments) (hereinafter referred to as the "Consultation Paper"), the Interim Measures add confidentiality obligations, binding the institutions and personnel involved in the security assessment, supervision, and inspection of generative AI services with respect to state secrets, trade secrets, personal privacy, and personal information, balancing the tension between the protection and the supervision of AI innovation.
In practice, the extent and limits of disclosure should be further clarified to avoid unreasonable requirements that force the disclosure of key information about generative AI algorithms, which would expose enterprises' technical secrets and cause irreparable losses to their innovation and development. In addition, the disclosure of copyrighted data is also closely related to data-scraping infringement. At present, the act of crawling copyrighted works is quite controversial. Some scholars believe that this kind of crawling should be regarded as falling within the applicable scope of the fair use rule and should be given the status of unconditional crawling[35]. Other scholars believe that the interests of copyright holders and AI service providers can be balanced through statutory licensing and collective management systems[36]. The majority of scholars still hope to grant an exemption allowing the development of artificial intelligence to crawl others' copyrighted data, but it remains necessary to pay attention to protecting the legitimate interests of copyright holders. The path of protection can be balanced by refining the disclosure obligation and allowing copyright owners to claim against crawling or to require removal.
(3) Concretize the obligation of information accuracy
Although Article 4 of the earlier Draft for Comments stipulated that the content generated by generative AI should be true and accurate, from the perspective of current mainstream technology, generative AI cannot distinguish the authenticity of content the way humans do, so making generated content true and accurate became an obligation of the provider, meaning that domestic generative AI providers would have had to meet the obligation through manual review. This would affect the efficiency of generative AI operations and content generation, greatly degrading the consumer user experience, and would also greatly increase the burden on businesses, which would spend a great deal of human and technical resources on reviewing information.
With this in mind, in the officially released Interim Measures, the obligation to ensure accurate information was revised to "take effective measures to improve the transparency of generative AI services and improve the accuracy and reliability of generated content based on the characteristics of the type of service".
(4) Strengthen the guarantee of data security
In thechapter on \"Technology Development and Governance\",the Interim Measures propose the establishmentof a publictrainingdata resource platform,andat thesame time,stipulate thatgenerative AIservice providers should ensurethelegitimacyofdata sources.Data securityistheunderlyingrequirementof digital technology,including generativeAI technology,especiallpublicdata with richdimensions,wide-use scenarios,and manyusersubjects.Preventingdatarisks is key to ensuringthecoexistenceof generativeAIdevelopmentandsecurity.Itis necessary tocombine thecharacteristicsandfuctions ofthedatarequiredbytheunderlying technologyofgenerativeAItoestablishand improve thedata clasificationand hierarchical protection system,suchas theclassification and managementof data in the training database.
First, data can be classified based on the data subject, such as personal data, enterprise data, and government data. Second, data can be classified according to the degree of processing, including raw data, processed data, and derived data. In addition, the rights attributes of data can also be considered, such as personal privacy data, trade secret data, and public data. These dimensions can be integrated into an effective scheme of data classification and management.
On the basis of data classification and grading, data protection standards and sharing mechanisms should be established that match the data type and security level. This means that different types of data and different security levels should be protected accordingly. At the same time, in order to promote the sharing and rational use of data, corresponding data-sharing mechanisms should be developed to ensure that data can be shared legally and effectively on the premise of meeting privacy and security needs. A minimal sketch of such matching follows.
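The sketch maps the classification dimensions named above (data subject and degree of processing) to protection levels and a sharing decision. The category labels and protection levels are illustrative assumptions, not a codified standard.

```python
# Classification dimensions from the text: data subject and processing degree.
# The protection levels attached to each pair are illustrative assumptions.
PROTECTION_LEVEL = {
    ("personal", "raw"): "strict",      # e.g. encryption at rest, consent audit
    ("personal", "derived"): "high",    # de-identification before training use
    ("enterprise", "raw"): "high",      # trade-secret handling
    ("government", "raw"): "strict",
    ("public", "processed"): "basic",   # open sharing permitted
}

def required_protection(subject: str, degree: str) -> str:
    """Look up the protection standard for a (subject, processing-degree) pair;
    default to the strictest level when a combination is unclassified."""
    return PROTECTION_LEVEL.get((subject, degree), "strict")

def may_share(subject: str, degree: str) -> bool:
    """In this sketch, sharing is allowed only at the 'basic' protection level."""
    return required_protection(subject, degree) == "basic"

assert may_share("public", "processed")
assert not may_share("personal", "raw")
```

The design choice worth noting is the default: an unclassified combination falls to the strictest level, so classification gaps fail safe rather than open.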
In addition, generative AI also involves the cross-border flow of data, and reasonable cross-border data security law enforcement rules should be formulated with international standards and practices in mind. Convergence with the rules of other countries and regions should be strengthened to promote cross-border law enforcement cooperation on data security. By establishing a cross-border data security law enforcement cooperation mechanism, international information sharing and collaboration can be strengthened to jointly address cross-border data security challenges[36].
In summary, to ensure data security, data subjects, degrees of data processing, data rights attributes, and so on should be considered in the data classification and hierarchical protection system. Data protection standards and sharing mechanisms should be established that match the data type and security level, and reasonable cross-border data security law enforcement rules should be formulated to strengthen international cooperation and promote the sustainable development and application of digital technology.
5.1.2 Policy Support Oriented to Development
(1) Overall policy support
Objectively speaking, there is a lag in the development of generative AI technology in China[37]. Although China has put forward the slogan of "leading artificial intelligence", the corresponding hardware conditions and soft support have not been put in place in time. Although the explosion of generative AI in this round is due to the updating of training architectures and models, it is essentially the improvement of computing power that has driven the improvement and development of related technologies. It could even be argued that the risks of generative AI can be greatly reduced given sufficient computing power, because the amount of computation and training it can carry, as well as the associated human standards, can then be developed at scale.
In fact, only by independently developing artificial intelligence, mastering core technologies, and keeping its artificial intelligence technology ahead of the world's can a country's artificial intelligence governance truly develop independently. The independent development of generative AI will inevitably involve long-term, wide-ranging, and in-depth scientific and technological innovation, an arduous task that cannot be accomplished by any individual or organization alone[38]. For this reason, with the safety line as the bottom, development above that line should be promoted, and policy incentives should be provided for the improvement of generative AI computing power, data, and other technologies. At present, the Interim Measures mainly follow the principles of attaching equal importance to development and security and of promoting innovation while governing according to law, making provisions that encourage the R&D and application innovation of generative AI and set requirements for generative AI technology itself. In terms of supporting application and promotion, industry-university-research collaboration, independent innovation, international cooperation, infrastructure, and the construction of public training data resource platforms, the state's support and encouragement measures for generative AI are refined, fully reflecting the state's support for the artificial intelligence industry. The openness, sharing, and improvement of data, algorithms, and models should be further taken as the starting point to build a more solid technical and resource foundation for the development of generative AI enterprises, give full play to the role of a service-oriented government, and promote sharing and cooperation among enterprises.
(2) Pilot regulatory sandbox
The regulatory sandbox is a pilot model of regulatory policy first applied in the field of financial supervision. It refers specifically to arrangements for promoting regional financial innovation and fintech development in which financial regulatory authorities allow certain licensed financial institutions or start-up technology enterprises to test new financial products, models, or business processes within a defined time and limited scope, while lowering the entry threshold for the test projects or relaxing regulatory restrictions[39]. The current regulatory boundaries of generative AI are similarly blurred, and many of its risks have not yet been clearly determined; this bears certain similarities to the supervision of financial development, so the sandbox model offers useful reference.
On the one hand, through the regulatory sandbox, law enforcement agencies can pilot relevant regulatory measures in advance, provide early warning of possible risks, and fully address the problem of information asymmetry. Concentrating regulatory resources to communicate fully with the enterprises in the sandbox improves the quality of supervision and avoids the phenomenon of "whatever is regulated dies", so that verified and effective regulatory plans can be promoted and applied when the time is ripe. On the other hand, enterprises in the regulatory sandbox can communicate fully with regulators, provide feedback on the current state of technical practice, and participate in formulating the regulatory boundaries for generative AI, so that the regulatory plan aligns with the enterprises' development goals.
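As a rough illustration of how a sandbox pilot might be parameterized, the following sketch caps the scope, duration, and incident-reporting window of an admitted test. All names and thresholds are invented for illustration; nothing here reflects the Interim Measures or any actual sandbox scheme.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class SandboxTest:
    """One admitted pilot: scope-limited, time-limited, closely monitored."""
    provider: str
    service: str
    start: date
    max_duration_days: int      # test window before mandatory review
    max_users: int              # limited scope: capped user base
    incident_report_hours: int  # deadline for reporting risk events

    def review_due(self) -> date:
        """Date by which regulators must evaluate the pilot's results."""
        return self.start + timedelta(days=self.max_duration_days)

# Hypothetical admission: a small-scale generative AI service trial.
pilot = SandboxTest(
    provider="ExampleAI Co.",          # placeholder name
    service="domain-limited chatbot",
    start=date(2024, 7, 1),
    max_duration_days=180,
    max_users=10_000,
    incident_report_hours=24,
)
print(pilot.review_due())  # 2024-12-28
```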
5.2 Establish a Multi-Faceted and Long-Term Regulatory Mechanism
The life of a system lies in its implementation. The efficient implementation of the regulatory system and its rules needs to be strengthened, and implementing the generative AI regulatory system requires establishing a diversified, long-term regulatory mechanism.
5.2.1 Replenish Provisions to Verify Rectification and Optimization Methods
The Consultation Paper stipulated that, for generated content found or reported to be non-compliant, in addition to measures such as content filtering, the provider shall, within three months, prevent it from being regenerated through model optimization and training. The provision addressed a key issue of generative AI and enriched and improved the handling of generative AI violations. In the officially promulgated "Interim Measures", however, this article was deleted because it lacked operability.
In fact, if legal provisions or practice departments can supplement the verification of the ways and means of rectification and optimization, the process for handling generative AI violations will be improved. Verification of the results of rectification and optimization should be implemented in technical practice, supplemented by regulatory guarantees. It is necessary to set a basic threshold for the use of the technology, coupled with regular and irregular monitoring, to ensure the safety and credibility of the rectification and optimization results. After optimization and rectification, manual testing, simulation, and a regulatory transition period can be used to check whether non-compliant generated content still appears.
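A hedged sketch of what verifying rectification results could look like in technical practice: replay the prompts that previously elicited non-compliant output against the optimized model and confirm that none recurs. `generate` and `is_compliant` are stand-ins for a provider's model interface and content-review check, not real APIs.

```python
from typing import Callable

def verify_rectification(
    reported_prompts: list[str],
    generate: Callable[[str], str],       # optimized model under test
    is_compliant: Callable[[str], bool],  # content-review check (stand-in)
    retries_per_prompt: int = 5,          # sample repeatedly: output is stochastic
) -> list[str]:
    """Return prompts that still yield non-compliant output after rectification."""
    still_failing = []
    for prompt in reported_prompts:
        for _ in range(retries_per_prompt):
            if not is_compliant(generate(prompt)):
                still_failing.append(prompt)
                break  # one recurrence is enough to fail this prompt
    return still_failing

# During a regulatory transition period, an empty result would support
# signing off on the rectification; any hit triggers further optimization.
failures = verify_rectification(
    ["previously reported prompt"],
    generate=lambda p: "safe placeholder output",
    is_compliant=lambda text: "forbidden" not in text,
)
print(failures)  # [] -> rectification verified on this sample
```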
5.2.2 Rationalize the Allocation of Generative AI Regulatory Costs
Because generative AI varies widely in type and scale while the government's regulatory resources are limited, the allocation of regulatory resources should be optimized, and the regulatory authorities should set different regulatory costs according to the scale and business activities of different generative AI service operators.
On a cost-effectiveness basis, the regulator should establish the criteria under which costs are incurred and determine the costs it must bear in carrying out its mandate. At the same time, fee standards should be reviewed and updated regularly to improve the efficiency and transparency of supervision and to ensure that fees are reasonable and fair. According to the characteristics of the supervised entities and the different supervisory tasks, supervised entities should be divided into categories, with different regulatory fee standards formulated for each category. In addition, market competition factors should be introduced so that generative AI service providers face a degree of market-oriented competition, reducing costs through price comparison. The regulatory process itself should also be optimized, adopting more efficient and convenient management methods and technical tools to improve regulatory efficiency and reduce costs.
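To illustrate category-based fee standards, a minimal sketch follows: operators are binned by scale and business activity, and each bin carries its own supervision fee. The categories and amounts are invented solely for illustration.

```python
def regulatory_fee(monthly_active_users: int, high_risk_use: bool) -> int:
    """Hypothetical annual supervision fee (in yuan) by operator category."""
    if monthly_active_users < 100_000:
        base = 10_000       # small operators: light-touch tier
    elif monthly_active_users < 10_000_000:
        base = 100_000      # mid-sized operators
    else:
        base = 1_000_000    # large platforms bear proportionally more
    # Higher-risk business activities warrant more intensive (costlier) review.
    return base * 2 if high_risk_use else base

print(regulatory_fee(50_000, high_risk_use=False))     # 10000
print(regulatory_fee(20_000_000, high_risk_use=True))  # 2000000
```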
The implementation of this cost-allocation idea rests on a clear premise: the risks of AI systems must be fully understood and recognized. To this end, it is necessary to improve the filing and review system for relevant information technology and data resources and to strengthen the government's technical supervision so as to improve regulatory capacity and efficiency.
5.2.3 Pay Attention to the Digitization of Rule-of-Law Supervision
In the next step of regulatory capacity building, attention should be paid to further integrating rule-of-law supervision with digital technology. Specifically, policymakers should shift their focus from regulating only the outputs of generative AI algorithms to regulating the code and the algorithmic process itself. That is, legal rules are converted into code, and code is used to regulate code: the "legal technicalization" of code, and the "technical legalization" of industry rules that takes "code is law" seriously.
In the technologization of law, legal rules are transformed into machine-readable forms so that computer systems can understand and enforce them. Technical means reduce the subjectivity and uncertainty of human interpretation and application of legal rules and improve the predictability and consistency of the law. Legal rules can be more easily communicated, understood, and applied, reducing the potential for human error and disputes. Such an approach could strengthen industry regulation and self-regulation of generative AI, providing greater transparency and credibility to its applications. Therefore, using automation technology to counter the risks generated by automation should become the mainstream of supervision in the digital era, and it is accordingly necessary to further strengthen the construction of digital government and improve the government's digital supervision capacity.
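A minimal illustration of transforming a legal rule into machine-readable form: a single obligation, here a schematic rendering of a duty to label AI-generated content, expressed as an executable check rather than prose. The encoding is illustrative and is not an authoritative restatement of any provision.

```python
from dataclasses import dataclass

@dataclass
class GeneratedItem:
    """Minimal record of one piece of generative AI output."""
    content: str
    is_ai_generated: bool
    carries_label: bool  # e.g. a visible "AI-generated" marker

def rule_labeling(item: GeneratedItem) -> bool:
    """Machine-readable form of: 'AI-generated content must be labeled'."""
    return (not item.is_ai_generated) or item.carries_label

# Code enforcing code: the check runs automatically at publication time,
# replacing case-by-case human interpretation with a deterministic test.
item = GeneratedItem("synthetic news image", is_ai_generated=True, carries_label=False)
assert not rule_labeling(item)  # violation detected before release
```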
5.2.4 Give Full Play to the Regulatory Role of Enterprises and Users
Enterprises are more motivated to implement safety supervision. For example, discriminatory language, hate speech, and insulting speech are known by the technical term "toxicity". In fact, people in scientific and technological circles are more concerned about this issue than legal professionals: once large language model products are shown to exhibit such toxicity, frequently generating content with insulting or discriminatory tendencies, they will be boycotted by the public, directly affecting commercial interests[40]. As the manufacturers and providers of generative AI, enterprises are also better placed to verify, at the institutional level, whether their products carry legal and social risks. Therefore, at the level of enterprise system implementation, enterprises should improve the construction of self-examination systems and increase the intensity of self-examination.
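As a sketch of enterprise self-examination for toxicity, the snippet below gates candidate output behind a toxicity score. `score_toxicity` is a placeholder for whatever classifier a provider actually deploys; the keyword check and the release threshold are illustration only.

```python
def score_toxicity(text: str) -> float:
    """Placeholder scorer in [0, 1]; a real system would call a trained
    toxicity classifier here (this keyword check is illustration only)."""
    toxic_markers = ("insult", "slur", "hate")
    hits = sum(marker in text.lower() for marker in toxic_markers)
    return min(1.0, hits / len(toxic_markers))

TOXICITY_THRESHOLD = 0.3  # illustrative release threshold

def self_examine(candidate_output: str) -> str:
    """Block or release output based on its toxicity score."""
    if score_toxicity(candidate_output) >= TOXICITY_THRESHOLD:
        # Withheld items feed the provider's self-examination records.
        return "[withheld pending human review]"
    return candidate_output

print(self_examine("a neutral, helpful answer"))
print(self_examine("an insult-laden reply"))  # withheld
```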
On the one hand, it is necessary to strengthen users' awareness of self-protection, expand reporting and review channels for user supervision, and let users know and understand the generative process of generative AI, laying the policy and institutional foundations for strengthening users' trust. On the other hand, user interactions also provide important training material for generative AI. Requirements concerning users' moral and ethical risks should therefore be strengthened, the ethical rules and requirements for product use should be made clear to users, and the channels through which artificial intelligence can acquire illegal information should be blocked at the source.
5.3 Clarify and Refine the Responsibility System
Paragraph 1 of Article 9 of the Interim Measures provides detailed provisions on the responsible entity: "The provider shall bear the responsibility of the producer of network information content in accordance with the law and perform the obligation of network information security." Where personal information is involved, the provider must bear the responsibility of a personal information processor in accordance with the law and perform personal information protection obligations. This provision is, in effect, a designation of the producer of generated content. If the provider of generative AI products is the producer of the generated content, the producer is responsible for the entire generation process, including the process of generating the content, the authenticity of the specific data and information, and the appropriateness of the algorithm's use. In such cases, the provider should, of course, be held responsible. However, if the user of the product deliberately induces the production of illegal or infringing information, the user should be regarded as the content producer and bear legal responsibility for the relevant infringing and illegal acts.
In view of the similarity between the current situation of generative AI and that of Internet information service providers, the scope of the provider's liability should be reasonably set with reference to the "safe harbor" principle, and the grounds for the provider's exemption should be clarified in light of the characteristics of generative AI. Specifically, if the provider fulfills the corresponding duty of care, then, given its contribution to innovation and development, it is not appropriate to impose strict liability; doing otherwise would hinder the development and future commercialization of the technology[10].
How to determine whether the provider has fulfilled its duty of care should be distinguished according to the nature of the content produced. First, despite the technical difficulties, the provider has the ability to control whether the content produced complies with the four basic principles, whether it is suspected of racial discrimination, and whether it spreads or promotes cults. This is not only a reality but should also be an expectation[41]. Therefore, when such content violates the bottom line, the provider should be directly responsible for it, and there should be no ground for exemption from liability. Second, when general infringing content is generated, the "safe harbor" principle should be the standard approach. Since generative AI is even more autonomous than traditional Internet information service providers, and traditional information service providers can still apply the "safe harbor" principle, generative AI providers should also be able to apply it. That is, they should bear tort liability only where they fail to delete relevant content, or to notify users to delete it, in a timely manner.
Finally, since generative AI generates content dynamically rather than statically, the form of responsibility should be to remove the generated content or to undertake that the same content will not appear again[42]. This is currently controversial and should be assessed case by case. For high-risk generated content, the provider should be required to commit that neither "the same" nor "similar" content will appear again, which is not only within its competence but should also be within its responsibility. For non-high-risk content, the "safe harbor" rule should be applied first, after the specific meaning of the "notice-and-takedown" rule in AI scenarios has been clarified and provided that the "safe harbor" rule can be applied.
It can be argued that "notice-and-takedown" under the "safe harbor" rule should be understood as the provider's removal of existing content and its notification to the user to delete the generated content. If the user believes there is no infringement, or if the provider so determines on its own initiative, the relevant content can be restored; at this stage it should not be construed as a commitment. However, if the generated content is finally determined to be infringing, the provider that deleted the content, notified the user to delete it, and did not restore it on its own shall not be liable, while the user who refuses to delete it, or the provider that restored it on its own, shall be liable. In such cases, the manner in which the provider assumes responsibility should reflect the efforts the provider has made, the current state of technological development, and cost considerations, taking the form of a commitment not to repeat "the same" content.
On the one hand, this is because the content generated by generative AI is not publicly available, does not remain on a public platform, and is sometimes treated as private information between the provider and the user. On the other hand, a "similar" commitment is too broad and vague in scope, and at the current stage of generative AI development there is no need to bear such an onerous liability for general infringement. Of course, as the technology develops, if the cost of verification can be reduced to a reasonable level, this part of the responsibility should gradually shift from a "same" to a "similar" commitment. In addition, whether the operating company needs to bear corresponding supplementary liability should be determined by comprehensively considering the scope of the infringement, the degree of damage, and the platform's ability to prevent the expansion of losses.
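The liability allocation just described can be read as a small decision procedure, sketched below under stated assumptions (restoration pending final determination is permitted, and the safe harbor covers timely deletion plus notice). This encodes one reading of the article's proposal, not settled doctrine.

```python
from enum import Enum, auto

class Actor(Enum):
    PROVIDER = auto()
    USER = auto()
    NOBODY = auto()

def liable_party(
    provider_deleted: bool,       # provider removed content and notified user
    provider_restored: bool,      # provider restored it on its own initiative
    user_refused_deletion: bool,  # user kept the content after notice
    finally_infringing: bool,     # outcome of the final determination
) -> Actor:
    """Allocate tort liability under the notice-and-takedown reading above."""
    if not finally_infringing:
        return Actor.NOBODY  # restoration pending determination is permitted
    if provider_restored:
        return Actor.PROVIDER  # provider reinstated infringing content
    if user_refused_deletion:
        return Actor.USER      # user kept infringing content after notice
    if provider_deleted:
        return Actor.NOBODY    # safe harbor: timely deletion and notice
    return Actor.PROVIDER      # failure to act in a timely manner

print(liable_party(True, False, False, True))  # Actor.NOBODY
print(liable_party(True, True, False, True))   # Actor.PROVIDER
```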
6 Conclusion
The innovation and development of generative AI technologies and products are on the rise. The legislative response should be gradual, proceeding on the premise of clear information and sufficient evidence so as to avoid unduly hindering the development of an emerging technology, while still regulating in a timely manner to prevent large-scale social risks and irreversible damage. Balancing these two demands is an important mission for legal researchers. It is therefore necessary to summarize the current risks fully, examine the existing regulatory situation, uphold a reasonable rule-of-law concept and goal, and respond to and optimize the legal supervision of generative AI by strengthening institutional supply, establishing a long-term supervision mechanism, and improving the responsibility system, so as to effectively balance the safe application of AI against innovative development. In particular, in areas of generative AI technology and product development where there is still considerable room for growth and the technology is still maturing, a certain amount of flexibility remains at the legal and policy level, and the setting of the various obligations needs to be considered scientifically and carefully.
References:
[1] ZHU X. Italy announces ban on ChatGPT[EB/OL]. China Economic Net, (2023-04-03)[2025-04-28]. http://ntl.ce.cqss/202304/03/t20230403_38476962.shtml.
[2] ZHANG X. From algorithm crisis to algorithm trust: Multiple schemes and localization paths of algorithm governance[J]. Journal of East China University of Political Science and Law, 2019(6): 98-112.
[3] FU X. Uniform legislation on artificial intelligence should be deferred[J]. Oriental Law, 2025(3): 45-58.
[4] SHOU B. On the termination of the legislative process of China's artificial intelligence law[EB/OL]. Artificial Intelligence and Cyberspace Governance, (2025-05-16)[2025-06-17]. https://mp.weixin.qq.com/s/btT6t3VaXfurEtvZ1_dZHQ.
[5] HAN D. The constitutional boundaries of contemporary science and technology development[J]. Research on the Modernization of the Rule of Law, 2018(5): 33-47.
[6] Andersen v. Stability AI Ltd[Z]. United States District Court, Northern District of California, 2023-01-13.
[7] LIN X. The reshaping of the copyright fair use system in the era of artificial intelligence[J]. Legal Research, 2021(6): 1-123.
[8] MA Z, XIAO Y. The infringement dilemma and way out of artificial intelligence learning and creation[J]. Wuling Academic Journal, 2019(5): 67-79.
[9] YANG L. Exploration of the copyright issues of artificial intelligence generated materials[J]. Modern Legal Science, 1(4): 101-115.
[10] ZHOU X. Challenges and countermeasures of artificial intelligence to the traditional civil liability system[J]. Rule of Law Forum, 2021(3): 88-102.
[11] PEI C, WU C. The copyright of AI-generated content is not clearly defined[N]. Science and Technology Daily, 2023-06-19(2).
[12] CHEN B. a) Building a scientific and prudent rule of law framework for the high-quality development of AIGC[N]. China Business News, 2023-04-19(A11). b) Facing the crisis of trust in artificial intelligence and accelerating the development of trusted AIGC[N]. China Business News, 2023-04-25(A11).
[13] ZHENG Z. Privacy protection in the age of artificial intelligence[J]. Legal Science (Journal of Northwest University of Political Science and Law), 2019(2): 45-58.
[14] Product safety standards[EB/OL]. (2024-10-04)[2025-06-17]. https://openai.com/safety-standards.
[15] March 20 ChatGPT outage: Here's what happened[EB/OL]. (2023-03-24)[2025-06-17]. https://openai.com/blog/march20-chatgpt-outage.
[16] HINDMAN M. The myth of digital democracy[M]. Princeton, NJ: Princeton University Press, 2008.
[17] MA C. The social risks of artificial intelligence and its legal regulation[J]. Legal Science (Journal of Northwest University of Political Science and Law), 2018(6): 70-85.
[18] FAN C. Theory and practice of ethical governance of science and technology[J]. Science and Society, 2021(4): 25-36.
[19] GU H. AI-generated image wins Sony world photography award[N]. Youth Reference, 2023-04-28(5).
[20] ZHAO Z, XU F, GAO F, et al. Some understandings on the ethical risks of artificial intelligence[J]. China Soft Science, 2021(6): 88-102.
[21] FENG J. Jurisprudential reflection on the legal subject status of the artificial intelligence body[J]. Oriental Jurisprudence, 2019(4): 110-125.
[22] YU X, DUAN W. The ethical construction of artificial intelligence[J]. Theoretical Exploration, 2019(6): 30-42.
[23] Ministry of Civil Affairs of the People's Republic of China. Statement on cautioning against illegal activities involving the forgery of Ministry of Civil Affairs documents and other violations[EB/OL]. (2023-03-15)[2023-04-28]. https://www.mca.gov.cn/n152/n164/c36672/content.html.
[24] BRYNJOLFSSON E, MCAFEE A. The second machine revolution: How digital technology will change our economy and society[M]. Translated by JIANG Y J. [S.l.]: CITIC Press, 2016: 340.
[25] YAMPOLSKIY R V, BARTEN O. ChatGPT and other language models may pose existential risks[N]. China Social Science News, 2023-03-06(8).
[26] YUAN K. Legal regulation of trusted algorithms[J]. Oriental Jurisprudence, 2021(3): 90-105.
[27] ZENG X, LIANG Z, ZHANG H. The regulatory path of artificial intelligence in the European Union and its enlightenment to China: Taking the Artificial Intelligence Act as the object of analysis[J]. E-Government, 2022(9): 12-26.
[28] FANG X, WEI Y, ZHANG Y, et al. Overview of the EU Artificial Intelligence Act[J]. Computer Times, 2022(5): 35-49.
[29] European Commission. Proposal for a regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act)[EB/OL]. (2022-12-06)[2025-06-17]. https://data.consilium.europa.eu/doc/document/ST-15698-2022-INIT/EN/pdf.
[30] The blueprint for an AI bill of rights: Making automated systems work for the American people[EB/OL]. (2022-10-04)[2025-06-17]. https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf.
[31] CHEN J. Legal attitude in the face of genetically modified issues: How should legal persons think about scientific issues[J]. Law Science, 2015(9): 75-90.
[32] CHEN J. The doctrinalization of departmental law and its limits[J]. China Law Review, 2018(3): 112-128.
[33] WU C, YANG M, WANG B. The scientific definition of security and its implications, extensions, and inferences[J]. Journal of Zhengzhou University (Engineering Edition), 2018(3): 55-70.
[34] LEVESON N. A new accident model for engineering safer systems[J]. Safety Science, 2004, 42(4): 237-270.
[35] JIAO H. Copyright risks and mitigation paths for data acquisition and utilization in artificial intelligence creation[J]. Contemporary Legal Science, 2022(4): 100-115.
[36] CHEN B, MA X. The governance dilemma and rule of law response to cross-border data flow under the system concept[J]. Journal of Anhui University (Philosophy and Society Edition), 2023(2): 25-40.
[37] CHEN Y. Beyond ChatGPT: Opportunities, risks and challenges of generative AI[J]. Journal of Shandong University (Philosophy and Social Science Edition), 2023(3): 88-103.
[38] PU Q, XIANG W. Generative artificial intelligence: The transformative impact, risks, challenges and coping strategies of ChatGPT[J]. Journal of Chongqing University (Social Science Edition), 2023(3): 50-65.
[39] ZHANG J. The international model of the "regulatory sandbox" and the development path of Chinese mainland[J]. Financial Supervision Research, 2017(5): 30-45.
[40] YU X, ZHENG G, DING X. Six issues of generative artificial intelligence and law: A case study of ChatGPT[J]. China Law Review, 2023(2): 72-92.
[41] XU W. On the legal status and responsibilities of generative AI service providers: A case study of ChatGPT[J]. Legal Science (Journal of Northwest University of Political Science and Law), 2023(4): 77-89.
[42] POPYE K. Cache-22: The fine line between information and defamation in Google's autocomplete function[J]. Cardozo Arts and Entertainment Law Journal, 2016, 34: 835-860.
The Rule-of-Law Framework and Practical Trends for the Standardized Development of Generative Artificial Intelligence
CHEN Bing, LI Guozhen
(Law School, Nankai University, Tianjin 300350, China)