Motivational Profiles of the Entrepreneurial Career in Spanish Businessmen
Authors: Magdalena Suárez-Ortega, M. Fe. Sánchez-García
Abstract:
This paper focuses on the analysis of the motivations that lead people to undertake and consolidate their business. It is addressed from the framework of planned behavior theory, which recognizes the importance of the social environment and cultural values, both in the decision to undertake business and in business consolidation. Similarly, it is also based on theories of career development, which emphasize the importance of career management competencies and their connections to other vital aspects of people's lives, including their roles within their families and other personal activities. This connects directly with the impact of entrepreneurship on the career and the professional-personal project of each individual. This study is part of the project titled Career Design and Talent Management (Ministry of Economy and Competitiveness of Spain, State Plan 2013-2016 Excellence Ref. EDU2013-45704-P). The aim of the study is to identify and describe entrepreneurial competencies and motivational profiles in a sample of 248 Spanish entrepreneurs (n = 248), considering both the consolidated profile and the profile in transition. In order to obtain the information, the Questionnaire of Motivation and Conditioners of the Entrepreneurial Career (MCEC) was applied. It consists of 67 items and includes four scales (E1-Conflicts in conciliation, E2-Satisfaction in the career path, E3-Motivations to undertake, E4-Guidance needs). Cluster analysis (a mixed method combining k-means clustering with a hierarchical method) was carried out, characterizing the group profiles according to the categorical variables (chi-square, p = 0.05) and the quantitative variables (ANOVA). The results have allowed us to characterize three motivational profiles according to motivation, the degree of conciliation between personal and professional life, the degree of conflict in conciliation, levels of career satisfaction, and guidance needs (in the entrepreneurial project and life-career).
The first profile is formed by extrinsically motivated entrepreneurs who are professionally satisfied and free of conflict between vital roles. The second profile acts with intrinsic motivation, also associated with family models, and although it shows satisfaction with the professional career, it experiences high conflict between family and professional life. The third is composed of entrepreneurs with high extrinsic motivation and professional dissatisfaction who, at the same time, feel conflict in their professional life from the effect of personal roles. Ultimately, the analysis has allowed us to link the types of entrepreneurs to different levels of motivation, satisfaction, needs, and articulation of professional and personal life, showing characterizations associated with the use of time for leisure and the care of the family. No associations related to gender, age, activity sector, environment (rural, urban, virtual), or the use of time for domestic tasks were identified. The model obtained and its implications for the design of training actions and guidance for entrepreneurs are also discussed.
Keywords: motivation, entrepreneurial career, guidance needs, life-work balance, job satisfaction, assessment
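The mixed clustering procedure described above, a hierarchical method used to seed k-means, can be sketched as follows. This is an illustrative, hypothetical implementation, not the authors' code: the synthetic 2-D "scale score" points and the function names are assumptions.

```python
import math

def _centroid(points):
    dim = len(points[0])
    return tuple(sum(p[d] for p in points) / len(points) for d in range(dim))

def agglomerative_centroids(points, k):
    """Centroid-linkage agglomerative clustering down to k clusters;
    returns the k cluster centroids, used here to seed k-means."""
    clusters = [[p] for p in points]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = math.dist(_centroid(clusters[i]), _centroid(clusters[j]))
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] = clusters[i] + clusters[j]  # merge the closest pair
        del clusters[j]
    return [_centroid(c) for c in clusters]

def kmeans(points, centers, iters=20):
    """Plain k-means refinement from the given initial centers."""
    for _ in range(iters):
        groups = [[] for _ in centers]
        for p in points:
            idx = min(range(len(centers)), key=lambda i: math.dist(p, centers[i]))
            groups[idx].append(p)
        new_centers = []
        for gi, g in enumerate(groups):
            new_centers.append(_centroid(g) if g else centers[gi])
        centers = new_centers
    labels = [min(range(len(centers)), key=lambda i: math.dist(p, centers[i]))
              for p in points]
    return centers, labels
```

With well-separated groups of respondents' scale scores, the hierarchical pass fixes both k and the starting centroids, and k-means then refines the assignment.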
Geomorphology and Flood Analysis Using Light Detection and Ranging
Authors: George R. Puno, Eric N. Bruno
Abstract:
The natural landscape of the Philippine archipelago, combined with the current realities of climate change, makes the country vulnerable to flood hazards. Flooding has become a recurring natural disaster in the country, resulting in loss of lives and properties. Musimusi is among the rivers that exhibited inundation, particularly at the inhabited floodplain portion of its watershed. During such events, rescue operations and the distribution of relief goods become a problem due to the lack of high-resolution flood maps to help the local government unit identify the most affected areas. In the attempt to minimize the impact of flooding, hydrologic modelling with high-resolution mapping is becoming more challenging and important. This study focused on the analysis of flood extent as a function of different geomorphologic characteristics of the Musimusi watershed. The methods include the delineation of morphometric parameters in the Musimusi watershed using Geographic Information System (GIS) and geometric calculation tools. The Digital Terrain Model (DTM), one of the derivatives of Light Detection and Ranging (LiDAR) technology, was used to determine the extent of river inundation through the application of the Hydrologic Engineering Center-River Analysis System (HEC-RAS) and Hydrologic Modelling System (HEC-HMS) models. The digital elevation model (DEM) from Synthetic Aperture Radar (SAR) was used to delineate the watershed boundary and river network. Datasets such as mean sea level, river cross section, river stage, discharge, and rainfall were also used as input parameters. Curve number (CN), vegetation, and soil properties were calibrated based on the existing condition of the site. Results showed that the drainage density value of the watershed is low, which indicates that the basin has highly permeable subsoil and thick vegetative cover. The watershed's elongation ratio of 0.9 implies that the floodplain portion of the watershed is susceptible to flooding.
The bifurcation ratio of 2.1 indicates a higher risk of flooding in localized areas of the watershed. The circularity ratio (1.20) indicates that the basin is circular in shape, with high runoff discharge and low subsoil permeability. The heavy rainfall of 167 mm brought by Typhoon Seniang on December 29, 2014 was characterized as high intensity and long duration; with a return period of 100 years, it produced an outflow of 316 m³/s. A portion of the floodplain zone (1.52%) suffered inundation, with a maximum depth of 2.76 m. The information generated in this study is helpful to the local disaster risk reduction management council in monitoring the affected sites, supporting more appropriate decisions so that the cost of rescue operations and relief goods distribution is minimized.
Keywords: flooding, geomorphology, mapping, watershed
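The morphometric parameters interpreted above follow standard geomorphologic definitions; a minimal sketch of those textbook formulas (not the authors' GIS workflow) is:

```python
import math

def drainage_density(total_stream_length_km, basin_area_km2):
    """Total stream length per unit basin area (km/km^2); low values
    suggest permeable subsoil and dense vegetative cover."""
    return total_stream_length_km / basin_area_km2

def elongation_ratio(basin_area_km2, basin_length_km):
    """Diameter of a circle with the basin's area, divided by the
    maximum basin length; 1.0 for a perfectly circular basin."""
    return (2.0 / basin_length_km) * math.sqrt(basin_area_km2 / math.pi)

def circularity_ratio(basin_area_km2, basin_perimeter_km):
    """4*pi*A / P^2; equals 1.0 for a perfect circle."""
    return 4.0 * math.pi * basin_area_km2 / basin_perimeter_km ** 2

def bifurcation_ratio(stream_counts_by_order):
    """Mean ratio N_u / N_(u+1) of stream counts over successive
    Strahler orders, e.g. [8, 4, 2] -> 2.0."""
    ratios = [stream_counts_by_order[i] / stream_counts_by_order[i + 1]
              for i in range(len(stream_counts_by_order) - 1)]
    return sum(ratios) / len(ratios)
```

For a circular basin both the elongation and circularity ratios evaluate to 1.0, which is the reference point against which the watershed's values (0.9 and 1.20) are read.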
De-Densifying Congested Cores of Cities and Their Emerging Design Opportunities
Authors: Faith Abdul Rasak Asharaf
Abstract:
Every city has a threshold known as the urban carrying capacity, based on which it can withstand a particular density of people; above this threshold, the city might need to resort to measures like expanding its boundaries or growing vertically. As a result of this circumstance, the number of squatter communities is growing, as is the claustrophobic feeling of being confined inside a "concrete jungle." The expansion of suburbs, commercial areas, and industrial real estate in the areas surrounding medium-sized cities has resulted in changes to their landscapes and urban forms, as well as a systematic shift in their role in the urban hierarchy when functional endowment and connections to other territories are considered. The urban carrying capacity idea provides crucial guidance for city administrators and planners in better managing, designing, planning, constructing, and distributing urban resources to satisfy the huge demands of an ever-growing urban population. The ecological footprint is a criterion of urban carrying capacity: the amount of land required to provide humanity with renewable resources and to absorb its waste. However, as each piece of land has its unique carrying capacity, including ecological, social, and economic considerations, these metropolitan areas begin to reach a saturation point over time. Various city models have been tried throughout the years to meet increasing urban population density by rearranging the zones of work, life, and leisure to achieve maximum sustainable growth. The current scenario is that of the vertical city and compact city concepts, in which the maximum density of people is fitted into a definite area using efficient land use and a variety of other strategies; this, however, has proven to be a very unsustainable method of growth, as evidenced during the COVID-19 period.
Due to a shortage of housing and basic infrastructure, densely populated cities gave rise to massive squatter communities, unable to accommodate the overflowing migrants. To achieve optimum carrying capacity, planning measures such as the polycentric city and diffuse city concepts can be implemented. These help relieve the congested city core by relocating certain sectors of the town to the city periphery, creating new spaces for design in terms of public space, transportation, and housing, which is a major concern in the current scenario. The study's goal is to suggest design options and solutions in terms of placemaking for better urban quality and urban life for citizens once city centres have been de-densified based on urban carrying capacity and ecological footprint, taking Kochi as an apt example of a highly densified city core and focusing on Edappally, which is an agglomeration of many urban factors.
Keywords: urban carrying capacity, urbanization, urban sprawl, ecological footprint
Geospatial Modeling Framework for Enhancing Urban Roadway Intersection Safety
Authors: Neeti Nayak, Khalid Duri
Abstract:
Despite the many advances made in transportation planning, the number of injuries and fatalities in the United States involving motorized vehicles near intersections remains largely unchanged year over year. Data from the National Highway Traffic Safety Administration for 2018 indicates that accidents involving motorized vehicles at traffic intersections accounted for 8,245 deaths and 914,811 injuries. Furthermore, collisions involving pedal cyclists killed 861 people (38% at intersections) and injured 46,295 (68% at intersections), while accidents involving pedestrians claimed 6,247 lives (25% at intersections) and injured 71,887 (56% at intersections), the highest tallies registered in nearly 20 years. Some of the causes attributed to the rising number of accidents relate to increasing populations and the associated changes in land and traffic usage patterns, insufficient visibility conditions, and inadequate applications of traffic controls. Intersections that were initially designed with a particular land use pattern in mind may be rendered obsolete by subsequent developments. Many accidents involving pedestrians occur at locations that should have been designed for safe crosswalks. Conventional solutions for evaluating intersection safety often require costly deployment of engineering surveys and analysis, which limits the capacity of resource-constrained administrations to adequately satisfy their community's needs for safe roadways, effectively relegating mitigation efforts for high-risk areas to post-incident responses. This paper demonstrates how geospatial technology can identify high-risk locations and evaluate the viability of specific intersection management techniques. GIS is used to simulate relevant real-world conditions: the presence of traffic controls, zoning records, locations of interest for human activity, design speed of roadways, topographic details, and immovable structures.
The proposed methodology provides a low-cost mechanism for empowering urban planners to reduce the risk of accidents, using 2-dimensional data representing multi-modal street networks, parcels, crosswalks, and demographic information alongside 3-dimensional models of buildings, elevation, slope, and aspect surfaces to evaluate visibility and lighting conditions and to estimate probabilities of jaywalking and the risks posed by blind or uncontrolled intersections. The proposed tools were developed using sample areas of Southern California, but the model will scale to other cities that conform to similar transportation standards, given the availability of relevant GIS data.
Keywords: crosswalks, cyclist safety, geotechnology, GIS, intersection safety, pedestrian safety, roadway safety, transportation planning, urban design
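One of the 3-D analyses mentioned above, evaluating visibility over an elevation surface, can be illustrated with a minimal line-of-sight test on a gridded elevation model. This is a simplified sketch (nearest-cell terrain sampling, hypothetical observer and target heights), not the authors' GIS workflow.

```python
def line_of_sight(elev, a, b, observer_h=1.5, target_h=1.0, samples=64):
    """Return True if a target at grid cell `b` is visible from an observer
    at cell `a` over a 2-D elevation grid `elev` (row-major list of lists).
    The straight sight line between the two eye points is sampled, and the
    view is blocked wherever the terrain rises above that line."""
    (r0, c0), (r1, c1) = a, b
    z0 = elev[r0][c0] + observer_h      # observer eye height above ground
    z1 = elev[r1][c1] + target_h       # target height above ground
    for s in range(1, samples):
        t = s / samples
        r = r0 + t * (r1 - r0)
        c = c0 + t * (c1 - c0)
        ground = elev[round(r)][round(c)]   # nearest-cell terrain sample
        sight = z0 + t * (z1 - z0)          # sight-line height at fraction t
        if ground > sight:
            return False
    return True
```

A production viewshed would interpolate the surface bilinearly and account for earth curvature, but the blocking test itself is the same idea.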
Examining the Influence of Firm Internal Level Factors on Performance Variations among Micro and Small Enterprises: Evidence from Tanzanian Agri-Food Processing Firms
Authors: Pulkeria Pascoe, Hawa P. Tundui, Marcia Dutra de Barcellos, Hans de Steur, Xavier Gellynck
Abstract:
A majority of Micro and Small Enterprises (MSEs) experience low or no growth. Understanding of their performance remains incomplete and fragmented, as there is no consensus on the factors influencing it, especially in developing countries. Using the Resource-Based View (RBV) as the theoretical background, this cross-sectional study employed four regression models to examine the influence of firm-level factors (firm-specific characteristics, firm resources, manager socio-demographic characteristics, and selected management practices) on the overall performance variations among 442 Tanzanian micro and small agri-food processing firms. Study results confirmed the RBV argument that intangible resources make a larger contribution to overall performance variations among firms than tangible resources do. Firms' tangible and intangible resources together explained 34.5% of overall performance variations (intangible resources explained 19.4% of the variability, compared with 15.1% for tangible resources), ranking first in explaining the overall performance variance. Firm-specific characteristics ranked second, explaining 29.0% of the variation in overall performance. Selected management practices ranked third (6.3%), while the manager's socio-demographic factors were last on the list, explaining only 5.1% of the overall performance variability among firms. The study also found that firms that focus on proper utilization of tangible resources (financial and physical), set targets, and undertake better working capital management practices performed better than their counterparts (low and average performers).
Furthermore, accumulation and proper utilization of intangible resources (relational, organizational, and reputational), the undertaking of performance monitoring practices, the age of the manager, and the choice of firm location and activity were the dominant significant factors influencing the variations among average and high performers, relative to low performers. Entrepreneurial background was a significant factor influencing variations among average and low-performing firms, indicating that entrepreneurial skills are crucial to achieving average levels of performance. Firm age, size, legal status, source of start-up capital, and the gender, education level, and total business experience of the manager were not statistically significant variables influencing the overall performance variations among the agri-food processors under study. The study has identified both significant and non-significant factors influencing performance variations among low-, average-, and high-performing micro and small agri-food processing firms in Tanzania. Results from this study will therefore help managers, policymakers, and researchers identify areas where more attention should be placed in order to improve the overall performance of MSEs in the agri-food industry.
Keywords: firm-level factors, micro and small enterprises, performance, regression analysis, resource-based-view
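The blockwise variance contributions reported above are characteristic of hierarchical (blockwise) regression, where each block's contribution is the increase in R² when its predictors are added to the model. A minimal sketch with hypothetical data (not the study's dataset or code):

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def ols_r2(X, y):
    """Fit y ~ 1 + X by ordinary least squares; return R^2."""
    rows = [[1.0] + list(x) for x in X]
    k = len(rows[0])
    XtX = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    Xty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    beta = solve(XtX, Xty)
    yhat = [sum(b * v for b, v in zip(beta, r)) for r in rows]
    ybar = sum(y) / len(y)
    ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

def incremental_r2(block1, block2, y):
    """Contribution of block2's predictors over block1 alone."""
    full = [b1 + b2 for b1, b2 in zip(block1, block2)]
    return ols_r2(full, y) - ols_r2(block1, y)
```

Summing the incremental R² of each block in entry order yields per-block percentages of explained variance like those reported for resources, firm characteristics, management practices, and manager demographics.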
Boredom in the Classroom: Sentiment Analysis on Teaching Practices and Related Outcomes
Authors: Elisa Santana-Monagas, Juan L. Núñez, Jaime León, Samuel Falcón, Celia Fernández, Rocío P. Solís
Abstract:
Students’ emotional experiences have been a widely discussed theme among researchers and have proven to play a central role in students’ outcomes. Yet, up to now, far too little attention has been paid to the teaching practices that relate to students’ negative emotions in higher education. The present work aims to examine the relationship between teachers’ teaching practices (i.e., students’ evaluations of teaching and autonomy support), students’ feelings of boredom, and agentic engagement and motivation in the higher education context. To do so, the present study incorporates one of the most popular tools in natural language processing to address students’ evaluations of teaching: sentiment analysis (SA). Whereas most research has focused on the creation of SA models and on assessing students’ satisfaction with teachers and courses, to the authors’ best knowledge, no research before has included results from SA in an explanatory model. A total of 225 university students (mean age = 26.16, SD = 7.4, 78.7% women) participated in the study. Students were enrolled in degree and master’s studies at the Faculty of Education of a public university in Spain. Data was collected using an online questionnaire that students could access through a QR code and completed during a teaching period when the assessed teacher was not present. To assess students’ sentiments towards their teachers’ teaching, we asked them the following open-ended question: “If you had to explain to a peer who doesn’t know your teacher how he or she communicates in class, what would you tell them?”. Sentiment analysis was performed with Microsoft’s pre-trained model. For this study, we relied on the probability of a student’s answer belonging to the negative category. To assess the reliability of the measure, inter-rater agreement between this NLP tool and one of the researchers, who independently coded all answers, was examined.
The average pairwise percent agreement and Cohen’s kappa were calculated with ReCal2. The agreement reached was 90.8% and Cohen’s kappa was .68, both considered satisfactory. To test the hypothesized relations, a structural equation model (SEM) was estimated. The model fit indices displayed a good fit to the data: χ² (134) = 351.129, p < .001, RMSEA = .07, SRMR = .09, TLI = .91, CFI = .92. Specifically, results show that boredom was negatively predicted by autonomy-support practices (β = -.47 [-.61, -.33]), whereas for the negative sentiment extracted from the SET, this relation was positive (β = .23 [.16, .30]). In other words, when students’ opinion of their instructors’ teaching practices was negative, they were more likely to feel bored. Regarding the relations between boredom and student outcomes, results showed a negative predictive value of boredom on students’ motivation to study (β = -.46 [-.63, -.29]) and agentic engagement (β = -.24 [-.33, -.15]). Altogether, the results show a promising future for sentiment analysis techniques in the field of education, as they prove the usefulness of this tool in evaluating relations between teaching practices and student outcomes.
Keywords: sentiment analysis, boredom, motivation, agentic engagement
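Cohen's kappa, used above to check agreement between the NLP model and the human coder, corrects raw percent agreement for agreement expected by chance. A small sketch with hypothetical labels (the study itself used ReCal2):

```python
def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two raters' categorical labels, e.g. an NLP
    sentiment model vs. a human coder. 1.0 = perfect agreement,
    0.0 = agreement no better than chance."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    cats = set(labels_a) | set(labels_b)
    # observed proportion of agreement
    p_obs = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # chance agreement from each rater's marginal label frequencies
    p_exp = sum((labels_a.count(c) / n) * (labels_b.count(c) / n)
                for c in cats)
    return (p_obs - p_exp) / (1.0 - p_exp)
```

A kappa of .68 alongside 90.8% raw agreement, as reported, is the typical pattern when one label (here, non-negative sentiment) dominates the data.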
Doctor-Patient Interaction in an L2: Pragmatic Study of a Nigerian Experience
Authors: Ayodele James Akinola
Abstract:
This study investigated the use of English in doctor-patient interaction in a university teaching hospital in a southwestern state in Nigeria, with the aim of identifying the role of communication in an L2, patterns of communication, discourse strategies, pragmatic acts, and the contexts that shape the interaction. Jacob Mey’s notion of pragmatic acts, complemented with Emanuel and Emanuel’s model of the doctor-patient relationship, provided the theoretical standpoint. Data comprising 7 audio-recorded doctor-patient interactions were collected from a university hospital in Oyo State, Nigeria. Interactions involving the use of the English language were purposefully selected. These were supplemented with patients’ case notes and interviews conducted with doctors. Transcription followed a modified version of Arminen’s conversation-analysis notation. In the study, interaction in English between doctors and patients showed a preponderance of direct translation, code-mixing and code-switching, Nigerianisms, and the use of cultural worldviews to express medical experience. Irrespective of these, three patterns of communication, namely the paternalistic, interpretive, and deliberative, were identified. These were exhibited through varying discourse strategies. The paternalistic model reflected slightly casual conversational conventions and registers. These were achieved through the pragmemic activities of situated speech acts and psychological and physical acts, via patients’ quarrel-induced acts, controlled and managed through the doctors’ shared situational knowledge. All these produced empathising, pacifying, promising, and instructing practs. The patients’ practs in the paternalistic model were explaining, provoking, associating, and greeting. The informative model reveals the use of adjacency pairs, formal turn-taking, precise detailing, institutional talk, and dialogic strategies.
Through the activities of the speech, prosody, and physical acts, the practs of declaring, alerting, and informing were utilised by doctors, while the patients exploited adapting, requesting, and selecting practs. The negotiating conversational strategy of the deliberative model featured in the speech, prosody, and physical acts. In this model, the practs of suggesting, teaching, persuading, and convincing were utilised by the doctors. The patients deployed the practs of questioning, demanding, considering, and deciding. The contextual variables revealed that other patterns (such as the phatic and informative) are also used and that they coalesce in the hospital within the situational and psychological contexts. However, the paternalistic model was predominantly employed by doctors with over six years in practice, while the interpretive, informative, and deliberative models were found among registrars and others with fewer than six years of medical practice. Doctors’ experience, patients’ peculiarities, and shared cultural knowledge influenced doctor-patient communication in the study.
Keywords: pragmatics, communication pattern, doctor-patient interaction, Nigerian hospital situation
Re-Evaluation of Field X Located in Northern Lake Albert Basin to Refine the Structural Interpretation
Authors: Calorine Twebaze, Jesca Balinga
Abstract:
Field X is located on the eastern shores of Lake Albert, Uganda, on the rift flank where the gross sedimentary fill is typically less than 2,000 m. The field was discovered in 2006 and encountered about 20.4 m of net pay across three (3) stratigraphic intervals within the discovery well. The field covers an area of 3 km², with the structural configuration comprising a 3-way dip-closed hanging-wall anticline that seals against the basement to the southeast along the bounding fault. Field X had been mapped on reprocessed 3D seismic data, originally acquired in 2007 and reprocessed in 2013. The seismic data quality is good across the field, and the reprocessing work reduced the uncertainty in the location of the bounding fault and enhanced the lateral continuity of reservoir reflectors. The current study was a re-evaluation of Field X to refine the fault interpretation and understand the structural uncertainties associated with the field. The seismic data and three (3) well datasets were used during the study. The evaluation followed standard workflows using Petrel software and structural attribute analysis, spanning seismic-well tie, structural interpretation, and structural uncertainty analysis. Analysis of the well ties generated for the 3 wells provided a geophysical interpretation that was consistent with the geological picks. The generated time-depth curves showed a general increase in velocity with burial depth; the separation in curve trends observed below 1,100 m was mainly attributed to minimal lateral variation in velocity between the wells. In addition to attribute analysis, three velocity modeling approaches were evaluated: the time-depth curve, Vo + kZ, and average velocity methods. The generated models were calibrated at well locations using well tops to obtain the best velocity model for Field X.
The time-depth method resulted in the most reliable depth surfaces, with good structural coherence between the TWT and depth maps and minimal errors of 2 to 5 m at well locations. Both the NNE-SSW rift border fault and the minor faults in the existing interpretation were re-evaluated. The new interpretation, however, delineated an E-W trending fault in the northern part of the field that had not been interpreted before. The fault was interpreted at all stratigraphic levels and thus propagates from the basement to the surface and is an active fault today. It was also noted that the field as a whole is sparsely faulted, with more faults in its deeper part. The major structural uncertainties defined included: 1) the time horizons, due to reduced data quality, especially in the deeper parts of the structure, where an error equal to one-third of the reflection time thickness was assumed; 2) check-shot analysis, which showed varying velocities within the wells and thus varying depth values for each well; and 3) the very few average velocity points, due to the limited number of wells, which produced a pessimistic average velocity model.
Keywords: 3D seismic data interpretation, structural uncertainties, attribute analysis, velocity modelling approaches
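Of the velocity modeling approaches compared above, the Vo + kZ method assumes velocity increases linearly with depth, V(z) = V0 + kZ, which integrates to a closed-form time-depth relation, z(t) = (V0/k)(e^(k·t/2) − 1) for two-way time t. A hedged sketch of that conversion (the parameter values in the example are hypothetical, not the field's calibration):

```python
import math

def depth_from_twt(twt_s, v0, k):
    """Depth below datum for two-way time `twt_s` (s) under a linear
    velocity law V(z) = v0 + k*z, with v0 in m/s and k in 1/s."""
    tau = twt_s / 2.0                       # one-way travel time
    if k == 0.0:
        return v0 * tau                     # constant-velocity limit
    return (v0 / k) * (math.exp(k * tau) - 1.0)

def twt_from_depth(z_m, v0, k):
    """Inverse relation: two-way time to depth z under the same law."""
    if k == 0.0:
        return 2.0 * z_m / v0
    return 2.0 * math.log(1.0 + k * z_m / v0) / k
```

In practice v0 and k are fitted to check-shot or well-top data per layer; the residuals at well locations (here, the reported 2 to 5 m) measure how well the chosen law calibrates.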
Experimental and Numerical Investigations on the Vulnerability of Flying Structures to High-Energy Laser Irradiations
Authors: Vadim Allheily, Rudiger Schmitt, Lionel Merlat, Gildas L'Hostis
Abstract:
In-flight devices are nowadays major actors in both military and civilian landscapes. Missiles, mortars, rockets, and, over the last decade, even drones are increasingly sophisticated, and it is today of prime importance to develop ever more efficient defensive systems against all these potential threats. In this frame, recent high-energy laser weapon prototypes (HEL) have demonstrated extremely good operational abilities to shoot down, within seconds, flying targets several kilometers off. Whereas test outcomes are promising from both experimental and cost-related perspectives, the deterioration process still needs to be explored in order to closely predict the effects of a high-energy laser irradiation on typical structures, leading finally to an effective design of laser sources and protective countermeasures. Laser-matter interaction research has a history of more than 40 years at the French-German Research Institute (ISL). Those studies were tied to laser source development in the mid-60s, mainly for specific metrology of fast phenomena. Nowadays, laser-matter interaction can be viewed as the terminal ballistics of conventional weapons, with the unique capability of laser beams to carry energy at light velocity over large ranges. In recent years, a strong focus was placed at ISL on the interaction process of laser radiation with metal targets such as artillery shells. Due to the absorbed laser radiation and the resulting heating process, an encased explosive charge can be initiated, resulting in deflagration or even detonation of the projectile in flight. Drones and Unmanned Air Vehicles (UAVs) are of utmost interest in modern warfare. Those aerial systems are usually made of polymer-based composite materials, whose complexity involves new scientific challenges.
Alongside this main laser-matter interaction activity, a great deal of experimental and numerical knowledge has been gathered at ISL within domains like spectrometry, thermodynamics, and mechanics. Techniques and devices were developed to study separately each aspect of this topic; optical characterization, thermal investigations, chemical reaction analysis, and mechanical examinations are all carried out to precisely estimate the essential key values. Results from these diverse tasks are then incorporated into the analytic or FE numerical models that were elaborated, for example, to predict the thermal repercussions on explosive charges or the mechanical failure of structures. These simulations highlight the influence of each phenomenon during the laser irradiation and forecast experimental observations with good accuracy.
Keywords: composite materials, countermeasure, experimental work, high-energy laser, laser-matter interaction, modeling
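As a toy illustration of the thermal side of such models, the temperature rise of a laser-irradiated slab can be approximated with a minimal explicit finite-difference scheme: 1-D conduction with a constant absorbed flux on the front face. This is a generic textbook sketch, not ISL's model, and the material constants in the test are hypothetical.

```python
def laser_heating_1d(flux, k, rho, cp, thickness, n=50, t_end=1.0):
    """Explicit FTCS solution of 1-D heat conduction in a slab heated by a
    constant absorbed laser flux (W/m^2) on the front face. Temperatures
    are rises above ambient; the rear face is held at ambient. Returns
    the front-face temperature rise after t_end seconds."""
    dx = thickness / (n - 1)
    alpha = k / (rho * cp)                  # thermal diffusivity (m^2/s)
    dt = 0.4 * dx * dx / alpha              # stable explicit time step
    T = [0.0] * n
    for _ in range(int(t_end / dt)):
        Tn = T[:]
        for i in range(1, n - 1):
            Tn[i] = T[i] + alpha * dt * (T[i + 1] - 2 * T[i] + T[i - 1]) / dx ** 2
        # front face: ghost-node form of the flux condition -k dT/dx = flux
        Tn[0] = T[0] + alpha * dt * (2 * T[1] - 2 * T[0]
                                     + 2 * dx * flux / k) / dx ** 2
        Tn[-1] = 0.0                        # rear face at ambient
        T = Tn
    return T[0]
```

Because the problem is linear, doubling the absorbed flux doubles the temperature rise, which is one of the sanity checks such models are run against before adding temperature-dependent properties or phase change.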
Enhancing Industrial Wastewater Treatment: Efficacy and Optimization of Ultrasound-Assisted Laccase Immobilized on Magnetic Fe₃O₄ Nanoparticles
Authors: K. Verma, V. S. Moholkar
Abstract:
In developed countries, water pollution caused by industrial discharge has emerged as a significant environmental concern over the past decades. However, despite ongoing efforts, a fully effective and sustainable remediation strategy has yet to be identified. This paper describes how enzymatic and sonochemical treatments have demonstrated great promise in degrading bio-refractory pollutants. In particular, a compelling area of interest lies in the combined technique of sono-enzymatic treatment, which has exhibited a synergistic enhancement surpassing that of the individual techniques. This study employed the covalent attachment method to immobilize laccase from Trametes versicolor onto amino-functionalized magnetic Fe₃O₄ nanoparticles. To comprehensively characterize the synthesized free nanoparticles and the laccase-immobilized nanoparticles, techniques such as X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FT-IR), scanning electron microscopy (SEM), vibrating sample magnetometry (VSM), and Brunauer-Emmett-Teller (BET) surface area analysis were employed. The size of the immobilized Fe₃O₄@Laccase was found to be 60 nm, and the maximum loading of laccase was found to be 24 mg/g of nanoparticle. An investigation was conducted to study the effect of various process parameters, such as immobilized Fe₃O₄@Laccase dose, temperature, and pH, on the percentage chemical oxygen demand (COD) removal as the response. The statistical design pinpointed the optimum conditions (immobilized Fe₃O₄@Laccase dose = 1.46 g/L, pH = 4.5, and temperature = 66 °C), resulting in a remarkable 65.58% COD removal within 60 minutes. An even more significant improvement (90.31% COD removal) was achieved with the ultrasound-assisted enzymatic reaction utilizing a 10% duty cycle.
The investigation of various kinetic models for free and immobilized laccase, such as the Haldane, Yano and Koga, and Michaelis-Menten models, showed that ultrasound application impacted the kinetic parameters Vmax and Km. Specifically, the Vmax values for free and immobilized laccase were found to be 0.021 mg/L·min and 0.045 mg/L·min, respectively, while the Km values were 147.2 mg/L for free laccase and 136.46 mg/L for immobilized laccase. The lower Km and higher Vmax for immobilized laccase indicate its enhanced affinity towards the substrate, likely due to ultrasound-induced alterations in the enzyme's conformation and increased exposure of active sites, leading to more efficient degradation. Furthermore, toxicity and liquid chromatography-mass spectrometry (LC-MS) analyses revealed that after the treatment process, the wastewater exhibited 70% less toxicity than before treatment, with over 25 compounds degraded by more than 75%. Finally, the prepared immobilized laccase had excellent recyclability, retaining 70% of its activity over 6 consecutive cycles. A straightforward manufacturing strategy and outstanding performance make the recyclable magnetic immobilized laccase (Fe₃O₄@Laccase) a promising option for various environmental applications, particularly in water pollution control and treatment.
Keywords: kinetic, laccase enzyme, sonoenzymatic, ultrasound irradiation
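The significance of the reported Vmax and Km values can be checked directly from the Michaelis-Menten rate law, v = Vmax·S / (Km + S): with a higher Vmax and lower Km, the immobilized enzyme is faster at every substrate concentration. A small sketch using the parameters reported above (the rate law itself, not the authors' fitting code):

```python
def michaelis_menten_rate(s, vmax, km):
    """Initial reaction rate v = vmax*s/(km+s) at substrate
    concentration s (mg/L); vmax in mg/L*min, km in mg/L."""
    return vmax * s / (km + s)

# Kinetic parameters reported in the study
FREE = dict(vmax=0.021, km=147.2)          # free laccase
IMMOBILIZED = dict(vmax=0.045, km=136.46)  # ultrasound-assisted immobilized laccase
```

At s = Km the rate is exactly Vmax/2, which is the defining property of Km as the half-saturation constant.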
Toward Understanding the Glucocorticoid Receptor Network in Cancer
Authors: Swati Srivastava, Mattia Lauriola, Yuval Gilad, Adi Kimchi, Yosef Yarden
Abstract:
The glucocorticoid receptor (GR) has been proposed to play important, but incompletely understood, roles in cancer. Glucocorticoids (GCs) are widely used as co-medication in various carcinomas, due to their ability to reduce the toxicity of chemotherapy. Furthermore, GR antagonism has proven to be a strategy to treat triple-negative breast cancer and castration-resistant prostate cancer. These observations suggest differential GR involvement in cancer subtypes. The goal of our study has been to elaborate the current understanding of GR signaling in tumor progression and metastasis. Our study involves two cellular models, non-tumorigenic breast epithelial cells (MCF10A) and Ewing sarcoma cells (CHLA9). In our breast cell model, the results indicated that the GR agonist dexamethasone inhibits EGF-induced mammary cell migration, and this effect was blocked when cells were stimulated with a GR antagonist, namely RU486. Microarray analysis of gene expression revealed that the mechanism underlying inhibition involves dexamethasone-mediated repression of well-known activators of EGFR signaling, along with enhancement of several of EGFR's negative feedback loops. Because GR acts primarily through composite response elements (GREs), or via a tethering mechanism, our next aim has been to find the transcription factors (TFs) that can interact with GR in MCF10A cells. The TF-binding motifs overrepresented at the promoters of dexamethasone-regulated genes were predicted using bioinformatics. To validate the prediction, we performed high-throughput protein complementation assays (PCA). For this, we utilized the Gaussia luciferase PCA strategy, which enabled analysis of protein-protein interactions between GR and the predicted TFs of mammary cells.
A library comprising both nuclear receptors (estrogen receptor, mineralocorticoid receptor, GR) and TFs was fused to fragments of GLuc, namely GLuc(1)-X, X-GLuc(1), and X-GLuc(2), where GLuc(1) and GLuc(2) correspond to the N-terminal and C-terminal fragments of the luciferase gene. The resulting library was screened, in human embryonic kidney 293T (HEK293T) cells, for all possible interactions between nuclear receptors and TFs. By screening all of the combinations between TFs and nuclear receptors, we identified several positive interactions, which were strengthened in response to dexamethasone and abolished in response to RU486. Furthermore, the interactions between GR and the candidate TFs were validated by co-immunoprecipitation in MCF10A and in CHLA9 cells. Currently, the roles played by the uncovered interactions are being evaluated in various cellular processes, such as cellular proliferation, migration, and invasion. In conclusion, our assay provides an unbiased network analysis between nuclear receptors and other TFs, which can lead to important insights into transcriptional regulation by nuclear receptors in various diseases, in this case cancer.
Keywords: epidermal growth factor, glucocorticoid receptor, protein complementation assay, transcription factor
Procedia PDF Downloads 228
299 Biophysical Analysis of the Interaction of Polymeric Nanoparticles with Biomimetic Models of the Lung Surfactant
Authors: Weiam Daear, Patrick Lai, Elmar Prenner
Abstract:
The human body offers many avenues that could be used for drug delivery. The pulmonary route, in which drugs are delivered through the lungs, presents many advantages that have sparked interest in the field. These advantages include: 1) direct access to the lungs and the large surface area they provide, and 2) close proximity to the blood circulation. The air-blood barrier of the alveoli is about 500 nm thick. It consists of cells and a monolayer of lipids and a few proteins, called the lung surfactant. This monolayer consists of ~90% lipids and ~10% proteins, which are produced by the alveolar epithelial cells. The two major lipid classes, phosphatidylcholine (PC) and phosphatidylglycerol (PG) of various saturations and chain lengths, represent 80% of the total lipid component. The major role of the lung surfactant monolayer is to reduce the surface tension experienced during breathing cycles in order to prevent lung collapse. In terms of the pulmonary drug delivery route, drugs pass through various parts of the respiratory system before reaching the alveoli. It is at this location that the lung surfactant functions as the air-blood barrier for drugs. As the field of nanomedicine advances, the use of nanoparticles (NPs) as drug delivery vehicles is becoming very important. This is due to the advantages NPs provide with their large surface area and potential for specific targeting. Therefore, studying the interaction of NPs with the lung surfactant, and whether they affect its stability, becomes very essential. The aim of this research is to develop a biomimetic model of the human lung surfactant, followed by a biophysical analysis of its interaction with polymeric NPs. This biomimetic model will function as a fast initial mode of testing for whether NPs affect the stability of the human lung surfactant. The model developed thus far is an 8-component lipid system that contains the major PC and PG lipids.
Recently, custom-made 16:0/16:1 PC and PG lipids were added to the model system. In the human lung surfactant, these lipids constitute 16% of the total lipid component. To the authors' knowledge, there is little monolayer data on the biophysical analysis of the 16:0/16:1 lipids; therefore, more analysis will be discussed here. Biophysical techniques such as the Langmuir trough are used for stability measurements, monitoring changes to a monolayer's surface pressure upon NP interaction. Furthermore, Brewster angle microscopy (BAM) is employed to visualize changes to the lateral domain organization. Results show preferential interactions of NPs with different lipid groups that are also dependent on the monolayer fluidity. Furthermore, results show that the film stability upon compression is unaffected, but there are significant changes in the lateral domain organization of the lung surfactant upon NP addition. This research is significant in the field of pulmonary drug delivery. It has been shown that NPs within a certain size range are safe for the pulmonary route, but little is known about the mode of interaction of those polymeric NPs. Moreover, this work will provide additional information about the nanotoxicology of the NPs tested.
Keywords: Brewster angle microscopy, lipids, lung surfactant, nanoparticles
Procedia PDF Downloads 180
298 Improving the Biocontrol of the Argentine Stem Weevil; Using the Parasitic Wasp Microctonus hyperodae
Authors: John G. Skelly, Peter K. Dearden, Thomas W. R. Harrop, Sarah N. Inwood, Joseph Guhlin
Abstract:
The Argentine stem weevil (ASW; Listronotus bonariensis) is an economically important pasture pest in New Zealand, which causes about $200 million of damage per annum. Microctonus hyperodae (Mh), a parasite of the ASW in its natural range in South America, was introduced into New Zealand to curb the pasture damage caused by the ASW. Mh is an endoparasitic wasp that lays its eggs in the ASW, halting its reproduction. Mh was initially successful at preventing ASW proliferation and reducing pasture damage. The effectiveness of Mh has since declined due to decreased parasitism rates, which has resulted in increased pasture damage. Although the mechanism through which the ASW has developed resistance to Mh has not been discovered, it has been proposed to be due to the different reproductive modes used by Mh and the ASW in New Zealand. The ASW reproduces sexually, whereas Mh reproduces asexually, which has been hypothesised to have allowed the ASW to 'out-evolve' Mh. Other species within the Microctonus genus reproduce both sexually and asexually. Strains of Microctonus aethiopoides (Ma), a species closely related to Mh, reproduce either sexually or asexually. Comparing the genomes of sexual and asexual Microctonus may allow for the identification of the mechanism of asexual reproduction and other characteristics that may improve Mh as a biocontrol agent. The genomes of Mh and three strains of Ma, two of which reproduce sexually and one asexually, have been sequenced and annotated. The French (MaFR) and Moroccan (MaMO) strains reproduce sexually, whereas the Irish strain (MaIR) reproduces asexually. Like Mh, the Ma strains are also used as biocontrol agents, but for different weevil species. The genomes of Mh and MaIR were subsequently upgraded using Hi-C, resulting in a set of high-quality, highly contiguous genomes.
A subset of the genes involved in mitosis and meiosis, identified through the use of hidden Markov models generated from genes involved in these processes in other Hymenoptera, has been catalogued in Mh and the strains of Ma. Meiosis and mitosis genes were broadly conserved in both sexual and asexual Microctonus species. This implies either that the asexual species have retained a subset of the molecular components required for sexual reproduction, or that the molecular mechanisms of mitosis and meiosis are different, or differently regulated, in Microctonus compared to other insect species in which these mechanisms are more broadly characterised. Bioinformatic analysis of the chemoreceptor complement in Microctonus has revealed some variation in the number of olfactory receptors, which may be related to host preference. Phylogenetic analysis of the olfactory receptors highlights variation which may explain different host range preferences in the Microctonus. Hi-C clustering implies that Mh has 12 chromosomes and MaIR has 8; hence there may be variation in gene regulation between the species. Genome alignment of Mh and MaIR implies that there may be large-scale genome structural variation. Greater insight into the genetics of this agriculturally important group of parasitic wasps may be beneficial in restoring or maintaining their biocontrol efficacy.
Keywords: argentine stem weevil, asexual, genomics, Microctonus hyperodae
Procedia PDF Downloads 158
297 Elucidation of Dynamics of Murine Double Minute 2 Shed Light on the Anti-cancer Drug Development
Authors: Nigar Kantarci Carsibasi
Abstract:
Coarse-grained elastic network models, namely the Gaussian network model (GNM) and the anisotropic network model (ANM), are utilized in order to investigate the fluctuation dynamics of Murine Double Minute 2 (MDM2), which is the native inhibitor of p53. Conformational dynamics of MDM2 are elucidated in the unbound, p53-bound, and non-peptide small-molecule inhibitor-bound forms. With this, it is aimed to gain insights into the alterations brought to the global dynamics of MDM2 by the native peptide inhibitor p53 and by two small-molecule inhibitors (HDM201 and NVP-CGM097) that are in clinical-stage cancer studies. MDM2 undergoes significant conformational changes upon inhibitor binding, showing evidence of an induced-fit mechanism. The small-molecule inhibitors examined in this work exhibit fluctuation dynamics and characteristic mode shapes similar to those of p53 when complexed with MDM2, which would shed light on the design of novel small-molecule inhibitors for cancer therapy. The results showed that residues Phe 19, Trp 23, and Leu 26 reside in the minima of the slowest modes of p53, pointing to the accepted three-finger binding model. Pro 27 displays the most significant hinge present in p53 and comes out as another functionally important residue. Three distinct regions are identified in MDM2 for which significant conformational changes are observed upon binding. Regions I (residues 50-77) and III (residues 90-105) correspond to the binding interface of MDM2, including α2, L2, and α4, which are stabilized during complex formation. Region II (residues 77-90) exhibits a large-amplitude motion, being highly flexible both in the absence and presence of p53 or other inhibitors. MDM2 exhibits a scattered profile in the fastest modes of motion, while binding of p53 and the inhibitors puts restraints on the MDM2 domains, clearly distinguishing the kinetically hot regions.
Mode shape analysis revealed that the α4 domain controls the size of the cleft, keeping the cleft narrow in unbound MDM2 and open in the bound states for proper penetration and binding of p53 and the inhibitors, which points to the induced-fit mechanism of p53 binding. p53 interacts with α2 and α4 in a synchronized manner. Collective modes are shifted upon inhibitor binding, i.e., the characteristic motion of the second mode in the MDM2-p53 complex is observed in the first mode of apo MDM2; however, apo and bound MDM2 exhibit similar features in the softest modes, pointing to pre-existing modes facilitating ligand binding. Although much higher amplitude motions are attained in the presence of the non-peptide small-molecule inhibitors as compared to p53, they demonstrate close similarity. Hence, NVP-CGM097 and HDM201 succeed in mimicking the p53 behavior well. Elucidating how drug candidates alter the global and conformational dynamics of MDM2 would shed light on the rational design of novel anticancer drugs.
Keywords: cancer, drug design, elastic network model, MDM2
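The GNM calculations referenced above follow a standard recipe: build a Kirchhoff (connectivity) matrix from Cα contacts within a cutoff, then read residue fluctuations and mode shapes off its eigendecomposition. A minimal sketch, using a synthetic helical chain in place of the actual MDM2 structure:

```python
import numpy as np

def gnm_fluctuations(coords, cutoff=7.0):
    """Gaussian network model: build the Kirchhoff (connectivity) matrix
    from pairwise C-alpha distances and return per-residue mean-square
    fluctuations (up to a constant kT/gamma factor)."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    gamma = -(d < cutoff).astype(float)          # -1 for residue pairs in contact
    np.fill_diagonal(gamma, 0.0)                 # no self-contacts
    np.fill_diagonal(gamma, -gamma.sum(axis=1))  # diagonal = contact degree
    # Pseudo-inverse discards the zero mode (uniform translation);
    # slow (low-eigenvalue) modes dominate the diagonal entries
    ginv = np.linalg.pinv(gamma)
    return np.diag(ginv)

# Toy "protein": a helix-like chain of 30 pseudo-residues (radius 2.3 A,
# rise 1.5 A, 100 degrees per residue), standing in for a real structure
t = np.arange(30) * 100.0 * np.pi / 180.0
coords = np.column_stack([2.3 * np.cos(t), 2.3 * np.sin(t), 1.5 * np.arange(30)])
msf = gnm_fluctuations(coords)
print("most flexible residue index:", int(np.argmax(msf)))
```

On a real structure, minima of the slowest modes would correspond to hinge or anchor residues such as the Phe 19 / Trp 23 / Leu 26 triad discussed above.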
Procedia PDF Downloads 130
296 Statistical Optimization of Adsorption of a Harmful Dye from Aqueous Solution
Abstract:
Textile industries cater to varied customer preferences and contribute substantially to the economy. However, these industries also produce a considerable amount of effluents. Prominent among these are the azo dyes, which impart considerable color and toxicity even at low concentrations. Azo dyes are also used as coloring agents in the food and pharmaceutical industries. Despite their applications, azo dyes are notorious pollutants and carcinogens. Popular techniques like photo-degradation, biodegradation, and the use of oxidizing agents are not applicable to all kinds of dyes, as most dyes are stable to these techniques. Chemical coagulation produces a large amount of toxic sludge, which is undesirable, and is also ineffective towards a number of dyes. Most azo dyes are stable to UV-visible light irradiation and may even resist aerobic degradation. Adsorption has been the most preferred technique owing to its low cost, high capacity and process efficiency, and the possibility of regenerating and recycling the adsorbent. Adsorption is also preferred because it may produce a high quality of treated effluent and is able to remove different kinds of dyes. However, the adsorption process is influenced by many variables whose inter-dependence makes it difficult to identify optimum conditions. The variables include stirring speed, temperature, initial concentration, and adsorbent dosage. Further, the internal diffusional resistance inside the adsorbent particle leads to slow uptake of the solute within the adsorbent. Hence, it is necessary to identify optimum conditions that lead to high capacity and uptake rate of these pollutants. In this work, commercially available activated carbon was chosen as the adsorbent owing to its high surface area. A typical azo dye found in textile effluent waters, viz. the monoazo Acid Orange 10 dye (CAS: 1936-15-8), has been chosen as the representative pollutant.
Adsorption studies were mainly focused on obtaining equilibrium and kinetic data for the batch adsorption process at different process conditions. Studies were conducted at different stirring speed, temperature, adsorbent dosage, and initial dye concentration settings. The full factorial design was the chosen statistical design framework for carrying out the experiments and identifying the important factors and their interactions. The optimum conditions identified from the experimental model were validated with actual experiments at the recommended settings. The equilibrium and kinetic data obtained were fitted to different models, and the model parameters were estimated, giving more detail about the nature of the adsorption taking place. Critical data required to design batch adsorption systems for removal of the Acid Orange 10 dye, and identification of the factors that critically influence the separation efficiency, are the key outcomes from this research.
Keywords: acid orange 10, activated carbon, optimum adsorption conditions, statistical design
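A two-level full factorial over the four variables named above can be sketched as follows. The factor levels, response values, and effect sizes are hypothetical illustrations only, not the study's data; the point is how a 2^4 design enumerates runs and how main effects are estimated from it.

```python
import itertools
import numpy as np

# Two-level full factorial (2^4 = 16 runs) over the four studied variables;
# coded levels -1/+1 stand for the low/high setting of each factor.
factors = ["stirring_speed", "temperature", "adsorbent_dose", "initial_conc"]
design = np.array(list(itertools.product([-1, 1], repeat=len(factors))))

# Hypothetical % dye-removal response for illustration only: main effects
# plus a dose x concentration interaction, with mild noise.
rng = np.random.default_rng(1)
y = (60 + 3 * design[:, 0] + 4 * design[:, 1] + 8 * design[:, 2]
     - 5 * design[:, 3] - 2 * design[:, 2] * design[:, 3]
     + rng.normal(0, 0.5, len(design)))

# Estimate each main effect as the mean response difference between levels
effects = {f: y[design[:, i] == 1].mean() - y[design[:, i] == -1].mean()
           for i, f in enumerate(factors)}
print(effects)
```

In a real study the effect estimates (and interaction terms) would identify which factors to tune when searching for the optimum removal conditions.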
Procedia PDF Downloads 171
295 Research Project on Learning Rationality in Strategic Behaviors: Interdisciplinary Educational Activities in Italian High Schools
Authors: Giovanna Bimonte, Luigi Senatore, Francesco Saverio Tortoriello, Ilaria Veronesi
Abstract:
The education process considers capabilities not only as a means to a certain end but rather as an effective purpose. Sen's capability approach challenges human capital theory, which sees education as an ordinary investment undertaken by individuals. A complex reality requires complex thinking capable of interpreting the dynamics of society's changes, in order to make decisions that can be rational in private, ethical, and social contexts. Education is not something removed from the cultural and social context; it exists and is structured within it. In Italy, the "Mathematical High School Project" is a didactic research project based on additional laboratory courses in extracurricular hours, in which mathematics enters into a dialectical relationship with other disciplines as a cultural bridge between the two cultures, the humanistic and the scientific, with interdisciplinary educational modules on themes with a strong impact on young people's lives. This interdisciplinary mathematics presents topics related to the most advanced technologies and contemporary socio-economic frameworks to demonstrate how mathematics is not only a key to reading but also a key to resolving complex problems. The recent developments in mathematics provide the potential for profound and highly beneficial changes in mathematics education at all levels, such as in socio-economic decisions. The research project is built to investigate whether repeated interactions can successfully promote cooperation among students as a rational choice, and whether skill, context, and school background can influence strategy choice and rationality. A laboratory on game theory as a mathematical theory was conducted in the fourth year of a Mathematical High School and in an ordinary scientific high school.
Students played two simultaneous games of repeated Prisoner's Dilemma with an indefinite horizon, against a different competitor in each game; each competitor remained the same for the duration of the game. The results highlight that most of the students in the two classes used the two games as an immunization strategy against the risk of losing: in one of the games, they started by playing Cooperate, and in the other by playing Compete. In the literature, theoretical models and experiments show that in the case of repeated interactions with the same adversary, the optimal cooperation strategy can be achieved by tit-for-tat mechanisms. In higher education, individual capacities cannot be examined independently, as the conceptual framework presupposes a social construction of individuals interacting and competing, making individual and collective choices. The paper outlines all the results of the experimentation and the future development of the research.
Keywords: game theory, interdisciplinarity, mathematics education, mathematical high school
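The tit-for-tat mechanism mentioned above can be sketched as a small simulation. The payoff values are the textbook Prisoner's Dilemma numbers (T=5, R=3, P=1, S=0), not those used in the classroom experiment, and the strategies are standard illustrations rather than the students' actual play.

```python
# Row player's payoffs for (own move, opponent move); C = cooperate, D = defect
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(history):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not history else history[-1][1]

def always_defect(history):
    return "D"

def play(strategy_a, strategy_b, rounds=50):
    """Play a repeated game between two strategies; each history entry
    records (own move, opponent move) from that player's point of view."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strategy_a(history_a), strategy_b(history_b)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        history_a.append((a, b))
        history_b.append((b, a))
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # mutual cooperation every round
print(play(tit_for_tat, always_defect))  # tit-for-tat loses only round one
```

Two tit-for-tat players sustain cooperation throughout, while against an unconditional defector tit-for-tat concedes only the first round and then retaliates, which is the intuition behind its optimality in repeated interactions with the same adversary.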
Procedia PDF Downloads 74
294 Emotion Regulation and Executive Functioning Scale for Children and Adolescents (REMEX): Scale Development
Authors: Cristina Costescu, Carmen David, Adrian Roșan
Abstract:
Executive functions (EF) and emotion regulation strategies are processes that allow individuals to function in an adaptive way and to be goal-oriented, which is essential for success in daily living activities, at school, and in social contexts. The Emotion Regulation and Executive Functioning Scale for Children and Adolescents (REMEX) represents an empirically based tool (based on the model of EF developed by Diamond) for evaluating significant dimensions of child and adolescent EF and emotion regulation strategies, mainly in school contexts. The instrument measures the following dimensions: working memory, inhibition, cognitive flexibility, executive attention, planning, emotional control, and emotion regulation strategies. Building the instrument involved not only a top-down process, as we selected the content in accordance with prominent models of EF, but also a bottom-up one, as we were able to identify valid contexts in which EF and ER are put to use. For the construction of the instrument, we conducted three focus groups with teachers and other professionals, since the aim was to develop an accurate, objective, and ecological instrument. We used the focus group method in order to address each dimension and to yield a bank of items to be further tested. Each dimension is addressed through a task that the examiner applies and through several items derived from the main task. For the validation of the instrument, we plan to use item response theory (IRT), also known as latent trait theory, which attempts to explain the relationship between latent traits (unobservable cognitive processes) and their manifestations (i.e., observed outcomes, responses, or performance). REMEX represents an ecological scale that integrates a current scientific understanding of emotion regulation and EF, is directly applicable to school contexts, and can be very useful for developing intervention protocols.
We plan to test its convergent validity with the Childhood Executive Functioning Inventory (CHEXI) and the Emotion Dysregulation Inventory (EDI), and its divergent validity between a group of typically developing children and children with neurodevelopmental disorders, aged between 6 and 9 years old. In a previous pilot study, we enrolled a sample of 40 children with autism spectrum disorders and attention-deficit/hyperactivity disorder aged 6 to 12 years old, and we applied the above-mentioned scales (CHEXI and EDI). Our results showed that deficits in planning, behavior regulation, inhibition, and working memory predict high levels of emotional reactivity, leading to emotional and behavioral problems. Considering these previous results, we expect our findings to provide support for the validity and reliability of the REMEX as an ecological instrument for assessing emotion regulation and EF in children, and for key features of its use in intervention protocols.
Keywords: executive functions, emotion regulation, children, item response theory, focus group
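The IRT machinery planned for validation can be illustrated with the two-parameter logistic (2PL) model, a common IRT formulation relating a latent trait to an observed item response; the item parameters below are hypothetical and not drawn from REMEX.

```python
import math

def p_correct(theta, a, b):
    """Two-parameter logistic (2PL) IRT model: probability that a respondent
    with latent trait level theta passes an item with discrimination a and
    difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# A respondent whose trait level equals the item difficulty passes with
# probability exactly 0.5; an easier item (lower b) is passed more often.
print(p_correct(theta=0.0, a=1.2, b=0.0))
print(p_correct(theta=0.0, a=1.2, b=-1.0))
```

Fitting such item curves to pilot responses is what lets IRT separate item properties (difficulty, discrimination) from the children's latent EF and emotion-regulation levels.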
Procedia PDF Downloads 101
293 Qualitative Narrative Framework as Tool for Reduction of Stigma and Prejudice
Authors: Anastasia Schnitzer, Oliver Rehren
Abstract:
Mental health has become an increasingly important topic in society in recent years, not least due to the challenges posed by the coronavirus pandemic. Along with this, the public has become more and more aware that a lack of enlightenment and proper coping mechanisms may result in a notable risk of developing mental disorders. Yet, there are still many biases against those affected, which are further connected to issues of stigmatization and societal exclusion. One of the main strategies to combat these forms of prejudice and stigma is to induce intergroup contact. More specifically, intergroup contact theory states that engaging in certain types of contact with members of marginalized groups may be an effective way to improve attitudes towards these groups. However, due to the persistent prejudice and stigmatization, affected individuals often do not dare to speak openly about their mental disorders, so that intergroup contact often goes unnoticed. As a result, many people only experience conscious contact with individuals with a mental disorder through the media. As an analogy to intergroup contact theory, the parasocial contact hypothesis proposes that repeatedly being exposed to positive media representations of outgroup members can lead to a reduction of negative prejudices and attitudes towards this outgroup. While there is a growing body of research on the merit of this mechanism, measurements often only consist of 'positive' or 'negative' parasocial contact conditions (or examine the valence or quality of previous contact with the outgroup), while more specific conditions are often neglected. The current study aims to tackle this shortcoming. By scrutinizing the potential of contemporary series as a narrative framework of high quality, we strive to elucidate more detailed aspects of beneficial parasocial contact, for the sake of reducing prejudice and stigma towards individuals with mental disorders.
Thus, a two-factorial between-subjects online panel study with three measurement points was conducted (N = 95). Participants were randomly assigned to one of two groups and watched episodes of either a series with a narrative framework of high quality (Quality-TV) or of low quality (Continental-TV), with a one-week interval between the episodes. Suitable series were determined with the help of a pretest. Prejudice and stigma towards people with mental disorders were measured at the beginning of the study, before and after each episode, and in a final follow-up one week after the last two episodes. Additionally, parasocial interaction (PSI), quality of contact (QoC), and transportation were measured several times. Based on these data, multivariate multilevel analyses were performed in R using the lavaan package. Latent growth models showed moderate to high increases in QoC and PSI as well as small to moderate decreases in stigma and prejudice over time. Multilevel path analysis with individual and group levels further revealed that a qualitative narrative framework leads to a higher-quality contact experience, which in turn leads to lower prejudice and stigma, with effects ranging from moderate to high.
Keywords: prejudice, quality of contact, parasocial contact, narrative framework
Procedia PDF Downloads 85
292 The Effectiveness of an Occupational Therapy Metacognitive-Functional Intervention for the Improvement of Human Risk Factors of Bus Drivers
Authors: Navah Z. Ratzon, Rachel Shichrur
Abstract:
Background: Many studies have assessed and identified the risk factors of safe driving, but there is relatively little research-based evidence concerning the ability to improve the driving skills of drivers in general, and in particular of bus drivers, who are defined as a population at risk. Accidents involving bus drivers can endanger dozens of passengers and cause high direct and indirect damages. Objective: To examine the effectiveness of a metacognitive-functional intervention program for the reduction of risk factors among professional drivers relative to a control group. Methods: The study examined 77 bus drivers aged 27-69, working for a large public company in the center of the country. Twenty-one drivers continued to the intervention stage; four of them dropped out before the end of the intervention. The intervention program we developed was based on previous driving models and the guiding occupational therapy practice framework model in Israel, adjusting the model to professional driving in public transportation and its particular risk factors. Treatment focused on raising awareness of safe driving risk factors identified at prescreening (ergonomic, perceptual-cognitive, and on-road driving data), with reference to the difficulties that the driver raises, and on providing coping strategies. The intervention was customized for each driver and included three sessions of two hours. The effectiveness of the intervention was tested using objective measures, namely In-Vehicle Data Recorders (IVDR) for monitoring natural driving data and traffic accident data before and after the intervention, and subjective measures (an occupational performance questionnaire for bus drivers). Results: Statistical analysis found a significant difference in the degree of change in the rate of IVDR perilous events (t(17) = 2.14, p = 0.046) before and after the intervention.
There was a significant difference in the number of accidents per year before and after the intervention in the intervention group (t(17) = 2.11, p = 0.05), but no significant change in the control group. Subjective ratings of the level of performance and of satisfaction with performance improved in all areas tested following the intervention. The change in the 'human factors/person' field was significant (performance: t = -2.30, p = 0.04; satisfaction with performance: t = -3.18, p = 0.009). The change in the 'driving occupation/tasks' field was not significant but showed a tendency toward significance (t = -1.94, p = 0.07). No significant differences were found in driving environment-related variables. Conclusions: The metacognitive-functional intervention significantly improved the objective and subjective measures of the safety of bus drivers' driving. These novel results highlight the potential contribution of occupational therapists, using metacognitive-functional treatment, to preventing car accidents among the healthy driver population and improving the well-being of these drivers. This study also enables familiarity with advanced technologies of IVDR systems and enriches the knowledge of occupational therapists with regard to using a wide variety of driving assessment tools and making best-practice decisions.
Keywords: bus drivers, IVDR, human risk factors, metacognitive-functional intervention
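The paired comparisons reported above (the t(17) statistics imply 18 drivers in the analysis) have the form of a dependent-samples t-test on before/after measurements. The accident counts below are invented for illustration and do not reproduce the study's data or statistics.

```python
import numpy as np
from scipy import stats

# Hypothetical yearly accident counts for 18 drivers before and after the
# intervention (t(17) in the abstract implies n = 18); values are invented.
before = np.array([3, 2, 4, 1, 2, 3, 5, 2, 1, 3, 2, 4, 3, 2, 1, 2, 3, 4], float)
improvement = np.array([1, 0, 1, 0, 1, 1, 2, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1], float)
after = before - improvement

# Dependent-samples (paired) t-test on the before/after counts
t_stat, p_value = stats.ttest_rel(before, after)
print(f"t(17) = {t_stat:.2f}, p = {p_value:.4f}")
```

With 17 degrees of freedom (n - 1), the same test structure yields the t and p values quoted in the abstract when applied to the real accident data.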
Procedia PDF Downloads 347
291 A Fast Multi-Scale Finite Element Method for Geophysical Resistivity Measurements
Authors: Mostafa Shahriari, Sergio Rojas, David Pardo, Angel Rodriguez-Rozas, Shaaban A. Bakr, Victor M. Calo, Ignacio Muga
Abstract:
Logging-While-Drilling (LWD) is a technique to record down-hole logging measurements while drilling the well. Nowadays, LWD devices (e.g., nuclear, sonic, resistivity) are mostly used commercially for geo-steering applications. Modern borehole resistivity tools are able to measure all components of the magnetic field by incorporating tilted coils. The depth of investigation of LWD tools is limited compared to the thickness of the geological layers. Thus, it is common practice to approximate the Earth's subsurface with a sequence of 1D models. For a 1D model, we can reduce the dimensionality of the problem using a Hankel transform. We can solve the resulting system of ordinary differential equations (ODEs) either (a) analytically, which results in a so-called semi-analytic method after performing a numerical inverse Hankel transform, or (b) numerically. Semi-analytic methods are used by the industry due to their high performance. However, they have major limitations. First, the analytical solution of the aforementioned system of ODEs exists only for piecewise-constant resistivity distributions; for arbitrary resistivity distributions, no solution of the system of ODEs is presently known. Second, in geo-steering we need to solve inverse problems with respect to the inversion variables (e.g., the constant resistivity value of each layer and the bed boundary positions) using a gradient-based inversion method, and thus need to compute the corresponding derivatives; however, the analytical derivatives for cross-bedded formations and the analytical derivatives with respect to the bed boundary positions have not, to the best of our knowledge, been published. The main contribution of this work is to overcome the aforementioned limitations of semi-analytic methods by solving each 1D model (associated with each Hankel mode) using an efficient multi-scale finite element method.
The main idea is to divide our computations into two parts: (a) offline computations, which are independent of the tool positions and are precomputed only once and used for all logging positions, and (b) online computations, which depend upon the logging position. With the above method, (a) we can consider arbitrary resistivity distributions along the 1D model, and (b) we can easily and rapidly compute the derivatives with respect to any inversion variable at negligible additional cost by using an adjoint-state formulation. Although the proposed method is slower than semi-analytic methods, its computational efficiency is still high. In the presentation, we shall derive the mathematical variational formulation, describe the proposed multi-scale finite element method, and verify the accuracy and efficiency of our method by performing a wide range of numerical experiments and comparing the numerical solutions to semi-analytic ones when the latter are available.
Keywords: logging-while-drilling, resistivity measurements, multi-scale finite elements, Hankel transform
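The offline/online split described above can be illustrated, in a much simplified setting, by factorizing a system matrix once (offline) and reusing the factorization for many position-dependent right-hand sides (online). The toy 1D Laplacian below stands in for the actual multi-scale FEM system and is not the authors' implementation.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Offline stage: assemble and factorize a (toy) 1D finite-element-style
# stiffness matrix once; this step is independent of the tool position.
n = 200
main = 2.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
a = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
factorization = lu_factor(a)  # expensive step, done exactly once

# Online stage: each "logging position" only changes the source term
# (right-hand side), so every solve is a cheap triangular substitution.
solutions = []
for position in range(10):
    rhs = np.zeros(n)
    rhs[(20 * position) % n] = 1.0  # point source at a position-dependent node
    solutions.append(lu_solve(factorization, rhs))

# Sanity check: the first reused-factorization solve satisfies the system
residual = np.linalg.norm(a @ solutions[0] - np.eye(n)[0])
print(f"residual of first solve: {residual:.2e}")
```

The economics are the same as in the abstract: the once-only factorization dominates the cost, while each additional logging position is nearly free.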
Procedia PDF Downloads 387
290 Harnessing Artificial Intelligence for Early Detection and Management of Infectious Disease Outbreaks
Authors: Amarachukwu B. Isiaka, Vivian N. Anakwenze, Chinyere C. Ezemba, Chiamaka R. Ilodinso, Chikodili G. Anaukwu, Chukwuebuka M. Ezeokoli, Ugonna H. Uzoka
Abstract:
Infectious diseases continue to pose significant threats to global public health, necessitating advanced and timely detection methods for effective outbreak management. This study explores the integration of artificial intelligence (AI) in the early detection and management of infectious disease outbreaks. Leveraging vast datasets from diverse sources, including electronic health records, social media, and environmental monitoring, AI-driven algorithms are employed to analyze patterns and anomalies indicative of potential outbreaks. Machine learning models, trained on historical data and continuously updated with real-time information, contribute to the identification of emerging threats. The implementation of AI extends beyond detection, encompassing predictive analytics for disease spread and severity assessment. Furthermore, the paper discusses the role of AI in predictive modeling, enabling public health officials to anticipate the spread of infectious diseases and allocate resources proactively. Machine learning algorithms can analyze historical data, climatic conditions, and human mobility patterns to predict potential hotspots and optimize intervention strategies. The study evaluates the current landscape of AI applications in infectious disease surveillance and proposes a comprehensive framework for their integration into existing public health infrastructures. The implementation of an AI-driven early detection system requires collaboration between public health agencies, healthcare providers, and technology experts. Ethical considerations, privacy protection, and data security are paramount in developing a framework that balances the benefits of AI with the protection of individual rights. The synergistic collaboration between AI technologies and traditional epidemiological methods is emphasized, highlighting the potential to enhance a nation's ability to detect, respond to, and manage infectious disease outbreaks in a proactive and data-driven manner. 
The findings of this research underscore the transformative impact of harnessing AI for early detection and management, offering a promising avenue for strengthening the resilience of public health systems in the face of evolving infectious disease challenges. This paper advocates for the integration of artificial intelligence into the existing public health infrastructure for early detection and management of infectious disease outbreaks. The proposed AI-driven system has the potential to revolutionize the way we approach infectious disease surveillance, providing a more proactive and effective response to safeguard public health. Keywords: artificial intelligence, early detection, disease surveillance, infectious diseases, outbreak management
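The anomaly-flagging step the abstract describes can be illustrated with a deliberately minimal baseline. This sketch is hypothetical and not from the paper: it flags days whose case count exceeds a rolling mean by several standard deviations, whereas a real surveillance system would fuse many data streams and use learned models.

```python
# Illustrative toy baseline (not the paper's system): rolling z-score
# alerting on daily case counts.
import statistics

def flag_anomalies(counts, window=7, threshold=3.0):
    """Return indices of days whose count exceeds the rolling baseline
    by more than `threshold` standard deviations."""
    alerts = []
    for i in range(window, len(counts)):
        base = counts[i - window:i]
        mean = statistics.fmean(base)
        sd = statistics.pstdev(base) or 1.0  # guard against zero spread
        if (counts[i] - mean) / sd > threshold:
            alerts.append(i)
    return alerts

# ten quiet days, then a spike on day 10 (index 10)
daily = [4, 5, 6, 5, 4, 6, 5, 5, 6, 4, 30]
alerts = flag_anomalies(daily)
```

On this synthetic series only the final spike is flagged; the machine-learning models discussed above generalize this idea to multivariate, continuously updated inputs.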
Procedia PDF Downloads 68
289 Patterns of Change in Specific Behaviors of Autism Symptoms for Boys and for Girls Across Childhood
Authors: Einat Waizbard, Emilio Ferrer, Meghan Miller, Brianna Heath, Derek S. Andrews, Sally J. Rogers, Christine Wu Nordahl, Marjorie Solomon, David G. Amaral
Abstract:
Background: Autism symptoms consist of social-communication deficits and restricted/repetitive behaviors (RRB). The severity of these symptoms can change during childhood, with differences between boys and girls. The literature indicates that young autistic girls show a stronger tendency to decrease, and a weaker tendency to increase, their overall autism symptom severity levels compared to young autistic boys. It is not clear, however, which symptoms are driving these sex differences across childhood. In the current study, we evaluated the trajectories of independent autism symptoms across childhood and compared the patterns of change in such symptoms between boys and girls. Method: The study included 183 children diagnosed with autism (55 girls) evaluated three times across childhood, at ages 3, 6 and 11. We analyzed 22 independent items from the Autism Diagnostic Observation Schedule-2 (ADOS-2), the gold-standard assessment tool for autism symptoms, each item representing a specific autism symptom. First, we used latent growth curve models to estimate the trajectories of the 22 ADOS-2 items for each child in the study. Second, we extracted the factor scores representing the individual slopes for each ADOS-2 item (i.e., the slope representing that child’s change on that specific item). Third, we used factor analysis to identify common patterns of change among the ADOS-2 items, separately for boys and girls, i.e., which autism symptoms tend to change together and which change independently across childhood. Results: The best-emerging patterns for both boys and girls identified four common factors: three factors representative of changes in social-communication symptoms and one factor describing changes in RRB. Boys and girls showed the same pattern of change in RRB, with four items (e.g., speech abnormalities) changing together across childhood and three items (e.g., mannerisms) changing independently of other items. 
For social-communication deficits in boys, three factors were identified: the first factor included six items representing initiating and engaging in social communication (e.g., quality of social overtures, conversation), the second factor included five items describing responsive social communication (e.g., response to name), and the third factor included three items related to different aspects of social communication (e.g., level of language). Girls’ social-communication deficits also loaded onto three factors: the first factor included five items (e.g., unusual eye contact), the second factor included six items (e.g., quality of social response), and the third factor included four items (e.g., showing). Some items showed similar patterns of change for both sexes (e.g., responsive joint attention), while other items showed differences (e.g., shared enjoyment). Conclusions: Girls and boys had different patterns of change in autism symptom severity across childhood. For RRB, both sexes showed similar patterns. For social-communication symptoms, however, there were both similarities and differences between boys and girls in the way symptoms changed over time. The strongest patterns of change were identified for initiating and engaging in social communication for boys and for responsive social communication for girls. Keywords: autism spectrum disorder, autism symptom severity, symptom trajectories, sex differences
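The intuition behind "items changing together" can be made concrete with a crude stand-in for the factor-analytic step. This sketch uses synthetic per-child slopes, not ADOS-2 data: items whose slopes correlate strongly across children load on a common factor, while an uncorrelated item changes independently.

```python
# Hypothetical illustration (synthetic slopes, not study data): items driven
# by a shared latent change correlate strongly; an independent item does not.
import random
import statistics

def pearson(x, y):
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

random.seed(0)
n = 183                                        # children, as in the study
common = [random.gauss(0, 1) for _ in range(n)]  # shared latent change
# two items driven by the same latent change, one changing independently
item_a = [c + random.gauss(0, 0.3) for c in common]
item_b = [c + random.gauss(0, 0.3) for c in common]
item_c = [random.gauss(0, 1) for _ in range(n)]

r_ab = pearson(item_a, item_b)   # high: items "change together"
r_ac = pearson(item_a, item_c)   # near zero: independent change
```

Factor analysis generalizes this pairwise view by recovering the latent factors jointly from the full slope correlation matrix.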
Procedia PDF Downloads 52
288 Spatial Analysis and Determinants of Number of Antenatal Health Care Visit Among Pregnant Women in Ethiopia: Application of Spatial Multilevel Count Regression Models
Authors: Muluwerk Ayele Derebe
Abstract:
Background: Antenatal care (ANC) is an essential element in the continuum of reproductive health care for preventing avoidable pregnancy-related morbidity and mortality. Objective: The aim of this study is to assess the spatial pattern and predictors of ANC visits in Ethiopia. Method: This study used 2016 Ethiopian Demographic and Health Survey data on 7,174 pregnant women aged 15-49 years, from a nationwide community-based cross-sectional survey. Spatial analysis was done using Getis-Ord Gi* statistics to identify hot- and cold-spot areas of ANC visits. The multilevel glmmTMB package, adjusted for spatial effects, was used in R. Spatial multilevel count regression was conducted to identify predictors of antenatal care visits among pregnant women, and the proportional change in variance was computed to uncover the effect of individual- and community-level factors on ANC visits. Results: The distribution of ANC visits was spatially clustered (Moran’s I = 0.271, p < 0.001; ICC = 0.497, p < 0.001). The highest spatial outlier areas of ANC visits were found in the Amhara (South Wollo, West Gojjam, North Shewa), Oromia (West Arsi and East Hararghe), Tigray (Central Tigray) and Benishangul-Gumuz (Asosa and Metekel) regions. The data showed excess zeros (34.6%) and over-dispersion. The expected ANC visits of pregnant women with pregnancy complications were higher by 0.7868 [ARR = 2.1964, 95% CI: 1.8605, 2.5928, p-value < 0.0001] compared to pregnant women who had no pregnancy complications. The expected ANC visits of a pregnant woman who lived in a rural area were 1.2254 times higher [ARR = 3.4057, 95% CI: 2.1462, 5.4041, p-value < 0.0001] compared to those of a pregnant woman who lived in an urban area. The study found dissimilar clusters, with clusters of a low number of zero counts for the mean number of ANC visits surrounded by clusters with higher counts of the average number of ANC visits, when other variables were held constant. 
Conclusion: This study found that the number of ANC visits in Ethiopia had a spatial pattern associated with socioeconomic, demographic, and geographic risk factors. Spatial clustering of ANC visits exists in all regions of Ethiopia. At the individual level, the mother's age, religion, mother's education, husband's education, mother's occupation, husband's occupation, signs of pregnancy complications, wealth index and marital status had a strong association with the number of ANC visits. At the community level, place of residence, region, age of the mother, sex of the household head, signs of pregnancy complications and distance to a health facility had a strong association with the number of ANC visits. Keywords: Ethiopia, ANC, spatial, multilevel, zero-inflated Poisson
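The spatial-clustering statistic the study reports (Moran's I = 0.271) can be computed from first principles. The sketch below uses a toy four-area map with a binary adjacency weight matrix, not the EDHS data: positive I indicates that similar values sit next to each other, negative I that they are kept apart.

```python
# Illustrative sketch (toy areas, not the EDHS sample): global Moran's I
# for spatial autocorrelation, I = (n/W) * sum_ij w_ij (x_i - xbar)(x_j - xbar)
#                                   / sum_i (x_i - xbar)^2.
def morans_i(values, weights):
    n = len(values)
    mean = sum(values) / n
    num = sum(weights[i][j] * (values[i] - mean) * (values[j] - mean)
              for i in range(n) for j in range(n))
    den = sum((v - mean) ** 2 for v in values)
    w_sum = sum(sum(row) for row in weights)
    return (n / w_sum) * (num / den)

# four areas on a line 0-1-2-3; neighbours share an edge
W = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
clustered = [8, 7, 1, 2]   # similar ANC counts sit next to each other
dispersed = [8, 1, 7, 2]   # similar counts are kept apart

i_clustered = morans_i(clustered, W)   # positive: spatial clustering
i_dispersed = morans_i(dispersed, W)   # negative: spatial dispersion
```

In practice a row-standardized weight matrix and a permutation test for significance would accompany the statistic; libraries such as PySAL package both.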
Procedia PDF Downloads 76
287 Unlocking New Room of Production in Brown Field; Integration of Geological Data Conditioned 3D Reservoir Modelling of Lower Senonian Matulla Formation, RAS Budran Field, East Central Gulf of Suez, Egypt
Authors: Nader Mohamed
Abstract:
The Late Cretaceous deposits are well developed throughout Egypt. This is due to a transgression phase associated with the subsidence caused by the neo-Tethyan rift event that took place across the northern margin of Africa, resulting in a period of dominantly marine deposits in the Gulf of Suez. The Late Cretaceous Nezzazat Group represents the Cenomanian, Turonian and clastic sediments of the Lower Senonian. The Nezzazat Group has been divided into four formations, namely, from base to top, the Raha Formation, the Abu Qada Formation, the Wata Formation and the Matulla Formation. The Cenomanian Raha and the Lower Senonian Matulla formations are the most important clastic sequence in the Nezzazat Group because they provide the highest net reservoir thickness and the highest net/gross ratio. This study focuses on the Matulla Formation in the eastern part of the Gulf of Suez. The three stratigraphic surface sections (Wadi Sudr, Wadi Matulla and Gabal Nezzazat), which represent the exposed Coniacian-Santonian sediments in Sinai, are used for correlating the Matulla sediments of the Ras Budran field. Cutting descriptions, petrographic examination, log behaviors and biostratigraphy, together with outcrops, are used to identify the reservoir characteristics, lithology and facies environments, and to subdivide the Matulla Formation into three units. The lower unit is believed to be the main reservoir, as it consists mainly of sands with shale and sandy carbonates, while the other units are mainly carbonate with some streaks of shale and sand. Reservoir modeling is an effective technique that assists in reservoir management decisions concerning the development and depletion of hydrocarbon reserves, so it was essential to model the Matulla reservoir as accurately as possible in order to better evaluate and calculate the reserves and to determine the most effective way of recovering as much of the petroleum as economically possible. 
All available data on the Matulla Formation are used to build the reservoir structure model and the lithofacies, porosity, permeability and water saturation models, which are the main parameters that describe the reservoir and provide information for effectively evaluating the need to develop its oil potential. This study has shown the effectiveness of: 1) the integration of geological data to evaluate and subdivide the Matulla Formation into three units; 2) lithology and facies environment interpretation, which helped in defining the depositional nature of the Matulla Formation; 3) 3D reservoir modeling technology as a tool for adequately understanding the spatial distribution of properties and, in addition, evaluating the unlocked new reservoir areas of the Matulla Formation, which have to be drilled to investigate and exploit the un-drained oil; 4) adding a new room of production and additional reserves to the Ras Budran field. Keywords: geology, oil and gas, geoscience, sequence stratigraphy
Procedia PDF Downloads 106
286 Optimization of Biomass Production and Lipid Formation from Chlorococcum sp. Cultivation on Dairy and Paper-Pulp Wastewater
Authors: Emmanuel C. Ngerem
Abstract:
The ever-increasing depletion of the dominant global form of energy (fossil fuels) calls for the development of sustainable and green alternative energy sources such as bioethanol, biohydrogen, and biodiesel. The production of the major biofuels relies on biomass feedstocks that are mainly derived from edible food crops and some inedible plants. One suitable feedstock with great potential as raw material for biofuel production is microalgal biomass. Despite the tremendous attributes of microalgae as a source of biofuel, their cultivation requires huge volumes of freshwater, thus posing a serious threat to commercial-scale production and utilization of algal biomass. In this study, a multi-media wastewater mixture for microalgae growth was formulated and optimized. Moreover, the obtained microalgae biomass was pre-treated for reducing-sugar recovery and compared with previous studies on microalgae biomass pre-treatment. The formulated and optimized mixed wastewater media for biomass and lipid accumulation were established using a simplex lattice mixture design. Based on the superposition of the candidate results, numerical optimization was conducted, followed by the analysis of biomass concentration and lipid accumulation. Coefficients of determination (R²) of 0.91 and 0.98 were obtained for the biomass concentration and lipid accumulation models, respectively. The developed optimization model predicted an optimal biomass concentration and lipid accumulation of 1.17 g/L and 0.39 g/g, respectively. It suggested a 64.69% dairy wastewater (DWW) and 35.31% paper and pulp wastewater (PWW) mixture for biomass concentration, and 34.21% DWW and 65.79% PWW for lipid accumulation. Experimental validation generated 0.94 g/L and 0.39 g/g of biomass concentration and lipid accumulation, respectively. The obtained microalgae biomass was pre-treated, enzymatically hydrolysed, and subsequently assessed for reducing sugars. 
The optimization of microwave pre-treatment of Chlorococcum sp. was achieved using response surface methodology (RSM). Microwave power (100 – 700 W), pre-treatment time (1 – 7 min), and acid-liquid ratio (1 – 5%) were selected as independent variables for RSM optimization. The optimum conditions were achieved at a microwave power, pre-treatment time, and acid-liquid ratio of 700 W, 7 min, and 32.33:1, respectively. These conditions provided the highest amount of reducing sugars, at 10.73 g/L. Process optimization predicted a reducing sugar yield of 11.14 g/L for microwave-assisted pre-treatment with 2.52% HCl for 4.06 min at 700 W. Experimental validation yielded reducing sugars of 15.67 g/L. These findings demonstrate that dairy wastewater and paper and pulp wastewater, which could otherwise pose a serious environmental nuisance, can be blended to form a suitable microalgae growth medium, consolidating the potency of microalgae as a viable feedstock for fermentable sugars. Also, the outcome of this study supports the microalgal wastewater biorefinery concept, where wastewater remediation is coupled with bioenergy production. Keywords: wastewater cultivation, mixture design, lipid, biomass, nutrient removal, microwave, Chlorococcum, raceway pond, fermentable sugar, modelling, optimization
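For a two-component mixture like the DWW/PWW blend, the simplex-lattice approach reduces to fitting a Scheffé quadratic model in one fraction. The sketch below is a hedged illustration with made-up response values (not the paper's data or its fitted coefficients): the model is fitted exactly through three lattice points and maximized analytically.

```python
# Hedged illustration (hypothetical numbers): two-component Scheffe quadratic
# mixture model y = b1*x1 + b2*x2 + b12*x1*x2, with x1 + x2 = 1.
def fit_scheffe(y_pure1, y_pure2, y_blend):
    """Fit through responses at x1 = 1, x1 = 0 and the 50/50 blend."""
    b1, b2 = y_pure1, y_pure2
    # at the 50/50 blend: y = 0.5*b1 + 0.5*b2 + 0.25*b12
    b12 = (y_blend - 0.5 * b1 - 0.5 * b2) / 0.25
    return b1, b2, b12

def predict(b1, b2, b12, x1):
    x2 = 1.0 - x1
    return b1 * x1 + b2 * x2 + b12 * x1 * x2

def best_x1(b1, b2, b12):
    # stationary point of the quadratic: dy/dx1 = b1 - b2 + b12*(1 - 2*x1) = 0
    x1 = 0.5 + (b1 - b2) / (2.0 * b12)
    return min(1.0, max(0.0, x1))

# hypothetical biomass (g/L) at 100% DWW, 100% PWW and the 50/50 blend
b1, b2, b12 = fit_scheffe(0.90, 0.70, 1.10)
x_opt = best_x1(b1, b2, b12)       # optimal DWW fraction for these numbers
y_opt = predict(b1, b2, b12, x_opt)
```

A positive b12 (blending synergy) pushes the optimum toward an interior mixture rather than either pure wastewater, which is the qualitative pattern the study's 64.69%/35.31% optimum reflects.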
Procedia PDF Downloads 43
285 Sensor and Sensor System Design, Selection and Data Fusion Using Non-Deterministic Multi-Attribute Tradespace Exploration
Authors: Matthew Yeager, Christopher Willy, John Bischoff
Abstract:
The conceptualization and design phases of a system lifecycle consume a significant amount of the lifecycle budget in the form of direct tasking and capital, as well as the implicit costs associated with unforeseeable design errors that are only realized during downstream phases. Ad hoc or iterative approaches to generating system requirements oftentimes fail to consider the full array of feasible systems or product designs for a variety of reasons, including, but not limited to: initial conceptualization that oftentimes incorporates a priori or legacy features; the inability to capture, communicate and accommodate stakeholder preferences; inadequate technical designs and/or feasibility studies; and locally-, but not globally-, optimized subsystems and components. These design pitfalls can beget unanticipated developmental or system alterations with added costs, risks and support activities, heightening the risk of suboptimal system performance, premature obsolescence or forgone development. Supported by rapid advances in learning algorithms and hardware technology, sensors and sensor systems have become commonplace in both commercial and industrial products. The evolving array of hardware components (i.e., sensors, CPUs, modular/auxiliary access, etc.) as well as recognition, data fusion and communication protocols have all become increasingly complex and critical for design engineers during both conceptualization and implementation. This work seeks to develop and utilize a non-deterministic approach for sensor system design within the multi-attribute tradespace exploration (MATE) paradigm, a technique that incorporates decision theory into model-based techniques in order to explore complex design environments and discover better system designs. 
Developed to address the inherent design constraints in complex aerospace systems, MATE techniques enable project engineers to examine all viable system designs, assess attribute utility and system performance, and better align with stakeholder requirements. Whereas such previous work has focused on aerospace systems and been conducted in a deterministic fashion, this study addresses a wider array of system design elements by incorporating both traditional tradespace elements (e.g., hardware components) and popular multi-sensor data fusion models and techniques. Furthermore, statistical performance features of this model-based MATE approach will enable non-deterministic techniques for various commercial systems that range in application, complexity and system behavior, demonstrating significant utility within the realm of formal systems decision-making. Keywords: multi-attribute tradespace exploration, data fusion, sensors, systems engineering, system design
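The core tradespace move in MATE, enumerating candidate designs and keeping only those not dominated in utility and cost, can be sketched briefly. The candidate designs below are invented for illustration and are not from the authors' model.

```python
# Simplified sketch (hypothetical designs, not the authors' tradespace):
# enumerate candidates and keep the non-dominated (Pareto-efficient) set
# over (cost, multi-attribute utility).
def pareto_front(designs):
    """designs: list of (name, cost, utility). A design is dominated if some
    other design is no worse on both axes and strictly better on one."""
    front = []
    for name, cost, util in designs:
        dominated = any(c <= cost and u >= util and (c < cost or u > util)
                        for _, c, u in designs)
        if not dominated:
            front.append(name)
    return front

candidates = [
    ("A: single IR sensor",        10, 0.40),
    ("B: IR + acoustic fusion",    25, 0.75),
    ("C: full multi-sensor suite", 60, 0.90),
    ("D: legacy stack",            40, 0.60),  # dominated by B
]
front = pareto_front(candidates)
```

The non-deterministic extension the abstract proposes would replace each scalar utility with a distribution, so the front itself carries uncertainty rather than a single deterministic boundary.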
Procedia PDF Downloads 189
284 Development and Modelling of Cellulose Nano-Crystal from Agricultural Wastes for Adsorptive Removal of Pharmaceuticals in Wastewater
Authors: Abubakar Muhammad Hammari, Usman Dadum Hamza, Maryam Ibrahim, Kabir Garba, Idris Muhammad Misau
Abstract:
Pharmaceuticals are increasingly present in water systems, posing threats to ecosystems and human health. The effective treatment of pharmaceutical wastewater presents a significant challenge due to the complex and diverse organic and inorganic contaminants it contains. Conventional treatment methods often struggle to completely remove these pollutants due to their stability and water solubility, leading to environmental concerns and potential health risks. This research proposes the use of cellulose nanocrystals (CNCs) derived from agricultural waste as efficient and sustainable adsorbents for pharmaceutical wastewater treatment. CNCs offer high surface area, biodegradability, and low cost compared to existing options. This study evaluates the production, characterization, adsorption properties, and reusability of cellulose nanocrystals (CNCs) derived from waste paper (CNC-WP), rice husk (CNC-RH), and groundnut shell (CNC-GS). The percentage yield of CNCs was highest from wastepaper at 50.67%, followed by groundnut shell at 33.40% and rice husk at 26.46%. X-ray diffraction (XRD) confirmed the cellulose crystalline structure across all samples, while scanning electron microscopy (SEM) revealed a needle-like morphology with size distribution variations. Energy-dispersive X-ray spectroscopy (EDX) identified carbon and oxygen as the primary elements, with minor residual inorganic materials varying by source. BET analysis indicated high surface areas for all CNCs, with CNC-RH exhibiting the highest value (464.592 m²/g), suggesting a more porous structure. The pore sizes of all samples fell within the mesopore range (2.108 nm to 2.153 nm). Adsorption studies focused on metronidazole (MNZ) removal using CNC-WP. Isotherm models, including Langmuir and Sips, described the equilibrium between the MNZ concentration and adsorption onto CNC-WP, showing the best fit with R² values exceeding 0.95. 
The adsorption process was favourable, with monolayer coverage and potential binding-energy heterogeneity. Kinetic modelling identified the pseudo-second-order model as the best fit (R² = 1, SSE = 5.00 × 10⁻⁷), indicating chemisorption as the predominant mechanism. Thermodynamic analysis revealed negative ΔG values at all temperatures, indicating spontaneous adsorption, with more favourable adsorption at higher temperatures. The adsorption process was exothermic, as indicated by negative ΔH values. Reusability studies demonstrated that CNC-WP retained high MNZ removal efficiency, with a modest decrease from 99.59% to 89.11% over ten regeneration cycles. This study highlights the efficiency of wastepaper as a raw material for CNC production and its potential for effective and reusable MNZ adsorption. Keywords: cellulose nanocrystals (CNCs), adsorption efficiency, metronidazole removal, reusability
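The Langmuir fitting step can be illustrated with its common linearised form, Ce/qe = Ce/qmax + 1/(KL·qmax). The data below are synthetic (generated from assumed "true" parameters, not the CNC-WP measurements); ordinary least squares on (Ce, Ce/qe) recovers the monolayer capacity and Langmuir constant.

```python
# Hedged sketch (synthetic isotherm, not the paper's data): linearised
# Langmuir fit, Ce/qe = Ce/qmax + 1/(KL*qmax).
def linear_fit(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys)) /
             sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

q_max_true, k_l_true = 40.0, 0.5          # assumed "true" parameters
ce = [1.0, 2.0, 5.0, 10.0, 20.0]          # equilibrium concentrations (mg/L)
qe = [q_max_true * k_l_true * c / (1 + k_l_true * c) for c in ce]  # mg/g

slope, intercept = linear_fit(ce, [c / q for c, q in zip(ce, qe)])
q_max = 1.0 / slope                       # recovered monolayer capacity
k_l = slope / intercept                   # recovered Langmuir constant
```

On noise-free data the fit recovers the generating parameters exactly; with real measurements, nonlinear fitting of the original isotherm is generally preferred because linearisation distorts the error structure.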
Procedia PDF Downloads 5
283 Active Development of Tacit Knowledge: Knowledge Management, High Impact Practices and Experiential Learning
Authors: John Zanetich
Abstract:
Due to their positive associations with student learning and retention, certain undergraduate opportunities are designated ‘high-impact.’ High-Impact Practices (HIPs) such as learning communities, community-based projects, research, internships, study abroad and the culminating senior experience share several traits in common: they demand considerable time and effort, situate learning outside of the classroom, require meaningful interactions between faculty and students, encourage collaboration with diverse others, and provide frequent and substantive feedback. Because of the experiential learning they involve, participation in these practices can be life-changing. High-impact learning helps individuals locate tacit knowledge and build mental models that support the accumulation of knowledge. Ongoing learning from experience and knowledge conversion provides the individual with a way to implicitly organize knowledge and share knowledge over a lifetime. Knowledge conversion is a knowledge management component which focuses on the explication of the tacit knowledge that exists in the minds of students and the knowledge which is embedded in the processes and relationships of the classroom educational experience. Knowledge conversion is required when working with tacit knowledge and when a learner must align deeply held beliefs with the cognitive dissonance created by new information. Knowledge conversion and tacit knowledge result from the fact that an individual's way of knowing, that is, their core belief structure, is considered generalized and tacit instead of explicit and specific. As a phenomenon, tacit knowledge is not readily available to the learner for explicit description unless evoked by an external source. 
The development of knowledge-related capabilities such as Aggressive Development of Tacit Knowledge (ADTK) can be used in experiential educational programs to enhance knowledge, foster behavioral change, and improve decision making and overall performance. ADTK allows the student in HIPs to use their existing knowledge in a way that allows them to evaluate and make any necessary modifications to their core construct of reality in order to amalgamate new information. Based on the Lewin/Schein Change Theory, the learner will reach for tacit knowledge as a stabilizing mechanism when they are challenged by new information that puts them slightly off balance. As in word-association drills, the important concept is the first thought. The reactionary outpouring to an experience is the programmed or tacit memory and knowledge of their core belief structure. ADTK is a way to help teachers design their own methods and activities to unfreeze, create new learning, and then refreeze the core constructs upon which future learning in a subject area is built. This paper will explore the use of ADTK as a technique for knowledge conversion in the classroom in general and in HIP programs specifically. It will focus on knowledge conversion in curriculum development and propose the use of one-time educational experiences, multi-session experiences and sequential program experiences focusing on tacit knowledge in educational programs. Keywords: tacit knowledge, knowledge management, college programs, experiential learning
Procedia PDF Downloads 264
282 Advancements in Electronic Sensor Technologies for Tea Quality Evaluation
Authors: Raana Babadi Fathipour
Abstract:
Tea, second only to water in global consumption, holds a significant place as the beverage of choice for many around the world. The process of fermenting tea leaves plays a crucial role in determining its ultimate quality, traditionally assessed through meticulous observation by tea tasters and laboratory analysis. However, advancements in technology have paved the way for innovative electronic sensing platforms like the electronic nose (e-nose), electronic tongue (e-tongue), and electronic eye (e-eye). These tools, coupled with sophisticated data processing algorithms, not only expedite the assessment of tea's sensory qualities based on consumer preferences but also establish new benchmarks for this esteemed bioactive product to meet burgeoning market demands worldwide. By harnessing intricate data sets derived from electronic signals and deploying multivariate statistical techniques, these platforms can enhance accuracy in predicting and distinguishing tea quality with high precision. In this contemporary exploration, a comprehensive overview is provided of the most recent breakthroughs and viable solutions aimed at addressing forthcoming challenges in the realm of tea analysis. Utilizing bio-mimicking Electronic Sensory Perception systems (ESPs), researchers have developed innovative technologies that enable precise and instantaneous evaluation of the sensory-chemical attributes inherent in tea and its derivatives. These sensing mechanisms are adept at deciphering key elements such as aroma, taste, and color profiles, translating the resulting data into mathematical algorithms for classification purposes. 
Through these capabilities, such devices exhibit remarkable proficiency in discerning various teas with respect to their distinct pricing structures, geographic origins, harvest epochs, fermentation processes, storage durations, quality classifications, and potential adulteration levels. While voltammetric and fluorescent sensor arrays have emerged as promising tools for constructing electronic tongue systems proficient in scrutinizing tea compositions, potentiometric electrodes continue to serve as reliable instruments for meticulously monitoring taste dynamics within different tea varieties. By implementing a feature-level fusion strategy within predictive models, marked enhancements can be achieved in efficiency and accuracy. Moreover, by establishing intrinsic linkages through pattern recognition methodologies between sensory traits and the biochemical makeup of tea samples, further strides are made toward enhancing our understanding of this venerable beverage's complex nature. Keywords: classifier system, tea, polyphenol, sensor, taste sensor
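The classification step that maps sensor-array responses to tea grades can be illustrated with a deliberately simple pattern-recognition baseline. The readings below are invented (not from any real e-tongue dataset): a nearest-centroid classifier assigns a new multichannel reading to the closest class average.

```python
# Toy sketch (invented sensor readings, hypothetical grade labels): a
# nearest-centroid classifier over multichannel e-tongue responses.
def centroid(samples):
    n = len(samples)
    return [sum(s[k] for s in samples) / n for k in range(len(samples[0]))]

def classify(reading, centroids):
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    # return the label whose centroid is nearest to the reading
    return min(centroids, key=lambda label: dist2(reading, centroids[label]))

# training responses on three hypothetical sensor channels
grade_a = [[0.9, 0.2, 0.4], [1.0, 0.1, 0.5], [0.8, 0.3, 0.4]]
grade_b = [[0.2, 0.9, 0.7], [0.3, 1.0, 0.8], [0.1, 0.8, 0.6]]
centroids = {"grade A": centroid(grade_a), "grade B": centroid(grade_b)}

label = classify([0.85, 0.25, 0.45], centroids)
```

Production systems described in the abstract replace this baseline with multivariate models (PCA, discriminant analysis, neural networks) and fuse e-nose, e-tongue and e-eye features at the feature level.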
Procedia PDF Downloads 0
281 Analyzing Data Protection in the Era of Big Data under the Framework of Virtual Property Layer Theory
Authors: Xiaochen Mu
Abstract:
Data rights confirmation, as a key legal issue in the development of the digital economy, is undergoing a transition from a traditional rights paradigm to a more complex private-economic paradigm. In this process, data rights confirmation has evolved from a simple claim of rights to a complex structure encompassing multiple dimensions of personality rights and property rights. Current data rights confirmation practices are primarily reflected in two models: holistic rights confirmation and process rights confirmation. The holistic rights confirmation model continues the traditional "one object, one right" theory, while the process rights confirmation model, through contractual relationships in the data processing process, recognizes rights that are more adaptable to the needs of data circulation and value release. In the design of the data property rights system, there is a hierarchical characteristic aimed at decoupling from raw data to data applications through horizontal stratification and vertical staging. This design not only respects the ownership rights of data originators but also, based on the usufructuary rights of enterprises, constructs a corresponding rights system for different stages of data processing activities. The subjects of data property rights include both data originators, such as users, and data producers, such as enterprises, who enjoy different rights at different stages of data processing. The intellectual property rights system, with the mission of incentivizing innovation and promoting the advancement of science, culture, and the arts, provides a complete set of mechanisms for protecting innovative results. 
However, unlike traditional private property rights, the granting of intellectual property rights is not an end in itself; the purpose of the intellectual property system is to balance the exclusive rights of the rights holders with the prosperity and long-term development of society's public learning and the entire field of science, culture, and the arts. Therefore, the intellectual property granting mechanism provides both protection and limitations for the rights holder. This aligns well with the dual attributes of data. In terms of achieving the protection of data property rights, the granting of intellectual property rights is an important institutional choice that can enhance the effectiveness of the data property exchange mechanism. Although this is not the only path, the granting of data property rights within the framework of the intellectual property rights system helps to establish fundamental legal relationships and rights confirmation mechanisms and is more compatible with the classification and grading system of data. The modernity of the intellectual property rights system allows it to adapt to the needs of big data technology development through special clauses or industry guidelines, thus promoting the comprehensive advancement of data intellectual property rights legislation. This paper analyzes data protection under the virtual property layer theory and a two-fold virtual property rights system. Based on the “bundle of rights” theory, this paper establishes a specific three-level structure of data rights. It analyzes the cases Google v Vidal-Hall, Halliday v Creation Consumer Finance, Douglas v Hello Limited, Campbell v MGN and Imerman v Tchenquiz, and concludes that recognizing property rights over personal data and protecting data under the framework of intellectual property would be beneficial for establishing the tort of misuse of personal information. Keywords: data protection, property rights, intellectual property, big data
Procedia PDF Downloads 41