Search results for: non uniform utility computing
84 R Statistical Software Applied in Reliability Analysis: Case Study of Diesel Generator Fans
Authors: Jelena Vucicevic
Abstract:
Reliability analysis is a very important task in many areas of work. In any industry it is crucial for maintenance, efficiency, safety and monetary costs. There are established ways to calculate reliability, unreliability, failure density and failure rate. This paper introduces another way of calculating reliability by using the R statistical software. R is a free software environment for statistical computing and graphics. It compiles and runs on a wide variety of UNIX platforms, Windows and MacOS. The R programming environment is a widely used open source system for statistical analysis and statistical programming. It includes thousands of functions for the implementation of both standard and new statistical methods, and it does not limit the user to operations related only to these functions. The program has many benefits over similar programs: it is free and, as open source, constantly updated; it has a built-in help system; and the R language is easy to extend with user-written functions. The significance of the work is the calculation of time to failure, or reliability, in a new way, using statistics. Another advantage of this calculation is that no technical details are needed, and it can be applied to any part for which we need to know the time to failure in order to plan appropriate maintenance, maximize usage and minimize costs. In this case, the calculations have been made on diesel generator fans, but the same principle can be applied to any other part. The data for this paper came from a field engineering study of the time to failure of diesel generator fans. The ultimate goal was to decide whether or not to replace the working fans with a higher quality fan to prevent future failures. Seventy generators were studied. For each one, the number of hours of running time from its first being put into service until fan failure, or until the end of the study (whichever came first), was recorded. The dataset consists of two variables: hours and status. Hours gives the running time of each fan, and status records the event: 1 - failed, 0 - censored. Censored data represent cases that could not be followed to the end, so the fan could still either fail or survive. Obtaining the result with R was easy and quick: the program takes censored data into consideration and includes them in the results, which is not so straightforward in hand calculation. For the purpose of the paper, the results from R have been compared to hand calculations in two different cases: censored data treated as failures, and censored data treated as survivals. In all three cases the results are significantly different. If the user decides to use R for further calculations, it will give more precise results when working with censored data than hand calculation. Keywords: censored data, R statistical software, reliability analysis, time to failure
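A minimal sketch of the same reliability calculation outside R (in Python, with a hand-rolled Kaplan-Meier product-limit step): the hours and status values below are illustrative stand-ins, not the 70-fan study data; in R itself the equivalent call would be, e.g., survfit(Surv(hours, status) ~ 1) from the survival package.

```python
# Minimal sketch (not the authors' R code): Kaplan-Meier reliability estimate
# from right-censored fan data. Values are hypothetical illustrative numbers.
import numpy as np

hours  = np.array([450, 460, 1150, 1600, 1660, 1850, 2030, 2070, 2080, 2200])
status = np.array([1,   1,   0,    1,    0,    1,    0,    1,    0,    0])  # 1 = failed, 0 = censored

order = np.argsort(hours)
hours, status = hours[order], status[order]

n_at_risk = len(hours)
reliability = 1.0
for t, failed in zip(hours, status):
    if failed:
        reliability *= (n_at_risk - 1) / n_at_risk   # product-limit step at each failure
    n_at_risk -= 1                                    # censored units simply leave the risk set
    print(f"t = {t:5d} h   R(t) = {reliability:.3f}")
```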
Procedia PDF Downloads 401
83 Consumption and Diffusion Based Model of Tissue Organoid Development
Authors: Elena Petersen, Inna Kornienko, Svetlana Guryeva, Sergey Simakov
Abstract:
In vitro organoid cultivation requires the simultaneous provision of the necessary vascularization and nutrient perfusion of cells during organoid development. However, many aspects of this problem are still unsolved. The functionality of vascular network ingrowth is limited during the early stages of organoid development, since the vascular network only becomes functional in the final stages of in vitro organoid cultivation. Therefore, a microchannel network should be created in the hydrogel matrix in the early stages of organoid cultivation, aimed at conducting and maintaining the minimally required level of nutrient perfusion for all cells in the expanding organoid. The network configuration should be designed properly in order to exclude hypoxic and necrotic zones in the expanding organoid at all stages of its cultivation. In vitro vascularization is currently the main issue within the field of tissue engineering. As perfusion and oxygen transport have direct effects on cell viability and differentiation, researchers are currently limited to tissues of a few millimeters in thickness. These limitations are imposed by mass transfer and are defined by the balance between the metabolic demand of the cellular components in the system and the size of the scaffold. Current approaches include growth factor delivery, channeled scaffolds, perfusion bioreactors, microfluidics, cell co-cultures, cell functionalization, modular assembly, and in vivo systems. These approaches may improve cell viability or generate capillary-like structures within a tissue construct. Thus, there is a fundamental disconnect between defining the metabolic needs of tissue through quantitative measurements of oxygen and nutrient diffusion and the potential ease of integration into host vasculature for future in vivo implantation. A model is proposed for the prognosis of organoid perfusion during growth, based on joint simulations of general nutrient diffusion, nutrient diffusion into the hydrogel matrix through the contact surfaces and microchannel walls, and nutrient consumption by the cells of the expanding organoid, including biomatrix contraction during tissue development, which is associated with a changing consumption rate of the growing organoid cells. The model allows computing an effective microchannel network design that gives the minimally required level of nutrient concentration in all parts of the growing organoid. It can be used for preliminary planning of the microchannel network design and for simulations of the nutrient supply rate depending on the stage of organoid development. Keywords: 3D model, consumption model, diffusion, spheroid, tissue organoid
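A minimal sketch of the consumption-diffusion balance such a model rests on: an explicit finite-difference scheme for dC/dt = D d²C/dx² − kC on a 1-D slice between two microchannel walls. The diffusivity, consumption rate and geometry below are illustrative assumptions, not the authors' parameters.

```python
# Sketch: nutrient diffusion with first-order cellular consumption between two
# microchannel walls held at a fixed normalized concentration.
import numpy as np

L, nx = 1e-3, 101                # 1 mm slice of organoid, grid points
dx = L / (nx - 1)
D  = 1e-9                        # nutrient diffusivity, m^2/s (assumed)
k  = 1e-2                        # first-order consumption rate, 1/s (assumed)
dt = 0.4 * dx**2 / D             # explicit-scheme stability limit

c = np.zeros(nx)
c[0] = c[-1] = 1.0               # microchannel walls supply nutrient (normalized to 1)

for _ in range(20000):
    lap = (c[:-2] - 2 * c[1:-1] + c[2:]) / dx**2
    c[1:-1] += dt * (D * lap - k * c[1:-1])
    c[0] = c[-1] = 1.0

print("minimum normalized concentration:", round(c.min(), 3))  # flags a potential hypoxic core
```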
Procedia PDF Downloads 308
82 Structured Cross System Planning and Control in Modular Production Systems by Using Agent-Based Control Loops
Authors: Simon Komesker, Achim Wagner, Martin Ruskowski
Abstract:
In times of volatile markets with fluctuating demand and the uncertainty of global supply chains, flexible production systems are the key to an efficient implementation of a desired production program. In this publication, the authors present a holistic information concept taking into account various influencing factors for operating towards the global optimum. Therefore, a strategy for the implementation of multi-level planning for a flexible, reconfigurable production system with an alternative production concept in the automotive industry is developed. The main contribution of this work is a system structure mixing central and decentral planning and control evaluated in a simulation framework. The information system structure in current production systems in the automotive industry is rigidly hierarchically organized in monolithic systems. The production program is created rule-based with the premise of achieving uniform cycle time. This program then provides the information basis for execution in subsystems at the station and process execution level. In today's era of mixed-(car-)model factories, complex conditions and conflicts arise in achieving logistics, quality, and production goals. There is no provision for feedback loops of results from the process execution level (resources) and process supporting (quality and logistics) systems and reconsideration in the planning systems. To enable a robust production flow, the complexity of production system control is artificially reduced by the line structure and results, for example in material-intensive processes (buffers and safety stocks - two container principle also for different variants). The limited degrees of freedom of line production have produced the principle of progress figure control, which results in one-time sequencing, sequential order release, and relatively inflexible capacity control. As a result, modularly structured production systems such as modular production according to known approaches with more degrees of freedom are currently difficult to represent in terms of information technology. The remedy is an information concept that supports cross-system and cross-level information processing for centralized and decentralized decision-making. Through an architecture of hierarchically organized but decoupled subsystems, the paradigm of hybrid control is used, and a holonic manufacturing system is offered, which enables flexible information provisioning and processing support. In this way, the influences from quality, logistics, and production processes can be linked holistically with the advantages of mixed centralized and decentralized planning and control. Modular production systems also require modularly networked information systems with semi-autonomous optimization for a robust production flow. Dynamic prioritization of different key figures between subsystems should lead the production system to an overall optimum. The tasks and goals of quality, logistics, process, resource, and product areas in a cyber-physical production system are designed as an interconnected multi-agent-system. The result is an alternative system structure that executes centralized process planning and decentralized processing. An agent-based manufacturing control is used to enable different flexibility and reconfigurability states and manufacturing strategies in order to find optimal partial solutions of subsystems, that lead to a near global optimum for hybrid planning. 
This allows a robust, near-to-plan execution with integrated quality control and intralogistics. Keywords: holonic manufacturing system, modular production system, planning and control, system structure
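As a rough illustration of the hybrid central/decentral idea (not the authors' architecture), the sketch below lets a centrally released candidate set be prioritized by decentralized quality, logistics and resource agents whose weights can be re-tuned dynamically; all agents, scores and weights are invented for illustration.

```python
# Sketch: central release of candidate orders, decentralized agent scoring,
# dynamic weighting between sub-goals. Purely illustrative values.
from dataclasses import dataclass

@dataclass
class Order:
    order_id: str
    variant: str

def quality_agent(o: Order) -> float:   return 0.9 if o.variant != "rework" else 0.2
def logistics_agent(o: Order) -> float: return 0.8 if o.variant == "A" else 0.5
def resource_agent(o: Order) -> float:  return 0.6   # e.g. current station utilization

AGENTS = {"quality": quality_agent, "logistics": logistics_agent, "resource": resource_agent}

def next_order(candidates, weights):
    # decentralized scoring over a centrally released candidate set
    def score(o):
        return sum(weights[name] * agent(o) for name, agent in AGENTS.items())
    return max(candidates, key=score)

released = [Order("O-101", "A"), Order("O-102", "B"), Order("O-103", "rework")]
weights = {"quality": 0.5, "logistics": 0.3, "resource": 0.2}  # re-tuned dynamically in practice
print(next_order(released, weights))
```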
Procedia PDF Downloads 169
81 Screening for Larvicidal Activity of Aqueous and Ethanolic Extracts of Fourteen Selected Plants and Formulation of a Larvicide against Aedes aegypti (Linn.) and Aedes albopictus (Skuse) Larvae
Authors: Michael Russelle S. Alvarez, Noel S. Quiming, Francisco M. Heralde
Abstract:
This study aims to: a) obtain ethanolic (95% EtOH) and aqueous extracts of Selaginella elmeri, Christella dentata, Elatostema sinnatum, Curculigo capitulata, Euphorbia hirta, Murraya koenigii, Alpinia speciosa, Cymbopogon citratus, Eucalyptus globulus, Jatropha curcas, Psidium guajava, Gliricidia sepium, Ixora coccinea and Capsicum frutescens and screen them for larvicidal activity against Aedes aegypti (Linn.) and Aedes albopictus (Skuse) larvae; b) fractionate the most active extract and determine the most active fraction; c) determine the larvicidal properties of the most active extract and fraction by computing their percentage mortality, LC50, and LC90 after 24 and 48 hours of exposure; and d) determine the nature of the components of the active extracts and fractions using phytochemical screening. The ethanolic (95% EtOH) and aqueous extracts of the selected plants were screened for potential larvicidal activity against Ae. aegypti and Ae. albopictus using standard procedures, with 1% malathion and a Piper nigrum based ovicide-larvicide from the Department of Science and Technology as positive controls. The results were analyzed using one-way ANOVA with Tukey's and Dunnett's tests. The most active extract was subjected to partial fractionation using normal-phase column chromatography, and the fractions were subsequently screened to determine the most active fraction. The most active extract and fraction were then subjected to a dose-response assay and probit analysis to determine the LC50 and LC90 after 24 and 48 hours of exposure, and the active extracts and fractions were screened for phytochemical content. The ethanolic extracts of C. citratus, E. hirta, I. coccinea, G. sepium, M. koenigii, E. globulus, J. curcas and C. frutescens exhibited significant larvicidal activity, with C. frutescens being the most active. After fractionation, the ethyl acetate fraction was found to be the most active. Phytochemical screening of the extracts revealed the presence of alkaloids, tannins, indoles and steroids. A formulation using talcum powder - 300 mg of fraction per 1 g of talcum powder - was made and again tested for larvicidal activity. At 2 g/L, the formulation proved effective in killing all of the test larvae after 24 hours. Keywords: larvicidal activity screening, partial purification, dose-response assay, Capsicum frutescens
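A minimal sketch of the probit step behind such LC50/LC90 values: mortality proportions are probit-transformed and regressed against log10(concentration). The concentrations and counts below are illustrative, not the study's assay data.

```python
# Sketch: LC50/LC90 from a simple probit regression on assumed bioassay counts.
import numpy as np
from scipy.stats import norm

conc    = np.array([125, 250, 500, 1000, 2000])   # mg/L, assumed test concentrations
exposed = np.array([25, 25, 25, 25, 25])
dead    = np.array([3, 8, 14, 21, 24])

p = np.clip(dead / exposed, 0.01, 0.99)   # avoid infinite probits at 0% / 100%
x = np.log10(conc)
y = norm.ppf(p)                           # probit transform

slope, intercept = np.polyfit(x, y, 1)
lc50 = 10 ** (-intercept / slope)                    # probit = 0  <=>  50% mortality
lc90 = 10 ** ((norm.ppf(0.9) - intercept) / slope)   # probit of 90% mortality

print(f"LC50 ≈ {lc50:.0f} mg/L, LC90 ≈ {lc90:.0f} mg/L")
```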
Procedia PDF Downloads 329
80 Tailoring Workspaces for Generation Z: Harmonizing Teamwork, Privacy, and Connectivity
Authors: Maayan Nakash
Abstract:
The modern workplace is undergoing a revolution, with Generation Z (Gen-Z) at the forefront of this transformative shift. However, empirical investigations specifically targeting the workplace preferences of this generation remain limited. Through direct examination of their tendencies via a survey approach, this study offers vital insights for aligning organizational policies and practices. The results presented in this paper are part of a comprehensive study that explored Gen Z's viewpoints on various employment market aspects, likely to decisively influence the design of future work environments. Data were collected via an online survey distributed among a cohort of 461 individuals from Gen-Z, born between the mid-1990s and 2010, consisting of 241 males (52.28%) and 220 females (47.72%). Responses were gauged using Likert scale statements that probed preferences for teamwork versus individual work, virtual versus personal interactions, and open versus private workspaces. Descriptive statistics and analytical analyses were conducted to pinpoint key patterns. We discovered that a high proportion of respondents (81.99%, n=378) exhibited a preference for teamwork over individual work. Correspondingly, the data indicate strong support for the recognition of team-based tasks as a tool contributing to personal and professional development. In terms of communication, the majority of respondents (61.38%) either disagreed (n=154) or slightly agreed (n=129) with the exclusive reliance on virtual interactions with their organizational peers. This finding underscores that despite technological progress, digital natives place significant value on physical interaction and non-mediated communication. Moreover, we understand that they also value a quiet and private work environment, clearly preferring it over open and shared workspaces. Considering that Gen-Z does not necessarily experience high levels of stress within social frameworks in the workplace, this can be attributed to a desire for a space that allows for focused engagement with work tasks. A One-Sample Chi-Square Test was performed on the observed distribution of respondents' reactions to each examined statement. The results showed statistically significant deviations from a uniform distribution (p<.001), indicating that the response patterns did not occur by chance and that there were meaningful tendencies in the participants' responses. The findings expand the theoretical knowledge base on human resources in the dynamics of a multi-generational workforce, illuminating the values, approaches, and expectations of Gen-Z. Practically, the results may lead organizations to equip themselves with tools to create policies tailored to Gen-Z in the context of workspaces and social needs, which could potentially foster a fertile environment and aid in attracting and retaining young talent. Future studies might include investigating potential mitigating factors, such as cultural influences or individual personality traits, which could further clarify the nuances in Gen-Z's work style preferences. Longitudinal studies tracking changes in these preferences as the generation matures may also yield valuable insights. 
Ultimately, as the landscape of the workforce continues to evolve, ongoing investigations into the unique characteristics and aspirations of emerging generations remain essential for nurturing harmonious, productive, and future-ready organizational environments.Keywords: workplace, future of work, generation Z, digital natives, human resources management
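A minimal sketch of the One-Sample Chi-Square Test reported above, using scipy; the response counts are assumed for illustration (they only need to sum to the study's n = 461).

```python
# Sketch: goodness-of-fit of Likert responses against a uniform distribution.
from scipy.stats import chisquare

# e.g. strongly disagree ... strongly agree for one statement (assumed counts, n = 461)
observed = [30, 55, 70, 128, 178]

stat, p_value = chisquare(observed)              # expected defaults to a uniform split
print(f"chi2 = {stat:.1f}, p = {p_value:.3g}")   # p < .001 -> responses are not uniform
```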
Procedia PDF Downloads 53
79 Monitoring Potential Temblor Localities as a Supplemental Risk Control System
Authors: Mikhail Zimin, Svetlana Zimina, Maxim Zimin
Abstract:
Without question, the basic method for preventing human and material losses is to provide adequate strength of constructions. At the same time, seismic load has a stochastic character, so there is always a small probability that earthquake forces will exceed the selected design load. This risk is very low, but the consequences of such events may be extremely serious. Also dangerous are occasional mistakes in seismic zoning, soil conditions changing before temblors, and failure to take into account hazardous natural phenomena caused by earthquakes. Besides, it is known that temblors detrimentally affect the environmental situation in the regions where they occur, resulting in panic and worsening the course of various diseases. This may lead to mistakes by the personnel of hazardous production facilities, such as the production and distribution of gas and oil, which may provoke severe accidents. In addition, gas and oil pipelines often have long mileage and cross many perilous zones, in contrast with buildings; this situation increases the risk of heavy accidents. In such cases, complex monitoring of potential earthquake localities would be relevant. Even though the number of successful real-time forecasts of earthquakes is not great, it is well in excess of what would be expected under random guessing. The experimental time-lapse study and analysis performed consist of searching for seismic, biological, meteorological, and light earthquake precursors, processing these data with the help of fuzzy sets, collecting weather information, utilizing a database of the terrain, and computing the risk of slope processes under a temblor in a given setting. The work was done in a real-time environment, and broadly acceptable results were obtained. Observations from already in-place seismic recording systems are used. Furthermore, a look-back study of precursors of known earthquakes was done: the situations before the Ashkhabad, Tashkent, and Haicheng seismic events were analyzed, and fair findings were obtained. The results of earthquake forecasts can be used for predicting dangerous natural phenomena caused by temblors, such as avalanches and mudslides. They may also be utilized for the prophylaxis of some diseases and their complications. Relevant software has been worked out as well. It should be emphasized that such monitoring does not require serious financial expenses and can be performed by a small group of professionals. Thus, complex monitoring of potential earthquake localities, including short-term earthquake forecasts and analysis of possible hazardous consequences of temblors, may further the safety of pipeline facilities. Keywords: risk, earthquake, monitoring, forecast, precursor
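A rough sketch of how heterogeneous precursor observations can be folded into one alert level with fuzzy memberships, in the spirit of the processing described above; all membership shapes, thresholds and the aggregation rule are assumptions for illustration, not the authors' method.

```python
# Sketch: fuzzy membership per precursor, aggregated into a single 0..1 alert level.
def ramp(x, low, high):
    """Linear membership: 0 below `low`, 1 above `high`."""
    if x <= low:
        return 0.0
    if x >= high:
        return 1.0
    return (x - low) / (high - low)

def alert_level(seismicity_rate, radon_change, animal_reports, foreshock_count):
    m = {
        "seismic":    ramp(seismicity_rate, 1.0, 3.0),   # events/day above background (assumed)
        "radon":      ramp(radon_change, 10.0, 50.0),    # % change (assumed)
        "biological": ramp(animal_reports, 2, 10),       # independent anomaly reports
        "foreshocks": ramp(foreshock_count, 1, 5),
    }
    # soft AND of the two strongest cues, softened by the mean of all memberships
    ranked = sorted(m.values(), reverse=True)
    return 0.5 * ranked[0] * ranked[1] + 0.5 * sum(m.values()) / len(m)

print(f"alert level: {alert_level(2.5, 35, 6, 2):.2f}")
```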
Procedia PDF Downloads 24
78 Data Quality and Associated Factors on Regular Immunization Programme at Ararso District: Somali Region- Ethiopia
Authors: Eyob Seife, Molla Alemayaehu, Tesfalem Teshome, Bereket Seyoum, Behailu Getachew
Abstract:
Globally, immunization averts between 2 and 3 million deaths yearly, but vaccine-preventable diseases still account for a large share of under-five deaths in Sub-Saharan African countries, which indicates the need for consistent and timely information to support evidence-based decisions and save the lives of these vulnerable groups. However, ensuring data of sufficient quality and promoting an information-use culture at the point of collection remain critical and challenging, especially in remote areas; the Ararso district was selected based on the hypothesis that there is a difference between reported and recounted immunization data. Data quality depends on different factors, of which organizational, behavioral, technical and contextual factors are the most commonly cited. A cross-sectional quantitative study was conducted in September 2022 in the Ararso district. The study used the World Health Organization (WHO) recommended data quality self-assessment (DQS) tools. Immunization tally sheets, registers and reporting documents were reviewed at 4 health facilities (1 health center and 3 health posts) of primary health care units for one fiscal year (12 months) to determine the accuracy ratio, availability and timeliness of reports. The data were collected by trained DQS assessors to explore the quality of monitoring systems at health posts, health centers, and the district health office. A quality index (QI) and the availability and timeliness of reports were assessed. The accuracy ratios formulated were: the first and third doses of pentavalent vaccines, fully immunized (FI), TT2+ and the first dose of measles-containing vaccines (MCV). In this study, facility-level results showed poor timeliness at all levels, and both over-reporting and under-reporting were observed at all levels when computing the accuracy ratio of registrations to health post reports found at health centers for almost all antigens verified. The quality index (QI) of all facilities also showed poor results. Most of the verified immunization data accuracy ratios were found to be relatively better than the quality index and the timeliness of reports. Attention should therefore be given to improving the capacity of staff, the timeliness of reports and the quality of monitoring system components, namely recording, reporting, archiving, data analysis and using information for decisions at all levels, especially in remote areas. Keywords: accuracy ratio, Ararso district, quality of monitoring system, regular immunization program, timeliness of reports, Somali region-Ethiopia
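A minimal sketch of the DQS-style accuracy ratio (recounted versus reported doses per antigen); the facility figures are invented for illustration, not the Ararso district data.

```python
# Sketch: per-antigen accuracy ratio = recounted doses / reported doses.
def accuracy_ratio(recounted: int, reported: int) -> float:
    return recounted / reported if reported else float("nan")

facility_data = {                  # antigen: (recounted, reported) - assumed numbers
    "Penta1":   (420, 465),
    "Penta3":   (388, 352),
    "Measles1": (301, 344),
}

for antigen, (recounted, reported) in facility_data.items():
    r = accuracy_ratio(recounted, reported)
    verdict = "over-reporting" if r < 1 else "under-reporting" if r > 1 else "consistent"
    print(f"{antigen}: ratio = {r:.2f} ({verdict})")
```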
Procedia PDF Downloads 73
77 Post Liberal Perspective on Minorities Visibility in Contemporary Visual Culture: The Case of Mizrahi Jews
Authors: Merav Alush Levron, Sivan Rajuan Shtang
Abstract:
From as early as their emergence in Europe and the US, postmodern and post-colonial paradigm have formed the backbone of the visual culture field of study. The self-representation project of political minorities is studied, described and explained within the premises and perspectives drawn from these paradigms, addressing the key issues they had raised: modernism’s crisis of representation. The struggle for self-representation, agency and multicultural visibility sought to challenge the liberal pretense of universality and equality, hitting at its different blind spots, on issues such as class, gender, race, sex, and nationality. This struggle yielded subversive identity and hybrid performances, including reclaiming, mimicry and masquerading. These performances sought to defy the uniform, universal self, which forms the basis for the liberal, rational, enlightened subject. The argument of this research runs that this politics of representation itself is confined within liberal thought. Alongside post-colonialism and multiculturalism’s contribution in undermining oppressive structures of power, generating diversity in cultural visibility, and exposing the failure of liberal colorblindness, this subversion is constituted in the visual field by way of confrontation, flying in the face of the universal law and relying on its ongoing comparison and attribution to this law. Relying on Deleuze and Guattari, this research set out to draw theoretic and empiric attention to an alternative, post-liberal occurrence which has been taking place in the visual field in parallel to the contra-hegemonic phase and as a product of political reality in the aftermath of the crisis of representation. It is no longer a counter-representation; rather, it is a motion of organic minor desire, progressing in the form of flows and generating what Deleuze and Guattari termed deterritorialization of social structures. This discussion shall have its focus on current post-liberal performances of ‘Mizrahim’ (Jewish Israelis of Arab and Muslim extraction) in the visual field in Israel. In television, video art and photography, these performances challenge the issue of representation and generate concrete peripheral Mizrahiness, realized in the visual organization of the photographic frame. Mizrahiness then transforms from ‘confrontational’ representation into a 'presence', flooding the visual sphere in our plain sight, in a process of 'becoming'. The Mizrahi desire is exerted on the plains of sound, spoken language, the body and the space where they appear. It removes from these plains the coding and stratification engendered by European dominance and rational, liberal enlightenment. This stratification, adhering to the hegemonic surface, is flooded not by way of resisting false consciousness or employing hybridity, but by way of the Mizrahi identity’s own productive, material immanent yearning. The Mizrahi desire reverberates with Mizrahi peripheral 'worlds of meaning', where post-colonial interpretation almost invariably identifies a product of internalized oppression, and a recurrence thereof, rather than a source in itself - an ‘offshoot, never a wellspring’, as Nissim Mizrachi clarifies in his recent pioneering work. 
The peripheral Mizrahi performance ‘unhooks itself’, in Deleuze and Guattari's words, from the point of subjectification and interpretation, and does not correspond with the partialness, absence, and split that mark post-colonial identities. Keywords: desire, minority, Mizrahi Jews, post-colonialism, post-liberalism, visibility, Deleuze and Guattari
Procedia PDF Downloads 324
76 Structure Clustering for Milestoning Applications of Complex Conformational Transitions
Authors: Amani Tahat, Serdal Kirmizialtin
Abstract:
Trajectory fragment methods such as Markov State Models (MSM), Milestoning (MS) and Transition Path Sampling are the prime choices for extending the timescale of all-atom Molecular Dynamics simulations. In these approaches, a set of structures that covers the accessible phase space has to be chosen a priori using cluster analysis. Structural clustering serves to partition the conformational state into natural subgroups based on their similarity, an essential statistical methodology used for analyzing the numerous sets of empirical data produced by Molecular Dynamics (MD) simulations. The local transition kernel among these clusters is later used to connect the metastable states, using a Markovian kinetic model in MSM and a non-Markovian model in MS. The choice of clustering approach in constructing such a kernel is crucial, since the high dimensionality of biomolecular structures can easily confuse the identification of clusters when using the traditional hierarchical clustering methodology. Of particular interest, in the case of MS, where the milestones are very close to each other, accurate determination of the milestone identity of the trajectory becomes a challenging issue. Throughout this work we present two cluster analysis methods applied to the cis-trans isomerism of the dinucleotide AA. The choice of nucleic acids over the commonly used proteins for studying the cluster analysis is twofold: i) the energy landscape is rugged, hence transitions are more complex, enabling a more realistic model to study conformational transitions; ii) the nucleic acid conformational space is high dimensional, and a diverse set of internal coordinates is necessary to describe the metastable states, posing a challenge in studying the conformational transitions. Herein, we need improved clustering methods that accurately identify the AA structure in its metastable states in a robust way for a wide range of confused data conditions. The single-linkage approach of the hierarchical clustering available in the GROMACS MD package is the first clustering methodology applied to our data. A Self-Organizing Map (SOM) neural network, also known as a Kohonen network, is the second data clustering methodology. The performance of the neural network as well as of the hierarchical clustering method is compared by computing the mean first passage times for the cis-trans conformational rates. Our hope is that this study provides insight into the complexities of, and the need for, determining the appropriate clustering algorithm for kinetic analysis. Our results can improve the effectiveness of decisions based on clustering confused empirical data in studying conformational transitions in biomolecules. Keywords: milestoning, self organizing map, single linkage, structure clustering
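A minimal sketch contrasting the two clustering routes compared in the paper - single-linkage hierarchical clustering and a small self-organizing map - on stand-in feature vectors; the data, map size and training schedule are illustrative assumptions, not the dinucleotide coordinates.

```python
# Sketch: single-linkage vs. a tiny Kohonen-style map on synthetic 6-D features.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# two artificial metastable basins standing in for the cis and trans states
X = np.vstack([rng.normal(0.0, 0.3, (200, 6)), rng.normal(2.0, 0.3, (200, 6))])

# 1) single-linkage hierarchical clustering (as in the GROMACS-style analysis)
Z = linkage(X, method="single")
labels_hc = fcluster(Z, t=2, criterion="maxclust")

# 2) a tiny 1-D self-organizing map trained by stochastic updates
nodes = rng.normal(1.0, 0.5, (2, 6))
for lr in np.linspace(0.5, 0.01, 2000):
    x = X[rng.integers(len(X))]
    best = np.argmin(((nodes - x) ** 2).sum(axis=1))
    nodes[best] += lr * (x - nodes[best])
labels_som = np.argmin(((X[:, None, :] - nodes[None, :, :]) ** 2).sum(-1), axis=1)

print("hierarchical cluster sizes:", np.bincount(labels_hc)[1:])
print("SOM cluster sizes:         ", np.bincount(labels_som))
```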
Procedia PDF Downloads 224
75 Machine Learning in Patent Law: How Genetic Breeding Algorithms Challenge Modern Patent Law Regimes
Authors: Stefan Papastefanou
Abstract:
Artificial intelligence (AI) is an interdisciplinary field of computer science with the aim of creating intelligent machine behavior. Early approaches to AI have been configured to operate in very constrained environments where the behavior of the AI system was previously determined by formal rules. Knowledge was presented as a set of rules that allowed the AI system to determine the results for specific problems; as a structure of if-else rules that could be traversed to find a solution to a particular problem or question. However, such rule-based systems typically have not been able to generalize beyond the knowledge provided. All over the world and especially in IT-heavy industries such as the United States, the European Union, Singapore, and China, machine learning has developed to be an immense asset, and its applications are becoming more and more significant. It has to be examined how such products of machine learning models can and should be protected by IP law and for the purpose of this paper patent law specifically, since it is the IP law regime closest to technical inventions and computing methods in technical applications. Genetic breeding models are currently less popular than recursive neural network method and deep learning, but this approach can be more easily described by referring to the evolution of natural organisms, and with increasing computational power; the genetic breeding method as a subset of the evolutionary algorithms models is expected to be regaining popularity. The research method focuses on patentability (according to the world’s most significant patent law regimes such as China, Singapore, the European Union, and the United States) of AI inventions and machine learning. Questions of the technical nature of the problem to be solved, the inventive step as such, and the question of the state of the art and the associated obviousness of the solution arise in the current patenting processes. Most importantly, and the key focus of this paper is the problem of patenting inventions that themselves are developed through machine learning. The inventor of a patent application must be a natural person or a group of persons according to the current legal situation in most patent law regimes. In order to be considered an 'inventor', a person must actually have developed part of the inventive concept. The mere application of machine learning or an AI algorithm to a particular problem should not be construed as the algorithm that contributes to a part of the inventive concept. However, when machine learning or the AI algorithm has contributed to a part of the inventive concept, there is currently a lack of clarity regarding the ownership of artificially created inventions. Since not only all European patent law regimes but also the Chinese and Singaporean patent law approaches include identical terms, this paper ultimately offers a comparative analysis of the most relevant patent law regimes.Keywords: algorithms, inventor, genetic breeding models, machine learning, patentability
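For readers unfamiliar with the technique, a toy sketch of a genetic breeding (evolutionary) loop: the surviving design emerges from selection, recombination and mutation rather than from a person's explicit inventive step. The bit-string task is purely illustrative and has no connection to any real invention.

```python
# Sketch: a minimal genetic breeding loop over bit strings.
import random

GENOME_LEN = 20

def fitness(genome):                       # stand-in for an engineering objective
    return sum(genome)

def breed(a, b):
    cut = random.randrange(1, GENOME_LEN)  # one-point crossover
    child = a[:cut] + b[cut:]
    return [bit ^ 1 if random.random() < 0.02 else bit for bit in child]  # mutation

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(50)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                                   # selection
    population = parents + [breed(random.choice(parents), random.choice(parents))
                            for _ in range(40)]

print("best fitness after breeding:", fitness(max(population, key=fitness)))
```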
Procedia PDF Downloads 108
74 Improving Student Learning in a Math Bridge Course through Computer Algebra Systems
Authors: Alejandro Adorjan
Abstract:
Universities are motivated to understand the factors contributing to low retention of engineering undergraduates. While the number of precollege students interested in engineering increases, the number of engineering graduates continues to decrease and attrition rates for engineering undergraduates remain high. Calculus 1 (C1) is the entry point of most undergraduate Engineering Science programs and often a prerequisite for Computing Curricula courses. Mathematics continues to be a major hurdle for engineering students, and many students who drop out of engineering cite Calculus specifically as one of the most influential factors in that decision. In this context, creating course activities that increase retention and motivate students to obtain better final results is a challenge. In order to develop several competencies in our Software Engineering students, Calculus 1 at Universidad ORT Uruguay focuses on capacity for synthesis, abstraction, and problem solving (based on the ACM/AIS/IEEE curricula). Every semester we try to reflect on our practice and answer the following research question: what kind of teaching approach in Calculus 1 can we design to retain students and obtain better results? Since 2010, Universidad ORT Uruguay has offered a six-week, non-compulsory summer bridge course of preparatory math (to bridge the math gap between high school and university). Last semester was the first time the Department of Mathematics offered the course while students were enrolled in C1. Traditional lectures in this bridge course lead students to simply transcribe notes from the blackboard. Last semester we proposed a hands-on lab course using GeoGebra (interactive geometry and Computer Algebra System (CAS) software) as a math-driven development tool. Students worked in a computer laboratory class and developed most of the tasks and topics in GeoGebra. As a result of this approach, several pros and cons were found. The weekly amount of mathematics hours was excessive for students and, as the course was non-compulsory, attendance decreased with time. Nevertheless, this activity succeeded in improving final test results, and most students expressed the pleasure of working with this methodology. This technology-oriented teaching approach strengthens the student math competencies needed for Calculus 1 and improves student performance, engagement, and self-confidence. It is important as teachers to reflect on our practice, including innovative proposals with the objective of engaging students, increasing retention and obtaining better results. The high degree of motivation and engagement of participants with this methodology exceeded our initial expectations, so we plan to experiment with more groups during the summer so as to validate the preliminary results. Keywords: calculus, engineering education, PreCalculus, Summer Program
Procedia PDF Downloads 291
73 The Procedural Sedation Checklist Manifesto, Emergency Department, Jersey General Hospital
Authors: Jerome Dalphinis, Vishal Patel
Abstract:
The Bailiwick of Jersey is an island British crown dependency situated off the coast of France. Jersey General Hospital’s emergency department sees approximately 40,000 patients a year. It’s outside the NHS, with secondary care being free at the point of care. Sedation is a continuum which extends from a normal conscious level to being fully unresponsive. Procedural sedation produces a minimally depressed level of consciousness in which the patient retains the ability to maintain an airway, and they respond appropriately to physical stimulation. The goals of it are to improve patient comfort and tolerance of the procedure and alleviate associated anxiety. Indications can be stratified by acuity, emergency (cardioversion for life-threatening dysrhythmia), and urgency (joint reduction). In the emergency department, this is most often achieved using a combination of opioids and benzodiazepines. Some departments also use ketamine to produce dissociative sedation, a cataleptic state of profound analgesia and amnesia. The response to pharmacological agents is highly individual, and the drugs used occasionally have unpredictable pharmacokinetics and pharmacodynamics, which can always result in progression between levels of sedation irrespective of the intention. Therefore, practitioners must be able to ‘rescue’ patients from deeper sedation. These practitioners need to be senior clinicians with advanced airway skills (AAS) training. It can lead to adverse effects such as dangerous hypoxia and unintended loss of consciousness if incorrectly undertaken; studies by the National Confidential Enquiry into Patient Outcome and Death (NCEPOD) have reported avoidable deaths. The Royal College of Emergency Medicine, UK (RCEM) released an updated ‘Safe Sedation of Adults in the Emergency Department’ guidance in 2017 detailing a series of standards for staff competencies, and the required environment and equipment, which are required for each target sedation depth. The emergency department in Jersey undertook audit research in 2018 to assess their current practice. It showed gaps in clinical competency, the need for uniform care, and improved documentation. This spurred the development of a checklist incorporating the above RCEM standards, including contraindication for procedural sedation and difficult airway assessment. This was approved following discussion with the relevant heads of departments and the patient safety directorates. Following this, a second audit research was carried out in 2019 with 17 completed checklists (11 relocation of joints, 6 cardioversions). Data was obtained from looking at the controlled resuscitation drugs book containing documented use of ketamine, alfentanil, and fentanyl. TrakCare, which is the patient electronic record system, was then referenced to obtain further information. The results showed dramatic improvement compared to 2018, and they have been subdivided into six categories; pre-procedure assessment recording of significant medical history and ASA grade (2 fold increase), informed consent (100% documentation), pre-oxygenation (88%), staff (90% were AAS practitioners) and monitoring (92% use of non-invasive blood pressure, pulse oximetry, capnography, and cardiac rhythm monitoring) during procedure, and discharge instructions including the documented return of normal vitals and consciousness (82%). 
This procedural sedation checklist is a safe intervention that identifies pertinent information about the patient and provides a standardised checklist for the delivery of the gold standard of care. Keywords: advanced airway skills, checklist, procedural sedation, resuscitation
Procedia PDF Downloads 117
72 Electric Vehicle Fleet Operators in the Energy Market - Feasibility and Effects on the Electricity Grid
Authors: Benjamin Blat Belmonte, Stephan Rinderknecht
Abstract:
The transition to electric vehicles (EVs) stands at the forefront of innovative strategies designed to address environmental concerns and reduce fossil fuel dependency. As the number of EVs on the roads increases, so too does the potential for their integration into energy markets. This research dives deep into the transformative possibilities of using electric vehicle fleets, specifically electric bus fleets, not just as consumers but as active participants in the energy market. This paper investigates the feasibility and grid effects of electric vehicle fleet operators in the energy market. Our objective centers around a comprehensive exploration of the sector coupling domain, with an emphasis on the economic potential in both electricity and balancing markets. Methodologically, our approach combines data mining techniques with thorough pre-processing, pulling from a rich repository of electricity and balancing market data. Our findings are grounded in the actual operational realities of the bus fleet operator in Darmstadt, Germany. We employ a Mixed Integer Linear Programming (MILP) approach, with the bulk of the computations being processed on the High-Performance Computing (HPC) platform ‘Lichtenbergcluster’. Our findings underscore the compelling economic potential of EV fleets in the energy market. With electric buses becoming more prevalent, the considerable size of these fleets, paired with their substantial battery capacity, opens up new horizons for energy market participation. Notably, our research reveals that economic viability is not the sole advantage. Participating actively in the energy market also translates into pronounced positive effects on grid stabilization. Essentially, EV fleet operators can serve a dual purpose: facilitating transport while simultaneously playing an instrumental role in enhancing grid reliability and resilience. This research highlights the symbiotic relationship between the growth of EV fleets and the stabilization of the energy grid. Such systems could lead to both commercial and ecological advantages, reinforcing the value of electric bus fleets in the broader landscape of sustainable energy solutions. In conclusion, the electrification of transport offers more than just a means to reduce local greenhouse gas emissions. By positioning electric vehicle fleet operators as active participants in the energy market, there lies a powerful opportunity to drive forward the energy transition. This study serves as a testament to the synergistic potential of EV fleets in bolstering both economic viability and grid stabilization, signaling a promising trajectory for future sector coupling endeavors.Keywords: electric vehicle fleet, sector coupling, optimization, electricity market, balancing market
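A minimal sketch of the kind of optimization involved (not the paper's HPC-scale MILP): an aggregated depot battery is scheduled against hourly prices with a binary charge/discharge mode and a full-charge constraint before morning service; all prices, capacities and efficiencies are assumed values.

```python
# Sketch: toy MILP for depot-level charging/discharging against day-ahead prices.
import pulp

prices = [42, 38, 35, 33, 36, 48, 70, 95, 80, 60, 55, 50]   # EUR/MWh for 12 depot hours (assumed)
cap_mwh, p_max, eta = 8.0, 2.0, 0.9                          # fleet battery size, MW limit, efficiency

T = range(len(prices))
m = pulp.LpProblem("depot_arbitrage", pulp.LpMaximize)
charge    = [pulp.LpVariable(f"c{t}", 0, p_max) for t in T]
discharge = [pulp.LpVariable(f"d{t}", 0, p_max) for t in T]
mode      = [pulp.LpVariable(f"b{t}", cat="Binary") for t in T]    # 1 = charging hour
soc       = [pulp.LpVariable(f"s{t}", 0, cap_mwh) for t in range(len(prices) + 1)]

m += pulp.lpSum(prices[t] * (discharge[t] - charge[t]) for t in T)  # market revenue
m += soc[0] == 0.5 * cap_mwh
for t in T:
    m += charge[t] <= p_max * mode[t]              # no simultaneous charging and discharging
    m += discharge[t] <= p_max * (1 - mode[t])
    m += soc[t + 1] == soc[t] + eta * charge[t] - discharge[t] / eta
m += soc[len(prices)] >= cap_mwh                   # buses must be full before morning service

m.solve(pulp.PULP_CBC_CMD(msg=0))
print("profit [EUR]:", round(pulp.value(m.objective), 1))
print("hourly (charge, discharge) MW:",
      [(round(charge[t].value(), 2), round(discharge[t].value(), 2)) for t in T])
```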
Procedia PDF Downloads 74
71 Circular Tool and Dynamic Approach to Grow the Entrepreneurship of Macroeconomic Metabolism
Authors: Maria Areias, Diogo Simões, Ana Figueiredo, Anishur Rahman, Filipa Figueiredo, João Nunes
Abstract:
It is expected that close to 7 billion people will live in urban areas by 2050. In order to improve the sustainability of territories and their transition towards a circular economy, it is necessary to understand their metabolism and to promote and guide the entrepreneurial response. The study of a macroeconomic metabolism involves the quantification of the inputs, outputs and storage of energy, water, materials and wastes for an urban region, and this quantification and analysis represent an opportunity for the promotion of green entrepreneurship. There are several methods to assess the environmental impacts of an urban territory, such as human and environmental risk assessment (HERA), life cycle assessment (LCA), ecological footprint assessment (EF), material flow analysis (MFA), physical input-output tables (PIOT), ecological network analysis (ENA), and multicriteria decision analysis (MCDA), among others. However, no consensus exists about which of these assessment methods is best to analyze the sustainability of such complex systems. Taking into account the weaknesses and needs identified, the CiiM - Circular Innovation Inter-Municipality project aims to define a uniform and globally accepted methodology through the integration of various methodologies and dynamic approaches, to increase the efficiency of macroeconomic metabolisms and promote entrepreneurship in a circular economy. The pilot territory considered in the CiiM project has a total area of 969,428 ha and a total of 897,256 inhabitants (about 41% of the population of the Center Region). The main economic activities in the pilot territory, which contribute to a gross domestic product of 14.4 billion euros, are: social support activities for the elderly; construction of buildings; road transport of goods; retailing in supermarkets and hypermarkets; mass production of other garments; inpatient health facilities; and the manufacture of other components and accessories for motor vehicles. The region's business network consists mostly of micro and small companies (similar to the Central Region of Portugal), with a total of 53,708 companies identified in the CIM Region of Coimbra (39 large companies), 28,146 in the CIM Viseu Dão Lafões (22 large companies) and 24,953 in the CIM Beiras and Serra da Estrela (13 large companies). The database was constructed taking into account data available from the National Institute of Statistics (INE), the General Directorate of Energy and Geology (DGEG), Eurostat, Pordata, the Strategy and Planning Office (GEP), the Portuguese Environment Agency (APA), the Commission for Coordination and Regional Development (CCDR) and the Inter-municipal Communities (CIM), as well as dedicated databases. In addition to the collection of statistical data, it was necessary to identify and characterize the different stakeholder groups in the pilot territory that are relevant to the different metabolism components under analysis. The CiiM project also adds the potential of a Geographic Information System (GIS), so that it is possible to obtain geospatial results for the territorial metabolisms (rural and urban) of the pilot region.
This platform will be a powerful visualization tool for the flows of products/services that occur within the region and will support the stakeholders, improving their circular performance and identifying new business ideas and symbiotic partnerships. Keywords: circular economy tools, life cycle assessment, macroeconomic metabolism, multicriteria decision analysis, decision support tools, circular entrepreneurship, industrial and regional symbiosis
Procedia PDF Downloads 101
70 Microfluidic Plasmonic Device for the Sensitive Dual LSPR-Thermal Detection of the Cardiac Troponin Biomarker in Laminal Flow
Authors: Andreea Campu, Ilinica Muresan, Simona Cainap, Simion Astilean, Monica Focsan
Abstract:
Acute myocardial infarction (AMI) is the most severe cardiovascular disease, which has threatened human lives for decades, thus a continuous interest is directed towards the detection of cardiac biomarkers such as cardiac troponin I (cTnI) in order to predict risk and, implicitly, fulfill the early diagnosis requirements in AMI settings. Microfluidics is a major technology involved in the development of efficient sensing devices with real-time fast responses and on-site applicability. Microfluidic devices have gathered a lot of attention recently due to their advantageous features such as high sensitivity and specificity, miniaturization and portability, ease-of-use, low-cost, facile fabrication, and reduced sample manipulation. The integration of gold nanoparticles into the structure of microfluidic sensors has led to the development of highly effective detection systems, considering the unique properties of the metallic nanostructures, specifically the Localized Surface Plasmon Resonance (LSPR), which makes them highly sensitive to their microenvironment. In this scientific context, herein, we propose the implementation of a novel detection device, which successfully combines the efficiency of gold bipyramids (AuBPs) as signal transducers and thermal generators with the sample-driven advantages of the microfluidic channels into a miniaturized, portable, low-cost, specific, and sensitive test for the dual LSPR-thermographic cTnI detection. Specifically, AuBPs with longitudinal LSPR response at 830 nm were chemically synthesized using the seed-mediated growth approach and characterized in terms of optical and morphological properties. Further, the colloidal AuBPs were deposited onto pre-treated silanized glass substrates thus, a uniform nanoparticle coverage of the substrate was obtained and confirmed by extinction measurements showing a 43 nm blue-shift of the LSPR response as a consequence of the refractive index change. The as-obtained plasmonic substrate was then integrated into a microfluidic “Y”-shaped polydimethylsiloxane (PDMS) channel, fabricated using a Laser Cutter system. Both plasmonic and microfluidic elements were plasma treated in order to achieve a permanent bond. The as-developed microfluidic plasmonic chip was further coupled to an automated syringe pump system. The proposed biosensing protocol implicates the successive injection inside the microfluidic channel as follows: p-aminothiophenol and glutaraldehyde, to achieve a covalent bond between the metallic surface and cTnI antibody, anti-cTnI, as a recognition element, and target cTnI biomarker. The successful functionalization and capture of cTnI was monitored by LSPR detection thus, after each step, a red-shift of the optical response was recorded. Furthermore, as an innovative detection technique, thermal determinations were made after each injection by exposing the microfluidic plasmonic chip to 785 nm laser excitation, considering that the AuBPs exhibit high light-to-heat conversion performances. By the analysis of the thermographic images, thermal curves were obtained, showing a decrease in the thermal efficiency after the anti-cTnI-cTnI reaction was realized. Thus, we developed a microfluidic plasmonic chip able to operate as both LSPR and thermal sensor for the detection of the cardiac troponin I biomarker, leading thus to the progress of diagnostic devices.Keywords: gold nanobipyramids, microfluidic device, localized surface plasmon resonance detection, thermographic detection
Procedia PDF Downloads 129
69 Federated Knowledge Distillation with Collaborative Model Compression for Privacy-Preserving Distributed Learning
Authors: Shayan Mohajer Hamidi
Abstract:
Federated learning has emerged as a promising approach for distributed model training while preserving data privacy. However, the challenges of communication overhead, limited network resources, and slow convergence hinder its widespread adoption. On the other hand, knowledge distillation has shown great potential in compressing large models into smaller ones without significant loss in performance. In this paper, we propose an innovative framework that combines federated learning and knowledge distillation to address these challenges and enhance the efficiency of distributed learning. Our approach, called Federated Knowledge Distillation (FKD), enables multiple clients in a federated learning setting to collaboratively distill knowledge from a teacher model. By leveraging the collaborative nature of federated learning, FKD aims to improve model compression while maintaining privacy. The proposed framework utilizes a coded teacher model that acts as a reference for distilling knowledge to the client models. To demonstrate the effectiveness of FKD, we conduct extensive experiments on various datasets and models. We compare FKD with baseline federated learning methods and standalone knowledge distillation techniques. The results show that FKD achieves superior model compression, faster convergence, and improved performance compared to traditional federated learning approaches. Furthermore, FKD effectively preserves privacy by ensuring that sensitive data remains on the client devices and only distilled knowledge is shared during the training process. In our experiments, we explore different knowledge transfer methods within the FKD framework, including Fine-Tuning (FT), FitNet, Correlation Congruence (CC), Similarity-Preserving (SP), and Relational Knowledge Distillation (RKD). We analyze the impact of these methods on model compression and convergence speed, shedding light on the trade-offs between size reduction and performance. Moreover, we address the challenges of communication efficiency and network resource utilization in federated learning by leveraging the knowledge distillation process. FKD reduces the amount of data transmitted across the network, minimizing communication overhead and improving resource utilization. This makes FKD particularly suitable for resource-constrained environments such as edge computing and IoT devices. The proposed FKD framework opens up new avenues for collaborative and privacy-preserving distributed learning. By combining the strengths of federated learning and knowledge distillation, it offers an efficient solution for model compression and convergence speed enhancement. Future research can explore further extensions and optimizations of FKD, as well as its applications in domains such as healthcare, finance, and smart cities, where privacy and distributed learning are of paramount importance.Keywords: federated learning, knowledge distillation, knowledge transfer, deep learning
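A minimal sketch of how one such communication round could look in PyTorch, assuming a shared teacher and FedAvg-style aggregation of the distilled students; the function names, hyperparameters and plain KL-based distillation loss are assumptions, not the paper's FKD implementation.

```python
# Sketch: local distillation from a shared teacher on private data, followed by
# server-side averaging of the student weights only.
import copy
import torch
import torch.nn.functional as F

def distill_locally(student, teacher, loader, epochs=1, T=3.0, alpha=0.7):
    opt = torch.optim.SGD(student.parameters(), lr=0.05)
    student.train(); teacher.eval()
    for _ in range(epochs):
        for x, y in loader:                        # private client data never leaves the device
            with torch.no_grad():
                t_logits = teacher(x)
            s_logits = student(x)
            kd = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                          F.softmax(t_logits / T, dim=1),
                          reduction="batchmean") * T * T
            loss = alpha * kd + (1 - alpha) * F.cross_entropy(s_logits, y)
            opt.zero_grad(); loss.backward(); opt.step()
    return student.state_dict()

def federated_average(state_dicts):
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
    return avg

def fkd_round(global_student, teacher, clients):
    # clients = [(local_student_model, private_dataloader), ...]
    local_states = []
    for student, loader in clients:
        student.load_state_dict(global_student.state_dict())
        local_states.append(distill_locally(student, teacher, loader))
    global_student.load_state_dict(federated_average(local_states))
    return global_student
```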
Procedia PDF Downloads 76
68 A Case Study of Remote Location Viewing, and Its Significance in Mobile Learning
Authors: James Gallagher, Phillip Benachour
Abstract:
As location aware mobile technologies become ever more omnipresent, the prospect of exploiting their context awareness to enforce learning approaches thrives. Utilizing the growing acceptance of ubiquitous computing, and the steady progress both in accuracy and battery usage of pervasive devices, we present a case study of remote location viewing, how the application can be utilized to support mobile learning in situ using an existing scenario. Through the case study we introduce a new innovative application: Mobipeek based around a request/response protocol for the viewing of a remote location and explore how this can apply both as part of a teacher lead activity and informal learning situations. The system developed allows a user to select a point on a map, and send a request. Users can attach messages alongside time and distance constraints. Users within the bounds of the request can respond with an image, and accompanying message, providing context to the response. This application can be used alongside a structured learning activity such as the use of mobile phone cameras outdoors as part of an interactive lesson. An example of a learning activity would be to collect photos in the wild about plants, vegetation, and foliage as part of a geography or environmental science lesson. Another example could be to take photos of architectural buildings and monuments as part of an architecture course. These images can be uploaded then displayed back in the classroom for students to share their experiences and compare their findings with their peers. This can help to fosters students’ active participation while helping students to understand lessons in a more interesting and effective way. Mobipeek could augment the student learning experience by providing further interaction with other peers in a remote location. The activity can be part of a wider study between schools in different areas of the country enabling the sharing and interaction between more participants. Remote location viewing can be used to access images in a specific location. The choice of location will depend on the activity and lesson. For example architectural buildings of a specific period can be shared between two or more cities. The augmentation of the learning experience can be manifested in the different contextual and cultural influences as well as the sharing of images from different locations. In addition to the implementation of Mobipeek, we strive to analyse this application, and a subset of other possible and further solutions targeted towards making learning more engaging. Consideration is given to the benefits of such a system, privacy concerns, and feasibility of widespread usage. We also propose elements of “gamification”, in an attempt to further the engagement derived from such a tool and encourage usage. We conclude by identifying limitations, both from a technical, and a mobile learning perspective.Keywords: context aware, location aware, mobile learning, remote viewing
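A minimal sketch of the request/response exchange described for Mobipeek; the field names, distance rule and matching logic are assumptions for illustration rather than the app's actual protocol.

```python
# Sketch: a peek request with time and distance constraints, and the check a
# nearby device would run before responding with an image and message.
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from math import asin, cos, radians, sin, sqrt
from typing import Optional

@dataclass
class PeekRequest:
    lat: float
    lon: float
    radius_m: float                      # distance constraint
    message: str
    expires: datetime                    # time constraint

@dataclass
class PeekResponse:
    image_uri: str
    message: str
    lat: float
    lon: float
    taken_at: datetime = field(default_factory=datetime.utcnow)

def haversine_m(lat1, lon1, lat2, lon2):
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))

def can_respond(req: PeekRequest, my_lat: float, my_lon: float,
                now: Optional[datetime] = None) -> bool:
    """A device may answer only if it is inside the requested bounds in time and space."""
    now = now or datetime.utcnow()
    return now <= req.expires and haversine_m(req.lat, req.lon, my_lat, my_lon) <= req.radius_m

req = PeekRequest(54.010, -2.780, 500, "What do the campus gardens look like today?",
                  datetime.utcnow() + timedelta(hours=2))
print(can_respond(req, 54.011, -2.779))
```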
Procedia PDF Downloads 291
67 Characterization of Aluminosilicates and Verification of Their Impact on Quality of Ceramic Proppants Intended for Shale Gas Output
Authors: Joanna Szymanska, Paulina Wawulska-Marek, Jaroslaw Mizera
Abstract:
Nowadays, the rapid growth of global energy consumption and uncontrolled depletion of natural resources become a serious problem. Shale rocks are the largest and potential global basins containing hydrocarbons, trapped in closed pores of the shale matrix. Regardless of the shales origin, mining conditions are extremely unfavourable due to high reservoir pressure, great depths, increased clay minerals content and limited permeability (nanoDarcy) of the rocks. Taking into consideration such geomechanical barriers, effective extraction of natural gas from shales with plastic zones demands effective operations. Actually, hydraulic fracturing is the most developed technique based on the injection of pressurized fluid into a wellbore, to initiate fractures propagation. However, a rapid drop of pressure after fluid suction to the ground induces a fracture closure and conductivity reduction. In order to minimize this risk, proppants should be applied. They are solid granules transported with hydraulic fluids to locate inside the rock. Proppants act as a prop for the closing fracture, thus gas migration to a borehole is effective. Quartz sands are commonly applied proppants only at shallow deposits (USA). Whereas, ceramic proppants are designed to meet rigorous downhole conditions to intensify output. Ceramic granules predominate with higher mechanical strength, stability in strong acidic environment, spherical shape and homogeneity as well. Quality of ceramic proppants is conditioned by raw materials selection. Aim of this study was to obtain the proppants from aluminosilicates (the kaolinite subgroup) and mix of minerals with a high alumina content. These loamy minerals contain a tubular and platy morphology that improves mechanical properties and reduces their specific weight. Moreover, they are distinguished by well-developed surface area, high porosity, fine particle size, superb dispersion and nontoxic properties - very crucial for particles consolidation into spherical and crush-resistant granules in mechanical granulation process. The aluminosilicates were mixed with water and natural organic binder to improve liquid-bridges and pores formation between particles. Afterward, the green proppants were subjected to sintering at high temperatures. Evaluation of the minerals utility was based on their particle size distribution (laser diffraction study) and thermal stability (thermogravimetry). Scanning Electron Microscopy was useful for morphology and shape identification combined with specific surface area measurement (BET). Chemical composition was verified by Energy Dispersive Spectroscopy and X-ray Fluorescence. Moreover, bulk density and specific weight were measured. Such comprehensive characterization of loamy materials confirmed their favourable impact on the proppants granulation. The sintered granules were analyzed by SEM to verify the surface topography and phase transitions after sintering. Pores distribution was identified by X-Ray Tomography. This method enabled also the simulation of proppants settlement in a fracture, while measurement of bulk density was essential to predict their amount to fill a well. Roundness coefficient was also evaluated, whereas impact on mining environment was identified by turbidity and solubility in acid - to indicate risk of the material decay in a well. The obtained outcomes confirmed a positive influence of the loamy minerals on ceramic proppants properties with respect to the strict norms. 
This research shows promise for the production of higher quality proppants at reduced cost.Keywords: aluminosilicates, ceramic proppants, mechanical granulation, shale gas
Procedia PDF Downloads 163
66 Comparison of Titanium and Aluminum Functions as Spoilers for Dose Uniformity Achievement in Abutting Oblique Electron Fields: A Monte Carlo Simulation Study
Authors: Faranak Felfeliyan, Parvaneh Shokrani, Maryam Atarod
Abstract:
Introduction: The use of electron beams is widespread in radiotherapy. The main criterion in radiation therapy is to irradiate the tumor volume with the maximum prescribed dose while delivering minimum dose to the surrounding vital organs. The use of abutting fields is common in radiotherapy, and its main problem is dose inhomogeneity in the junction region. Electron beam divergence and lateral scattering may lead to hot and cold spots in the junction region. One solution to this problem is to use a spoiler to broaden the penumbra and make the dose in the junction region uniform. The goal of this research was to compare the effects of titanium and aluminum as spoilers for achieving dose uniformity in the junction region of oblique electron fields, using Monte Carlo simulation. Dose uniformity in the junction region depends on the density, scattering power and thickness of the spoiler, and on the angle between the two fields. Materials and Methods: In this study, a Monte Carlo model of the Siemens Primus linear accelerator was simulated for a 5 MeV nominal energy electron beam using manufacturer-provided specifications. The BEAMnrc and EGSnrc user codes were used to simulate the treatment head in electron mode (simulation of the beam model). The resulting phase space file was used as a source for dose calculations for a 10×10 cm2 field size at SSD = 100 cm in a 30×30×45 cm3 water phantom using the DOSXYZnrc user code (dose calculations). An automatic MP3-M water phantom tank, the MEPHYSTO mc2 software platform and a Semi-Flex Chamber-31010 with a sensitive volume of 0.125 cm3 (PTW, Freiburg, Germany) were used for dose distribution measurements; the electron field size was 10×10 cm2 and SSD = 100 cm. Validation of the developed beam model was done by comparing the measured and calculated depth and lateral dose distributions (verification of the electron beam model). Simulation of the spoilers (using the SLAB component module), placed at the end of the electron applicator, was done using the previously validated phase space file for the 5 MeV nominal energy and 10×10 cm2 field size (simulation of the spoiler). An in-house routine was developed to calculate the combined isodose curves resulting from the two simulated abutting fields (calculation of the dose distribution in abutting electron fields). Results: Verification of the developed 5.9 MeV electron beam model was done by comparing the calculated and measured dose distributions. The maximum percentage difference between calculated and measured PDD was 1%, except for the build-up region, in which the difference was 2%. The difference between the calculated and measured profiles was 2% at the edges of the field and less than 1% in other regions. The effect of PMMA, aluminum, titanium and chromium spoilers, with thicknesses equivalent to 5 mm of PMMA, on dose uniformity in abutting normal electron fields was evaluated. Comparing R90 and the uniformity index of the different materials, aluminum was chosen as the optimum spoiler, while titanium gave the maximum surface dose. Thus, aluminum and titanium were chosen for dose uniformity achievement in oblique electron fields. Using the optimum (aluminum) spoiler, the junction dose decreased from 160% to 110% for 15 degrees, from 180% to 120% for 30 degrees, from 160% to 120% for 45 degrees and from 180% to 100% for 60 degrees oblique abutting fields. Using the titanium spoiler, the junction dose decreased from 160% to 120% for 15 degrees, from 180% to 120% for 30 degrees, from 160% to 120% for 45 degrees and from 180% to 110% for 60 degrees. 
In addition, the penumbra width at the surface for 15 degrees was 10 mm without a spoiler and increased to 15.5 mm with the titanium spoiler; for 30 degrees it increased from 9 mm to 15 mm, for 45 degrees from 4 mm to 6 mm and for 60 degrees from 5 mm to 8 mm. Conclusion: Using spoilers, the penumbra width at the surface increased, the size and depth of hot spots decreased and dose homogeneity improved at the junction of abutting electron fields. The dose at the junction region of abutting oblique fields was improved significantly by using a spoiler. The maximum dose at the junction region for 15°, 30°, 45° and 60° decreased by about 40%, 60%, 40% and 70% respectively for titanium and by about 50%, 60%, 40% and 80% for aluminum. Despite the significant decrease in maximum dose obtained with the titanium spoiler, the dose in the junction region unfortunately could not be reduced below 110%.Keywords: abutting fields, electron beam, radiation therapy, spoilers
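The abstract mentions an in-house routine for combining the dose distributions of the two simulated abutting fields. The Python sketch below is a purely illustrative, minimal stand-in for such a routine, not the authors' code: it tilts one 2-D dose grid by the abutment angle, superimposes the two grids, and reports surface junction hot/cold values and an 80%-20% penumbra width. The synthetic sigmoid-edged fields, grid spacing and the Gaussian-blur stand-in for a spoiler are all assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d, rotate

def penumbra_width(profile, dx_mm=1.0):
    """Lateral distance between the 80% and 20% dose levels on a falling edge."""
    p = 100.0 * profile / profile.max()
    i80 = np.argmax(p <= 80.0)          # first index below 80%
    i20 = np.argmax(p <= 20.0)          # first index below 20%
    return (i20 - i80) * dx_mm

def junction_stats(dose_a, dose_b, angle_deg):
    """Tilt field B by the abutment angle, superimpose both dose grids and
    return the hot- and cold-spot values along the surface row."""
    dose_b_tilted = rotate(dose_b, angle_deg, reshape=False, order=1, mode="nearest")
    combined = dose_a + dose_b_tilted
    surface = combined[0, :]
    return surface.max(), surface.min()

# Synthetic sigmoid-edged fields on a 100 x 200 grid (1 mm spacing), standing in
# for Monte Carlo dose output of two abutting fields (assumed, illustrative only).
x = np.tile(np.arange(200.0), (100, 1))
field_a = 100.0 / (1.0 + np.exp((x - 95.0) / 2.0))    # field covering the left half
field_b = 100.0 / (1.0 + np.exp((105.0 - x) / 2.0))   # field covering the right half

# A spoiler mainly broadens the penumbra; mimic that here with a lateral blur.
field_a_spoiler = gaussian_filter1d(field_a, sigma=4.0, axis=1)

print("penumbra without spoiler: %.1f mm" % penumbra_width(field_a[0, :]))
print("penumbra with spoiler:    %.1f mm" % penumbra_width(field_a_spoiler[0, :]))

hot, cold = junction_stats(field_a, field_b, angle_deg=15.0)
print("surface junction hot spot: %.0f%%, cold spot: %.0f%%" % (hot, cold))
```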
Procedia PDF Downloads 176
65 Modeling Engagement with Multimodal Multisensor Data: The Continuous Performance Test as an Objective Tool to Track Flow
Authors: Mohammad H. Taheri, David J. Brown, Nasser Sherkat
Abstract:
Engagement is one of the most important factors in determining successful outcomes and deep learning in students. Existing approaches to detect student engagement involve periodic human observations that are subject to inter-rater reliability issues. Our solution uses real-time multimodal multisensor data labeled by objective performance outcomes to infer the engagement of students. The study involves four students with a combined diagnosis of cerebral palsy and a learning disability who took part in a 3-month trial over 59 sessions. Multimodal multisensor data were collected while they participated in a continuous performance test. Eye gaze, electroencephalogram, body pose, and interaction data were used to create a model of student engagement through objective labeling from the continuous performance test outcomes. In order to achieve this, a type of continuous performance test is introduced, the Seek-X type. Nine features were extracted including high-level handpicked compound features. Using leave-one-out cross-validation, a series of different machine learning approaches were evaluated. Overall, the random forest classification approach achieved the best classification results. Using random forest, 93.3% classification accuracy for engagement and 42.9% accuracy for disengagement were achieved. We compared these results to outcomes from different models: AdaBoost, decision tree, k-Nearest Neighbor, naïve Bayes, neural network, and support vector machine. We showed that using a multisensor approach achieved higher accuracy than using features from any reduced set of sensors. We found that using high-level handpicked features can improve the classification accuracy in every sensor mode. Our approach is robust to both sensor fallout and occlusions. The single most important sensor feature to the classification of engagement and distraction was shown to be eye gaze. It has been shown that we can accurately predict the level of engagement of students with learning disabilities in a real-time approach that is not subject to inter-rater reliability issues or human observation and is not reliant on a single mode of sensor input. This will help teachers design interventions for a heterogeneous group of students, where teachers cannot possibly attend to each of their individual needs. Our approach can be used to identify those with the greatest learning challenges so that all students are supported to reach their full potential.Keywords: affective computing in education, affect detection, continuous performance test, engagement, flow, HCI, interaction, learning disabilities, machine learning, multimodal, multisensor, physiological sensors, student engagement
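The abstract reports that a random forest evaluated with leave-one-out cross-validation gave the best results. The minimal scikit-learn sketch below illustrates that evaluation protocol only; the synthetic feature matrix, class balance and hyperparameters are assumptions, not the study's actual data or settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import confusion_matrix

# Hypothetical feature matrix: one row per observation window, columns such as
# gaze-on-task ratio, EEG band power, body-pose motion energy, interaction rate.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 9))                 # 9 features, as in the study
y = rng.integers(0, 2, size=120)              # 1 = engaged, 0 = disengaged (toy labels)

clf = RandomForestClassifier(n_estimators=200, random_state=0)

# Leave-one-out cross-validation: each window is held out once and predicted
# by a model trained on all remaining windows.
y_pred = cross_val_predict(clf, X, y, cv=LeaveOneOut())

tn, fp, fn, tp = confusion_matrix(y, y_pred).ravel()
print(f"engagement recall:    {tp / (tp + fn):.3f}")
print(f"disengagement recall: {tn / (tn + fp):.3f}")

# Feature importances from a model fit on all data indicate which sensor
# features (e.g., eye gaze) dominate the classification.
clf.fit(X, y)
print("feature importances:", np.round(clf.feature_importances_, 3))
```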
Procedia PDF Downloads 95
64 Recrystallization Behavior and Microstructural Evolution of Nickel Base Superalloy AD730 Billet during Hot Forging at Subsolvus Temperatures
Authors: Marcos Perez, Christian Dumont, Olivier Nodin, Sebastien Nouveau
Abstract:
Nickel superalloys are used to manufacture high-temperature rotary engine parts such as high-pressure disks in gas turbine engines. High strength at high operating temperatures is required due to the levels of stress and heat the disk must withstand. Therefore, parts must be made from materials that can maintain mechanical strength at high temperatures whilst remaining comparatively low in cost. A manufacturing process referred to as the triple melt process has made the production of cast and wrought (C&W) nickel superalloys possible. This means that the balance of cost and performance at high temperature may be optimized. AD730TM is a newly developed Ni-based superalloy for turbine disk applications, with reported superior service properties around 700°C when compared to Inconel 718 and several other alloys. The cast ingot is converted into billet during either the cogging process or open-die forging. The semi-finished billet is then further processed into its final geometry by forging, heat treating, and machining. Conventional ingot-to-billet conversion is an expensive and complex operation, requiring a significant number of steps to break up the coarse as-cast structure and interdendritic regions. Due to the size of conventional ingots, it is difficult to achieve a uniformly high level of strain for recrystallization, resulting in non-recrystallized regions that retain large unrecrystallized grains. Non-uniform grain distributions will also affect the ultrasonic inspectability response, which is used to find defects in the final component. The main aim is to analyze the recrystallization behavior and microstructural evolution of AD730 at subsolvus temperatures from a semi-finished product (billet) under conditions representative of both cogging and hot forging operations. Special attention was paid to the presence of large unrecrystallized grains. Double truncated cones (DTCs) were hot forged at subsolvus temperatures in a hydraulic press, followed by air cooling. SEM and EBSD analyses were conducted in the as-received (billet) and the as-forged conditions. The AD730 billet material presents a complex microstructure characterized by a mixture of several constituents. Large unrecrystallized grains present a substructure characterized by large misorientation gradients with the formation of medium- to high-angle boundaries in their interior, especially close to the grain boundaries, denoting inhomogeneous strain distribution. A fine distribution of intragranular precipitates was found in their interior, playing a key role in strain distribution and subsequent recrystallization behaviour during hot forging. A continuous dynamic recrystallization (CDRX) mechanism was found to operate in the large unrecrystallized grains, promoting the formation of intragranular DRX grains and the gradual recrystallization of these grains. Evidence was found that a hetero-epitaxial recrystallization mechanism operates in the AD730 billet material, with coherent γ-shells observed around primary γ’ precipitates. However, no significant contribution to the overall recrystallization during hot forging was found. By contrast, strain has the strongest effect on the microstructural evolution of AD730, increasing the recrystallization fraction and refining the structure. Regions with a low level of deformation (ε ≤ 0.6) translated into large fractions of unrecrystallized structure (strain accumulation). 
The presence of undissolved secondary γ’ precipitates (pinning effect) prior to the hot forging operations could explain these results.Keywords: AD730 alloy, continuous dynamic recrystallization, hot forging, γ’ precipitates
Procedia PDF Downloads 202
63 Product Life Cycle Assessment of Generatively Designed Furniture for Interiors Using Robot Based Additive Manufacturing
Authors: Andrew Fox, Qingping Yang, Yuanhong Zhao, Tao Zhang
Abstract:
Furniture is a very significant subdivision of architecture and its inherent interior design activities. The furniture industry has developed from an artisan-driven craft industry, whose forerunners saw themselves manifested in their crafts and treasured a sense of pride in the creativity of their designs, into what is these days a largely anonymous, collective, mass-produced output. Although the industry is very conservative, there is great potential for the implementation of collaborative digital technologies, allowing a reconfigured artisan experience to be reawakened in a new and exciting form. The furniture manufacturing industry, in general, has been slow to adopt new design methodologies such as artificial-intelligence- and rule-based generative design. This tardiness has meant the loss of potential to enhance its capabilities in producing sustainable, flexible, and mass customizable ‘right first-time’ designs. This paper aims to demonstrate a concept methodology for the creation of alternative and inspiring aesthetic structures for robot-based additive manufacturing (RBAM). These technologies can enable the economic creation of previously unachievable structures that traditionally would not have been commercially viable to manufacture. The integration of these technologies with the computing power of generative design provides the tools for practitioners to create concepts which are well beyond the insight of even the most accomplished traditional design teams. This paper aims to address the problem by introducing generative design methodologies employing the Autodesk Fusion 360 platform. Examination of the alternative methods for its use has the potential to significantly reduce the estimated 80% contribution to environmental impact made at the initial design phase. Though predominantly a design methodology, generative design combined with RBAM has the potential to leverage many lean manufacturing and quality assurance benefits, enhancing the efficiency and agility of modern furniture manufacturing. Through a case study examination of a furniture artifact, the results will be compared to a traditionally designed and manufactured product employing the Ecochain Mobius product life cycle analysis (LCA) platform. This will highlight the benefits of both generative design and robot-based additive manufacturing from an environmental impact and manufacturing efficiency standpoint. These step changes in design methodology and environmental assessment have the potential to revolutionise the design-to-manufacturing workflow, giving momentum to the concept of a pre-industrial model of manufacturing, with the global demand for a circular economy and bespoke sustainable design at its heart.Keywords: robot, manufacturing, generative design, sustainability, circular economy, product life cycle assessment, furniture
Procedia PDF Downloads 141
62 Big Data for Local Decision-Making: Indicators Identified at International Conference on Urban Health 2017
Authors: Dana R. Thomson, Catherine Linard, Sabine Vanhuysse, Jessica E. Steele, Michal Shimoni, Jose Siri, Waleska Caiaffa, Megumi Rosenberg, Eleonore Wolff, Tais Grippa, Stefanos Georganos, Helen Elsey
Abstract:
The Sustainable Development Goals (SDGs) and Urban Health Equity Assessment and Response Tool (Urban HEART) identify dozens of key indicators to help local decision-makers prioritize and track inequalities in health outcomes. However, presentations and discussions at the International Conference on Urban Health (ICUH) 2017 suggested that additional indicators are needed to make decisions and policies. A local decision-maker may realize that malaria or road accidents are a top priority. However, s/he needs additional health determinant indicators, for example about standing water or traffic, to address the priority and reduce inequalities. Health determinants reflect the physical and social environments that influence health outcomes, often at community and societal levels, and include such indicators as access to quality health facilities, access to safe parks, traffic density, location of slum areas, air pollution, social exclusion, and social networks. Indicator identification and disaggregation are necessarily constrained by available datasets – typically collected about households and individuals in surveys, censuses, and administrative records. Continued advancements in earth observation, data storage, computing and mobile technologies mean that new sources of health determinant indicators derived from 'big data' are becoming available at fine geographic scale. Big data includes high-resolution satellite imagery and aggregated, anonymized mobile phone data. While big data are themselves not representative of the population (e.g., satellite images depict the physical environment), they can provide information about population density, wealth, mobility, and social environments with tremendous detail and accuracy when combined with population-representative survey, census, administrative and health system data. The aim of this paper is to (1) flag to data scientists important indicators needed by health decision-makers at the city and sub-city scale - ideally free and publicly available, and (2) summarize for local decision-makers new datasets that can be generated from big data, with layperson descriptions of difficulties in generating them. We include SDGs and Urban HEART indicators, as well as indicators mentioned by decision-makers attending ICUH 2017.Keywords: health determinant, health outcome, mobile phone, remote sensing, satellite imagery, SDG, urban HEART
Procedia PDF Downloads 211
61 Simplified Modeling of Post-Soil Interaction for Roadside Safety Barriers
Authors: Charly Julien Nyobe, Eric Jacquelin, Denis Brizard, Alexy Mercier
Abstract:
The performance of roadside safety barriers depends largely on the dynamic interactions between post and soil. These interactions play a key role in the response of barriers to crash testing. In the literature, soil-post interaction is modeled in crash test simulations using three approaches. Many researchers have initially used the finite element approach, in which the post is embedded in a continuum soil modelled by solid finite elements. This is a more comprehensive and detailed approach, employing a mesh-based continuum to model the soil’s behavior and its interaction with the post. Although this method takes all soil properties into account, it is nevertheless very costly in terms of simulation time. In the second approach, all the points of the post located below a predefined depth are fixed. Although this approach reduces CPU computing time, it overestimates soil-post stiffness. The third approach involves modeling the post as a beam supported by a set of nonlinear springs in the horizontal directions; for support in the vertical direction, the post is constrained at a node at ground level. This approach is less costly, but the literature does not provide a simple procedure to determine the constitutive law of the springs. The aim of this study is to propose a simple and low-cost procedure to obtain the constitutive law of the nonlinear springs that model the soil-post interaction. To achieve this objective, we first present a procedure to obtain the constitutive law of the nonlinear springs through the simulation of a soil compression test. The test consists of compressing the soil contained in a tank with a rigid solid, up to a vertical displacement of 200 mm. The resultant force exerted by the ground on the rigid solid and its vertical displacement are extracted, and a force-displacement curve is determined. The proposed procedure for replacing the soil with springs must be tested against a reference model. The reference model consists of a wooden post embedded in the ground and struck by an impactor. Two simplified models with springs are studied. In the first model, called the Kh-Kv model, the springs are attached to the post in the horizontal and vertical directions. The second, Kh, model is the one described in the literature. The two simplified models are compared with the reference model according to several criteria: the displacement of a node located at the top of the post in the vertical and horizontal directions, the displacement of the post's center of rotation, and the impactor velocity. The results given by both simplified models are very close to the reference model results. It is noticeable that the Kh-Kv model is slightly better than the Kh model. Further, the former model is more interesting than the latter as it involves fewer arbitrary conditions. The simplified models also reduce the simulation time by a factor of 4. The Kh-Kv model can therefore be used as a reliable tool to represent the soil-post interaction in future research and development of road safety barriers.Keywords: crash tests, nonlinear springs, soil-post interaction modeling, constitutive law
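As a rough illustration of the proposed procedure — fitting a constitutive law for the nonlinear springs from the compression test's force-displacement curve — the Python sketch below fits an assumed spring law to synthetic test data with scipy. The functional form F(u) = a·u + b·uⁿ, the units and the parameter values are assumptions for illustration only, not the study's actual data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical output of the soil compression test: vertical displacement of the
# rigid solid (mm) and resultant reaction force of the soil (kN).
displacement_mm = np.linspace(0.0, 200.0, 21)
force_kN = 0.004 * displacement_mm**1.6 + 0.02 * displacement_mm   # synthetic curve

# Assumed spring law F(u) = a*u + b*u**n (one of many possible forms); the fitted
# parameters define the constitutive law assigned to the nonlinear springs.
def spring_law(u, a, b, n):
    return a * u + b * u**n

params, _ = curve_fit(spring_law, displacement_mm, force_kN,
                      p0=(0.01, 0.001, 1.5),
                      bounds=([0.0, 0.0, 1.0], [np.inf, np.inf, 3.0]))
a, b, n = params
print(f"fitted law: F(u) = {a:.4f}*u + {b:.4f}*u^{n:.2f}  (u in mm, F in kN)")

# In the simplified barrier model, each spring element returns this force for a
# given post deflection; e.g. the reaction at 50 mm of deflection:
print(f"F(50 mm) = {spring_law(50.0, *params):.2f} kN")
```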
Procedia PDF Downloads 32
60 The 4th Critical R: Conceptualising the Development of Resilience as an Addition to the 3 Rs of the Essential Education Curricula
Authors: Akhentoolove Corbin, Leta De Jonge, Charmaine De Jonge
Abstract:
Introduction: Various writers have promoted the adoption of a 4th R in the education curricula (relationships, respect, reasoning, religion, computing, science, art, conflict management, music) and a 5th R (responsibility). They argue that the traditional 3 Rs are not adequate for the modern environment and the requirements for students to become functional citizens in society. In particular, the developing countries of the anglophone Caribbean (most of which are tiny islands) are susceptible to the dangers and complexities of climate change and global economic volatility. These proposed additions to the 3 Rs do have some justification, but this research considers resilience even more important and relevant in a world that is faced with the negative prospects of climate change, poverty, discrimination, and economic volatility. It is argued that the foundation for resilient citizens, workers, and workplaces must be built in the elementary and secondary/middle schools and then through the tertiary level, to achieve an outcome of more resilient students. Government, business, and society require widespread resilience to be capable of ‘bouncing back’ and of being more adaptable, transformational, and sustainable. Methodology: The paper utilises a mixed-methods approach incorporating a questionnaire and interviews to determine participants’ opinions on the importance and relevance of resilience in the schools’ curricula and to government, business, and society. The target groups are as follows: educators at all levels, education administrators, members of the business sector, the public sector, and the 3rd sector. The research specifically targets the anglophone Caribbean developing countries (Barbados, Guyana, Jamaica, Trinidad, St. Lucia, and St. Vincent and the Grenadines). The research utilises SPSS for data analysis. Major Findings: The preliminary findings suggest that the majority of participants support the adoption of resilience as a 4th R in the curricula of the elementary, secondary/middle schools, and tertiary level in the anglophone Caribbean. The final results will allow the researchers to reveal more specific details on any variations among the islands in the sample and to engage in an in-depth discussion of the relevance and importance of resilience as the 4th R. Conclusion: Results seem to suggest that the education system should adopt the 4th R of resilience so that educators working in collaboration with the family and community/village can develop young citizens who are more resilient and capable of manifesting the behaviours and attitudes associated with ‘bouncing back,’ adaptability, transformation, and sustainability. These findings may be useful for education decision-makers and governments in these Caribbean islands, who have the authority and responsibility for the development of education policy, laws, and regulations.Keywords: education, resilient students, adaptable, transformational, resilient citizens, workplaces, government
Procedia PDF Downloads 70
59 Evaluation of Academic Research Projects Using the AHP and TOPSIS Methods
Authors: Murat Arıbaş, Uğur Özcan
Abstract:
Due to the increasing number of universities and academics, university funds for research activities and the grants/supports given by government institutions have increased the number and quality of academic research projects. Although every academic research project has a specific purpose and importance, limited resources (money, time, manpower, etc.) require choosing the best ones from all (Amiri, 2010). It is a hard process to compare projects and determine which is better, since the projects serve different purposes. In addition, the evaluation process becomes complicated when there is more than one evaluator and multiple criteria for the evaluation (Dodangeh, Mojahed and Yusuff, 2009). Mehrez and Sinuany-Stern (1983) defined the project selection problem as a Multi Criteria Decision Making (MCDM) problem. If a decision problem involves multiple criteria and objectives, it is called a Multi Attribute Decision Making problem (Ömürbek & Kınay, 2013). There are many MCDM methods in the literature for the solution of such problems: AHP (Analytic Hierarchy Process), ANP (Analytic Network Process), TOPSIS (Technique for Order Preference by Similarity to Ideal Solution), PROMETHEE (Preference Ranking Organization Method for Enrichment Evaluation), UTADIS (Utilities Additives Discriminantes), ELECTRE (Elimination et Choix Traduisant la Realite), MAUT (Multiattribute Utility Theory), GRA (Grey Relational Analysis), etc. Each method has some advantages compared with the others (Ömürbek, Blacksmith & Akalın, 2013). Hence, to decide which MCDM method will be used for the solution of the problem, factors like the nature of the problem, types of choices, measurement scales, type of uncertainty, dependency among the attributes, expectations of the decision maker, and the quantity and quality of the data should be considered (Tavana & Hatami-Marbini, 2011). This study aims to develop a systematic decision process for grant support applications that are expected to be evaluated according to their scientific adequacy by multiple evaluators under certain criteria. In this context, the project evaluation process applied by the Scientific and Technological Research Council of Turkey (TÜBİTAK), the leading institution in the country, was investigated. First, the criteria to be used in the project evaluation were decided. The main criteria were selected from among the TÜBİTAK evaluation criteria: originality of the project, methodology, project management/team and research opportunities, and the extensive impact of the project. Moreover, 2-4 sub-criteria were defined for each main criterion; hence, projects were evaluated over 13 sub-criteria in total. Because the AHP method is superior for determining criteria weights and the TOPSIS method readily ranks a large number of alternatives, the two methods were used together. The AHP method, developed by Saaty (1977), is based on selection by pairwise comparisons. Because of its simple structure and ease of understanding, AHP is a very popular method in the literature for determining criteria weights in MCDM problems. In addition, the TOPSIS method, developed by Hwang and Yoon (1981) as an MCDM technique, is an alternative to the ELECTRE method and is used in many areas. In this method, the distance from each decision point to the ideal and to the negative-ideal solution points is calculated using the Euclidean distance approach. 
In the study, the main criteria and sub-criteria were compared pairwise using questionnaires developed on an importance scale by four relevant groups of people (i.e., TÜBİTAK specialists, TÜBİTAK managers, academics, and individuals from the business world). After these pairwise comparisons, the weight of each main criterion and sub-criterion was calculated using the AHP method. These calculated criteria weights were then used as an input to the TOPSIS method, and a sample consisting of 200 projects was ranked on its merits. This new system provided the opportunity to obtain the views of the people who take part in the project process, including preparation, evaluation and implementation, on the evaluation of academic research projects. Moreover, instead of using four equally weighted main criteria to evaluate projects, a systematic decision-making process was developed using the 13 weighted sub-criteria and each decision point's distance from the ideal solution. Through this evaluation process, a new approach was created to determine the importance of academic research projects.Keywords: academic projects, AHP method, research projects evaluation, TOPSIS method
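As a hedged illustration of how the two methods fit together, the Python sketch below derives criteria weights from a pairwise comparison matrix via the principal eigenvector (AHP) and ranks alternatives by their Euclidean distances to the ideal and negative-ideal solutions (TOPSIS). The comparison values, project scores and the use of only the four main criteria are illustrative assumptions, not TÜBİTAK data.

```python
import numpy as np

def ahp_weights(pairwise):
    """Criteria weights from a pairwise comparison matrix via the
    principal eigenvector (Saaty's approach)."""
    vals, vecs = np.linalg.eig(pairwise)
    principal = np.real(vecs[:, np.argmax(np.real(vals))])
    return principal / principal.sum()

def topsis(decision, weights, benefit):
    """Rank alternatives: vector-normalize, weight, and measure Euclidean
    distance to the ideal and negative-ideal solutions."""
    norm = decision / np.linalg.norm(decision, axis=0)
    v = norm * weights
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti  = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti,  axis=1)
    return d_neg / (d_pos + d_neg)          # closeness coefficient, higher is better

# Illustrative pairwise comparisons of four main criteria (values assumed).
pairwise = np.array([[1, 3, 2, 4],
                     [1/3, 1, 1/2, 2],
                     [1/2, 2, 1, 3],
                     [1/4, 1/2, 1/3, 1]], dtype=float)
w = ahp_weights(pairwise)

# Illustrative scores of five projects on the four criteria (all benefit-type).
scores = np.array([[7, 8, 6, 9],
                   [9, 6, 7, 7],
                   [5, 9, 8, 6],
                   [8, 7, 9, 8],
                   [6, 5, 5, 7]], dtype=float)
closeness = topsis(scores, w, benefit=np.array([True, True, True, True]))
print("criteria weights:", np.round(w, 3))
print("project ranking (best first):", np.argsort(-closeness) + 1)
```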
Procedia PDF Downloads 591
58 A Fermatean Fuzzy MAIRCA Approach for Maintenance Strategy Selection of Process Plant Gearbox Using Sustainability Criteria
Authors: Soumava Boral, Sanjay K. Chaturvedi, Ian Howard, Kristoffer McKee, V. N. A. Naikan
Abstract:
Due to strict government regulations aimed at enhancing sustainability practices in industry, and noting the advances in sustainable manufacturing, it is necessary that the associated processes are also sustainable. Maintenance of large-scale, complex machines is pivotal to maintaining the uninterrupted flow of manufacturing processes. Appropriate maintenance practices can prolong the lifetime of machines and prevent associated breakdowns, which subsequently reduces different cost heads. Selection of the best maintenance strategy for such machines is considered a burdensome task, as it requires the consideration of multiple technical criteria, complex mathematical calculations, previous fault data, maintenance records, etc. In the era of the fourth industrial revolution, organizations are rapidly changing their way of doing business and giving the utmost importance to sensor technologies, artificial intelligence, data analytics, automation, etc. In this work, the effectiveness of several maintenance strategies (e.g., preventive, failure-based, reliability-centered, condition-based and total productive maintenance) for a large-scale, complex gearbox operating in a steel processing plant is evaluated in terms of economic, social, environmental and technical criteria. As it is not possible to obtain or describe some criteria by exact numerical values, these criteria are evaluated linguistically by cross-functional experts. Fuzzy sets are a powerful soft-computing technique that has been useful for dealing with linguistic data and providing inferences in many complex situations. To prioritize different maintenance practices based on the identified sustainability criteria, multi-criteria decision making (MCDM) approaches can be considered potential tools. Multi-Attributive Ideal Real Comparative Analysis (MAIRCA) is a recent addition to the MCDM family and has proven its superiority over some well-known MCDM approaches, like TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) and ELECTRE (ELimination Et Choix Traduisant la REalité). It has a simple but robust mathematical approach, which is easy to comprehend. On the other hand, due to some inherent drawbacks of Intuitionistic Fuzzy Sets (IFS) and Pythagorean Fuzzy Sets (PFS), the use of Fermatean Fuzzy Sets (FFSs) has recently been proposed. In this work, we propose the novel concept of FF-MAIRCA. We obtain the weights of the criteria by experts’ evaluation and use them to prioritize the different maintenance practices according to their suitability using the FF-MAIRCA approach. Finally, a sensitivity analysis is carried out to highlight the robustness of the approach.Keywords: Fermatean fuzzy sets, Fermatean fuzzy MAIRCA, maintenance strategy selection, sustainable manufacturing, MCDM
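For orientation, the sketch below implements the classical (crisp) MAIRCA steps — theoretical ratings, real ratings, gap matrix and total gap per alternative — in Python. It deliberately omits the Fermatean fuzzy extension proposed in the paper, and the scores, weights and criteria are assumed purely for illustration.

```python
import numpy as np

def mairca(decision, weights, benefit):
    """Classical (crisp) MAIRCA: theoretical ratings, real ratings, gap
    matrix, and total gap per alternative (smaller gap = better rank)."""
    m = decision.shape[0]
    p_ai = 1.0 / m                                   # equal preference per alternative
    t_p = p_ai * weights                             # theoretical rating per criterion

    # Linear normalization: benefit criteria toward max, cost criteria toward min.
    mins, maxs = decision.min(axis=0), decision.max(axis=0)
    norm = np.where(benefit,
                    (decision - mins) / (maxs - mins),
                    (decision - maxs) / (mins - maxs))
    t_r = t_p * norm                                 # real ratings
    gap = t_p - t_r                                  # gap matrix
    return gap.sum(axis=1)                           # total gap per alternative

# Illustrative crisp scores of four maintenance strategies on five criteria
# (economic, social, environmental, two technical); all values assumed.
scores = np.array([[6, 7, 5, 8, 6],
                   [8, 5, 6, 6, 7],
                   [7, 8, 8, 7, 8],
                   [5, 6, 7, 9, 5]], dtype=float)
weights = np.array([0.30, 0.15, 0.20, 0.20, 0.15])   # expert-derived (assumed)
benefit = np.array([True, True, True, True, True])

total_gap = mairca(scores, weights, benefit)
print("total gaps:", np.round(total_gap, 4))
print("ranking (best first):", np.argsort(total_gap) + 1)
```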
Procedia PDF Downloads 139
57 The Role of Professional Teacher Development in Introducing Trilingual Education into the Secondary School Curriculum: Lessons from Kazakhstan, Central Asia
Authors: Kairat Kurakbayev, Dina Gungor, Adil Ashirbekov, Assel Kambatyrova
Abstract:
Kazakhstan, a post-Soviet economy located in Central Asia, is making great efforts to internationalize its national system of education. The country is very ambitious in making the national economy internationally competitive, and education has become one of the main pillars of the nation’s strategic development plan for 2030. This paper discusses the role of professional teacher development in upgrading the secondary education curriculum with the introduction of English as a medium of instruction (EMI) in grades 10-11. With Kazakh as the state language and Russian as the official language, English has the status of a foreign language in the country. The development of trilingual education is very high on the agenda of the Ministry of Education and Science. It is planned that by 2019 STEM-related subjects – Biology, Chemistry, Computing and Physics – will be taught in EMI. Introducing English-medium education appears to be a very drastic reform, and the teaching cadre is the key driver here. At the same time, since the collapse of the Soviet Union, the teaching profession has been struggling to become attractive in the eyes of the local youth. Moreover, the quality of Kazakhstan’s secondary education is put in question by OECD national review reports. The paper presents a case study of the nation-wide professional development programme arranged for 5,010 school teachers so that they would be able to teach their content subjects in English from 2019 onwards. The study is based on mixed-methods research involving data derived from surveys and semi-structured interviews held with the programme participants, i.e., school teachers. The findings of the study highlight the significance of the school teachers’ attitudes towards the top-down reform of trilingual education. The qualitative research data reveal the teachers’ beliefs about the advantages and disadvantages of having their content subjects (e.g. Biology or Chemistry) taught in EMI. The study highlights teachers’ concerns about their professional readiness to implement the top-down reform of English-medium education and discusses possible risks of academic underperformance on the part of students whose English language proficiency is not advanced. This paper argues that, for the effective implementation of English-medium education in secondary schools, the state should adopt a comprehensive approach to upgrading the national academic system, in which teachers’ attitudes and beliefs play the key role in making the trilingual education policy effective. The study presents lessons for other national academic systems considering transferring their secondary education to English as a medium of instruction.Keywords: teacher education, teachers' beliefs, trilingual education, case study
Procedia PDF Downloads 186
56 Application of Deep Learning Algorithms in Agriculture: Early Detection of Crop Diseases
Authors: Manaranjan Pradhan, Shailaja Grover, U. Dinesh Kumar
Abstract:
The farming community in India, as well as in other parts of the world, is highly stressed due to reasons such as increasing input costs (seeds, fertilizers, pesticides), droughts, and reduced revenue leading to farmer suicides. The lack of an integrated farm advisory system in India adds to the farmers' problems. Farmers need the right information during the early stages of a crop’s lifecycle to prevent damage and loss of revenue. In this paper, we use deep learning techniques to develop an early warning system for the detection of crop diseases using images taken by farmers with their smartphones. The research work leads to building a smart assistant using analytics and big data which could help farmers with early diagnosis of crop diseases and corrective actions. The classical approach for crop disease management has been to identify diseases at the crop level. Recently, ImageNet-style classification using convolutional neural networks (CNN) has been successfully used to identify diseases at the individual plant level. Our model uses convolution filters, max pooling, dense layers and dropout (to avoid overfitting). The models are built for binary classification (healthy or not healthy) and multi-class classification (identifying which disease). Transfer learning is used to adapt weights learnt on the ImageNet dataset and apply them to crop diseases, which reduces the number of epochs needed for learning. One-shot learning is used to learn from very few images, while data augmentation techniques such as rotation, zoom, shift and blurring are used to improve accuracy on images taken from farms. Models built using a combination of these techniques are more robust for deployment in the real world. Our model is validated using the tomato crop. In India, tomato is affected by 10 different diseases. Our model achieves an accuracy of more than 95% in correctly classifying the diseases. The main contribution of our research is a personal assistant for farmers for managing plant disease; although the model was validated using the tomato crop, it can easily be extended to other crops. The advancement of computing technology and the availability of large datasets have made possible the success of deep learning applications in computer vision, natural language processing, image recognition, etc. With these robust models and huge smartphone penetration, the feasibility of implementation is high, resulting in timely advice to farmers and thus increasing farmers' income and reducing input costs.Keywords: analytics in agriculture, CNN, crop disease detection, data augmentation, image recognition, one shot learning, transfer learning
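A minimal transfer-learning sketch in the spirit of the approach described above is shown below, using TensorFlow/Keras with an ImageNet-pretrained MobileNetV2 backbone, on-the-fly augmentation (rotation, zoom, shift) and a dropout-regularised softmax head. The backbone choice, dataset path, image size and class count are assumptions; the abstract does not specify its exact architecture.

```python
import tensorflow as tf

NUM_CLASSES = 10          # e.g., the tomato disease classes; dataset path is assumed
IMG_SIZE = (224, 224)

# Hypothetical directory of farm images organised as one sub-folder per class.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/tomato/train", image_size=IMG_SIZE, batch_size=32)

# Data augmentation: rotation, zoom and shift applied on the fly during training.
augment = tf.keras.Sequential([
    tf.keras.layers.RandomRotation(0.1),
    tf.keras.layers.RandomZoom(0.2),
    tf.keras.layers.RandomTranslation(0.1, 0.1),
])

# Transfer learning: reuse ImageNet weights and train only the new head.
base = tf.keras.applications.MobileNetV2(
    weights="imagenet", include_top=False, input_shape=IMG_SIZE + (3,))
base.trainable = False

inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
x = augment(inputs)
x = tf.keras.applications.mobilenet_v2.preprocess_input(x)
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.Dropout(0.3)(x)                      # dropout to limit overfitting
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)     # few epochs suffice thanks to the pretrained backbone
```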
Procedia PDF Downloads 120
55 Quantitative, Preservative Methodology for Review of Interview Transcripts Using Natural Language Processing
Authors: Rowan P. Martnishn
Abstract:
During the execution of a National Endowment for the Arts grant, approximately 55 interviews were collected from professionals across various fields. These interviews were used to create deliverables – historical connections for creations that began as art and evolved entirely into computing technology. With dozens of hours’ worth of transcripts to be analyzed by qualitative coders, a quantitative methodology was created to sift through the documents. The initial step was to both clean and format all the data. First, a basic spelling and grammar check was applied, as well as a Python script for normalized formatting which used an open-source grammatical formatter to make the data as coherent as possible. Ten documents were randomly selected for manual review; words that had often been incorrectly transcribed were recorded and replaced throughout all other documents. Then, to remove all banter and side comments, the transcripts were split into paragraphs (separated by changes in speaker) and all paragraphs with fewer than 300 characters were removed. Secondly, a keyword extractor, a form of natural language processing where significant words in a document are selected, was run on each paragraph for all interviews. Every proper noun was put into a data structure corresponding to that respective interview. From there, a Bidirectional and Auto-Regressive Transformer (B.A.R.T.) summarization model was then applied to each paragraph that included any of the proper nouns selected from the interview. At this stage, the information to review had been reduced from about 60 hours’ worth of data to 20. The data was further processed through light, manual observation – any summaries which proved to fit the criteria of the proposed deliverable were selected, as well as their locations within the documents. This narrowed the data down to about 5 hours’ worth of processing. The qualitative researchers were then able to find 8 more connections in addition to our previous 4, exceeding our minimum quota of 3 to satisfy the grant. Major findings of the study and subsequent curation of this methodology raised a conceptual finding crucial to working with qualitative data of this magnitude. In the use of artificial intelligence, there is a general trade-off in a model between breadth of knowledge and specificity. If the model has too much knowledge, the user risks leaving out important data (too general). If the tool is too specific, it has not seen enough data to be useful. Thus, this methodology proposes a solution to this trade-off. The data is never altered outside of grammatical and spelling checks. Instead, the important information is marked, creating an indicator of where the significant data is without compromising its purity. Secondly, the data is chunked into smaller paragraphs, giving specificity, and then cross-referenced with the keywords (allowing generalization over the whole document). This way, no data is harmed, and qualitative experts can go over the raw data instead of using highly manipulated results. Given the success in deliverable creation as well as the circumvention of this trade-off, this methodology should stand as a model for synthesizing qualitative data while maintaining its original form.Keywords: B.A.R.T. model, keyword extractor, natural language processing, qualitative coding
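The pipeline described above (filter short paragraphs, extract proper nouns, summarize only the paragraphs that mention them) can be sketched with off-the-shelf tools, as below. The specific models (spaCy's en_core_web_sm and the facebook/bart-large-cnn checkpoint) and the summary lengths are assumptions standing in for the project's unspecified choices; the 300-character filter follows the text, and the demo paragraph is entirely hypothetical.

```python
import spacy
from transformers import pipeline

# Assumed models: spaCy's small English model for part-of-speech tagging and a
# publicly available BART checkpoint for summarization.
nlp = spacy.load("en_core_web_sm")
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

def process_interview(paragraphs):
    """Keep paragraphs of at least 300 characters, collect their proper nouns,
    and summarize only the paragraphs that mention any collected proper noun."""
    kept = [p for p in paragraphs if len(p) >= 300]

    proper_nouns = set()
    for p in kept:
        proper_nouns.update(tok.text for tok in nlp(p) if tok.pos_ == "PROPN")

    results = []
    for i, p in enumerate(kept):
        if any(name in p for name in proper_nouns):
            summary = summarizer(p, max_length=60, min_length=15,
                                 do_sample=False)[0]["summary_text"]
            results.append({"paragraph_index": i, "summary": summary})
    return proper_nouns, results

# Toy usage with a single hypothetical transcript paragraph.
demo = ["The interviewee discusses how the Whitney Museum exhibition on early video art "
        "informed later work on real-time graphics pipelines at Bell Labs, tracing a line "
        "from gallery installations to modern GPU programming. He also reflects on how "
        "those collaborations between artists and engineers shaped the laboratory's later "
        "graphics software and the tooling that grew up around it over several decades."]
nouns, summaries = process_interview(demo)
print(nouns)
print(summaries)
```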
Procedia PDF Downloads 31