Search results for: atmospheric models
6602 Quantitative Structure-Property Relationship Study of Base Dissociation Constants of Some Benzimidazoles
Authors: Sanja O. Podunavac-Kuzmanović, Lidija R. Jevrić, Strahinja Z. Kovačević
Abstract:
Benzimidazoles are a group of compounds with significant antibacterial, antifungal and anticancer activity. The studied compounds consist of the main benzimidazole structure with different combinations of substituents. This study is based on two-dimensional and three-dimensional molecular modeling and the calculation of molecular descriptors (physicochemical and lipophilicity descriptors) of structurally diverse benzimidazoles. Molecular modeling was carried out using ChemBio3D Ultra version 14.0 software. The obtained 3D models were subjected to energy minimization using the molecular mechanics force field method (MM2). The cutoff for structure optimization was set at a gradient of 0.1 kcal/(Å·mol). The obtained set of molecular descriptors was used in principal component analysis (PCA) to explore possible similarities and dissimilarities among the studied derivatives. After the molecular modeling, quantitative structure-property relationship (QSPR) analysis was applied in order to obtain mathematical models that can be used to predict the pKb values of structurally similar benzimidazoles. The obtained models are based on statistically valid multiple linear regression (MLR) equations. The calculated cross-validation parameters indicate the high prediction ability of the established QSPR models. This study is financially supported by COST action CM1306 and by project No. 114-451-347/2015-02 of the Provincial Secretariat for Science and Technological Development of Vojvodina.
Keywords: benzimidazoles, chemometrics, molecular modeling, molecular descriptors, QSPR
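As an illustration of the MLR-with-cross-validation workflow described above, here is a minimal Python sketch using scikit-learn; the descriptor columns and all numeric values are hypothetical placeholders, not the study's data.

```python
# Minimal QSPR sketch: MLR on hypothetical molecular descriptors to predict pKb.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

# rows: benzimidazole derivatives; columns: illustrative descriptors
# (e.g. logP, molar refractivity, polar surface area)
X = np.array([
    [2.1, 42.3, 28.7],
    [2.8, 45.1, 31.2],
    [1.9, 40.8, 26.5],
    [3.2, 47.6, 33.9],
    [2.5, 43.9, 29.8],
    [3.0, 46.2, 32.1],
])
y = np.array([8.2, 7.6, 8.5, 7.1, 7.9, 7.3])  # pKb values (illustrative)

model = LinearRegression().fit(X, y)
print("R^2 (fit):", model.score(X, y))

# leave-one-out cross-validation, analogous to the Q^2-style statistics
# used to judge the predictive ability of QSPR models
loo_mse = -cross_val_score(model, X, y, cv=LeaveOneOut(),
                           scoring="neg_mean_squared_error").mean()
print("LOO MSE:", loo_mse)
```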
Procedia PDF Downloads 287
6601 Laser Beam Bending via Lenses
Authors: Remzi Yildirim, Fatih. V. Çelebi, H. Haldun Göktaş, A. Behzat Şahin
Abstract:
This study concerns a single-component cylindrical lens with a gradient curve, which we used for bending laser beams. It operates under atmospheric conditions and bends the laser beam independently of temperature, pressure, polarity, polarization, magnetic field, electric field, radioactivity, and gravity. A single-piece cylindrical lens that can bend laser beams is invented. Lenses are made of transparent, tinted or colored glasses and are used for attenuating or absorbing the energy of laser beams.
Keywords: laser, bending, lens, light, nonlinear optics
Procedia PDF Downloads 488
6600 Laser Light Bending via Lenses
Authors: Remzi Yildirim, Fatih V. Çelebi, H. Haldun Göktaş, A. Behzat Şahin
Abstract:
This study concerns a single-component cylindrical lens with a gradient curve, which we used for bending laser beams. It operates under atmospheric conditions and bends the laser beam independently of temperature, pressure, polarity, polarization, magnetic field, electric field, radioactivity, and gravity. A single-piece cylindrical lens that can bend laser beams is invented. Lenses are made of transparent, tinted or colored glasses and are used for attenuating or absorbing the energy of laser beams.
Keywords: laser, bending, lens, light, nonlinear optics
Procedia PDF Downloads 702
6599 User Intention Generation with Large Language Models Using Chain-of-Thought Prompting
Authors: Gangmin Li, Fan Yang
Abstract:
Personalized recommendation is crucial for any recommendation system. One of the techniques for personalized recommendation is intention identification. Traditional user intention identification uses the user's selection when facing multiple items. This modeling relies primarily on historical behaviour data, resulting in challenges such as cold start, unintended choices, and failure to capture intention when items are new. Motivated by recent advancements in Large Language Models (LLMs) like ChatGPT, we present an approach to user intention identification that embraces LLMs with Chain-of-Thought (CoT) prompting. We use the initial user profile as input to the LLM and design a collection of prompts to align the LLM's responses through various recommendation tasks encompassing rating prediction, search and browse history, user clarification, etc. Our tests on real-world datasets demonstrate improvements in recommendation when user intention is explicitly identified and merged into the user model.
Keywords: personalized recommendation, generative user modelling, user intention identification, large language models, chain-of-thought prompting
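A hedged sketch of what such Chain-of-Thought prompt construction could look like in Python. The prompt wording, the profile fields, and the `call_llm` client are illustrative assumptions, not the paper's exact design.

```python
# Sketch of building a chain-of-thought prompt for user intention identification.
# `call_llm` is a hypothetical stand-in for whatever LLM client is available.
def build_cot_prompt(user_profile: dict, recent_actions: list) -> str:
    profile_text = "; ".join(f"{k}: {v}" for k, v in user_profile.items())
    actions_text = "\n".join(f"- {a}" for a in recent_actions)
    return (
        "You are a recommendation assistant.\n"
        f"User profile: {profile_text}\n"
        f"Recent actions:\n{actions_text}\n"
        "Let's think step by step:\n"
        "1. Summarise what the user seems to be looking for.\n"
        "2. Predict a rating (1-5) the user would give to the last viewed item.\n"
        "3. State the user's current intention in one sentence."
    )

prompt = build_cot_prompt(
    {"age_group": "25-34", "favourite_genres": "sci-fi, documentaries"},
    ["searched for 'space telescopes'", "browsed 'Cosmos' series page"],
)
print(prompt)  # the string would then be sent to the LLM, e.g. call_llm(prompt)
```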
Procedia PDF Downloads 53
6598 A Sentence-to-Sentence Relation Network for Recognizing Textual Entailment
Authors: Isaac K. E. Ampomah, Seong-Bae Park, Sang-Jo Lee
Abstract:
Over the past decade, there have been promising developments in Natural Language Processing (NLP), with several investigations of approaches focusing on Recognizing Textual Entailment (RTE). These include models based on lexical similarities, models based on formal reasoning, and, most recently, deep neural models. In this paper, we present a sentence encoding model that exploits sentence-to-sentence relation information for RTE. In terms of sentence modeling, convolutional neural networks (CNNs) and recurrent neural networks (RNNs) adopt different approaches. RNNs are known to be well suited for sequence modeling, whilst CNNs are suited for the extraction of n-gram features through their filters and can learn ranges of relations via the pooling mechanism. We combine the strengths of RNNs and CNNs as stated above to present a unified model for the RTE task. Our model combines relation vectors computed from the phrasal representation of each sentence with the final encoded sentence representations. Firstly, we pass each sentence through a convolutional layer to extract a sequence of higher-level phrase representations, from which the first relation vector is computed. Secondly, the phrasal representation of each sentence from the convolutional layer is fed into a Bidirectional Long Short Term Memory (Bi-LSTM) to obtain the final sentence representations, from which a second relation vector is computed. The relation vectors are combined and then used, in the same fashion as an attention mechanism, over the Bi-LSTM outputs to yield the final sentence representations for classification. Experiments on the Stanford Natural Language Inference (SNLI) corpus suggest that this is a promising technique for RTE.
Keywords: deep neural models, natural language inference, recognizing textual entailment (RTE), sentence-to-sentence relation
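A minimal PyTorch sketch of the described architecture, under simplifying assumptions: one convolutional layer produces phrase representations, a Bi-LSTM encodes them, and combined relation vectors act as attention queries over the Bi-LSTM outputs. The dimensions, pooling choices, and the exact relation computation are illustrative, not the paper's specification.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SentenceRelationEncoder(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=100, hid_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # conv output dim matches the Bi-LSTM output dim (2*hid) so relation
        # vectors can act as attention queries over the Bi-LSTM outputs
        self.conv = nn.Conv1d(emb_dim, 2 * hid_dim, kernel_size=3, padding=1)
        self.bilstm = nn.LSTM(2 * hid_dim, hid_dim, bidirectional=True, batch_first=True)
        self.classifier = nn.Linear(4 * hid_dim, 3)  # entailment/contradiction/neutral

    def encode(self, tokens):
        x = self.embed(tokens).transpose(1, 2)          # (B, emb, T)
        phrases = F.relu(self.conv(x)).transpose(1, 2)  # (B, T, 2*hid) phrase features
        outputs, _ = self.bilstm(phrases)               # (B, T, 2*hid)
        return phrases, outputs

    @staticmethod
    def attend(query, outputs):
        # relation vector used as an attention query over Bi-LSTM outputs
        scores = torch.bmm(outputs, query.unsqueeze(2)).squeeze(2)   # (B, T)
        weights = F.softmax(scores, dim=1)
        return torch.bmm(weights.unsqueeze(1), outputs).squeeze(1)   # (B, 2*hid)

    def forward(self, premise, hypothesis):
        p_phr, p_out = self.encode(premise)
        h_phr, h_out = self.encode(hypothesis)
        rel1 = p_phr.mean(1) * h_phr.mean(1)   # relation vector, phrase level
        rel2 = p_out.mean(1) * h_out.mean(1)   # relation vector, Bi-LSTM level
        rel = rel1 + rel2                      # combined relation vector
        p_vec = self.attend(rel, p_out)
        h_vec = self.attend(rel, h_out)
        return self.classifier(torch.cat([p_vec, h_vec], dim=1))

model = SentenceRelationEncoder()
premise = torch.randint(0, 10000, (4, 12))   # batch of 4 token-id sequences
hypothesis = torch.randint(0, 10000, (4, 9))
logits = model(premise, hypothesis)          # (4, 3)
```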
Procedia PDF Downloads 348
6597 Zero Valent Iron Algal Biocomposite for the Removal of Crystal Violet from Aqueous Solution: Box-Behnken Optimization and Fixed Bed Column Studies
Authors: M. Jerold, V. Sivasubramanian
Abstract:
In this study, a nano zero valent iron Sargassum swartzii (nZVI-SS) biocomposite, a marine algal-based biosorbent, was used for the removal of simulated crystal violet (CV) in batch and continuous fixed bed operation. The Box-Behnken design (BBD) experimental results revealed that biosorption was maximum at pH 7.5, biosorbent dosage 0.1 g/L, and initial CV concentration of 100 mg/L. The effects of various column parameters, such as bed depth (3, 6 and 9 cm), flow rate (5, 10 and 15 mL/min), and influent CV concentration (5, 10 and 15 mg/L), were investigated. The exhaustion time increased with increasing bed depth and influent CV concentration and with decreasing flow rate. The Adams-Bohart, Thomas and Yoon-Nelson models were used to predict the breakthrough curve and to evaluate the model parameters. Of these, the Thomas and Yoon-Nelson models described the experimental data well. The results therefore imply that the nZVI-SS biocomposite is a cheap and very promising biosorbent for the removal of CV from wastewater.
Keywords: algae, biosorption, zero-valent, dye, wastewater
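For reference, the Thomas and Yoon-Nelson breakthrough models can be fitted with a few lines of Python; the time and concentration values below are illustrative stand-ins, not the paper's measurements.

```python
# Fitting the Thomas and Yoon-Nelson breakthrough models with scipy.
import numpy as np
from scipy.optimize import curve_fit

t = np.array([10, 30, 60, 90, 120, 150, 180], dtype=float)  # time (min)
ct_c0 = np.array([0.02, 0.08, 0.25, 0.50, 0.75, 0.90, 0.97])  # outlet/inlet conc.

C0, Q, m = 10.0, 0.01, 1.0  # influent conc. (mg/L), flow (L/min), sorbent mass (g)

def thomas(t, k_th, q0):
    # Ct/C0 = 1 / (1 + exp(k_th*q0*m/Q - k_th*C0*t))
    return 1.0 / (1.0 + np.exp(k_th * q0 * m / Q - k_th * C0 * t))

def yoon_nelson(t, k_yn, tau):
    # Ct/C0 = exp(k_yn*(t - tau)) / (1 + exp(k_yn*(t - tau)))
    return np.exp(k_yn * (t - tau)) / (1.0 + np.exp(k_yn * (t - tau)))

params_th, _ = curve_fit(thomas, t, ct_c0, p0=[0.005, 9.0])
params_yn, _ = curve_fit(yoon_nelson, t, ct_c0, p0=[0.05, 90.0])
print("Thomas k_Th, q0:", params_th)
print("Yoon-Nelson k_YN, tau:", params_yn)
```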
Procedia PDF Downloads 195
6596 On the Possibility of Real Time Characterisation of Ambient Toxicity Using Multi-Wavelength Photoacoustic Instrument
Authors: Tibor Ajtai, Máté Pintér, Noémi Utry, Gergely Kiss-Albert, Andrea Palágyi, László Manczinger, Csaba Vágvölgyi, Gábor Szabó, Zoltán Bozóki
Abstract:
To the best of the authors' knowledge, we demonstrate experimentally here, for the first time, a quantified correlation between the real-time measured optical features of ambient aerosol and off-line measured toxicity data. Building on these correlations, we present a novel methodology for the real-time characterisation of ambient toxicity based on multi-wavelength aerosol-phase photoacoustic measurement. Ambient carbonaceous particulate matter is one of the most intensively studied atmospheric constituents in climate science nowadays. Beyond its climatic impact, atmospheric soot also plays an important role as an air pollutant that harms human health. According to the latest scientific assessments, ambient soot is the second most important anthropogenic emission source, and in health terms it is one of the most harmful atmospheric constituents as well. Despite its importance, a generally accepted standard methodology for the quantitative determination of ambient toxicity is not yet available. Ambient toxicity measurement is dominantly based on posterior analysis of filter-accumulated aerosol with limited time resolution. Most toxicological studies are based on operational definitions using different measurement protocols; comprehensive analysis of the existing data sets is therefore very limited in many cases. The situation is further complicated by the fact that, even during its relatively short residence time, the physicochemical features of the aerosol can be masked significantly by the actual ambient factors. Therefore, improving the time resolution of the existing methodology and developing real-time methodology for air quality monitoring are pressing issues in air pollution research. Over the last decades, many experimental studies have verified that there is a relation between the chemical composition of carbonaceous particulate matter and its absorption features, quantified by the Absorption Angström Exponent (AAE). Although the scientific community agrees that photoacoustic spectroscopy (PAS) is so far the only methodology that can measure light absorption by aerosol accurately and reliably, multi-wavelength PAS instruments able to selectively characterise the wavelength dependence of absorption have become available only in the last decade. In this study, the first results of an intensive measurement campaign focusing on the physicochemical and toxicological characterisation of ambient particulate matter are presented. We demonstrate the complete microphysical characterisation of wintertime urban ambient aerosol, including optical absorption and scattering as well as size distribution, using our recently developed state-of-the-art multi-wavelength photoacoustic instrument (4λ-PAS), an integrating nephelometer (Aurora 3000), and a single mobility particle sizer with optical particle counter (SMPS+C). Beyond this on-line characterisation of the ambient aerosol, we also demonstrate the results of eco-, cyto- and genotoxicity measurements based on posterior analysis of filter-accumulated aerosol with 6 h time resolution. We demonstrate a diurnal variation of toxicities and of AAE data deduced directly from the multi-wavelength absorption measurements.
Keywords: photoacoustic spectroscopy, absorption Angström exponent, toxicity, Ames-test
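For reference, the Absorption Angström Exponent mentioned above follows from absorption coefficients at two wavelengths via b_abs ∝ λ^(−AAE); a minimal sketch, with illustrative values:

```python
# Absorption Angström Exponent from two-wavelength absorption coefficients.
import numpy as np

def aae(b_abs_1, b_abs_2, lambda_1, lambda_2):
    """AAE from absorption coefficients at two wavelengths (b_abs ~ lambda^-AAE)."""
    return -np.log(b_abs_1 / b_abs_2) / np.log(lambda_1 / lambda_2)

# e.g. a 355 nm / 1064 nm channel pair; the coefficient values are illustrative
print(aae(60.0, 15.0, 355.0, 1064.0))  # ~1.26: near 1 suggests soot-dominated absorption
```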
Procedia PDF Downloads 302
6595 Transition from Linear to Circular Business Models with Service Design Methodology
Authors: Minna-Maari Harmaala, Hanna Harilainen
Abstract:
Estimates of the economic value of transitioning to circular economy models vary, but it has been estimated to represent $1 trillion worth of new business for the global economy. In Europe alone, estimates claim that adopting circular-economy principles could not only have environmental and social benefits but also generate a net economic benefit of €1.8 trillion by 2030. Proponents of a circular economy argue that it offers a major opportunity to increase resource productivity, decrease resource dependence and waste, and increase employment and growth. A circular system could improve competitiveness and unleash innovation. Yet most companies are not capturing these opportunities, and thus even abundant circular opportunities remain uncaptured even though they would seem inherently profitable. Service design, in broad terms, relates to developing an existing or a new service or service concept with emphasis and focus on the customer experience from the onset of the development process. Service design may even mean starting from scratch and co-creating the service concept entirely with the help of customer involvement. Service design methodologies provide a structured way of incorporating customer understanding and involvement into the process of designing better services with better resonance to customer needs. A business model is a depiction of how the company creates, delivers, and captures value, i.e., how it organizes its business. The process of business model development, adjustment, or modification is also called business model innovation. Innovating business models has become a part of business strategy. Our hypothesis is that, in addition to linear models still being easier to adopt and often having lower threshold costs, companies lack an understanding of how circular models can be adopted into their business and of how willing and ready customers are to adopt new circular business models. In our research, we use robust service design methodology to develop circular economy solutions with two case study companies. The aim of the process is not only to develop the service concepts and portfolio but also to demonstrate that willingness to adopt circular solutions exists in the customer base. In addition to service design, we employ business model innovation methods to develop, test, and validate the new circular business models further. The results clearly indicate that among the customer groups there are specific customer personas that are willing to adopt circular offerings and in fact expect the companies to take a leading role in the transition towards a circular economy. At the same time, there is a group of indifferents, to whom the idea of circularity provides no added value. In addition, the case studies clearly show what changes the adoption of circular economy principles brings to the existing business model and how they can be integrated.
Keywords: business model innovation, circular economy, circular economy business models, service design
Procedia PDF Downloads 135
6594 Numerical Study of the Influence of the Primary Stream Pressure on the Performance of the Ejector Refrigeration System Based on Heat Exchanger Modeling
Authors: Elhameh Narimani, Mikhail Sorin, Philippe Micheau, Hakim Nesreddine
Abstract:
Numerical models of the heat exchangers in an ejector refrigeration system (ERS) were developed and validated with experimental data. The models were based on the switched heat exchanger model using the moving boundary method and were capable of estimating the zone lengths, the outlet temperatures of both sides, and the heat loads at various experimental points. The developed models were utilized to investigate the influence of the primary flow pressure on the performance of an R245fa ERS in terms of its coefficient of performance (COP) and exergy efficiency. It was illustrated numerically and proved experimentally that increasing the primary flow pressure slightly reduces the COP, while the exergy efficiency goes through a maximum before decreasing.
Keywords: Coefficient of Performance, COP, Ejector Refrigeration System, ERS, exergy efficiency (ηII), heat exchangers modeling, moving boundary method
Procedia PDF Downloads 201
6593 Correction Factors for Soil-Structure Interaction Predicted by Simplified Models: Axisymmetric 3D Model versus Fully 3D Model
Authors: Fu Jia
Abstract:
The effects of soil-structure interaction (SSI) are often studied using axisymmetric three-dimensional (3D) models to avoid the high computational cost of the more realistic, fully 3D models, which require 2-3 orders of magnitude more computer time and storage. This paper analyzes the error and presents correction factors for the system frequency, system damping, and peak amplitude of structural response computed by axisymmetric models embedded in a uniform or layered half-space. The results are compared with those for fully 3D rectangular foundations of different aspect ratios. Correction factors are presented for a range of model parameters, such as fixed-base frequency, structure mass, height and length-to-width ratio, foundation embedment, and soil-layer stiffness and thickness. It is shown that the errors are larger for stiffer, taller and heavier structures, deeper foundations, and deeper soil layers. For example, for a stiff structure like the Millikan Library (NS response; length-to-width ratio 1), the error is 6.5% in system frequency, 49% in system damping, and 180% in peak amplitude. Analysis of a case study shows that the NEHRP-2015 provisions for the reduction of base shear force due to SSI effects may be unsafe for some structures and need revision. The presented correction factor diagrams can be used in practical design and other applications.
Keywords: 3D soil-structure interaction, correction factors for axisymmetric models, length-to-width ratio, NEHRP-2015 provisions for reduction of base shear force, rectangular embedded foundations, SSI system frequency, SSI system damping
Procedia PDF Downloads 266
6592 Modeling of Induced Voltage in Disconnected Grounded Conductor of Three-Phase Power Line
Authors: Misho Matsankov, Stoyan Petrov
Abstract:
The paper presents the methodology and the obtained mathematical models for determining the grounding resistance value of a disconnected conductor in a three-phase power line for which the contact voltage remains safe, taking into account the potentials induced by the non-disconnected phase conductors. The mathematical models were obtained by applying experimental design techniques.
Keywords: contact voltage, experimental design, induced voltage, safety
Procedia PDF Downloads 176
6591 Practical Skill Education for Doctors in Training: Economical and Efficient Methods for Students to Receive Hands-on Experience
Authors: Nathaniel Deboever, Malcolm Breeze, Adrian Sheen
Abstract:
Basic surgical and suturing techniques are a fundamental requirement for all doctors. In order to gain confidence and competence, doctors in training need to receive sufficient teaching and, just as importantly, practice. Young doctors with an apt level of expertise in these simple surgical skills, which are often used in the Emergency Department, can help alleviate some pressure during a busy evening. Unfortunately, learning these skills can be quite difficult during medical school or even during junior doctor years. The aim of this project was to adequately train medical students attending the University of Sydney's Nepean Clinical School through a series of workshops highlighting practical skills, with the hope of further extending this program to junior doctors in the hospital. The sessions introduced basic skills via tutorials and demonstrations and then cemented these proficiencies with practical sessions. In such an endeavor, it is fundamental to employ models that appropriately resemble what students will encounter in the clinical setting. The sustainability of the workshops is similarly important to the continuity of such a program. To address both these challenges, the authors developed models including suturing platforms, knot tying and vessel ligation stations, shave and punch biopsy models, and an ophthalmologic foreign body device. The unique aspect of this work is that we utilized hands-on teaching sessions to address a gap in the doctors-in-training and junior doctor curriculum. Presented through this poster are our approaches to creating models that do not employ animal products and therefore do not necessitate particular facilities or disposal requirements. Covering numerous skills that would be beneficial to all young doctors, these models are easily replicable and affordable. This work allows for countless sessions at low cost, providing enough practice for students to perform these skills confidently, as shown through attendee questionnaires.
Keywords: medical education, surgical models, surgical simulation, surgical skills education
Procedia PDF Downloads 157
6590 Aerodynamic Investigation of Rear Vehicle by Geometry Variations on the Backlight Angle
Authors: Saud Hassan
Abstract:
This paper presents simulations for the prediction of the flow around the backlight angle of a passenger vehicle. The CFD simulations are carried out on different car models. The Ahmed model "bluff body" is used as the standard model to study the aerodynamics of the backlight angle. The paper describes the airflow over the different car models with different backlight angles, and also over the Ahmed model, to determine the trailing vortices associated with a varying backlight angle of a passenger vehicle body. The CFD simulation is carried out with the Ahmed body, a simplified car model mainly used in the automotive industry to investigate the flow over the car body surface. The main goal of the simulation is to study the behavior of the trailing vortices of these models. In this paper, the airflow over slant angles of 0°, 5°, 12.5°, 20°, 30°, and 40° is considered. For smaller rear backlight angles, two-dimensional flow occurs at the rear slant; when the slant angle reaches 30°, on the other hand, the flow becomes three-dimensional. Above this angle, a sudden drop in drag occurs.
Keywords: aerodynamics, Ahmed vehicle, backlight angle, finite element method
Procedia PDF Downloads 781
6589 Recurrent Neural Networks for Complex Survival Models
Authors: Pius Marthin, Nihal Ata Tutkun
Abstract:
Survival analysis has become one of the paramount procedures in the modeling of time-to-event data. When we encounter complex survival problems, the traditional approach remains limited in accounting for the complex correlational structure between the covariates and the outcome due to the strong assumptions that limit the inference and prediction ability of the resulting models. Several studies exist on the deep learning approach to survival modeling; however, its application to complex survival problems still needs improvement. In addition, the existing models do not fully address the complexity of the data structure and are subject to noise and redundant information. In this study, we design a deep learning technique (CmpXRnnSurv_AE) that overcomes the limitations imposed by traditional approaches and addresses the above issues to jointly predict the risk-specific probabilities and the survival function for recurrent events with competing risks. We introduce a component termed Risks Information Weights (RIW) as an attention mechanism to compute the weighted cumulative incidence function (WCIF), and an external auto-encoder (ExternalAE) as a feature selector to extract complex characteristics among the set of covariates responsible for the cause-specific events. We train our model using synthetic and real data sets and employ the appropriate metrics for complex survival models for evaluation. As benchmarks, we selected both traditional and machine learning models, and our model demonstrates better performance across all datasets.
Keywords: cumulative incidence function (CIF), risk information weight (RIW), autoencoders (AE), survival analysis, recurrent events with competing risks, recurrent neural networks (RNN), long short-term memory (LSTM), self-attention, multilayers perceptrons (MLPs)
Procedia PDF Downloads 89
6588 Machine Learning for Classifying Risks of Death and Length of Stay of Patients in Intensive Unit Care Beds
Authors: Itamir de Morais Barroca Filho, Cephas A. S. Barreto, Ramon Malaquias, Cezar Miranda Paula de Souza, Arthur Costa Gorgônio, João C. Xavier-Júnior, Mateus Firmino, Fellipe Matheus Costa Barbosa
Abstract:
Information and Communication Technologies (ICT) in healthcare are crucial for efficiently delivering medical services to patients. These ICTs are also known as e-health and comprise technologies such as electronic record systems, telemedicine systems, and personalized devices for diagnosis. The focus of e-health is to improve the quality of health information, strengthen national health systems, and ensure accessible, high-quality health care for all. All the data gathered by these technologies make it possible to support clinical staff with automated decisions using machine learning. In this context, we collected patient data such as heart rate, oxygen saturation (SpO2), blood pressure, respiration, and others. With these data, we were able to develop machine learning models for patients' risk of death and for estimating the length of stay in ICU beds. This paper presents the methodology for applying machine learning techniques to develop these models. Although we implemented these models on an IoT healthcare platform, helping clinical staff in an ICU, it remains essential to create a robust clinical validation process and to monitor the proposed models.
Keywords: ICT, e-health, machine learning, ICU, healthcare
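A minimal sketch of the kind of classifier the abstract describes, mapping vital-sign features to a death-risk prediction with scikit-learn. The feature set, the synthetic data, and the toy label rule are illustrative assumptions only, not the study's data or model.

```python
# Toy risk-of-death classifier over vital-sign features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# columns: heart rate, SpO2, systolic BP, respiration rate (synthetic)
X = rng.normal([85, 96, 120, 18], [15, 3, 20, 4], size=(500, 4))
y = (X[:, 1] < 93).astype(int)  # toy label: low SpO2 as a proxy for risk

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```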
Procedia PDF Downloads 109
6587 Daily Probability Model of Storm Events in Peninsular Malaysia
Authors: Mohd Aftar Abu Bakar, Noratiqah Mohd Ariff, Abdul Aziz Jemain
Abstract:
Storm Event Analysis (SEA) provides a method to define rainfall events as storms, where each storm has its own amount and duration. By modelling the daily probability of different types of storms, the onset, offset and cycle of rainfall seasons can be determined and investigated. Furthermore, researchers in the field of meteorology will be able to study the dynamical characteristics of rainfall and make predictions for future reference. In this study, four categories of storms (short, intermediate, long and very long storms) are introduced based on the length of the storm duration. Daily probability models of storms are built for these four categories of storms in Peninsular Malaysia. The models are constructed by using the Bernoulli distribution and by applying linear regression on the first Fourier harmonic equation. From the models obtained, it is found that the daily probability of storms in the eastern part of Peninsular Malaysia shows a unimodal pattern, with a high probability of rain beginning at the end of the year and lasting until early the next year. This is very likely due to the Northeast monsoon season, which occurs from November to March every year. Meanwhile, short and intermediate storms in other regions of Peninsular Malaysia experience a bimodal cycle due to the two inter-monsoon seasons. Overall, these models indicate that Peninsular Malaysia can be divided into four distinct regions based on the daily pattern of the probability of various storm events.
Keywords: daily probability model, monsoon seasons, regions, storm events
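A minimal sketch of the described construction: fit the daily storm-occurrence probability with an intercept plus the first Fourier harmonic via linear regression. The occurrence series below is synthetic, not the Malaysian station data.

```python
# Daily storm-occurrence probability via the first Fourier harmonic.
import numpy as np

days = np.arange(365)
true_p = 0.3 + 0.2 * np.cos(2 * np.pi * (days - 330) / 365)  # peaks near year end
occurred = (np.random.default_rng(1).random(365) < true_p).astype(float)

# design matrix: intercept + first harmonic (cosine and sine terms)
X = np.column_stack([np.ones(365),
                     np.cos(2 * np.pi * days / 365),
                     np.sin(2 * np.pi * days / 365)])
coef, *_ = np.linalg.lstsq(X, occurred, rcond=None)
p_hat = np.clip(X @ coef, 0, 1)  # fitted daily probability of a storm

print("coefficients:", coef)
print("peak probability %.2f on day %d" % (p_hat.max(), days[p_hat.argmax()]))
```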
Procedia PDF Downloads 343
6586 Optimizing Production Yield Through Process Parameter Tuning Using Deep Learning Models: A Case Study in Precision Manufacturing
Authors: Tolulope Aremu
Abstract:
This paper is based on the idea of using a deep learning methodology for optimizing production yield by tuning a few key process parameters in a manufacturing environment. The study explores how to maximize production yield and minimize operational costs by utilizing advanced neural network models, specifically Long Short-Term Memory (LSTM) and Convolutional Neural Networks (CNNs). These models were implemented using the Python-based frameworks TensorFlow and Keras. The research targets precision molding processes in which the temperature ranges between 150°C and 220°C, the pressure between 5 and 15 bar, and the material flow rate between 10 and 50 kg/h; these are critical parameters that have a great effect on yield. A dataset of 1 million production cycles over five continuous years was considered, with detailed logs showing the exact parameter settings and yield output. The LSTM model captures time-dependent trends in the production data, while the CNN analyzes the spatial correlations between parameters. The models are designed in a supervised learning manner; an MSE loss function is used, optimized through the Adam optimizer. After a total of 100 training epochs, 95% accuracy was achieved by the models in recommending optimal parameter configurations. Results indicated an increase in production yield of 12% over the traditional RSM and DOE methods. In addition, the error margin was reduced by 8%, giving consistent product quality from the deep learning models. The monetary value was around $2.5 million annually, the cost saved from material waste, energy consumption, and equipment wear resulting from the implementation of optimized process parameters. The system was deployed in an industrial production environment on a hybrid cloud: Microsoft Azure for data storage, with model training and deployment performed on Google Cloud AI. The real-time monitoring of the process and the automatic tuning of parameters depend on this cloud infrastructure. In summary, deep learning models, especially those employing LSTM and CNN, optimize production yield by fine-tuning process parameters. Future research will consider reinforcement learning with a view to further enhancing system autonomy and scalability across various manufacturing sectors.
Keywords: production yield optimization, deep learning, tuning of process parameters, LSTM, CNN, precision manufacturing, TensorFlow, Keras, cloud infrastructure, cost saving
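A minimal Keras sketch of an LSTM regressor of the kind described, mapping windows of (temperature, pressure, flow rate) settings to yield. The window length, layer sizes, and the synthetic data are assumptions for illustration, not the paper's configuration.

```python
# Toy LSTM yield regressor over windows of process settings.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
# 1000 windows of 24 time steps x 3 parameters, drawn within the stated ranges
X = rng.uniform([150, 5, 10], [220, 15, 50], size=(1000, 24, 3))
y = X[:, -1, 0] * 0.3 - X[:, -1, 1] * 2 + X[:, -1, 2] * 0.5  # toy yield signal

model = tf.keras.Sequential([
    tf.keras.Input(shape=(24, 3)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),  # predicted yield
])
model.compile(optimizer="adam", loss="mse")  # MSE loss with Adam, as described
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print(model.predict(X[:2], verbose=0))
```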
Procedia PDF Downloads 29
6585 Facilitating Written Biology Assessment in Large-Enrollment Courses Using Machine Learning
Authors: Luanna B. Prevost, Kelli Carter, Margaurete Romero, Kirsti Martinez
Abstract:
Writing is an essential scientific practice, yet in several countries increasing university science class sizes limit the use of written assessments. Written assessments allow students to demonstrate their learning in their own words and permit the faculty to evaluate students' understanding. However, the time and resources required to grade written assessments prohibit their use in large-enrollment science courses. This study examined the use of machine learning algorithms to automatically analyze student writing and provide timely feedback to faculty about students' writing in biology. Written responses to questions about matter and energy transformation were collected from large-enrollment undergraduate introductory biology classrooms. Responses were analyzed using the LightSide text mining and classification software. Cohen's Kappa was used to measure agreement between the LightSide models and human raters. Predictive models achieved agreement with human coding of 0.7 Cohen's Kappa or greater. The models captured that, when writing about matter-energy transformation at the ecosystem level, students focused primarily on the concepts of heat loss, recycling of matter, and conservation of matter and energy. Models were also produced to capture writing about processes such as decomposition and biochemical cycling. The models created in this study can be used to provide automatic feedback about students' understanding of these concepts to biology faculty who wish to use formative written assessments in larger-enrollment biology classes but do not have the time or personnel for manual grading.
Keywords: machine learning, written assessment, biology education, text mining
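The model-versus-human agreement check can be reproduced in outline with scikit-learn's Cohen's Kappa metric (the study itself used LightSide); the labels below are made up for illustration.

```python
# Agreement between human coding and model coding via Cohen's Kappa.
from sklearn.metrics import cohen_kappa_score

human = ["heat_loss", "matter_recycling", "heat_loss", "conservation", "heat_loss"]
model = ["heat_loss", "matter_recycling", "conservation", "conservation", "heat_loss"]
kappa = cohen_kappa_score(human, model)
print(f"Cohen's Kappa: {kappa:.2f}")  # the study reported agreement of 0.7 or greater
```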
Procedia PDF Downloads 281
6584 Harnessing the Power of Large Language Models in Orthodontics: AI-Generated Insights on Class II and Class III Orthopedic Appliances: A Cross-Sectional Study
Authors: Laiba Amin, Rashna H. Sukhia, Mubassar Fida
Abstract:
Introduction: This study evaluates the accuracy of responses from ChatGPT, Google Bard, and Microsoft Copilot regarding dentofacial orthopedic appliances. As artificial intelligence (AI) increasingly enhances various fields, including healthcare, understanding its reliability in specialized domains like orthodontics becomes crucial. By comparing the accuracy of different AI models, this study aims to shed light on their effectiveness and potential limitations in providing technical insights. Materials and Methods: A total of 110 questions focused on dentofacial orthopedic appliances were posed to each AI model. The responses were then evaluated by five experienced orthodontists using a modified 5-point Likert scale to ensure a thorough assessment of accuracy. This structured approach allowed for consistent and objective rating, facilitating a meaningful comparison between the AI systems. Results: The results revealed that Google Bard demonstrated the highest accuracy at 74%, followed by Microsoft Copilot with an accuracy of 72.2%. In contrast, ChatGPT was found to be the least accurate, achieving only 52.2%. These results highlight significant differences in the performance of the AI models when addressing orthodontic queries. Conclusions: Our study highlights the need for caution when relying on AI for orthodontic insights. The overall accuracy of the three chatbots was 66%, with Google Bard performing best for removable Class II appliances. Microsoft Copilot was more accurate than ChatGPT, which, despite its popularity, was the least accurate. This variability emphasizes the importance of human expertise in interpreting AI-generated information. Further research is necessary to improve the reliability of AI models in specialized healthcare settings.
Keywords: artificial intelligence, large language models, orthodontics, dentofacial orthopaedic appliances, accuracy assessment
Procedia PDF Downloads 6
6583 Dynamic Modeling of the Exchange Rate in Tunisia: Theoretical and Empirical Study
Authors: Chokri Slim
Abstract:
The relative failure of simultaneous equation models in the seventies led researchers to turn to other approaches that take into account the dynamics of economic and financial systems. In this paper, we use an approach based on the vector autoregressive (VAR) model, which has been widely used in recent years. Its popularity is due to its flexible nature and the ease with which it produces models with useful descriptive characteristics; it is also easy to use for testing economic hypotheses. Standard econometric techniques assume that the series studied are stable over time (the stationarity hypothesis). Most economic series do not satisfy this hypothesis, which means that specific techniques must be implemented when one wishes to study the relationships that bind them. This is cointegration, which characterizes non-stationary (integrated) series for which a linear combination is stationary; it will also be presented in this paper. Since the work of Johansen, this approach has generally been presented as part of a multivariate analysis that specifies stable long-term relationships while at the same time analyzing the short-term dynamics of the variables considered. In the empirical part, we apply these concepts to study the dynamics of the exchange rate in Tunisia, which is one of the most important economic policy instruments for a country open to the outside. According to the results of the empirical study using the cointegration method, there is a cointegration relationship between the exchange rate and its determinants. This relationship shows that the variables have a significant influence in determining the exchange rate in Tunisia.
Keywords: stationarity, cointegration, dynamic models, causality, VECM models
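A hedged sketch of the Johansen cointegration test and VECM estimation with statsmodels; the exchange-rate and determinant series below are synthetic stand-ins, not the Tunisian data, and the variable names are illustrative.

```python
# Johansen test and VECM on synthetic cointegrated series.
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import coint_johansen, VECM

rng = np.random.default_rng(0)
common = rng.normal(size=200).cumsum()  # shared stochastic trend (integrated)
data = pd.DataFrame({
    "exchange_rate": common + rng.normal(scale=0.5, size=200),
    "inflation_diff": 0.8 * common + rng.normal(scale=0.5, size=200),
    "interest_diff": -0.5 * common + rng.normal(scale=0.5, size=200),
})

jres = coint_johansen(data, det_order=0, k_ar_diff=1)
print("trace statistics:", jres.lr1)  # compare against jres.cvt critical values

vecm = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="co").fit()
print(vecm.beta)  # estimated long-run cointegration vector
```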
Procedia PDF Downloads 364
6582 Structural Performance of a Bridge Pier on Dubious Deep Foundation
Authors: Víctor Cecilio, Roberto Gómez, J. Alberto Escobar, Héctor Guerrero
Abstract:
The study of the structural behavior of a support/pier of an elevated viaduct in Mexico City is presented. The detection of foundation piles with uncertain integrity prompted a review of possible situations that could jeopardize the structural safety of the pier. The objective of this paper is to evaluate the structural condition of the support, taking into account the type of anomaly reported and the depth at which it is located, the position of the pile with uncertain integrity in the foundation system, the stratigraphy of the surrounding soil, and the geometry and structural characteristics of the pier. To carry out the above, modal spectral and step-by-step dynamic analyses were performed with elastic and inelastic material models. Results were evaluated in accordance with the standards used for the design of the original structural project and with the Construction Regulations for Mexico's Federal District (RCDF-2017, 2017). Comments on the response of the analyzed models are given, and conclusions are presented from a structural point of view.
Keywords: dynamic analysis, inelastic models, dubious foundation, bridge pier
Procedia PDF Downloads 137
6581 Comparative Study of Bending Angle in Laser Forming Process Using Artificial Neural Network and Fuzzy Logic System
Authors: M. Hassani, Y. Hassani, N. Ajudanioskooei, N. N. Benvid
Abstract:
The laser forming process, a non-contact thermal forming process, is widely used for the forming and bending of metallic and non-metallic sheets. In this process, the sheet is bent by laser irradiation along a specific path. One of the most important output parameters in laser forming is the bending angle, which depends on process parameters such as the physical and mechanical properties of the materials, laser power, laser travel speed, and the number of scan passes. In this paper, an Artificial Neural Network and a Fuzzy Logic System were used to predict the bending angle in the laser forming process. The inputs to these models were laser travel speed and laser power. The comparison of the artificial neural network and fuzzy logic models with experimental results shows that both of these models have a high ability to predict bending angles with minimal error.
Keywords: artificial neural network, bending angle, fuzzy logic, laser forming
Procedia PDF Downloads 597
6580 Comfort Sensor Using Fuzzy Logic and Arduino
Authors: Samuel John, S. Sharanya
Abstract:
Automation has become an important part of our lives. It has been used to control home entertainment systems, to change the ambience of rooms for different events, etc. One of the main parameters to control in a smart home is atmospheric comfort. Atmospheric comfort mainly includes temperature and relative humidity. In homes, the desired temperature of different rooms varies from 20 °C to 25 °C, and the desired relative humidity is around 50%; however, both vary widely. Hence, automated measurement of these parameters to ensure comfort assumes significance. To achieve this, a fuzzy logic controller using Arduino was developed using MATLAB. Arduino is an open-source hardware platform based on a 28-pin ATmega328 chip, with 14 digital input/output pins and an inbuilt ADC. It runs on 5 V and 3.3 V power supported by an on-board voltage regulator. Some of the digital pins on the Arduino provide PWM (pulse width modulation) signals, which can be used in different applications. The Arduino platform provides an integrated development environment, which includes support for the C, C++ and Java programming languages. In the present work, a soft sensor was introduced into this system that can indirectly measure temperature and humidity and can be used to process these measurements to ensure comfort. The Sugeno method (in which output variables are functions or singletons/constants, making it more suitable for implementation on microcontrollers) was used for the soft sensor in MATLAB and then interfaced to the Arduino, which is in turn interfaced to the temperature and humidity sensor DHT11. The temperature-humidity sensor DHT11 acts as the sensing element in this system; its capacitive humidity sensor and thermistor support the measurement of temperature and relative humidity of the surroundings and provide a digital signal on the data pin. The comfort sensor developed was able to measure temperature and relative humidity correctly. The comfort percentage was calculated, and accordingly the temperature in the room was controlled. This system was placed in different rooms of the house to ensure that it modifies the comfort values depending on the temperature and relative humidity of the environment. Compared to existing comfort control sensors, this system was found to provide an accurate comfort percentage. Depending on the comfort percentage, the air conditioners and coolers in the room were controlled. The main highlight of the project is its cost efficiency.
Keywords: arduino, DHT11, soft sensor, sugeno
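A minimal zero-order Sugeno (TSK) sketch of the comfort computation in Python, mirroring the weighted-average defuzzification that makes the method microcontroller-friendly. All membership breakpoints and rule consequents are illustrative assumptions, not the system's tuned values.

```python
# Zero-order Sugeno inference: triangular memberships, constant rule outputs.
def tri(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def comfort(temp_c, rh):
    temp_ok = tri(temp_c, 18, 22.5, 27)  # 'comfortable' temperature, peak 22.5 C
    hum_ok = tri(rh, 30, 50, 70)         # 'comfortable' humidity, peak 50 %
    rules = [
        (min(temp_ok, hum_ok), 100.0),         # both comfortable -> 100 %
        (min(temp_ok, 1 - hum_ok), 60.0),      # temperature ok, humidity off -> 60 %
        (min(1 - temp_ok, hum_ok), 50.0),      # humidity ok, temperature off -> 50 %
        (min(1 - temp_ok, 1 - hum_ok), 10.0),  # both off -> 10 %
    ]
    num = sum(w * z for w, z in rules)   # weighted average of rule outputs
    den = sum(w for w, z in rules)
    return num / den if den else 0.0

print(comfort(23, 52))  # near-ideal conditions -> high comfort percentage
```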
Procedia PDF Downloads 312
6579 3D Microscopy, Image Processing, and Analysis of Lymphangiogenesis in Biological Models
Authors: Thomas Louis, Irina Primac, Florent Morfoisse, Tania Durre, Silvia Blacher, Agnes Noel
Abstract:
In vitro and in vivo lymphangiogenesis assays are essential for the identification of potential lymphangiogenic agents and the screening of pharmacological inhibitors. In the present study, we analyse three biological models: in vitro lymphatic endothelial cell spheroids, the in vivo ear sponge assay, and in vivo lymph node colonisation by tumour cells. These assays provide suitable 3D models to test pro- and anti-lymphangiogenic factors or drugs. 3D images were acquired by confocal laser scanning and light sheet fluorescence microscopy. Virtual scan microscopy followed by 3D reconstruction using image alignment methods was also used to obtain 3D images of whole large sponge and ganglion samples. 3D reconstruction, image segmentation, skeletonisation, and other image processing algorithms are described. Fixed and time-lapse imaging techniques are used to analyse the behaviour of lymphatic endothelial cell spheroids. The study of the spatial distribution of cells in spheroid models enables the detection of interactions between cells and the identification of invasion hierarchies and guidance patterns. Global measurements such as the volume, length, and density of lymphatic vessels are taken in both in vivo models. Branching density and tortuosity evaluation are also proposed to determine structural complexity. These properties, combined with vessel spatial distribution, are evaluated in order to determine the extent of lymphangiogenesis. Lymphatic endothelial cell invasion and lymphangiogenesis were evaluated under various experimental conditions. The comparison of these conditions enables the identification of lymphangiogenic agents and a better understanding of their roles in the lymphangiogenesis process. The proposed methodology is validated by its application to the three presented models.
Keywords: 3D image segmentation, 3D image skeletonisation, cell invasion, confocal microscopy, ear sponges, light sheet microscopy, lymph nodes, lymphangiogenesis, spheroids
Procedia PDF Downloads 377
6578 Rainwater Harvesting and Management of Ground Water (Case Study Weather Modification Project in Iran)
Authors: Samaneh Poormohammadi, Farid Golkar, Vahideh Khatibi Sarabi
Abstract:
Climate change and consecutive droughts have increased the importance of using rainwater harvesting methods. One of the methods of rainwater harvesting, in other words of managing atmospheric water resources, is the use of weather modification technologies. Weather modification (also known as weather control) is the act of intentionally manipulating or altering the weather. The most common form of weather modification is cloud seeding, which increases rain or snow, usually for the purpose of increasing the local water supply. Cloud seeding operations have been carried out in central Iran since 1999 with the aim of harvesting rainwater and reducing the effects of drought. In this research, we analyze the results of cloud seeding operations in the Simindasht plain in northern Iran. Rainwater harvesting with the help of cloud seeding technology has been evaluated through its effects on surface water and underground water. For this purpose, two different methods have been used to estimate runoff. The first is the US Soil Conservation Service (SCS) curve number method; the second is the rational method. In order to determine the infiltration rate of underground water, the water balance reports of the country's comprehensive water plan have been used. In this regard, the study areas located in the target area of each province have been extracted by drawing maps of the influence coefficients of each area in the GIS software; the infiltration coefficients themselves were taken from the balance sheet reports of the country's comprehensive water plan. Then, based on the area of each study area, the weighted average of the infiltration coefficients of the study areas located in the target area of each province is taken as the infiltration coefficient of that province. The results show that the amount of water extracted from rain with the help of the cloud seeding projects in Simindasht is as follows: an increase in runoff of 63.9 million cubic meters (with the SCS equation) or 51.2 million cubic meters (with the rational method), and an increase in groundwater resources of 40.5 million cubic meters.
Keywords: rainwater harvesting, ground water, atmospheric water resources, weather modification, cloud seeding
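For reference, the SCS curve number runoff calculation mentioned above, in its metric form; the curve number and rainfall depth here are illustrative, not the project's calibrated inputs.

```python
# SCS curve number method: direct runoff Q from rainfall P, with Ia = 0.2*S.
def scs_runoff_mm(p_mm: float, cn: float) -> float:
    """Direct runoff (mm) from rainfall depth p_mm given curve number cn."""
    s = 25400.0 / cn - 254.0  # potential maximum retention (mm)
    ia = 0.2 * s              # initial abstraction
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm + 0.8 * s)

print(scs_runoff_mm(50.0, 75))  # e.g. 50 mm storm on CN=75 terrain -> ~9.3 mm runoff
```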
Procedia PDF Downloads 104
6577 A GOMS Model for Blind Users' Website Navigation
Authors: Suraina Sulong
Abstract:
Keyboard support is one of the main accessibility requirements of web pages and web applications for blind users. But it is not sufficient that a blind user can perform all actions on the page using the keyboard; in addition, designers of web sites and web applications have to make sure that keyboard users can use their pages with acceptable performance. We present GOMS models for navigation in web pages, with specific tasks given to the blind user to accomplish. These models can be used to construct the user model for an accessible website.
Keywords: GOMS analysis, usability factor, blind user, human computer interaction
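A minimal keystroke-level (KLM-GOMS) sketch of the kind of performance estimate such models support. The operator times are standard KLM values; the operator sequence, and folding screen-reader listening into the M operators, are assumptions for illustration.

```python
# KLM-GOMS: estimate task time by summing standard operator times.
KLM_TIMES = {
    "K": 0.28,  # keystroke (average typist)
    "M": 1.35,  # mental preparation
    "H": 0.40,  # homing hands on the keyboard
}

def estimate_task_time(operators: str) -> float:
    """Sum KLM operator times, e.g. 'MKKKK' = think, then 4 keystrokes."""
    return sum(KLM_TIMES[op] for op in operators)

# e.g. a blind user tabbing through 5 links to reach a target, with
# screen-reader listening folded into the M operators
print(estimate_task_time("M" + "K" * 5 + "M" + "K"), "seconds")
```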
Procedia PDF Downloads 150
6576 Mathematical Models for GMAW and FCAW Welding Processes for Structural Steels Used in the Oil Industry
Authors: Carlos Alberto Carvalho Castro, Nancy Del Ducca Barbedo, Edmilsom Otoni Côrrea
Abstract:
With the increase in oil production and the wide expansion of gas transmission lines, medium and large transport industries have had to adapt to meet the manufacturing demand in this fabrication segment. In this context, two welding processes have been used more extensively: GMAW (Gas Metal Arc Welding) and FCAW (Flux Cored Arc Welding). In this work, welds using these processes were carried out in the flat position on ASTM A-36 carbon steel plates in order to make a comparative evaluation between them concerning mechanical and metallurgical properties. A statistical tool based on technical analysis and design of experiments (DOE), from the Minitab software, was adopted. For these analyses, the voltage, current, and welding speed were varied in both processes. As a result, it was observed that the welds produced by the two processes have different characteristics in relation to metallurgical properties and performance, but they present good weldability and satisfactory mechanical strength, and mathematical models were developed.
Keywords: Flux Cored Arc Welding (FCAW), Gas Metal Arc Welding (GMAW), Design of Experiments (DOE), mathematical models
Procedia PDF Downloads 560
6575 Sentiment Analysis of Fake Health News Using Naive Bayes Classification Models
Authors: Danielle Shackley, Yetunde Folajimi
Abstract:
As more people turn to the internet seeking health-related information, there is more risk of finding false, inaccurate, or dangerous information. Sentiment analysis is a natural language processing technique that assigns polarity scores to text, ranging from positive through neutral to negative. In this research, we evaluate the weight of a sentiment analysis feature added to fake health news classification models. The dataset consists of existing, reliably labeled health article headlines that were supplemented with health information collected about COVID-19 from social media sources. We started with data preprocessing and tested various vectorization methods such as Count and TF-IDF vectorization. We implemented three Naive Bayes classifier models: Bernoulli, Multinomial, and Complement. To test the weight of the sentiment analysis feature on the dataset, we created benchmark Naive Bayes classification models without sentiment analysis, and those same models were reproduced with the feature added. We evaluated using the precision and accuracy scores. The initial Bernoulli model performed with 90% precision and 75.2% accuracy, while the model supplemented with sentiment labels performed with 90.4% precision and stayed constant at 75.2% accuracy. Our results show that the addition of sentiment analysis did not improve model precision by a wide margin; while there was no evidence of improvement in accuracy, we had a 1.9% improvement margin in the precision score with the Complement model. Future expansion of this work could include replicating the experiment process and substituting a deep learning neural network model for Naive Bayes.
Keywords: sentiment analysis, Naive Bayes model, natural language processing, topic analysis, fake health news classification model
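A hedged sketch of the described setup: TF-IDF features with a sentiment score appended as an extra feature, feeding the three Naive Bayes variants via scikit-learn. The headlines, labels, and polarity scores below are made up for illustration.

```python
# TF-IDF + sentiment feature into Bernoulli, Multinomial, and Complement NB.
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import BernoulliNB, MultinomialNB, ComplementNB

headlines = ["Miracle cure erases virus overnight", "CDC updates vaccine guidance",
             "Secret remedy doctors hate", "Study links exercise to heart health"]
labels = [1, 0, 1, 0]  # 1 = fake, 0 = reliable (illustrative)
sentiment = np.array([[0.9], [0.1], [0.8], [0.3]])  # polarity scores in [0, 1]

tfidf = TfidfVectorizer().fit_transform(headlines)
X = hstack([tfidf, csr_matrix(sentiment)])  # append sentiment as an extra column

for nb in (BernoulliNB(), MultinomialNB(), ComplementNB()):
    nb.fit(X, labels)
    print(type(nb).__name__, nb.predict(X))
```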
Procedia PDF Downloads 97
6574 ID + PD: Training Instructional Designers to Foster and Facilitate Learning Communities in Digital Spaces
Authors: Belkis L. Cabrera
Abstract:
Contemporary technological innovations have reshaped possibility, interaction, communication, engagement, education, and training. Indeed, today, a high-quality technology-enhanced learning experience can be transformative as much for the learner as for the educator-trainer. As innovative technologies continue to facilitate, support, foster, and enhance collaboration, problem-solving, creativity, adaptiveness, multidisciplinarity, and communication, the field of instructional design (ID) also continues to develop and expand. Shifting its focus from media to the systematic design of instruction, or rather from the gadgets and devices themselves to the theories, models, and impact of implementing educational technology, the evolution of ID marks a restructuring of the teaching, learning, and training paradigms. However, with all of its promise, this latter component of ID remains underdeveloped. The majority of ID models are crafted around and guided by learning theories, and, therefore, most models are constructed around student and educator roles rather than trainer roles. Thus, when these models or systems are employed for training purposes, they usually have to be re-fitted, tweaked, and stretched to meet training needs. This paper is concerned with the training or professional development (PD) facet of instructional design and with how ID models built on teacher-to-teacher interaction and dialogue can support the creation of professional learning communities (PLCs) or communities of practice (CoPs), which can augment learning and PD experiences for all. Just as technology is changing the face of education, so too can it change the face of PD within the educational realm. This paper not only provides a new ID model but, using innovative technologies such as Padlet and Thinkbinder, also presents a concrete example of how a traditional body-to-body, brick-and-mortar learning community can be transferred and transformed into the online context.
Keywords: communities of practice, e-learning, educational reform, instructional design, professional development, professional learning communities, technology, training
Procedia PDF Downloads 340
6573 Adding a Degree of Freedom to Opinion Dynamics Models
Authors: Dino Carpentras, Alejandro Dinkelberg, Michael Quayle
Abstract:
Within agent-based modeling, opinion dynamics is the field that focuses on modeling people's opinions. In this prolific field, most of the literature is dedicated to the exploration of two 'degrees of freedom' and how they impact the model's properties (e.g., the average final opinion, the number of final clusters, etc.). These degrees of freedom are (1) the interaction rule, which determines how agents update their own opinions, and (2) the network topology, which defines the possible interactions among agents. In this work, we show that a third degree of freedom exists. It can be used to change a model's output by up to 100% of its initial value or to transform two models (both from the literature) into each other. Since opinion dynamics models are representations of the real world, it is fundamental to understand how people's opinions can be measured. Even for abstract models (i.e., not intended for fitting real-world data), it is important to understand whether the way of numerically representing opinions is unique and, if this is not the case, how the model dynamics would change under different representations. The process of measuring opinions is non-trivial, as it requires transforming a real-world opinion (e.g., supporting most of the liberal ideals) into a number. Such a process is usually not discussed in the opinion dynamics literature, but it has been intensively studied in a subfield of psychology called psychometrics. In psychometrics, opinion scales can be converted into each other, similarly to how meters can be converted to feet. Indeed, psychometrics routinely uses both linear and non-linear transformations of opinion scales. Here, we analyze how such transformations affect opinion dynamics models. We analyze this effect by using mathematical modeling and then validate our analysis with agent-based simulations. Firstly, we study the case of perfect scales. In this way, we show that scale transformations affect the model's dynamics up to a qualitative level. This means that if two researchers use the same opinion dynamics model and even the same dataset, they could make totally different predictions just because they followed different renormalization processes. A similar situation appears if two different scales are used to measure opinions even in the same population. This effect may be as strong as producing an uncertainty of 100% in the simulation's output (i.e., all results are possible). Still, by using perfect scales, we show that scale transformations can be used to perfectly transform one model into another. We test this using two models from the standard literature. Finally, we test the effect of scale transformation in the case of finite precision using a 7-point Likert scale. In this way, we show how a relatively small scale transformation introduces changes both at the qualitative level (i.e., the most shared opinion at the end of the simulation) and in the number of opinion clusters. Thus, scale transformation appears to be a third degree of freedom of opinion dynamics models. This result deeply impacts both theoretical research on models' properties and the application of models to real-world data.
Keywords: degrees of freedom, empirical validation, opinion scale, opinion dynamics
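A minimal sketch of the core idea: run the same bounded-confidence (Deffuant-style) dynamics on raw opinions and on a monotone nonlinear rescaling of the same opinions, then compare outcomes. The particular rescaling function and all parameter values are arbitrary illustrative choices, not the paper's setup.

```python
# Bounded-confidence dynamics under an opinion-scale transformation.
import numpy as np

def deffuant(opinions, eps=0.2, mu=0.5, steps=20000, seed=0):
    x = opinions.copy()
    rng = np.random.default_rng(seed)
    for _ in range(steps):
        i, j = rng.integers(len(x), size=2)
        if abs(x[i] - x[j]) < eps:  # interact only within the confidence bound
            x[i], x[j] = x[i] + mu * (x[j] - x[i]), x[j] + mu * (x[i] - x[j])
    return x

rng = np.random.default_rng(1)
raw = rng.random(200)        # opinions as measured on one scale
rescaled = raw ** 2          # the same opinions under a nonlinear rescaling

final_raw = deffuant(raw)
final_rescaled = deffuant(rescaled)
# cluster counts can differ between the two runs even though the underlying
# opinions are identical, illustrating the 'third degree of freedom'
print(np.unique(final_raw.round(2)).size, np.unique(final_rescaled.round(2)).size)
```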
Procedia PDF Downloads 119