Search results for: models error comparison
10594 Epigenetic Drugs for Major Depressive Disorder: A Critical Appraisal of Available Studies
Authors: Aniket Kumar, Jacob Peedicayil
Abstract:
Major depressive disorder (MDD) is a common and important psychiatric disorder. Several clinical features of MDD suggest an epigenetic basis for its pathogenesis. Since epigenetics (heritable changes in gene expression not involving changes in DNA sequence) may underlie the pathogenesis of MDD, epigenetic drugs such as DNA methyltransferase inhibitors (DNMTi) and histone deacetylase inhibitors (HDACi) may be useful for treating MDD. The available literature indexed in PubMed on preclinical drug trials of epigenetic drugs for the treatment of MDD was investigated. The search terms used were ‘depression’ or ‘depressive’ and ‘HDACi’ or ‘DNMTi’. Three preclinical trials using HDACi and three using DNMTi for the treatment of MDD were found. All the trials were conducted on rodents (mice or rats). The animal models of depression that were used were the learned-helplessness model, the forced swim test, the open field test, and the tail suspension test. One study used a genetic rat model of depression (the Flinders Sensitive Line). The HDACi tested were sodium butyrate, compound 60 (Cpd-60), and valproic acid. The DNMTi tested were 5-azacytidine and decitabine. All three preclinical trials using HDACi showed an antidepressant effect in animal models of depression, as did all three trials using DNMTi. Thus, epigenetic drugs, namely HDACi and DNMTi, may prove to be useful in the treatment of MDD and merit further investigation for the treatment of this disorder.
Keywords: DNA methylation, drug discovery, epigenetics, major depressive disorder
Procedia PDF Downloads 188
10593 A Biomechanical Model for the Idiopathic Scoliosis Using the Antalgic-Trak Technology
Authors: Joao Fialho
Abstract:
The mathematical modelling of idiopathic scoliosis has been studied throughout the years. The models presented in those papers are based on the orthotic stabilization of idiopathic scoliosis, in which a transversal force is applied to the human spine in a continuous manner. When considering the ATT (Antalgic-Trak Technology) device, the existing models cannot be used, as the forces applied are no longer transversal nor applied continuously; in this device, vertical traction is applied. In this study, we propose to model idiopathic scoliosis using the ATT (Antalgic-Trak Technology) device and, with the parameters obtained from the mathematical modelling, set up a case-by-case individualized therapy plan for each patient.
Keywords: idiopathic scoliosis, mathematical modelling, human spine, Antalgic-Trak technology
Procedia PDF Downloads 269
10592 On the Use of Analytical Performance Models to Design a High-Performance Active Queue Management Scheme
Authors: Shahram Jamali, Samira Hamed
Abstract:
One of the open issues in the Random Early Detection (RED) algorithm is how to set its parameters to reach high performance under the dynamic conditions of the network. Although the original RED uses fixed values for its parameters, this paper follows a model-based approach to upgrade the performance of the RED algorithm. It models the router's queue behavior using a Markov model and uses this model to predict future conditions of the queue. This prediction helps the proposed algorithm to make some tunings to RED's parameters and provide efficiency and better performance. Extensive packet-level simulations confirm that the proposed algorithm, called Markov-RED, outperforms RED and FARED in terms of queue stability, bottleneck utilization, and dropped packet count.
Keywords: active queue management, RED, Markov model, random early detection algorithm
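The abstract does not give Markov-RED's equations. As a rough illustration only, the sketch below combines the classic RED drop-probability rule with a hypothetical one-step Markov prediction of the average queue length that is then used to retune RED's maximum drop probability; all names, transition probabilities, and thresholds are assumptions, not the authors' method.

```python
import numpy as np

def red_drop_probability(avg_q, min_th, max_th, max_p):
    """Classic RED: drop probability grows linearly between the two thresholds."""
    if avg_q < min_th:
        return 0.0
    if avg_q >= max_th:
        return 1.0
    return max_p * (avg_q - min_th) / (max_th - min_th)

# Hypothetical discrete-time Markov chain over coarse queue-occupancy states
# (low / medium / high), estimated from recent queue samples.
P = np.array([[0.7, 0.3, 0.0],   # from "low"
              [0.2, 0.5, 0.3],   # from "medium"
              [0.0, 0.4, 0.6]])  # from "high"
state_levels = np.array([10.0, 40.0, 80.0])  # representative queue lengths (packets)

def predicted_avg_queue(current_state):
    """One-step-ahead expected queue length under the Markov model."""
    return P[current_state] @ state_levels

# Illustrative retuning rule (assumed): drop more aggressively if congestion is predicted.
current_state = 1  # "medium"
q_hat = predicted_avg_queue(current_state)
max_p = 0.1 if q_hat < 50 else 0.2
print(red_drop_probability(q_hat, min_th=20, max_th=80, max_p=max_p))
```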
Procedia PDF Downloads 539
10591 Ontology Expansion via Synthetic Dataset Generation and Transformer-Based Concept Extraction
Authors: Andrey Khalov
Abstract:
The rapid proliferation of unstructured data in IT infrastructure management demands innovative approaches for extracting actionable knowledge. This paper presents a framework for ontology-based knowledge extraction that combines relational graph neural networks (R-GNN) with large language models (LLMs). The proposed method leverages the DOLCE framework as the foundational ontology, extending it with concepts from ITSMO for domain-specific applications in IT service management and outsourcing. A key component of this research is the use of transformer-based models, such as DeBERTa-v3-large, for automatic entity and relationship extraction from unstructured texts. Furthermore, the paper explores how transfer learning techniques can be applied to fine-tune large language models (LLaMA) to generate synthetic datasets that improve precision in BERT-based entity recognition and ontology alignment. The resulting IT Ontology (ITO) serves as a comprehensive knowledge base that integrates domain-specific insights from ITIL processes, enabling more efficient decision-making. Experimental results demonstrate significant improvements in knowledge extraction and relationship mapping, offering a cutting-edge solution for enhancing cognitive computing in IT service environments.
Keywords: ontology expansion, synthetic dataset, transformer fine-tuning, concept extraction, DOLCE, BERT, taxonomy, LLM, NER
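As a minimal illustration of the transformer-based entity extraction step, the sketch below runs a generic, publicly available BERT NER model through the Hugging Face pipeline API. The paper's fine-tuned DeBERTa/LLaMA checkpoints are not public, so the model name here is a stand-in, and the example text is invented.

```python
from transformers import pipeline

# Public BERT-based NER checkpoint used as a stand-in for the paper's model.
ner = pipeline("token-classification",
               model="dslim/bert-base-NER",
               aggregation_strategy="simple")

text = "The incident was escalated from the ServiceNow queue to the Level-2 team."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```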
Procedia PDF Downloads 14
10590 Determination of Optimum Water Consumptive Using Deficit Irrigation Model for Barley: A Case Study in Arak, Iran
Authors: Mohsen Najarchi
Abstract:
This research was carried out in five fields (5-15 hectares) in Arak, located in the center of Iran, to determine the optimum level of water consumed for barley in four growth stages (vegetative, yield formation, flowering, and ripening). Actual evapotranspiration was calculated using the water requirement measured in the fields. Five levels of water requirement, equal to 50, 60, 70, 80, and 90 percent, formed the treatments. Linear programming was used to determine the optimum level of water requirement. The study showed that 60 percent of the water requirement (40 percent deficit irrigation) was the optimum level of irrigation for barley in the four growth stages. Comparison of all the treatments indicated above with the normal condition (100% water requirement) shows an increase in water use efficiency. Although the 40% deficit irrigation treatment led to a 38% decrease in yield, the net benefit increased by 11.37%. Furthermore, in comparison with the normal condition, 70% of the water requirement increased water use efficiency by 30%.
Keywords: optimum, deficit irrigation, water use efficiency, evapotranspiration
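A minimal sketch of the linear-programming step described above, deciding how much area to plant at each irrigation level under water and land constraints. All benefit, water-use, and constraint numbers are invented for illustration and are not taken from the paper.

```python
from scipy.optimize import linprog

# Hypothetical numbers: net benefit per hectare (USD) for each irrigation level
# and the corresponding seasonal water use (m^3/ha).
levels = ["50%", "60%", "70%", "80%", "90%"]
benefit = [520, 610, 590, 560, 530]     # net benefit of planting 1 ha at each level
water = [2500, 3000, 3500, 4000, 4500]  # water consumed per hectare

# Decide hectares planted at each level; linprog minimizes, so negate the benefits.
res = linprog(
    c=[-b for b in benefit],
    A_ub=[water, [1] * 5],              # total water and total area constraints
    b_ub=[45_000, 15],                  # assumed: 45,000 m^3 of water, 15 ha of land
    bounds=[(0, None)] * 5,
    method="highs",
)
for name, ha in zip(levels, res.x):
    print(f"{name} treatment: {ha:.2f} ha")
```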
Procedia PDF Downloads 396
10589 Computational Fluid Dynamics Simulation and Comparison of Flow through Mechanical Heart Valve Using Newtonian and Non-Newtonian Fluid
Authors: D. Šedivý, S. Fialová
Abstract:
The main purpose of this study is to show the differences between numerical solutions of the flow through an artificial heart valve using a Newtonian or a non-Newtonian fluid. The simulation was carried out by a commercial computational fluid dynamics (CFD) package based on the finite-volume method. An aortic bileaflet heart valve (Sorin Bicarbon) was used as the pattern for the model of a real heart valve replacement. Computed tomography (CT) was used to obtain the accurate parameters of the valve. Data from the CT were transferred into a commercial 3D design package, where the model for CFD was made. The Carreau rheology model was applied for the non-Newtonian fluid. Physiological data of the cardiac cycle were used as boundary conditions. The outputs taken were the leaflet excursion from opening to closure and the fluid dynamics through the valve. This study also includes experimental measurement of the pressure fields in the vicinity of the valve to verify the numerical outputs. The results show a favorable comparison between the computational solutions of the flow through the mechanical heart valve using Newtonian and non-Newtonian fluids.
Keywords: computational modeling, dynamic mesh, mechanical heart valve, non-Newtonian fluid
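The abstract names the Carreau rheology model without stating it. For reference, the standard Carreau law relates apparent viscosity to shear rate through the zero-shear and infinite-shear viscosities, a relaxation time, and a power index:

```latex
\mu(\dot{\gamma}) = \mu_{\infty} + \left(\mu_{0} - \mu_{\infty}\right)
\left[1 + (\lambda \dot{\gamma})^{2}\right]^{\frac{n-1}{2}}
```

Commonly cited values for blood (not necessarily those used in this study) are \(\mu_{0} \approx 0.056\) Pa·s, \(\mu_{\infty} \approx 0.00345\) Pa·s, \(\lambda \approx 3.313\) s, and \(n \approx 0.3568\).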
Procedia PDF Downloads 386
10588 Downscaling GRACE Gravity Models Using Spectral Combination Techniques for Terrestrial Water Storage and Groundwater Storage Estimation
Authors: Farzam Fatolazadeh, Kalifa Goita, Mehdi Eshagh, Shusen Wang
Abstract:
The Gravity Recovery and Climate Experiment (GRACE) is a satellite mission with twin satellites for the precise determination of spatial and temporal variations in the Earth’s gravity field. The products of this mission are monthly global gravity models containing the spherical harmonic coefficients and their errors. These GRACE models can be used for estimating terrestrial water storage (TWS) variations across the globe at large scales, thereby offering an opportunity for surface and groundwater storage (GWS) assessments. Yet, the ability of GRACE to monitor changes at smaller scales is too limited for local water management authorities. This is largely due to the low spatial and temporal resolutions of its models (~200,000 km2 and one month, respectively). High-resolution GRACE data products would substantially enrich the information that is needed by local-scale decision-makers while offering data for regions that lack adequate in situ monitoring networks, including northern parts of Canada. Such products could eventually be obtained through downscaling. In this study, we extended the spectral combination theory to simultaneously downscale GRACE spatially, from its coarse 3° resolution to 0.25°, and temporally, from monthly to daily resolution. This method combines the monthly gravity field solution of GRACE and daily hydrological model products in the form of both low- and high-frequency signals to produce high spatiotemporal resolution TWSA and GWSA products. The main contribution and originality of this study are to comprehensively and simultaneously consider GRACE and hydrological variables and their uncertainties to form the estimator in the spectral domain. It is therefore expected that the downscaled products will reach acceptable accuracy.
Keywords: GRACE satellite, groundwater storage, spectral combination, terrestrial water storage
Procedia PDF Downloads 83
10587 The Factors Affecting the Use of Massive Open Online Courses in Blended Learning by Lecturers in Universities
Authors: Taghreed Alghamdi, Wendy Hall, David Millard
Abstract:
Massive Open Online Courses (MOOCs) have recently gained widespread interest in the academic world, prompting discussion of a number of issues. One of these issues is using MOOCs for teaching and learning in higher education by integrating MOOC content with traditional face-to-face activities in a blended format, called blended MOOCs (bMOOCs), which is intended not to replace traditional learning but to enhance students’ learning. Most research on MOOCs has focused on students’ perceptions and institutional threats, whereas there is a lack of published research on academics’ experiences and practices. Thus, the first aim of the study is to develop a classification of blended MOOC models by conducting a systematic literature review, classifying 19 different case studies, and identifying the broad types of bMOOC models, namely the Supplementary Model and the Integrated Model. The analysis phase will therefore emphasize these different types of bMOOC models in terms of the adoption of MOOCs by lecturers. The second aim of the study is to improve the understanding of lecturers’ acceptance of bMOOCs by investigating the factors that influence academics’ acceptance of using MOOCs in traditional learning, through an online survey distributed to lecturers who participate in MOOC platforms. These factors can help institutions encourage their lecturers to integrate MOOCs with their traditional courses in universities.
Keywords: acceptance, blended learning, blended MOOCs, higher education, lecturers, MOOCs, professors
Procedia PDF Downloads 131
10586 Assessment of Pre-Processing Influence on Near-Infrared Spectra for Predicting the Mechanical Properties of Wood
Authors: Aasheesh Raturi, Vimal Kothiyal, P. D. Semalty
Abstract:
We studied the mechanical properties of Eucalyptus tereticornis using FT-NIR spectroscopy. Firstly, spectra were pre-processed to eliminate useless information. Then, a prediction model was constructed by partial least squares regression. To study the influence of pre-processing on the prediction of mechanical properties in NIR analysis of wood samples, we applied various pre-treatment methods such as straight-line subtraction, constant offset elimination, vector normalization, min-max normalization, multiplicative scattering correction, first derivative, second derivative, and their combinations, such as first derivative + straight-line subtraction, first derivative + vector normalization, and first derivative + multiplicative scattering correction. For each combination of pre-processing method and NIR region, the RMSECV, RMSEP, and optimum number of factors (rank) were obtained during the optimization process of model development. More than 350 combinations were evaluated during this process. More than one pre-processing method gave good calibration/cross-validation and prediction/test models, but only the best calibration/cross-validation and prediction/test models are reported here. The results show that one can safely use the NIR region between 4000 and 7500 cm⁻¹ with straight-line subtraction, constant offset elimination, first derivative, or second derivative pre-processing, which were found to be the most appropriate for model development.
Keywords: FT-NIR, mechanical properties, pre-processing, PLS
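As an illustration only (the paper's data and settings are not given), a minimal scikit-learn sketch of one such pipeline, first-derivative pre-processing via a Savitzky-Golay filter followed by PLS regression with cross-validation, might look like the following; all array shapes and parameter values are assumptions.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

# Dummy data standing in for FT-NIR spectra and a mechanical property.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 500))   # 60 samples x 500 wavenumber channels
y = rng.normal(size=60)          # e.g. modulus of rupture

# First-derivative pre-processing (Savitzky-Golay, window 11, 2nd-order polynomial).
X_d1 = savgol_filter(X, window_length=11, polyorder=2, deriv=1, axis=1)

# PLS model; the optimum rank would be chosen by minimizing RMSECV.
pls = PLSRegression(n_components=6)
scores = cross_val_score(pls, X_d1, y, cv=5, scoring="neg_root_mean_squared_error")
print("RMSECV ~", -scores.mean())
```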
Procedia PDF Downloads 362
10585 Economic Development Impacts of Connected and Automated Vehicles (CAV)
Authors: Rimon Rafiah
Abstract:
This paper will present a combination of two seemingly unrelated models: one for estimating the economic development impacts of transportation investment, and another for increasing CAV penetration in order to reduce congestion. Measuring the economic development impacts of transportation investments is becoming more recognized around the world. Examples include the UK’s Wider Economic Benefits (WEB) model, Economic Impact Assessments in the USA, various input-output models, and additional models around the world. The economic impact model used here is based on WEB and on the following premise: investments in transportation will reduce the cost of personal travel, enabling firms to be more competitive, creating additional throughput (the same road allows more people to travel), and reducing the cost of workers' travel to a new workplace. This reduction in travel costs was estimated in out-of-pocket terms in a given localized area and was then translated into additional employment based on regional labor supply elasticity. This additional employment was conservatively assumed to be at minimum wage levels, translated into GDP terms, and from there into direct taxation (i.e., an increase in tax taken by the government). The CAV model is based on economic principles such as CAV usage, supply, and demand. Usage of CAVs can increase capacity by a variety of means: increased automation (known as Level I through Level IV) and also increased penetration and usage, which has been predicted to reach 50% by 2030 according to several forecasts, with possible full conversion by 2045-2050. Several countries have passed policies and/or legislation ending sales of gasoline-powered vehicles starting in 2030 or later. Supply was measured via the increased capacity of given infrastructure as a function of both CAV penetration and the implemented technologies. The CAV model, as implemented in the USA, has shown significant savings in travel time and also in vehicle operating costs, which can be translated into economic development impacts in terms of job creation, GDP growth, and salaries. The models have policy implications and can be adapted for use in Japan as well.
Keywords: CAV, economic development, WEB, transport economics
Procedia PDF Downloads 74
10584 AutoML: Comprehensive Review and Application to Engineering Datasets
Authors: Parsa Mahdavi, M. Amin Hariri-Ardebili
Abstract:
The development of accurate machine learning and deep learning models traditionally demands hands-on expertise and a solid background to fine-tune hyperparameters. With the continuous expansion of datasets in various scientific and engineering domains, researchers increasingly turn to machine learning methods to unveil hidden insights that may elude classic regression techniques. This surge in adoption raises concerns about the adequacy of the resultant meta-models and, consequently, the interpretation of the findings. In response to these challenges, automated machine learning (AutoML) emerges as a promising solution, aiming to construct machine learning models with minimal intervention or guidance from human experts. AutoML encompasses crucial stages such as data preparation, feature engineering, hyperparameter optimization, and neural architecture search. This paper provides a comprehensive overview of the principles underpinning AutoML, surveying several widely used AutoML platforms. Additionally, the paper offers a glimpse into the application of AutoML to various engineering datasets. By comparing these results with those obtained through classical machine learning methods, the paper quantifies the uncertainties inherent in the application of a single ML model versus the holistic approach provided by AutoML. These examples showcase the efficacy of AutoML in extracting meaningful patterns and insights, emphasizing its potential to revolutionize the way we approach and analyze complex datasets.
Keywords: automated machine learning, uncertainty, engineering dataset, regression
Procedia PDF Downloads 61
10583 The Different Improvement of Numerical Magnitude and Spatial Representation of Numbers to Symbolic Approximate Arithmetic: A Training Study of Preschoolers
Abstract:
Spatial representation of numbers and numerical magnitude are important for preschoolers’ mathematical ability. The mental number line, a typical index for measuring the spatial representation of numbers, and numerical comparison are both clearly related to arithmetic. However, they seem to rely on different mechanisms and probably influence arithmetic through different mechanisms. In line with this idea, preschool children were trained with two tasks to investigate which one is more important for approximate arithmetic. The training of numerical processing and of number line estimation both proved to be effective: each improved the ability of approximate arithmetic. When the difficulty of approximate arithmetic was taken into account, performance in the number line training group was not significantly different among the three levels, whereas the two harder levels reached significance in the numerical comparison training group. Thus, compared with spatial representation ability, symbolic approximate arithmetic relies more on numerical magnitude. Educational implications of the study are discussed.
Keywords: approximate arithmetic, mental number line, numerical magnitude, preschooler
Procedia PDF Downloads 252
10582 Numerical Simulation of Different Enhanced Oil Recovery (EOR) Scenarios on a Volatile Oil Reservoir
Authors: Soheil Tavakolpour
Abstract:
Enhanced Oil Recovery (EOR) can be considered an essential step in a reservoir's life. Different kinds of EOR methods are available, but the suitable EOR method depends on reservoir properties, such as rock and fluid properties. In this paper, we nominated the fifth SPE Comparative Solution Project (CSP) for testing different scenarios. We used seven EOR scenarios for this reservoir and simulated them for 10 years following 2 years of production without any injection. The first scenario is waterflooding for the whole 10-year period. The second scenario is gas injection for ten years. The third scenario is Water-Alternating-Gas (WAG). In the next scenario, water was injected for 4 years before starting WAG injection for the following 6 years. In the fifth scenario, water was injected for 4 years after 6 years of WAG injection. In the sixth and last scenarios, everything is similar to the fourth and fifth scenarios, but gas is injected instead of water. Results show that the fourth scenario was the most efficient over the 10 years of EOR, but it resulted in very high water production. The fifth scenario was also efficient, with little water production in comparison to the fourth scenario. Gas injection was not economically attractive: in addition to high gas production, it produced less oil than the other scenarios.
Keywords: WAG, SPE’s comparative solution projects, numerical simulation, EOR scenarios
Procedia PDF Downloads 434
10581 Regularization of Gene Regulatory Networks Perturbed by White Noise
Authors: Ramazan I. Kadiev, Arcady Ponosov
Abstract:
Mathematical models of gene regulatory networks can in many cases be described by ordinary differential equations with switching nonlinearities, where the initial value problem is ill-posed. Several regularization methods are known in the case of deterministic networks, but the presence of stochastic noise leads to several technical difficulties. In the presentation, it is proposed to apply the methods of stochastic singular perturbation theory going back to Yu. Kabanov and Yu. Pergamentshchikov. This approach is used to regularize the above ill-posed problem, which, e.g., makes it possible to design stable numerical schemes. Several examples are provided in the presentation, which support the efficiency of the suggested analysis. The method can also be of interest in other fields of biomathematics, where differential equations contain switchings, e.g., in neural field models.
Keywords: ill-posed problems, singular perturbation analysis, stochastic differential equations, switching nonlinearities
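The abstract does not reproduce the equations. As standard background for this literature (an assumption about the setting, not the authors' exact formulation), gene networks with switching nonlinearities are often written with steep sigmoid response functions that regularize an underlying step function:

```latex
\dot{x}_i = F_i(\Sigma_1,\dots,\Sigma_m) - G_i(\Sigma_1,\dots,\Sigma_m)\,x_i,
\qquad
\Sigma_j = \frac{x_{k(j)}^{1/q}}{x_{k(j)}^{1/q} + \theta_j^{1/q}},
```

where the production and degradation rates \(F_i, G_i\) depend on the sigmoids \(\Sigma_j\), and the limit \(q \to 0^{+}\) recovers the discontinuous step response (\(\Sigma = 0\) for \(x < \theta\), \(\Sigma = 1\) for \(x > \theta\)). Singular perturbation analysis studies the solutions near the thresholds as \(q \to 0\).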
Procedia PDF Downloads 194
10580 Improving Lane Detection for Autonomous Vehicles Using Deep Transfer Learning
Authors: Richard O’Riordan, Saritha Unnikrishnan
Abstract:
Autonomous Vehicles (AVs) are incorporating an increasing number of ADAS features, including automated lane-keeping systems. In recent years, many research papers on lane detection algorithms have been published, ranging from computer vision techniques to deep learning methods. The transition from the lower levels of autonomy defined in the SAE framework to higher autonomy levels requires increasingly complex models and algorithms that must be highly reliable in their operation and functionality, as these algorithms have no room for error when operating at high levels of autonomy. The current research details existing computer vision and deep learning algorithms, their methodologies, and their individual results, but also the challenges the algorithms face, the resources they need to operate, and the shortcomings they experience when detecting lanes in certain weather and lighting conditions. This paper will explore these shortcomings and attempt to implement a lane detection algorithm that could be used to improve AV lane detection systems. This paper uses a pre-trained LaneNet model to detect lane or non-lane pixels using binary segmentation as the base detection method, first on the existing BDD100K dataset and then on a custom dataset generated locally. The selected roads will be modern, well-laid roads with up-to-date infrastructure and lane markings, while the second road network will be an older road with infrastructure and lane markings reflecting the road network's age. The performance of the proposed method will be evaluated on the custom dataset and compared to its performance on the BDD100K dataset. In summary, this paper will use transfer learning to provide a fast and robust lane detection algorithm that can handle various road conditions and provide accurate lane detection.
Keywords: ADAS, autonomous vehicles, deep learning, LaneNet, lane detection
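A minimal sketch of the transfer-learning recipe the abstract describes: freeze a pre-trained backbone and fine-tune a binary (lane / non-lane) segmentation head on the custom dataset. LaneNet itself is not shipped with torchvision, so torchvision's FCN-ResNet50 stands in here; all shapes and hyperparameters are assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models.segmentation import fcn_resnet50

# Stand-in for LaneNet: FCN-ResNet50 with the classifier's final layer
# replaced for single-channel (lane / non-lane) binary segmentation.
model = fcn_resnet50(weights="DEFAULT")
model.classifier[4] = nn.Conv2d(512, 1, kernel_size=1)

# Freeze the pre-trained backbone; fine-tune only the new head on the custom set.
for p in model.backbone.parameters():
    p.requires_grad = False

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)

images = torch.randn(2, 3, 360, 640)                    # dummy batch of road frames
masks = torch.randint(0, 2, (2, 1, 360, 640)).float()   # dummy lane masks

logits = model(images)["out"]                           # (2, 1, 360, 640)
loss = criterion(logits, masks)
loss.backward()
optimizer.step()
print(float(loss))
```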
Procedia PDF Downloads 104
10579 Modeling Binomial Dependent Distribution of the Values: Synthesis Tables of Probabilities of Errors of the First and Second Kind of Biometrics-Neural Network Authentication System
Authors: B. S.Akhmetov, S. T. Akhmetova, D. N. Nadeyev, V. Yu. Yegorov, V. V. Smogoonov
Abstract:
The probabilities of errors of the first and second kind are estimated for non-ideal biometrics-neural converters with 256 outputs, and nomograms of the error probabilities for 'own' and 'alien' inputs are constructed from the mathematical expectation and standard deviation of the normalized Hamming distance measures.
Keywords: modeling, errors, probability, biometrics, neural network, authentication
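The abstract is only a fragment. As general background (not the authors' derivation): if the normalized Hamming distance between a presented code and the stored template is approximately normal, with mean and standard deviation \((m, s)\) for 'own' inputs and \((m', s')\) for 'alien' inputs, the two error probabilities for a decision threshold \(t\) are

```latex
P_1(t) = \Pr[\text{reject own}] = 1 - \Phi\!\left(\frac{t - m}{s}\right),
\qquad
P_2(t) = \Pr[\text{accept alien}] = \Phi\!\left(\frac{t - m'}{s'}\right),
```

where \(\Phi\) is the standard normal cumulative distribution function. Nomograms of the kind the abstract mentions plot these probabilities over \((m, s)\) and \((m', s')\).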
Procedia PDF Downloads 482
10578 Working Capital Management and Profitability of UK Firms: A Contingency Theory Approach
Authors: Ishmael Tingbani
Abstract:
This paper adopts a contingency theory approach to investigate the relationship between working capital management and profitability, using data on 225 British firms listed on the London Stock Exchange for the period 2001-2011. The paper employs panel data analysis of a series of interactive models to estimate this relationship. The findings of the study confirm the relevance of contingency theory. Evidence from the study suggests that the impact of working capital management on profitability varies and is constrained by the organizational contingencies (environment, resources, and management factors) of the firm. These findings have implications for a more balanced and nuanced view of working capital management policy for policy-makers.
Keywords: working capital management, profitability, contingency theory approach, interactive models
Procedia PDF Downloads 347
10577 Experimental Parameters’ Effects on the Electrical Discharge Machining Performances (µEDM)
Authors: Asmae Tafraouti, Yasmina Layouni, Pascal Kleimann
Abstract:
The growing market for Microsystems (MST) and Micro-Electromechanical Systems (MEMS) is driving the search for manufacturing techniques that are alternatives to microelectronics-based technologies, which are generally expensive and time-consuming. Hot embossing and micro-injection molding of thermoplastics appear to be industrially viable processes. However, both require the use of master models, usually made of hard materials such as steel. These master models cannot be fabricated using standard microelectronics processes; thus, other micromachining processes are used, such as laser machining or micro-electrical discharge machining (µEDM). In this work, µEDM has been used. The principle of µEDM is based on the use of a thin cylindrical micro-tool that erodes the workpiece surface. The two electrodes are immersed in a dielectric with a distance of a few micrometers between them (the gap). When an electrical voltage is applied between the two electrodes, electrical discharges are generated, which remove material from the workpiece. In order to produce master models with high resolution and smooth surfaces, it is necessary to control the discharge mechanism well. However, several problems are encountered, such as the randomness of the electrical discharge process, fluctuation of the discharge energy, inversion of the electrodes' polarity, and wear of the micro-tool. The effects of different parameters, such as the applied voltage, the working capacitor, the micro-tool diameter, and the initial gap, have been studied. This analysis helps to improve machining performance, such as the workpiece surface condition and the lateral gap of the craters.
Keywords: craters, electrical discharges, micro-electrical discharge machining (µEDM), microsystems
Procedia PDF Downloads 96
10576 Validity of a Timing System in the Alpine Ski Field: A Magnet-Based Timing System Using the Magnetometer Built into an Inertial Measurement Unit
Authors: Carla Pérez-Chirinos Buxadé, Bruno Fernández-Valdés, Mónica Morral-Yepes, Sílvia Tuyà Viñas, Josep Maria Padullés Riu, Gerard Moras Feliu
Abstract:
There is a long way to go in exploring all the possible applications that inertial measurement units (IMUs) have in the sports field. The aim of this study was to evaluate the validity of a new application of these wearable sensors: a magnet-based timing system (M-BTS) for timing gate-to-gate in an alpine ski slalom using the magnetometer embedded in an IMU. This was a validation study. The criterion validity of the time measured by the M-BTS was assessed using the 95% error range against the actual time obtained from photocells. The experiment was carried out with first- and second-year junior skiers performing a ski slalom on a ski training slope. Eight alpine skiers (17.4 ± 0.8 years, 176.4 ± 4.9 cm, 67.7 ± 2.0 kg, 128.8 ± 26.6 slalom FIS points) participated in the study. An IMU device was attached to the skier's lower back. Skiers performed a 40-gate slalom, of which four gates were assessed. The M-BTS consisted of four bar magnets buried in the snow surface on the inner side of each assessed gate's turning pole; the magnetometer built into the IMU detected the peak-shaped magnetic field when passing near the magnets at a certain speed. Four magnetic peaks were detected, and the times elapsed between peaks were calculated. Three inter-gate times were obtained for each system, photocells and M-BTS, and the total time was defined as the sum of the inter-gate times. The 95% error interval for the total time was 0.050 s for the ski slalom. The M-BTS is therefore valid for timing gate-to-gate in an alpine ski slalom. Inter-gate times can provide additional data for analyzing a skier's performance, such as asymmetries between the left and right foot.
Keywords: gate crossing time, inertial measurement unit, timing system, wearable sensor
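A minimal sketch of the signal-processing step the abstract describes: detecting the magnetic peaks in the magnetometer trace and converting the sample counts between peaks into inter-gate times. The sampling rate, field magnitudes, and detection thresholds below are assumptions, not values from the study.

```python
import numpy as np
from scipy.signal import find_peaks

FS = 200.0  # assumed magnetometer sampling rate (Hz)

# Dummy magnetometer-magnitude trace with four synthetic "gate" peaks.
t = np.arange(0, 20, 1 / FS)
signal = np.random.default_rng(1).normal(45.0, 0.3, t.size)   # ~Earth field, in µT
for gate_time in (3.0, 7.5, 12.0, 17.0):
    signal += 20.0 * np.exp(-((t - gate_time) ** 2) / 0.005)  # magnet fly-bys

# Detect the four peaks; height and minimum-spacing thresholds are assumed.
peaks, _ = find_peaks(signal, height=55.0, distance=int(1.0 * FS))

inter_gate_times = np.diff(peaks) / FS  # three gate-to-gate intervals, in seconds
print(inter_gate_times, "total:", inter_gate_times.sum())
```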
Procedia PDF Downloads 184
10575 Mathematics Model Approaching: Parameter Estimation of Transmission Dynamics of HIV and AIDS in Indonesia
Authors: Endrik Mifta Shaiful, Firman Riyudha
Abstract:
Acquired Immunodeficiency Syndrome (AIDS) is one of the world's deadliest diseases, caused by the Human Immunodeficiency Virus (HIV), which infects white blood cells and causes a decline in the immune system. AIDS quickly became a worldwide epidemic affecting almost all countries. Therefore, a mathematical modeling approach to the spread of HIV and AIDS is needed to anticipate their further spread. The purpose of this study is to estimate the parameters of mathematical models of HIV transmission and AIDS using cumulative data on people with HIV and AIDS each year in Indonesia. In this model, there is a parameter r ∈ [0,1) representing the effectiveness of treatment in patients with HIV. If the value of r is close to 1, the number of people with HIV and AIDS will decline toward zero. The estimation results indicate that when the value of r is close to unity, there is a significant decline in HIV patients, while the number of AIDS patients decreases steadily towards zero.
Keywords: HIV, AIDS, parameter estimation, mathematical models
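The abstract does not state the model equations. As a generic illustration of the estimation step only, the sketch below fits a treatment-effectiveness parameter r of a toy two-compartment HIV/AIDS model to cumulative case counts with SciPy; the model form and every number are assumptions, not the paper's model or Indonesia's data.

```python
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import curve_fit

def model(y, t, r, beta, delta):
    """Toy compartments: H = people with HIV, A = people with AIDS.
    Treatment effectiveness r in [0, 1) slows transmission and progression."""
    H, A = y
    dH = beta * (1 - r) * H - delta * H
    dA = delta * (1 - r) * H - 0.1 * A
    return [dH, dA]

years = np.arange(0, 10)  # observation times
observed_H = np.array([100, 130, 165, 205, 250, 300, 340, 390, 430, 480.0])

def predict_H(t, r):
    sol = odeint(model, [100.0, 10.0], t, args=(r, 0.35, 0.08))
    return sol[:, 0]

(r_hat,), _ = curve_fit(predict_H, years, observed_H, p0=[0.5], bounds=(0, 0.999))
print(f"estimated treatment effectiveness r ~ {r_hat:.3f}")
```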
Procedia PDF Downloads 251
10574 Four-dimensional (4D) Decoding Information Presented in Reports of Project Progress in Developing Countries
Authors: Vahid Khadjeh Anvary, Hamideh Karimi Yazdi
Abstract:
Generally, the tool for comparing performance between stages in the life of a project is the project progress figure for that period, which in most cases is determined one-dimensionally, by reference to only one of three factors (physical, time, or financial). In many projects in developing countries, there are controversies about the accuracy of project progress reports and the way they are analyzed, which hinders reaching definitive, engineering-based conclusions on the status of the project. Identifying the weak points of this kind of one-dimensional view of a project, and establishing a reliable engineering approach for multi-dimensional decoding of the information obtainable from a project, is of great importance in project management. This can be a tool to help identify the hidden diseases of a project before irreversible symptoms appear, which are usually delays or increased execution costs. The method used in this paper is to define and evaluate a hypothetical project as an example, analyzing different scenarios and comparing them numerically, along with related graphs and tables. Finally, by analyzing the different possible scenarios in the project, the possibility or impossibility of predicting their occurrence is examined through the evidence.
Keywords: physical progress, time progress, financial progress, delays, critical path
Procedia PDF Downloads 374
10573 Creating Complementary Bi-Modal Learning Environments: An Exploratory Study Combining Online and Classroom Techniques
Authors: Justin P. Pool, Haruyo Yoshida
Abstract:
This research focuses on the effects of creating an English-as-a-foreign-language curriculum that combines online learning and classroom teaching in a complementary manner. Through pre- and post-test results, teacher observation, and learner reflection, it will be shown that learners can benefit from online programs focusing on receptive skills if these are combined with a communicative classroom environment that encourages learners to develop their productive skills. Much research has lamented the fact that many modern mobile-assisted language learning apps do not take advantage of the affordances of modern technology, focusing only on receptive skills rather than inviting learners to interact with one another and develop communities of practice. This research takes into account the realities of the state of such apps and focuses on how best to create a curriculum that complements apps focused on receptive skills. The study involved 15 adult learners working for a business in Japan simultaneously engaging in 1) a commercial online English language learning application that focused on reading, listening, grammar, and vocabulary, and 2) a 15-week class focused on communicative language teaching, presentation skills, and mitigation of error-aversion tendencies. Participants of the study experienced large gains on a standardized test, showed increased motivation and willingness to communicate, and asserted that they felt more confident regarding English communication. Moreover, learners continued to study independently at higher rates after the study than they had before the onset of the program. This paper will present the details of the program, report the improvement in test scores, share learner reflections, and critically review current evaluation models for mobile-assisted language learning applications.
Keywords: adult learners, communicative language teaching, mobile assisted language learning, motivation
Procedia PDF Downloads 135
10572 Zebrafish Larvae Model: A High Throughput Screening Tool to Study Autism
Authors: Shubham Dwivedi, Raghavender Medishetti, Rita Rani, Aarti Sevilimedu, Pushkar Kulkarni, Yogeeswari Perumal
Abstract:
Autism Spectrum Disorder (ASD) is a complex neurodevelopmental disorder of early onset, characterized by impaired sociability, cognitive function, and stereotypies. There is a pressing need to develop and establish new animal models with ASD-like characteristics for a better understanding of the underlying mechanisms. The aim of the present study was to develop a cost- and time-effective zebrafish model with quantifiable parameters to facilitate mechanistic studies as well as high-throughput screening of new molecules for autism. Zebrafish embryos were treated with valproic acid, and a battery of behavioral tests (anxiety, inattentive behavior, irritability, and social impairment) was performed on larvae at 7 days post-fertilization, followed by a study of molecular markers of autism. The model shows significant behavioral impairment in valproic acid-treated larvae in comparison to controls, which was further supported by alterations in several marker genes and proteins of autism. The model also shows a rescue of behavioral despair with positive-control drugs. The model offers robust parameters for studying behavior, molecular mechanisms, and drug screening in a single framework. We therefore postulate that our 7-day zebrafish larval model for autism can help in the high-throughput screening of new molecules for autism.
Keywords: autism, zebrafish, valproic acid, neurodevelopment, behavioral assay
Procedia PDF Downloads 162
10571 Rapid Building Detection in Population-Dense Regions with Overfitted Machine Learning Models
Authors: V. Mantey, N. Findlay, I. Maddox
Abstract:
The quality and quantity of global satellite data have been increasing exponentially in recent years as spaceborne systems become more affordable and the sensors themselves become more sophisticated. This is a valuable resource for many applications, including disaster management and relief. However, while more information can be valuable, the volume of data available is impossible to examine manually. Therefore, the question becomes how to extract as much information as possible from the data with limited manpower. Buildings are a key feature of interest in satellite imagery, with applications including telecommunications, population models, and disaster relief. Machine learning tools are fast becoming one of the key resources to solve this problem, and models have been developed to detect buildings in optical satellite imagery. However, by and large, most models focus on affluent regions where buildings are generally larger and constructed further apart. This work is focused on the more difficult problem of detection in populated regions. The primary challenge with detecting small buildings in densely populated regions is both the spatial and spectral resolution of the optical sensor. Densely packed buildings with similar construction materials will be difficult to separate due to a similarity in color and because the physical separation between structures is either non-existent or smaller than the spatial resolution. This study finds that models trained until they overfit the input sample can perform better in these areas than a more robust, generalized model. An overfitted model takes less time to fine-tune from a generalized pre-trained model and requires less input data. The model developed for this study has also been fine-tuned using existing, open-source building vector datasets. This is particularly valuable in the context of disaster relief, where information is required in a very short time span. Leveraging existing datasets means that little to no manpower or time is required to collect data in the region of interest. The training period itself is also shorter for smaller datasets. Requiring less data means that only a few quality areas are necessary, and so any weaknesses or underpopulated regions in the data can be skipped over in favor of areas with higher-quality vectors. In this study, a landcover classification model was developed in conjunction with the building detection tool to provide a secondary source to quality-check the detected buildings. This has greatly reduced the false positive rate. The proposed methodologies have been implemented and integrated into a configurable production environment and have been employed for a number of large-scale commercial projects, including continent-wide DEM production, where the extracted building footprints are being used to enhance digital elevation models. Overfitted machine learning models are often considered too specific to have any predictive capacity. However, this study demonstrates that, in cases where input data is scarce, overfitted models can be judiciously applied to solve time-sensitive problems.
Keywords: building detection, disaster relief, mask-RCNN, satellite mapping
Procedia PDF Downloads 169
10570 Effectiveness of Software Quality Assurance in Offshore Development Enterprises in Sri Lanka
Authors: Malinda Gayan Sirisena
Abstract:
The aim of this research is to evaluate the effectiveness of the software quality assurance approaches of Sri Lankan offshore software development organizations and to propose a framework that could be used across all offshore software development organizations. An empirical study was conducted using a framework derived from popular software quality evaluation models. The research instrument employed was a questionnaire survey of thirty-seven Sri Lankan registered offshore software development organizations. The findings demonstrate a positive view of the effectiveness of software quality assurance, with the stronger predictors being Stability, Installability, Correctness, Testability, and Changeability. The study's recommendations indicate a need for greater emphasis on software quality assurance in Sri Lankan offshore software development organizations.
Keywords: software quality assurance (SQA), offshore software development, quality assurance evaluation models, effectiveness of quality assurance
Procedia PDF Downloads 421
10569 Non-linear Model of Elasticity of Compressive Strength of Concrete
Authors: Charles Horace Ampong
Abstract:
Non-linear models have been found to be useful in modeling the elasticity (a measure of the degree of responsiveness) of a dependent variable with respect to a set of independent variables, ceteris paribus. This constant-elasticity principle was applied to the dependent variable (compressive strength of concrete in MPa), which was found to be non-linearly related to the independent variable (water-cement ratio in kg/m3) for given ages of concrete in days (3, 7, 28) at different levels of the admixtures Superplasticizer (in kg/m3), Blast Furnace Slag (in kg/m3), and Fly Ash (in kg/m3). The levels of the admixtures were categorized as: S1 = some Superplasticizer added and S0 = no Superplasticizer added; B1 = some Blast Furnace Slag added and B0 = no Blast Furnace Slag added; F1 = some Fly Ash added and F0 = no Fly Ash added. The number of observations (samples) used for the research was one hundred and thirty-two (132) in all. For Superplasticizer, it was found that the compressive strength of concrete was more elastic with regard to the water-cement ratio at the S1 level than at the S0 level for the given concrete ages of 3, 7, and 28 days. For Blast Furnace Slag, compressive strength with regard to the water-cement ratio was more elastic at the B0 level than at the B1 level for concrete ages of 3, 7, and 28 days. For Fly Ash, compressive strength with regard to the water-cement ratio was more elastic at the F0 level than at the F1 level for ages of 3, 7, and 28 days. The research also tested different combinations of the levels of Superplasticizer, Blast Furnace Slag, and Fly Ash. It was found that compressive strength elasticity with regard to the water-cement ratio was lowest (elasticity = -1.746) with a combination of S0, B0, and F0 for a concrete age of 3 days. This was followed by an elasticity of -1.611 with a combination of S0, B0, and F0 for a concrete age of 7 days. The highest was an elasticity of -1.414 with a combination of S0, B0, and F0 for a concrete age of 28 days. Based on the preceding outcomes, three (3) non-linear model equations for predicting the output elasticity of the compressive strength of concrete (in %) or the value of the compressive strength of concrete (in MPa) with regard to the water-cement ratio were formulated. The model equations were based on the three different ages of concrete, namely 3, 7, and 28 days, under investigation. The three models showed that higher elasticity translates into higher compressive strength, and they revealed a trend of increasing concrete strength from 3 to 28 days for a given water-cement ratio. Using the models, an increasing modulus of elasticity from 3 to 28 days was deduced.
Keywords: concrete, compressive strength, elasticity, water-cement
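As a generic illustration of the constant-elasticity principle the abstract applies (not the authors' fitted equations): a log-log regression ln(S) = a + b·ln(w/c) makes the slope b the elasticity of compressive strength S with respect to the water-cement ratio w/c. The data values below are invented.

```python
import numpy as np

# Invented (w/c ratio, compressive strength in MPa) pairs for 28-day concrete.
wc = np.array([0.35, 0.40, 0.45, 0.50, 0.55, 0.60])
strength = np.array([52.0, 45.5, 40.2, 35.8, 32.1, 29.0])

# Constant-elasticity (log-log) fit: ln(S) = a + b ln(w/c); b is the elasticity.
b, a = np.polyfit(np.log(wc), np.log(strength), deg=1)
print(f"elasticity b ~ {b:.3f}")  # negative: strength falls as w/c rises

# Interpretation: a 1% increase in the water-cement ratio changes
# compressive strength by about b percent.
```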
Procedia PDF Downloads 293
10568 Black-Box-Based Generic Perturbation Generation Method under Salient Graphs
Authors: Dingyang Hu, Dan Liu
Abstract:
DNN (Deep Neural Network) deep learning models are widely used in classification, prediction, and other task scenarios. To address the difficulty of generating generic adversarial perturbations for deep learning models under black-box conditions, a generic adversarial perturbation generation method based on a saliency map (CJsp) is proposed, which obtains salient image regions by counting the factors that influence the effect of an image's input features on the output results. This method can be understood as a saliency-map attack algorithm that obtains false classification results by reducing the weights of salient feature points. Experiments also demonstrate that this method achieves a high success rate in transfer attacks and is a batch adversarial sample generation method.
Keywords: adversarial sample, gradient, probability, black box
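The abstract does not define CJsp. As a generic sketch of the black-box ingredient it builds on, the code below estimates a saliency map by occluding image patches and measuring the drop in the model's output probability, then perturbs only the most salient patches; the model interface and all parameters are assumptions, not the paper's algorithm.

```python
import numpy as np

def black_box_saliency(predict, image, label, patch=8):
    """Occlusion saliency: score each patch by how much masking it
    lowers the predicted probability of the true label (no gradients used)."""
    h, w, _ = image.shape
    base = predict(image)[label]
    saliency = np.zeros((h // patch, w // patch))
    for i in range(h // patch):
        for j in range(w // patch):
            masked = image.copy()
            masked[i*patch:(i+1)*patch, j*patch:(j+1)*patch] = 0.0
            saliency[i, j] = base - predict(masked)[label]
    return saliency

def perturb_salient(image, saliency, k=5, eps=0.1, patch=8):
    """Add bounded noise only to the k most salient patches."""
    out = image.copy()
    rng = np.random.default_rng(0)
    for idx in np.argsort(saliency, axis=None)[-k:]:
        i, j = np.unravel_index(idx, saliency.shape)
        out[i*patch:(i+1)*patch, j*patch:(j+1)*patch] += \
            eps * np.sign(rng.normal(size=(patch, patch, 1)))
    return np.clip(out, 0.0, 1.0)

# `predict` is any black-box classifier returning class probabilities,
# e.g. predict = lambda img: model(img[None])[0].numpy()
```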
Procedia PDF Downloads 104
10567 OmniDrive Model of a Holonomic Mobile Robot
Authors: Hussein Altartouri
Abstract:
In this paper, the kinematic and kinetic models of an omnidirectional holonomic mobile robot are presented; together they form the OmniDrive model. A mathematical model is derived for a robot equipped with three omnidirectional wheels. This model, which takes into consideration the kinematics and kinetics of the robot, is developed into a state-space representation. Relative analysis of the velocities and displacements is used for the kinematics of the robot, and Lagrange's approach is used for deriving the equation of motion. Only the drive train and the mechanical assembly of the Festo Robotino® are considered in this model. The model is developed mainly for motion control; furthermore, it can be used for simulation purposes in different virtual environments, not only Robotino® View. A further use of the model is in mechatronics research, with the aim of teaching and learning advanced control theories.
Keywords: mobile robot, omni-direction wheel, mathematical model, holonomic mobile robot
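The paper derives its own model. As general background only, the standard inverse-kinematic relation for a robot with three omni wheels mounted at angles around a body of radius R maps the body twist (vx, vy, ω) to wheel rim speeds. A minimal NumPy sketch, with the mounting angles and body radius assumed (not taken from the Robotino® specification):

```python
import numpy as np

ALPHA = np.deg2rad([60.0, 180.0, 300.0])  # assumed wheel mounting angles
R = 0.135                                  # assumed body radius (m)

# Each wheel's rim speed is the body velocity projected onto the wheel's
# drive direction plus the rotational term: v_i = -sin(a_i)*vx + cos(a_i)*vy + R*w
J = np.column_stack([-np.sin(ALPHA), np.cos(ALPHA), np.full(3, R)])

def wheel_speeds(vx, vy, omega):
    """Inverse kinematics: body twist -> three wheel rim speeds (m/s)."""
    return J @ np.array([vx, vy, omega])

def body_twist(v_wheels):
    """Forward kinematics: wheel rim speeds -> body twist, by inverting J."""
    return np.linalg.solve(J, np.asarray(v_wheels))

print(wheel_speeds(0.3, 0.0, 0.0))  # pure x-translation
```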
Procedia PDF Downloads 609
10566 Analysis of the Relationship between the Unitary Impulse Response for the nth-Volterra Kernel of a Duffing Oscillator System
Authors: Guillermo Manuel Flores Figueroa, Juan Alejandro Vazquez Feijoo, Jose Navarro Antonio
Abstract:
A continuous nonlinear system response may be obtained as an infinite sum of the so-called Volterra operators. Each operator is obtained from an nth-order multidimensional convolution between the nth-order Volterra kernel and the system input. These operators can also be obtained from the Associated Linear Equations (ALEs), which are linear models of subsystems whose inputs and outputs are of the same nth order. Each ALE produces a particular nth-Volterra operator. As linear models, a unitary impulse response can be obtained from them. This work shows the relationship between these unitary impulse responses and the corresponding-order Volterra kernel.
Keywords: Volterra series, frequency response functions FRF, associated linear equations ALEs, unitary response function, Volterra kernel
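For reference (standard background, not the paper's derivation), the Volterra series writes the output y(t) as a sum of operators, the nth being an n-fold convolution of the nth kernel h_n with the input x:

```latex
y(t) = \sum_{n=1}^{\infty} y_n(t), \qquad
y_n(t) = \int_{-\infty}^{\infty} \!\cdots\! \int_{-\infty}^{\infty}
h_n(\tau_1,\dots,\tau_n) \prod_{i=1}^{n} x(t-\tau_i)\, d\tau_1 \cdots d\tau_n .
```

The n = 1 term is the familiar linear convolution, with h_1 the ordinary impulse response.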
Procedia PDF Downloads 670
10565 Causal Relation Identification Using Convolutional Neural Networks and Knowledge Based Features
Authors: Tharini N. de Silva, Xiao Zhibo, Zhao Rui, Mao Kezhi
Abstract:
Causal relation identification is a crucial task in information extraction and knowledge discovery. In this work, we present two approaches to causal relation identification. The first is a classification model trained on a set of knowledge-based features. The second is a deep learning based approach that trains a model using convolutional neural networks to classify causal relations. We experiment with several different convolutional neural network (CNN) models based on previous work on relation extraction as well as our own research. Our models are able to identify both explicit and implicit causal relations, as well as the direction of the causal relation. The results of our experiments show a higher accuracy than previously achieved for causal relation identification tasks.
Keywords: causal relation extraction, relation extraction, convolutional neural network, text representation
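The paper's architectures are not specified in the abstract. The sketch below is a generic text-CNN relation classifier of the kind commonly used for relation extraction; the embedding dimension, filter sizes, and the three-way label set (cause, reversed-direction, no-relation) are assumptions.

```python
import torch
import torch.nn as nn

class TextCNN(nn.Module):
    """Generic CNN sentence classifier for relation identification."""
    def __init__(self, vocab_size=10_000, emb_dim=100,
                 n_filters=64, kernel_sizes=(2, 3, 4), n_classes=3):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.convs = nn.ModuleList(
            nn.Conv1d(emb_dim, n_filters, k) for k in kernel_sizes)
        self.fc = nn.Linear(n_filters * len(kernel_sizes), n_classes)

    def forward(self, token_ids):                    # (batch, seq_len)
        x = self.emb(token_ids).transpose(1, 2)      # (batch, emb_dim, seq_len)
        # Convolve with each filter width, then max-pool over time.
        pooled = [conv(x).relu().max(dim=2).values for conv in self.convs]
        return self.fc(torch.cat(pooled, dim=1))     # (batch, n_classes)

model = TextCNN()
tokens = torch.randint(1, 10_000, (8, 40))           # dummy batch of 8 sentences
print(model(tokens).shape)                           # torch.Size([8, 3])
```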
Procedia PDF Downloads 732