Search results for: statistical estimation problem
9961 Stability Indicating RP – HPLC Method Development, Validation and Kinetic Study for Amiloride Hydrochloride and Furosemide in Pharmaceutical Dosage Form
Authors: Jignasha Derasari, Patel Krishna M, Modi Jignasa G.
Abstract:
Chemical stability of pharmaceutical molecules is a matter of great concern as it affects the safety and efficacy of the drug product. Stability testing data provide the basis for understanding how the quality of a drug substance and drug product changes with time under the influence of various environmental factors. Besides this, it also helps in selecting a proper formulation and package, as well as in providing proper storage conditions and shelf life, which is essential for regulatory documentation. The ICH guideline states that stress testing is intended to identify the likely degradation products, which further helps in determining the intrinsic stability of the molecule, establishing degradation pathways, and validating the stability indicating procedures. A simple, accurate and precise stability indicating RP-HPLC method was developed and validated for the simultaneous estimation of Amiloride Hydrochloride and Furosemide in tablet dosage form. Separation was achieved on a Phenomenex Luna ODS C18 column (250 mm × 4.6 mm i.d., 5 µm particle size) using a mobile phase consisting of orthophosphoric acid: acetonitrile (50:50 %v/v) at a flow rate of 1.0 ml/min (pH 3.5 adjusted with 0.1 % TEA in water) in isocratic pump mode, with an injection volume of 20 µl and a detection wavelength of 283 nm. Retention times for Amiloride Hydrochloride and Furosemide were 1.810 min and 4.269 min, respectively. Linearity of the proposed method was obtained in the ranges of 40-60 µg/ml and 320-480 µg/ml, with correlation coefficients of 0.999 and 0.998 for Amiloride Hydrochloride and Furosemide, respectively. A forced degradation study was carried out on the combined dosage form under various stress conditions, such as hydrolysis (acid and base), oxidative and thermal conditions, as per ICH guideline Q2 (R1). The RP-HPLC method showed adequate separation of Amiloride Hydrochloride and Furosemide from their degradation products.
The proposed method was validated as per ICH guidelines for specificity, linearity, accuracy, precision and robustness for the estimation of Amiloride Hydrochloride and Furosemide in a commercially available tablet dosage form, and the results were found to be satisfactory and significant. The developed and validated stability indicating RP-HPLC method can be used successfully for marketed formulations. Forced degradation studies help in generating degradants in a much shorter span of time, typically a few weeks, and can be used to develop the stability indicating method, which can later be applied to the analysis of samples generated from accelerated and long-term stability studies. Further, a kinetic study was also performed for the different forced degradation parameters of the same combination, which helps in determining the order of reaction.
Keywords: amiloride hydrochloride, furosemide, kinetic study, stability indicating RP-HPLC method validation
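The order-of-reaction determination mentioned in the abstract is, in outline, a linear fit of transformed concentration data against time. The sketch below (with hypothetical concentration values, not data from the paper) tests a first-order hypothesis by fitting ln(C) versus t; a straight line with high r² supports first-order kinetics with rate constant k equal to minus the slope:

```python
import math

def fit_first_order(times, concentrations):
    """Least-squares fit of ln(C) vs t; a near-perfect straight line
    indicates first-order degradation, with slope = -k."""
    xs = times
    ys = [math.log(c) for c in concentrations]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    intercept = my - slope * mx
    # coefficient of determination for the linearised data
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    r2 = 1 - ss_res / ss_tot
    return -slope, r2  # rate constant k and goodness of fit

# synthetic first-order decay, C = C0 * exp(-k t) with k = 0.05 per hour
times = [0, 2, 4, 6, 8]
conc = [100 * math.exp(-0.05 * t) for t in times]
k, r2 = fit_first_order(times, conc)
```

Zero- and second-order hypotheses would instead fit C and 1/C against t; whichever transformation linearises the data best identifies the order.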
Procedia PDF Downloads 4689960 Lineup Optimization Model of Basketball Players Based on the Prediction of Recursive Neural Networks
Authors: Wang Yichen, Haruka Yamashita
Abstract:
In recent years, in the field of sports, decision making, such as the choice of members in a game and game strategy based on the analysis of accumulated sports data, has been widely attempted. In fact, in the NBA, the basketball league where the world's highest-level players gather, teams analyze data using various statistical techniques in order to win games. However, it is difficult to analyze per-play game data, such as ball tracking or the motion of the players, because the situation of the game changes rapidly and the structure of the data is complicated. Therefore, an analysis method for real-time game play data is needed. In this research, we propose an analytical model for "determining the optimal lineup composition" using real-time play data, a task that is considered difficult for all coaches. Because replacing the entire lineup is too complicated, the actual questions in player replacement are whether or not the lineup should be changed and whether or not a Small Ball lineup should be adopted. Therefore, we propose an analytical model for the optimal player selection problem based on Small Ball lineups. In basketball, scoring data, which indicate a player's contribution to the game, can be accumulated for each play and treated as time series data. In order to compare the importance of players in different situations and lineups, we combine an RNN (Recurrent Neural Network) model, which can analyze time series data, with an NN (Neural Network) model, which can analyze the situation on the court, to build a score prediction model. This model is capable of identifying the current optimal lineup for different situations. In this research, we collected accumulated NBA data from the 2019-2020 season.
We then apply the method to actual basketball play data to verify the reliability of the proposed model.
Keywords: recurrent neural network, players lineup, basketball data, decision making model
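As a rough illustration of the architecture described above — a recurrent pass over per-play scoring data combined with a feed-forward pass over static situation features — the following toy sketch uses hand-picked weights and invented inputs; it is not the authors' trained model, only the shape of the idea:

```python
import math

def predict_score(play_seq, situation, w_in=0.8, w_rec=0.5,
                  w_sit=(0.4, -0.2), w_h=1.0, w_s=1.0, bias=0.0):
    """Combine a recurrent pass over time-series scoring data (RNN part)
    with a feed-forward pass over static situation features (NN part)."""
    # RNN part: fold the per-play scoring sequence into a hidden state
    h = 0.0
    for x in play_seq:
        h = math.tanh(w_in * x + w_rec * h)
    # NN part: one layer over situation features (e.g. lineup type, margin)
    s = math.tanh(sum(w * f for w, f in zip(w_sit, situation)))
    # combine both representations into a predicted score contribution
    return w_h * h + w_s * s + bias

# hypothetical per-play points by a lineup, plus two situation features
pred = predict_score([1.0, 0.0, 2.0], [0.5, 1.0])
```

In practice the weights would be learned from the accumulated play-by-play data, and the prediction compared across candidate lineups to pick the best one for the current situation.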
Procedia PDF Downloads 136
9959 Mini Coal Gasifier for Fulfilling Small-Scale Industries Energy Consumption in Indonesia
Authors: Muhammad Ade Andriansyah Efendi, Ika Monika
Abstract:
The mini coal gasifier (GasMin) is a small reactor that converts coal into combustible gas, or producer gas, and is designed to fulfill the energy needs of small-scale industries. The producer gas can be utilized for both external and internal combustion. The design of the coal gasifier suits community requirements because it is easy to handle, affordable and environmentally friendly. The feasibility study shows that substituting a GasMin with a capacity of 20 kg of coal per hour for 12 kg LPG, and especially for 50 kg LPG, is very attractive. The estimated price of a GasMin with a capacity of 20 kg of coal per hour is 40 million rupiahs. In 2016, the implementation of GasMin was conducted at an aluminium industry and a batik industry in Yogyakarta, Indonesia.
Keywords: biomass, coal, energy, gasification
Procedia PDF Downloads 342
9958 Contactless Heart Rate Measurement System based on FMCW Radar and LSTM for Automotive Applications
Authors: Asma Omri, Iheb Sifaoui, Sofiane Sayahi, Hichem Besbes
Abstract:
Future vehicle systems demand advanced capabilities, notably in-cabin life detection and driver monitoring systems, with a particular emphasis on drowsiness detection. To meet these requirements, several techniques employ artificial intelligence methods based on real-time vital sign measurements. In parallel, Frequency-Modulated Continuous-Wave (FMCW) radar technology has garnered considerable attention in the domains of healthcare and biomedical engineering for non-invasive vital sign monitoring. FMCW radar offers a multitude of advantages, including its non-intrusive nature, continuous monitoring capacity, and its ability to penetrate through clothing. In this paper, we propose a system utilizing the AWR6843AOP radar from Texas Instruments (TI) to extract precise vital sign information. The radar allows us to estimate Ballistocardiogram (BCG) signals, which capture the mechanical movements of the body, particularly the ballistic forces generated by heartbeats and respiration. These signals are rich sources of information about the cardiac cycle, rendering them suitable for heart rate estimation. The process begins with real-time subject positioning, followed by clutter removal, computation of Doppler phase differences, and the use of various filtering methods to accurately capture subtle physiological movements. To address the challenges associated with FMCW radar-based vital sign monitoring, including motion artifacts due to subjects' movement or radar micro-vibrations, Long Short-Term Memory (LSTM) networks are implemented. LSTM's adaptability to different heart rate patterns and ability to handle real-time data make it suitable for continuous monitoring applications. 
Several crucial steps were taken, including feature extraction (involving amplitude, time intervals, and signal morphology), sequence modeling, heart rate estimation through the analysis of detected cardiac cycles and their temporal relationships, and performance evaluation using metrics such as Root Mean Square Error (RMSE) and correlation with reference heart rate measurements. For dataset construction and LSTM training, a comprehensive data collection system was established, integrating the AWR6843AOP radar, a heart rate belt, and a smart watch for ground truth measurements. Rigorous synchronization of these devices ensured data accuracy. Twenty participants engaged in various scenarios, encompassing indoor and real-world conditions within a moving vehicle equipped with the radar system. Both static and dynamic subject conditions were considered. The heart rate estimation through LSTM outperforms traditional signal processing techniques that rely on filtering, Fast Fourier Transform (FFT), and thresholding. It delivers an average accuracy of approximately 91% with an RMSE of 1.01 beats per minute (bpm). In conclusion, this paper underscores the promising potential of FMCW radar technology integrated with artificial intelligence algorithms in the context of automotive applications. This innovation not only enhances road safety but also paves the way for its integration into the automotive ecosystem to improve driver well-being and overall vehicular safety.
Keywords: ballistocardiogram, FMCW Radar, vital sign monitoring, LSTM
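The performance-evaluation step described above can be sketched as follows; the heart-rate values below are invented for illustration and are not the study's measurements:

```python
import math

def rmse(estimates, reference):
    """Root mean square error between radar-based estimates and ground truth."""
    return math.sqrt(sum((e - r) ** 2 for e, r in zip(estimates, reference))
                     / len(reference))

def mean_accuracy(estimates, reference):
    """Average accuracy read as 100% minus the mean absolute percentage error
    (one common convention; the paper's exact accuracy metric may differ)."""
    mape = sum(abs(e - r) / r for e, r in zip(estimates, reference)) / len(reference)
    return 100.0 * (1.0 - mape)

est = [71.0, 69.5, 72.0, 70.0]   # hypothetical LSTM heart-rate estimates (bpm)
ref = [70.0, 70.0, 71.0, 70.0]   # hypothetical chest-belt ground truth (bpm)
err = rmse(est, ref)
acc = mean_accuracy(est, ref)
```

The same two functions applied to the synchronized radar and belt recordings would yield the RMSE (bpm) and accuracy (%) figures reported in the abstract.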
Procedia PDF Downloads 77
9957 Artificial Intelligence-Aided Extended Kalman Filter for Magnetometer-Based Orbit Determination
Authors: Gilberto Goracci, Fabio Curti
Abstract:
This work presents a robust, light, and inexpensive algorithm to perform autonomous orbit determination using onboard magnetometer data in real time. Magnetometers are low-cost and reliable sensors typically available on a spacecraft for attitude determination purposes, thus representing an interesting choice for performing real-time orbit determination without the need to add additional sensors to the spacecraft itself. Magnetic field measurements can be exploited by Extended/Unscented Kalman Filters (EKF/UKF) for orbit determination purposes to make up for GPS outages, yielding errors of a few kilometers and tens of meters per second in the position and velocity of a spacecraft, respectively. While this level of accuracy shows that Kalman filtering represents a solid baseline for autonomous orbit determination, it is not enough to provide a reliable state estimation in the absence of GPS signals. This work combines the solidity and reliability of the EKF with the versatility of a Recurrent Neural Network (RNN) architecture to further increase the precision of the state estimation. Deep learning models, in fact, can grasp nonlinear relations between the inputs, in this case the magnetometer data and the EKF state estimations, and the targets, namely the true position and velocity of the spacecraft. The model has been pre-trained on Sun-Synchronous orbits (SSO) up to 2126 kilometers of altitude with different initial conditions and levels of noise to cover a wide range of possible real-case scenarios. The orbits have been propagated considering J2-level dynamics, and the geomagnetic field has been modeled using the International Geomagnetic Reference Field (IGRF) coefficients up to the 13th order. The training of the module can be completed offline using the expected orbit of the spacecraft to heavily reduce the onboard computational burden.
Once the spacecraft is launched, the model can use the GPS signal, if available, to fine-tune the parameters on the actual orbit onboard in real time and work autonomously during GPS outages. In this way, the provided module shows versatility, as it can be applied to any mission operating in SSO, while at the same time the training is completed, and eventually fine-tuned, on the specific orbit, increasing performance and reliability. The results of this study show an increase of one order of magnitude in the precision of the state estimate with respect to the use of the EKF alone. Tests on simulated and real data will be shown.
Keywords: artificial intelligence, extended Kalman filter, orbit determination, magnetic field
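As a minimal illustration of the EKF machinery underlying the approach, the sketch below runs a scalar predict/update cycle with made-up noise parameters. The real filter operates on the full position/velocity state with an IGRF-based magnetic-field measurement model and nonlinear Jacobians, so this is only the skeleton of the idea:

```python
def kalman_step(x, P, z, F=1.0, Q=0.01, H=1.0, R=0.25):
    """One predict/update cycle. For a linear model the Jacobians of the EKF
    reduce to the constants F and H used here (scalar state for brevity)."""
    # predict: propagate state and covariance through the dynamics
    x_pred = F * x
    P_pred = F * P * F + Q
    # update with measurement z (e.g. a magnetometer-derived observation)
    K = P_pred * H / (H * P_pred * H + R)   # Kalman gain
    x_new = x_pred + K * (z - H * x_pred)   # correct prediction by innovation
    P_new = (1.0 - K * H) * P_pred          # shrink the covariance
    return x_new, P_new

# three noisy-free measurements of a true state of 1.0, starting from x = 0
x, P = 0.0, 1.0
for z in [1.0, 1.0, 1.0]:
    x, P = kalman_step(x, P, z)
```

The RNN in the paper then learns the residual between such EKF estimates and the true trajectory, which is how the additional order of magnitude in precision is obtained.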
Procedia PDF Downloads 110
9956 Doing Cause-and-Effect Analysis Using an Innovative Chat-Based Focus Group Method
Authors: Timothy Whitehill
Abstract:
This paper presents an innovative chat-based focus group method for collecting qualitative data to construct a cause-and-effect analysis in business research. The method was developed in response to the research and data collection challenges posed by the Covid-19 outbreak in the United Kingdom during 2020-21. The paper discusses the methodological approaches and builds a contemporary argument for their effectiveness in exploring cause-and-effect relationships in the context of focus group research, systems thinking and problem structuring methods. The pilot for this method was conducted between October 2020 and March 2021 and collected more than 7,000 words of chat-based data, which were used to construct a consensus-drawn cause-and-effect analysis. The method was developed in support of an ongoing Doctorate in Business Administration (DBA) thesis, which uses Design Science Research methodology to operationalize organisational resilience in UK construction sector firms.
Keywords: cause-and-effect analysis, focus group research, problem structuring methods, qualitative research, systems thinking
Procedia PDF Downloads 226
9955 The Successful in Construction Project via Effectiveness of Project Team
Authors: Zarabizan Zakaria, Hayati Zainal
Abstract:
The construction industry is one of the most important sectors contributing to the nation's economy and a catalyst for the growth of other industries. However, some construction projects have not been completed within their stipulated time and duration, scope and budget due to several factors. This problem arises from weaknesses in human factors, especially the ineffective leadership quality practiced by project managers and contractors in managing project teams. Therefore, a construction project should emphasize the element of the project team. The project team is formed at the implementation of the project and works with the project brief, project scope, customer requirements and provided designs. Many organizations in the construction sector use teams to meet today's global competition and customer expectations; however, team effectiveness evaluation is required. To ensure the construction team is successful and effective, the construction department must encourage, measure, set up, and evaluate or review the effectiveness of the project team that was formed. In order to produce a better outcome for a high-end project, an effective and efficient project team is required, which also helps in increasing overall productivity. The purpose of this study is to determine the role of team effectiveness in the construction project team based on overall construction project performance. It examines several different factors related to team effectiveness, as well as the relationship between team effectiveness factors and project performance aspects. A Team Effect Review and a Project Performance Review were developed to be used for data collection. Data collected were analyzed using several statistical tests, and the results obtained were validated using semi-structured interviews. Besides that, a comprehensive survey was developed to assess how construction project teams maintain their effectiveness throughout the project phase.
It was found that project team leadership is the most important factor in determining whether a project is successful. In addition, a definition of team effectiveness in the construction project team is developed based on the perspectives of project clients and project team members. The results of this study are expected to provide an idea of the factors that need to be focused on to improve the team's effectiveness with respect to project performance aspects. At the same time, the definition of team effectiveness from the team members' and owner's views has been developed in order to provide a better understanding of the term team effectiveness in construction projects.
Keywords: project team, leadership, construction project, project successful
Procedia PDF Downloads 181
9954 Earthquake Risk Assessment Using Out-of-Sequence Thrust Movement
Authors: Rajkumar Ghosh
Abstract:
Earthquakes are natural disasters that pose a significant risk to human life and infrastructure. Effective earthquake mitigation measures require a thorough understanding of the dynamics of seismic occurrences, including thrust movement. Traditionally, estimating thrust movement has relied on typical techniques that may not capture the full complexity of these events. Therefore, investigating alternative approaches, such as incorporating out-of-sequence thrust movement data, could enhance earthquake mitigation strategies. This review aims to provide an overview of the applications of out-of-sequence thrust movement in earthquake mitigation. By examining existing research and studies, the objective is to understand how precise estimation of thrust movement can contribute to improving structural design, analyzing infrastructure risk, and developing early warning systems. The study demonstrates how to estimate out-of-sequence thrust movement using multiple data sources, including GPS measurements, satellite imagery, and seismic recordings. By analyzing and synthesizing these diverse datasets, researchers can gain a more comprehensive understanding of thrust movement dynamics during seismic occurrences. The review identifies potential advantages of incorporating out-of-sequence data in earthquake mitigation techniques. These include improving the efficiency of structural design, enhancing infrastructure risk analysis, and developing more accurate early warning systems. By considering out-of-sequence thrust movement estimates, researchers and policymakers can make informed decisions to mitigate the impact of earthquakes. This study contributes to the field of seismic monitoring and earthquake risk assessment by highlighting the benefits of incorporating out-of-sequence thrust movement data. 
By broadening the scope of analysis beyond traditional techniques, researchers can enhance their knowledge of earthquake dynamics and improve the effectiveness of mitigation measures. The study collects data from various sources, including GPS measurements, satellite imagery, and seismic recordings. These datasets are then analyzed using appropriate statistical and computational techniques to estimate out-of-sequence thrust movement. The review integrates findings from multiple studies to provide a comprehensive assessment of the topic. The study concludes that incorporating out-of-sequence thrust movement data can significantly enhance earthquake mitigation measures. By utilizing diverse data sources, researchers and policymakers can gain a more comprehensive understanding of seismic dynamics and make informed decisions. However, challenges exist, such as data quality difficulties, modelling uncertainties, and computational complications. To address these obstacles and improve the accuracy of estimates, further research and advancements in methodology are recommended. Overall, this review serves as a valuable resource for researchers, engineers, and policymakers involved in earthquake mitigation, as it encourages the development of innovative strategies based on a better understanding of thrust movement dynamics.
Keywords: earthquake, out-of-sequence thrust, disaster, human life
Procedia PDF Downloads 81
9953 A Framework for Auditing Multilevel Models Using Explainability Methods
Authors: Debarati Bhaumik, Diptish Dey
Abstract:
Multilevel models, increasingly deployed in industries such as insurance, food production, and entertainment within functions such as marketing and supply chain management, need to be transparent and ethical. Applications usually result in binary classification within groups or hierarchies based on a set of input features. Using open-source datasets, we demonstrate that popular explainability methods, such as SHAP and LIME, consistently underperform in accuracy when interpreting these models. They fail to predict the order of feature importance, the magnitudes, and occasionally even the nature of the feature contribution (negative versus positive contribution to the outcome). Besides accuracy, the computational intractability of SHAP for binomial classification is a cause for concern. For transparent and ethical applications of these hierarchical statistical models, sound audit frameworks need to be developed. In this paper, we propose an audit framework for the technical assessment of multilevel regression models focusing on three aspects: (i) model assumptions & statistical properties, (ii) model transparency using different explainability methods, and (iii) discrimination assessment. To this end, we undertake a quantitative approach and compare intrinsic model methods with SHAP and LIME. The framework comprises a shortlist of KPIs, such as PoCE (Percentage of Correct Explanations) and MDG (Mean Discriminatory Gap) per feature, for each of these three aspects. A traffic light risk assessment method is furthermore coupled to these KPIs. The audit framework will assist regulatory bodies in performing conformity assessments of AI systems using multilevel binomial classification models at businesses. It will also benefit businesses deploying multilevel models to be future-proof and aligned with the European Commission's proposed Regulation on Artificial Intelligence.
Keywords: audit, multilevel model, model transparency, model explainability, discrimination, ethics
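The PoCE KPI is defined in the paper itself; one plausible simplified reading — the share of instances where an explainer recovers the true sign of a feature's contribution, as given by the intrinsic model coefficients — can be sketched as follows (the sign vectors below are invented for illustration):

```python
def poce(explained_signs, true_signs):
    """Percentage of Correct Explanations, read here as the share of
    instances where the explainer assigns the true sign to a feature's
    contribution. This is a simplification; the paper's exact KPI
    definition may differ."""
    correct = sum(1 for e, t in zip(explained_signs, true_signs) if e == t)
    return 100.0 * correct / len(true_signs)

# signs (+1 / -1) of one feature's contribution across five instances:
# what SHAP reported vs. what the intrinsic multilevel model implies
shap_signs = [1, 1, -1, 1, -1]
true_signs = [1, 1, 1, 1, -1]
score = poce(shap_signs, true_signs)
```

A per-feature PoCE below a chosen threshold would then map to amber or red in the traffic-light risk assessment.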
Procedia PDF Downloads 99
9952 Adapting Tools for Text Monitoring and for Scenario Analysis Related to the Field of Social Disasters
Authors: Svetlana Cojocaru, Mircea Petic, Inga Titchiev
Abstract:
Humanity is faced more and more often with different social disasters, which in turn can generate new accidents and catastrophes. To mitigate their consequences, it is important to obtain early signals about events that are occurring or may occur and to prepare the corresponding scenarios that could be applied. Our research is focused on solving two problems in this domain: identifying signals that an accident has occurred or may occur, and mitigating some consequences of disasters. To solve the first problem, methods of selecting and processing texts from the global Internet are developed. Information in Romanian is of special interest to us. In order to obtain the mentioned tools, we follow several steps, divided into a preparatory stage and a processing stage. Throughout the first stage, we manually collected over 724 news articles and classified them into 10 categories of social disasters, constituting more than 150 thousand words. Using this information, a controlled vocabulary of more than 300 keywords was elaborated, which will help in the classification and identification of texts related to the field of social disasters. To solve the second problem, the formalism of Petri nets has been used. We deal with the problem of inhabitants' evacuation in useful time. Analysis methods such as the reachability or coverability tree and the invariants technique are used to determine dynamic properties of the modeled systems. To perform a case study of the properties of the evacuation system extended by adding time, the analysis modules of PIPE, such as Generalized Stochastic Petri Net (GSPN) Analysis, Simulation, State Space Analysis, and Invariant Analysis, have been used. These modules helped us to obtain the average number of persons situated in the rooms and other quantitative properties and characteristics related to its dynamics.
Keywords: lexicon of disasters, modelling, Petri nets, text annotation, social disasters
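The token-game semantics underlying the evacuation model can be sketched with a toy net. The places, transitions and marking below are purely illustrative (a real study would use a timed/stochastic net in PIPE, as described above), but they show how tokens — here standing for persons — move through the net when transitions fire:

```python
def enabled(marking, pre):
    """A transition is enabled when every input place holds enough tokens."""
    return all(marking[p] >= n for p, n in pre.items())

def fire(marking, pre, post):
    """Fire a transition: consume tokens from input places, produce tokens
    in output places, returning the new marking."""
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

# toy evacuation net: tokens are persons, places are room/corridor/exit
marking = {"room": 3, "corridor": 0, "exit": 0}
move_out = ({"room": 1}, {"corridor": 1})      # leave the room
reach_exit = ({"corridor": 1}, {"exit": 1})    # reach the exit

while enabled(marking, move_out[0]):
    marking = fire(marking, *move_out)
while enabled(marking, reach_exit[0]):
    marking = fire(marking, *reach_exit)
```

Enumerating all markings reachable by such firings is exactly what the reachability-tree analysis mentioned in the abstract automates.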
Procedia PDF Downloads 201
9951 On Fault Diagnosis of Asynchronous Sequential Machines with Parallel Composition
Authors: Jung-Min Yang
Abstract:
Fault diagnosis of composite asynchronous sequential machines with parallel composition is addressed in this paper. An adversarial input can infiltrate one of the two submachines comprising the composite asynchronous machine, causing an unauthorized state transition. The objective is to characterize the condition under which the controller can diagnose any fault occurrence. Two control configurations, state feedback and output feedback, are considered. In the case of output feedback, exact estimation of the state is impossible since the current state is inaccessible and the output feedback is given in the form of bursts. A simple example is provided to demonstrate the proposed methodology.
Keywords: asynchronous sequential machines, parallel composition, fault diagnosis, corrective control
Procedia PDF Downloads 300
9950 Influential Factors Affecting the Creativity Scientific Problem Finding Ability of Social Science Ph.D. Students
Authors: Yuanyuan Song
Abstract:
For Ph.D. students, the skill of formulating incisive inquiries holds immense importance, as adept questioning can significantly unravel research complexities. Social science Ph.D. students should possess specific abilities to formulate creative research questions, and identifying the most influential factors is essential. To address these questions, in this study, we engaged Ph.D. candidates with social science backgrounds through interviews and questionnaires. Our objective was to identify the predominant determinants influencing their capacity to formulate inventive research queries, ultimately aiming to enhance the academic journey of social science doctoral candidates. Insights gleaned from semi-structured interviews and questionnaires with 15 doctoral scholars from different universities around the world highlighted that mentorship and scholarly exchanges, prior knowledge, a positive mindset, and personal interests play pivotal roles in catalyzing these students' contemplation of research inquiries.
Keywords: Ph.D. education, higher education, creativity cultivation, creativity scientific problem finding ability
Procedia PDF Downloads 73
9949 Thyroid Stimulating Hormone in Relation with Cardiometabolic and Metabolic Syndrome Risks among Obese Children
Authors: Mustafa Metin Donma
Abstract:
Thyroid dysfunction is a great health problem frequently observed in obesity. Thyroid stimulating hormone (TSH) governs the complicated network confined to glucose and fat metabolism. The close relations between obesity and the performance of TSH point to potential future health problems related to cardiometabolic risk (CMR) associated with cardiovascular diseases (CVDs) and metabolic syndrome (MetS). These matter particularly in childhood obesity. The aim of this study was to confirm, in the pediatric age group, the associations between TSH and CMR, which may lead to CVDs and MetS in adulthood, using the recently introduced cardiac and MetS indices. Three groups, obese (OB), morbid obese (MO) and metabolic syndrome (MetS), comprised forty-seven, ninety-two and thirty-six children, respectively. Informed consent forms were taken from parents or participants. The study protocol was approved by the Ethics Committee of the institution. Groups were constituted according to the WHO body mass index percentile tables prepared based on age and gender. These percentiles for the OB and MO groups were defined as between the 95th and 99th and above the 99th, respectively. The third group had MetS components. Anthropometric measurements and routine laboratory tests were performed. The Advanced Donma Cardiac Index (ADCI) and the Diagnostic Obesity Notation Model Assessment Metabolic Syndrome Index (DMetSI) were calculated, and statistical analysis was performed. The same concentrations were obtained in the three groups for each of the thyroid hormones triiodothyronine and thyroxine. The thyroid stimulating hormone level was higher in the MO than the OB group and in the MetS than the MO group. In the MetS group, increased values were obtained for ADCI and DMetSI compared to the values calculated for the MO group (p<0.001). In the same group, there were positive correlations between TSH and ADCI as well as DMetSI. No such correlation was observed in the OB or MO group.
The associations found between TSH and the two indices, ADCI and DMetSI, in the MetS group, but not in the OB or MO groups, suggest that consideration of TSH, as well as these two indices, during the evaluation of children from a MetS point of view may point out potential cardiometabolic risk and contribute much to the correct diagnosis of the syndrome.
Keywords: cardiometabolic, metabolic syndrome, obese children, thyroid stimulating hormone
Procedia PDF Downloads 12
9948 A Machine Learning Approach for Detecting and Locating Hardware Trojans
Authors: Kaiwen Zheng, Wanting Zhou, Nan Tang, Lei Li, Yuanhang He
Abstract:
The integrated circuit industry has become a cornerstone of the information society, finding widespread application in areas such as industry, communication, medicine, and aerospace. However, with the increasing complexity of integrated circuits, Hardware Trojans (HTs) implanted by attackers have become a significant threat to their security. In this paper, we propose a hardware trojan detection method for large-scale circuits. As HTs introduce physical characteristic changes, such as structure, area, and power consumption, as additional redundant circuits, we propose a machine-learning-based hardware trojan detection method based on the physical characteristics of gate-level netlists. This method transforms the hardware trojan detection problem into a machine-learning binary classification problem based on physical characteristics, greatly improving detection speed. To address the problem of imbalanced data, where the number of HT circuit samples is far less than that of pure circuit samples, we used the SMOTETomek algorithm to expand the dataset and further improve the performance of the classifier. We used three machine learning algorithms, K-Nearest Neighbors, Random Forest, and Support Vector Machine, to train and validate benchmark circuits on Trust-Hub, and all achieved good results. In our case studies based on AES encryption circuits provided by Trust-Hub, the test results showed the effectiveness of the proposed method. To further validate the method's effectiveness for detecting variant HTs, we designed variant HTs using open-source HTs. The proposed method guarantees robust detection accuracy with millisecond-level detection times for IC and FPGA design flows and has good detection performance for library variant HTs.
Keywords: hardware trojans, physical properties, machine learning, hardware security
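The classification formulation described above — each gate reduced to a vector of physical features, labeled trojan or clean — can be illustrated with a toy nearest-neighbour classifier. The feature names and values below are invented for illustration; the actual pipeline rebalances the data with SMOTETomek and trains KNN, Random Forest and SVM on real gate-level netlist features:

```python
def knn_classify(sample, train, k=3):
    """k-NN vote over gate-level physical features. This toy classifier only
    illustrates the binary-classification formulation, not the full method."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    neighbours = sorted(train, key=lambda tv: dist(sample, tv[0]))[:k]
    votes = sum(label for _, label in neighbours)
    return 1 if votes * 2 > k else 0  # 1 = trojan gate, 0 = clean gate

# hypothetical (fan-in, normalised switching activity, relative area) vectors
train = [((2, 0.90, 1.0), 0), ((3, 0.80, 1.1), 0), ((2, 0.85, 0.9), 0),
         ((4, 0.05, 1.6), 1), ((5, 0.02, 1.8), 1), ((4, 0.04, 1.7), 1)]
label = knn_classify((5, 0.03, 1.7), train)
```

Trojan gates typically show low switching activity (they wait for rare trigger conditions) and extra area, which is why such features separate the two classes in this toy example.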
Procedia PDF Downloads 153
9947 Investigation into Relationship between Spaced Repetitions and Problems Solving Efficiency
Authors: Sidharth Talan, Rajlakshmi G. Majumdar
Abstract:
Problem-solving skill is one of the few skills which professionals and academicians around the world constantly endeavor to improve in order to sustain themselves in an ever-growing competitive environment. This paper focuses on evaluating a hypothesized relationship between the problem-solving efficiency of an individual and spaced repetitions, conducted with a time interval of one day over a period of two weeks. The paper utilizes univariate regression analysis to assess the best-fit curve that can explain the relationship between the two variables. Anagram solving was chosen as the testing process for the analysis: since it involves rearranging a jumbled word to form a correct word, it is an efficient process for observing the attention span, visual-motor coordination and verbal ability of an individual. Based on the analysis for a sample population of 30, it was observed that the problem-solving efficiency of an individual, measured in terms of the score in each test, was significantly correlated with the time period measured in days.
Keywords: Anagrams, histogram plot, moving average curve, spacing effect
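The univariate regression at the core of the analysis can be sketched as follows, with hypothetical anagram scores standing in for the study's data; a clearly positive slope would indicate improvement across the daily spaced-repetition sessions:

```python
def fit_line(days, scores):
    """Ordinary least-squares fit of test score against session day."""
    n = len(days)
    mx = sum(days) / n
    my = sum(scores) / n
    sxx = sum((x - mx) ** 2 for x in days)
    sxy = sum((x - mx) * (y - my) for x, y in zip(days, scores))
    slope = sxy / sxx
    intercept = my - slope * mx
    return slope, intercept

# hypothetical anagram scores over 14 daily spaced-repetition sessions
days = list(range(1, 15))
scores = [4, 5, 5, 6, 7, 7, 8, 8, 9, 9, 10, 10, 11, 11]
slope, intercept = fit_line(days, scores)
```

In the study proper, the slope's statistical significance (not just its sign) across the sample of 30 participants is what supports the correlation claim.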
Procedia PDF Downloads 166
9946 Temporal and Spatio-Temporal Stability Analyses in Mixed Convection of a Viscoelastic Fluid in a Porous Medium
Authors: P. Naderi, M. N. Ouarzazi, S. C. Hirata, H. Ben Hamed, H. Beji
Abstract:
The stability of mixed convection in a Newtonian fluid medium heated from below and cooled from above, also known as the Poiseuille-Rayleigh-Bénard problem, has been extensively investigated in past decades. To our knowledge, mixed convection in porous media has received much less attention in the published literature. The present paper extends the mixed convection problem in porous media to the case of a viscoelastic fluid flow, owing to its numerous environmental and industrial applications such as the extrusion of polymer fluids, solidification of liquid crystals, suspension solutions, and petroleum activities. Without a superimposed through-flow, the natural convection problem of a viscoelastic fluid in a saturated porous medium has already been treated, as have the effects of the viscoelastic properties of the fluid on the linear and nonlinear dynamics of the thermoconvective instabilities. Consequently, the elasticity of the fluid can lead either to a Hopf bifurcation, giving rise to oscillatory structures in the strongly elastic regime, or to a stationary bifurcation in the weakly elastic regime. The objective of this work is to examine the influence of the main horizontal flow on the linear characteristics of these two types of instabilities. Under the Boussinesq approximation and Darcy's law extended to a viscoelastic fluid, a temporal stability approach shows that the conditions for the appearance of longitudinal rolls are identical to those found in the absence of through-flow. For general three-dimensional (3D) perturbations, a Squire transformation allows the complex frequencies associated with the 3D problem to be deduced from those obtained by solving the two-dimensional one.
The numerical resolution of the eigenvalue problem concludes that the through-flow has a destabilizing effect and selects a convective configuration organized in purely transversal rolls which oscillate in time and propagate in the direction of the main flow. In addition, using the mathematical formalism of absolute and convective instabilities, we study the nature of unstable three-dimensional disturbances. It is shown that for a non-vanishing through-flow, general three-dimensional instabilities are convectively unstable, which means that in the absence of a continuous noise source these instabilities are drifted outside the porous medium and no long-term pattern is observed. In contrast, purely transversal rolls may exhibit a transition to the absolute instability regime and therefore affect the porous medium everywhere, including in the absence of a noise source. The absolute instability threshold, the frequency, and the wave number associated with purely transversal rolls are determined as a function of the Péclet number and the viscoelastic parameters. Results are discussed and compared to those obtained from laboratory experiments in the case of Newtonian fluids.
Keywords: instability, mixed convection, porous media, viscoelastic fluid
Procedia PDF Downloads 343
9945 Hybrid Knowledge and Data-Driven Neural Networks for Diffuse Optical Tomography Reconstruction in Medical Imaging
Authors: Paola Causin, Andrea Aspri, Alessandro Benfenati
Abstract:
Diffuse Optical Tomography (DOT) is an emergent medical imaging technique which employs NIR light to estimate the spatial distribution of optical coefficients in biological tissues for diagnostic purposes, in a noninvasive and non-ionizing manner. DOT reconstruction is a severely ill-conditioned problem due to the prevalent scattering of light in the tissue. In this contribution, we present our research on adopting hybrid knowledge-driven/data-driven approaches which exploit the existence of well-assessed physical models and build neural networks upon them, integrating the availability of data. Namely, since in this context regularization procedures are mandatory to obtain a reasonable reconstruction [1], we explore the use of neural networks as tools to include prior information on the solution.
2. Materials and Methods
The idea underlying our approach is to leverage neural networks to solve PDE-constrained inverse problems of the form
q* = argmin_q D(y, ỹ), (1)
where D is a loss function which typically contains a discrepancy measure (or data fidelity) term plus other possible ad-hoc designed terms enforcing specific constraints. In the context of inverse problems like (1), one seeks the optimal set of physical parameters q, given the set of observations y. Moreover, ỹ is the computable approximation of y, which may be obtained from a neural network but also in a classic way via the resolution of a PDE with given input coefficients (forward problem, Fig. 1). Due to the severe ill-conditioning of the reconstruction problem, we adopt a two-fold approach: i) we restrict the solutions (optical coefficients) to lie in a lower-dimensional subspace generated by auto-decoder type networks; this procedure forms priors on the solution (Fig. 1); ii) we use regularization procedures of the type
q̂* = argmin_q D(y, ỹ) + R(q),
where R(q) is a regularization functional depending on regularization parameters which can be fixed a priori or learned via a neural network in a data-driven modality. To further improve the generalizability of the proposed framework, we also infuse physics knowledge via soft penalty constraints in the overall optimization procedure (Fig. 1).
3. Discussion and Conclusion
DOT reconstruction is severely hindered by ill-conditioning. The combined use of data-driven and knowledge-driven elements is beneficial and yields improved results, especially with a restricted dataset and in the presence of variable sources of noise.
Keywords: inverse problem in tomography, deep learning, diffuse optical tomography, regularization
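The regularized formulation q̂* = argmin_q D(y, ỹ) + R(q) can be sketched numerically with a toy linear forward model in place of the PDE and a fixed Tikhonov term R(q) = λ‖q‖² (the paper instead learns priors and regularization with neural networks); every name and value below is illustrative only.

```python
def forward(A, q):
    """Toy linear forward model y ≈ A q, standing in for the PDE solve."""
    return [sum(aij * qj for aij, qj in zip(row, q)) for row in A]

def reconstruct(A, y, lam=1e-3, lr=0.1, iters=5000):
    """Gradient descent on the objective ||A q - y||^2 + lam * ||q||^2."""
    n = len(A[0])
    q = [0.0] * n
    for _ in range(iters):
        r = [yh - yi for yh, yi in zip(forward(A, q), y)]  # residual A q - y
        grad = [2 * sum(A[i][j] * r[i] for i in range(len(y))) + 2 * lam * q[j]
                for j in range(n)]
        q = [qj - lr * gj for qj, gj in zip(q, grad)]
    return q

A = [[1.0, 0.2], [0.2, 1.0]]   # well-conditioned toy operator
q_true = [0.5, 1.5]
y = forward(A, q_true)         # synthetic "observations"
q_rec = reconstruct(A, y)
```

With a severely ill-conditioned A, the same loop without the λ term would amplify noise in y, which is why the abstract calls regularization mandatory for DOT.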
Procedia PDF Downloads 80
9944 Regional Problems of Electronic Governance in Autonomous Republic of Adjara
Authors: Manvelidze Irakli, Iashvili Genadi
Abstract:
Research has shown that public institutions in the Autonomous Republic of Adjara try their best to make their official electronic resources (web pages, social websites) more informative and to improve them. Some public institutions offer interesting electronic services and initiatives to the public, although these are seldom used in the communication process. Statistical analysis of the use of public institutions' web pages and social websites, for example their Facebook pages, shows a lack of activity. One reason could be that public institutions give people little possibility of interaction on official web pages. A second could be that these web pages are little known to the public, and a third that the heads of these institutions lack awareness of the necessity of strengthening citizens' involvement. In order to increase people's involvement in this process, it is necessary to have at least 23 e-services on one web page. The research has shown that 11 of the 16 public institutions have only 5 services, namely contact, social networks, and a hotline. Besides introducing innovative services, government institutions should evaluate them and make them popular and easily accessible to the public. It would be easier to solve this problem if public institutions had a concrete strategic plan of public relations covering the maximum usage of electronic services in interaction with citizens. At the moment, only one governmental body has a functioning public relations action plan. As a result of the research, organizational, social, methodological, and technical problems have been revealed. It should be considered that there are many feedback possibilities, such as forums, RSS, blogs, wikis, Twitter, social networks, etc.; usage of only one to three of such instruments indicates that there is no strategy of regional electronic governance.
It is necessary to develop more feedback mechanisms, which will increase electronic interaction and discussion, and to introduce an online petition service. It is important to reduce the so-called "digital inequality" and increase internet access for the public; state actions should mitigate such problems. If these shortcomings are addressed, the role of electronic interaction in democratic processes will increase.
Keywords: e-Government, electronic services, information technology, regional government
Procedia PDF Downloads 315
9943 Angle of Arrival Estimation Using Maximum Likelihood Method
Authors: Olomon Wu, Hung Lu, Nick Wilkins, Daniel Kerr, Zekeriya Aliyazicioglu, H. K. Hwang
Abstract:
Multiple Input Multiple Output (MIMO) radar has received increasing attention in recent years. MIMO radar has many advantages over conventional phased array radar, such as improved target detection, resolution enhancement, and interference suppression. In this paper, results are presented from a simulation study of MIMO Uniformly-Spaced Linear Array (ULA) antennas. The performance is investigated under varied parameters, including array size, Pseudo Random (PN) sequence length, number of snapshots, and Signal to Noise Ratio (SNR). The results of MIMO are compared to those of a traditional array antenna.
Keywords: MIMO radar, phased array antenna, target detection, radar signal processing
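For a single source on a uniform linear array, the maximum-likelihood angle estimate reduces, under white Gaussian noise, to maximizing the correlation between the received snapshot and the array steering vector over a grid of candidate angles. The sketch below uses our own illustrative parameters (8 elements, half-wavelength spacing, noiseless snapshot), not the paper's simulation setup.

```python
import cmath
import math

def steering(theta, n_elem, d_over_lambda=0.5):
    """ULA steering vector for arrival angle theta (radians)."""
    return [cmath.exp(-2j * math.pi * m * d_over_lambda * math.sin(theta))
            for m in range(n_elem)]

def ml_aoa(x, n_elem, grid):
    """Grid search maximizing |a(theta)^H x| (single-source ML under AWGN)."""
    return max(grid, key=lambda th: abs(sum(a.conjugate() * xi
                                            for a, xi in zip(steering(th, n_elem), x))))

n_elem = 8
true_theta = math.radians(20)
x = steering(true_theta, n_elem)                  # noiseless received snapshot
grid = [math.radians(t) for t in range(-90, 91)]  # 1-degree search grid
theta_hat = ml_aoa(x, n_elem, grid)
```

With half-wavelength spacing the correlation peak is unique over (-90°, 90°), so the grid search recovers the true angle; adding noise and multiple snapshots turns this into the averaged likelihood surface a full simulation study would sweep.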
Procedia PDF Downloads 546
9942 Undrained Bearing Capacity of Circular Foundations on Two Layered Clays
Authors: S. Benmebarek, S. Benmoussa, N. Benmebarek
Abstract:
Natural soils are often deposited in layers. Estimating the bearing capacity of such soil using conventional bearing capacity theory based on the properties of the upper layer introduces significant inaccuracies if the thickness of the top layer is comparable to the width of the foundation placed on the soil surface. In this paper, numerical computations using the FLAC code are reported to evaluate the effect of two clay layers on the bearing capacity beneath a rigid, rough circular footing subject to axial static load. The results of the parametric study are used to illustrate the sensitivity of the bearing capacity, the shape factor, and the failure mechanisms to the layer strengths and thicknesses.
Keywords: numerical modeling, circular footings, layered clays, bearing capacity, failure
Procedia PDF Downloads 501
9941 SC-LSH: An Efficient Indexing Method for Approximate Similarity Search in High Dimensional Space
Authors: Sanaa Chafik, Imane Daoudi, Mounim A. El Yacoubi, Hamid El Ouardi
Abstract:
Locality Sensitive Hashing (LSH) is one of the most promising techniques for solving the nearest neighbour search problem in high dimensional space. Euclidean LSH is the most popular variation of LSH and has been successfully applied in many multimedia applications. However, Euclidean LSH has limitations that affect both index structure and query performance. Its main limitation is large memory consumption: in order to achieve good accuracy, a large number of hash tables is required. In this paper, we propose a new hashing algorithm to overcome the storage space problem and improve query time, while keeping accuracy similar to that achieved by the original Euclidean LSH. Experimental results on a real large-scale dataset show that the proposed approach achieves good performance and consumes less memory than Euclidean LSH.
Keywords: approximate nearest neighbor search, content based image retrieval (CBIR), curse of dimensionality, locality sensitive hashing, multidimensional indexing, scalability
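The Euclidean LSH family discussed above hashes a vector v to ⌊(a·v + b)/w⌋ with Gaussian a and uniform b, and concatenates k such projections into one bucket key per hash table. A minimal sketch follows; the parameter values (w = 4, k = 4) are illustrative, not tuned.

```python
import math
import random

class EuclideanLSH:
    """One table's bucket key g(v) = (h_1(v), ..., h_k(v)), h_i(v) = floor((a_i.v + b_i) / w)."""

    def __init__(self, dim, w=4.0, k=4, seed=0):
        rng = random.Random(seed)
        self.w = w
        # k random Gaussian projection directions and uniform offsets
        self.a = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(k)]
        self.b = [rng.uniform(0, w) for _ in range(k)]

    def key(self, v):
        return tuple(math.floor((sum(ai * vi for ai, vi in zip(a, v)) + b) / self.w)
                     for a, b in zip(self.a, self.b))

lsh = EuclideanLSH(dim=16)
v = [0.1 * i for i in range(16)]
bucket = lsh.key(v)
```

Nearby vectors collide in at least one of L independent tables with high probability while distant ones rarely do; the memory cost the abstract targets comes precisely from keeping many such tables.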
Procedia PDF Downloads 324
9940 The Improved Laplace Homotopy Perturbation Method for Solving Non-integrable PDEs
Authors: Noufe H. Aljahdaly
Abstract:
The Laplace homotopy perturbation method (LHPM) is an approximate method that helps compute approximate solutions of partial differential equations. The method has been used to solve several problems in science. It requires an initial condition, so it solves initial value problems. In physics, when certain important terms are taken into account, we may obtain non-integrable partial differential equations that have no analytical integrals. Such PDEs have no exact solution; therefore, we need to compute the solution without an initial condition. In this work, we improved the LHPM to be able to solve non-integrable problems, especially damped PDEs, i.e., PDEs that include a damping term that makes them non-integrable. We improved the LHPM by setting the perturbation parameter and the embedding parameter to the damping parameter and by using the initial condition of the damped PDE as the initial condition of the non-damped PDE.
Keywords: non-integrable PDEs, modified Kawahara equation, Laplace homotopy perturbation method, damping term
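For reference, the standard homotopy construction underlying HPM (of which LHPM is the Laplace-transformed variant) embeds a problem L(u) + N(u) = f, with L linear and N nonlinear, in a family indexed by p ∈ [0, 1]. The form below is the generic textbook construction, not the authors' modified version, which additionally identifies the embedding and perturbation parameters with the damping parameter:

```latex
H(v, p) = (1 - p)\,\bigl[L(v) - L(u_0)\bigr] + p\,\bigl[L(v) + N(v) - f\bigr] = 0,
\qquad
v = v_0 + p\,v_1 + p^2 v_2 + \cdots,
\qquad
u = \lim_{p \to 1} v ,
```

so that p = 0 gives a trivially solvable problem built from the initial guess u₀ and p = 1 recovers the original equation; collecting powers of p yields the successive correction terms v₁, v₂, ….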
Procedia PDF Downloads 106
9939 Autonomous Rendezvous for Underactuated Spacecraft
Authors: Espen Oland
Abstract:
This paper presents a solution to the problem of autonomous rendezvous for a spacecraft equipped with one main thruster for translational control and three reaction wheels for rotational control. With fewer actuators than degrees of freedom, this constitutes an underactuated control problem, requiring a coupling between the translational and rotational dynamics to facilitate control. This paper shows how to obtain this coupling and applies the results to autonomous rendezvous between a follower spacecraft and a leader spacecraft. Additionally, since the thrust is constrained between zero and an upper bound, no negative force can be generated to slow the spacecraft down. A combined speed and attitude control logic is therefore created, divided into three main phases: 1) the orbital velocity vector is pointed towards the desired position and the thrust is used to obtain the desired speed; 2) during the coasting phase, the attitude is changed to facilitate deceleration using the main thruster; 3) the speed is decreased as the spacecraft reaches its desired position. The results are validated through simulations, showing the capabilities of the proposed approach.
Keywords: attitude control, spacecraft rendezvous, translational control, underactuated rigid body
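The three-phase speed-and-attitude logic can be sketched as a simple mode selector; the threshold names and numbers below are ours, purely illustrative of the structure described, not the paper's control law.

```python
def select_phase(dist_to_target, speed, v_desired, brake_distance):
    """Pick the control phase for a thrust-limited (0..max, no reverse) spacecraft."""
    if dist_to_target <= brake_distance:
        return 3  # decelerate: the main thruster now fires against the velocity vector
    if speed < v_desired:
        return 1  # burn: point the velocity vector at the target, build up speed
    return 2      # coast: reorient the attitude to prepare the braking burn

# a follower approaching from 100 m: burn, then coast, then brake
phases = [select_phase(d, s, v_desired=5.0, brake_distance=10.0)
          for d, s in [(100.0, 0.0), (60.0, 5.0), (8.0, 5.0)]]
```

Because thrust cannot be negative, phase 2 exists solely to flip the attitude so that phase 3's positive thrust acts as a brake, which is the coupling between rotational and translational dynamics the paper exploits.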
Procedia PDF Downloads 294
9938 An Improved Model of Estimation Global Solar Irradiation from in situ Data: Case of Oran Algeria Region
Authors: Houcine Naim, Abdelatif Hassini, Noureddine Benabadji, Alex Van Den Bossche
Abstract:
In this paper, two models to estimate the monthly average daily global radiation on a horizontal surface were applied to the site of Oran (35.38° N, 0.37° W). The first is a regression equation of the Angstrom type; the second was developed by the present authors, with some modifications suggested, using as input parameters astronomical parameters (latitude, longitude, and altitude) and meteorological parameters (relative humidity). The comparisons are made using the mean bias error (MBE), root mean square error (RMSE), mean percentage error (MPE), and mean absolute bias error (MABE). The comparison shows that the second model is closer to the experimental values than the Angstrom model.
Keywords: meteorology, global radiation, Angstrom model, Oran
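The four comparison statistics named above have standard definitions; the sketch below uses our own function names and made-up sample values, not the paper's data.

```python
def mbe(est, obs):
    """Mean bias error: positive means the model over-estimates on average."""
    return sum(e - o for e, o in zip(est, obs)) / len(obs)

def rmse(est, obs):
    """Root mean square error."""
    return (sum((e - o) ** 2 for e, o in zip(est, obs)) / len(obs)) ** 0.5

def mpe(est, obs):
    """Mean percentage error, in percent."""
    return 100.0 * sum((e - o) / o for e, o in zip(est, obs)) / len(obs)

def mabe(est, obs):
    """Mean absolute bias error."""
    return sum(abs(e - o) for e, o in zip(est, obs)) / len(obs)

# hypothetical monthly-average daily irradiation values, MJ/m^2/day
observed = [18.0, 20.0, 22.0]
modelled = [17.5, 20.5, 22.0]
stats = (mbe(modelled, observed), rmse(modelled, observed),
         mpe(modelled, observed), mabe(modelled, observed))
```

MBE exposes systematic over- or under-estimation that RMSE alone hides, which is why comparisons of irradiation models conventionally report both.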
Procedia PDF Downloads 239
9937 The Effectiveness of Humanoid Diagram Teaching Strategy on Retention Rate of Novice Nurses in Taiwan
Authors: Yung-Hui Tang, Yan-Chiou Ku, Li-Chi Huang
Abstract:
Aim: The aim of this study is to explore the effect of the Humanoid Diagram Teaching (HDT) strategy on novice nurses' care ability and retention rate. Methods: This was a quasi-experimental study of two concurrent groups with repeated measurements; the sample consisted of 24 novice nurses (12 each in the experimental and control groups) at a medical center in southern Taiwan. Both groups received the regular training program (nursing standard techniques and practices, concept maps, mini-CEX, CbD, and clinical education and training), and the experimental group additionally received the HDT program. The HDT strategy consists of drawing and discussing patients' body humanoid diagrams for 30 minutes each session, three times a week, continually for four weeks. The effectiveness of HDT was evaluated by mini-CEX, CbD, and clinical assessment, and by the retention rate at the 3rd and 6th months. Results: When the novice nurses' care ability was examined, only the CbD score in the control group improved by the 3rd month with a statistically significant difference, p = .003. The mini-CEX and CbD scores in the experimental group improved significantly in both the 1st and 3rd months, p = .00. Although the experimental group's mini-CEX and CbD scores were higher than the control group's, the difference was not significant, p > .05. The retention rate of the experimental group at the 3rd and 6th months was significantly higher than that of the control group, p < .05. Conclusions: The study reveals that the HDT strategy can help novice nurses learn, enhancing their knowledge, technical capability, analytical skills in case-based care, and retention. The HDT strategy can serve as an effective strategy in novice training for better nurse retention.
Keywords: humanoid diagram teaching strategy, novice nurse retention, teaching strategy for nurse retention, visual learning mode
Procedia PDF Downloads 175
9936 A Deterministic Approach for Solving the Hull and White Interest Rate Model with Jump Process
Authors: Hong-Ming Chen
Abstract:
This work considers the resolution of the Hull and White interest rate model with a jump process. A deterministic process is adopted to model the random behavior of interest rate variation as deterministic perturbations depending on time t. The Brownian-motion and jump uncertainties are represented by the piecewise-constant function w(t) and the point function θ(t), respectively. It is shown that the interest rate function and the yield function of the Hull and White interest rate model with jump process can be obtained by solving a nonlinear semi-infinite programming problem. A relaxed cutting-plane algorithm is then proposed for solving the resulting optimization problem. The method is calibrated on 3-month U.S. Treasury securities data and is used to analyze several effects on interest rate prices, including interest rate variability and the negative correlation between stock returns and interest rates. The numerical results illustrate that our approach generates yield functions with minimal fitting errors and small oscillation.
Keywords: optimization, interest rate model, jump process, deterministic
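For context, the stochastic model whose randomness the paper replaces with deterministic perturbations is the Hull-White short rate with a jump term, dr = (θ(t) − a·r) dt + σ dW + dJ. Below is a plain Euler-scheme simulation of that stochastic form, the opposite of the paper's deterministic approach, with all parameter values purely illustrative.

```python
import random

def simulate_hw_jump(r0, a, sigma, theta, T, n, lam, jump_size, seed=0):
    """Euler scheme for dr = (theta(t) - a*r) dt + sigma dW + jumps (Poisson rate lam)."""
    rng = random.Random(seed)
    dt = T / n
    r, path = r0, [r0]
    for i in range(n):
        t = i * dt
        dw = rng.gauss(0.0, dt ** 0.5)                       # Brownian increment
        jump = jump_size if rng.random() < lam * dt else 0.0  # thinned Poisson jump
        r += (theta(t) - a * r) * dt + sigma * dw + jump
        path.append(r)
    return path

# sanity path: no noise, no jumps, no mean reversion -> pure drift theta
drift_only = simulate_hw_jump(r0=0.02, a=0.0, sigma=0.0,
                              theta=lambda t: 0.01, T=1.0, n=100,
                              lam=0.0, jump_size=0.005)
```

Switching off σ and λ reduces the path to the deterministic skeleton, which is essentially the regime the paper's piecewise-constant w(t) and point function θ(t) parameterize before the semi-infinite program fits them to market data.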
Procedia PDF Downloads 162
9935 The Role of Logistics Services in Influencing Customer Satisfaction and Reviews in an Online Marketplace
Authors: Nafees Mahbub, Blake Tindol, Utkarsh Shrivastava, Kuanchin Chen
Abstract:
Online shopping has become an integral part of business today. Big players such as Amazon are setting the bar for delivery services, and many businesses are working to meet it. However, what happens if a seller underestimates or overestimates the delivery time? Does it translate into consumer comments, ratings, or lost sales? Although several prior studies have investigated the impact of poor logistics on customer satisfaction, the impact of underestimating delivery times has rarely been considered. This study uses real-time customer online purchase data to study the impact of missed delivery times on satisfaction.
Keywords: lost sales, delivery time, customer satisfaction, customer reviews
Procedia PDF Downloads 220
9934 Diffuse CO₂ Degassing to Study Blind Geothermal Systems: The Acoculco, Puebla (Mexico) Case Study
Authors: Mirna Guevara, Edgar Santoyo, Daniel Perez-Zarate, Erika Almirudis
Abstract:
The Acoculco caldera located in Puebla (Mexico) has been preliminarily identified as a blind hot-dry-rock geothermal system. Two drilled wells suggest the existence of temperatures above 300°C, and non-conventional tools are being applied to study this system. A comprehensive survey of soil-gas (CO₂) flux measurements (1,500 sites) was carried out during the dry seasons over almost two years (2015 and 2016). Isotopic analyses of δ¹³C in CO₂ were performed to discriminate the source of the CO₂ fluxes. The soil CO₂ flux measurements were made in situ by the accumulation chamber method, whereas gas samples for δ¹³C analysis were selectively collected from the accumulation chamber into evacuated gas vials via a septum. Two anomalous geothermal zones were identified as a result of these campaigns: Los Azufres (19°55'29.4'' N; 98°08'39.9'' W; 2,839 masl) and Alcaparrosa (19°55'20.6'' N; 98°08'38.3'' W; 2,845 masl). To elucidate the origin of the carbon in the soil CO₂ fluxes, the isotopic signature of δ¹³C was used. Graphical Statistical Analysis (GSA) and a three end-member mixing diagram were used to corroborate the presence of distinctive statistical samples and trends for the diffuse gas fluxes. The spatial and temporal distributions of the CO₂ fluxes were studied. High CO₂ emission rates up to 38,217 g/m²/d and 33,706 g/m²/d were measured for Los Azufres and Alcaparrosa, respectively, whereas the δ¹³C signatures ranged from -3.4 to -5.5 ‰ for both zones, confirming their magmatic origin. This study has provided a valuable framework to set the direction of further exploration campaigns in the Acoculco caldera. Acknowledgements: The authors acknowledge the funding received from the CeMIE-Geo P09 project (SENER-CONACyT).
Keywords: accumulation chamber method, carbon dioxide, diffusive degassing, geothermal exploration
Procedia PDF Downloads 267
9933 Seroprevalence of Hepatitis B and C among Healthcare Workers in Dutse Metropolis, Jigawa State, Nigeria
Authors: N. M. Sani, I. Bitrus, A. M. Sarki, N. S. Mujahid
Abstract:
Hepatitis is one of the neglected infectious diseases in sub-Saharan Africa, and most of the available data are based on blood donors. Health care workers (HCWs) often get infected as a result of their close contact with patients. A cross-sectional study was conducted to determine the prevalence of hepatitis B and C among this group of professionals, with a view to improving the quality of care to their patients. Hepatitis B and C infections pose a major public health problem worldwide. While infection is highest in the developing world, particularly Asia and sub-Saharan Africa, healthcare workers are at higher risk of acquiring blood-borne viral infections, particularly hepatitis B and C, which are mostly asymptomatic. This study aimed to determine the prevalence of hepatitis B and C infections and associated risk factors among health care workers in Dutse Metropolis, Jigawa State, Nigeria. A standard rapid immuno-chromatographic technique (rapid ELISA) was used to screen all sera for hepatitis B surface antigen (HBsAg) and hepatitis C viral antibody (HCVAb), respectively. Strips coated with antibodies and antigens to HBV and HCV, respectively, were removed from the foil and labeled according to the samples. Using a separate disposable pipette, 2 drops of plasma were added to each test strip and allowed to run across the absorbent pad. Results were read after 15 minutes. The prevalence of HBV and HCV infection in 100 healthcare workers was determined by testing the plasma collected from the clients during their normal checkup using HBsAg and HCVAb test strips. Results were subjected to statistical analysis using the chi-square test. The prevalence of HBV among HCWs was 19 out of 100 (19.0%) and that of HCV was 5 out of 100 (5.0%); in both cases, higher prevalence was observed among female nurses. It was also observed that all HCV-positive cases were recorded among nurses only.
The study revealed that nurses are at greater risk of contracting HBV and HCV due to their frequent contact with patients. It is therefore recommended that effective vaccination and other infection control measures be encouraged among healthcare workers.
Keywords: prevalence, hepatitis, viruses, healthcare workers, infection
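The chi-square analysis used above can be sketched on a 2×2 contingency table; the statistic below is the standard Pearson form, and the example counts are made up for illustration, not the study's data.

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 table [[a, b], [c, d]]."""
    (a, b), (c, d) = table
    n = a + b + c + d
    rows, cols = [a + b, c + d], [a + c, b + d]
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = rows[i] * cols[j] / n  # expected count under independence
            stat += (obs - expected) ** 2 / expected
    return stat

# e.g. HBsAg-positive vs negative, nurses vs other HCWs (hypothetical counts)
stat = chi_square_2x2([[12, 28], [7, 53]])
```

The statistic is compared against the chi-square distribution with 1 degree of freedom (critical value 3.84 at p = .05) to decide whether seropositivity is associated with professional group.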
Procedia PDF Downloads 363
9932 Application of Adaptive Particle Filter for Localizing a Mobile Robot Using 3D Camera Data
Authors: Maysam Shahsavari, Seyed Jamalaldin Haddadi
Abstract:
There are several methods to localize a mobile robot, such as relative, absolute, and probabilistic methods. In this paper, the particle filter is used because of its simple implementation and because it does not need to know the starting position. This method estimates the position of the mobile robot using a probability distribution, relying on a known map of the environment instead of predicting it. It then updates this estimate by reading input sensors and control commands. To receive information about the surrounding world, for example the distance to obstacles, a Kinect is used, which is much cheaper than a laser range finder. Finally, after explaining the adaptive particle filter method and its implementation in detail, we compare this method with dead reckoning and show that it is much more suitable for situations in which a map of the environment is available.
Keywords: particle filter, localization, methods, odometry, Kinect
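A one-dimensional sketch of the particle filter loop described above: predict with motion noise, weight by measurement likelihood against a known map (here a single wall at a known position stands in for a Kinect depth map), and resample. Everything here, including the toy world and parameter values, is our illustration rather than the paper's implementation.

```python
import math
import random

def pf_step(particles, control, z, wall, rng, motion_noise=0.3, meas_noise=0.5):
    """One predict-update-resample cycle for a robot on a line measuring range to a wall."""
    # predict: apply the control command plus motion noise to every particle
    moved = [p + control + rng.gauss(0.0, motion_noise) for p in particles]
    # update: weight each particle by the Gaussian likelihood of the range reading
    weights = [math.exp(-0.5 * ((wall - p - z) / meas_noise) ** 2) for p in moved]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # resample in proportion to weight
    return rng.choices(moved, weights=weights, k=len(moved))

rng = random.Random(0)
wall, true_pos = 10.0, 2.0
# no known starting position: particles spread uniformly over the map
particles = [rng.uniform(0.0, 10.0) for _ in range(300)]
for _ in range(5):
    z = wall - true_pos  # noiseless range reading, for simplicity
    particles = pf_step(particles, control=0.0, z=z, wall=wall, rng=rng)
estimate = sum(particles) / len(particles)
```

Starting from a uniform cloud, the weighting and resampling steps concentrate the particles near the true position, which is exactly why the method, unlike dead reckoning, needs no known starting pose.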
Procedia PDF Downloads 272