Search results for: complexity measurement
3818 Healthcare Big Data Analytics Using Hadoop
Authors: Chellammal Surianarayanan
Abstract:
The healthcare industry generates large amounts of data driven by various needs such as record keeping, physicians' prescriptions, medical imaging, sensor data, Electronic Patient Records (EPR), laboratory and pharmacy data, etc. Healthcare data is so big and complex that it cannot be managed by conventional hardware and software. The complexity of healthcare big data arises from the large volume of data, the velocity with which the data is accumulated, and its different varieties, such as structured, semi-structured and unstructured data. Despite this complexity, if the trends and patterns that exist within big data are uncovered and analyzed, higher quality healthcare can be provided at lower cost. Hadoop is an open source software framework for distributed processing of large data sets across clusters of commodity hardware using a simple programming model. The core components of Hadoop include the Hadoop Distributed File System, which offers a way to store large amounts of data across multiple machines, and MapReduce, which offers a way to process large data sets with a parallel, distributed algorithm on a cluster. The Hadoop ecosystem also includes various other tools such as Hive (a SQL-like query language), Pig (a higher-level query language for MapReduce), HBase (a columnar data store), etc. In this paper, an analysis has been done of how healthcare big data can be processed and analyzed using the Hadoop ecosystem.
Keywords: big data analytics, Hadoop, healthcare data, towards quality healthcare
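As an illustration of the MapReduce model described above (not code from the paper), the following is a minimal Hadoop Streaming sketch in Python that counts diagnosis frequencies across patient records; the tab-separated record layout (patient_id, diagnosis_code, ...) is a hypothetical assumption.

    #!/usr/bin/env python3
    # Minimal Hadoop Streaming sketch: run with
    #   hadoop jar hadoop-streaming.jar -mapper "wordcount.py map" -reducer "wordcount.py"
    import sys

    def mapper():
        # Emit (diagnosis_code, 1) for each input record on stdin.
        for line in sys.stdin:
            fields = line.rstrip("\n").split("\t")
            if len(fields) >= 2:
                print(f"{fields[1]}\t1")

    def reducer():
        # Hadoop sorts mapper output by key, so equal keys arrive consecutively.
        current, count = None, 0
        for line in sys.stdin:
            key, value = line.rstrip("\n").split("\t")
            if key != current:
                if current is not None:
                    print(f"{current}\t{count}")
                current, count = key, 0
            count += int(value)
        if current is not None:
            print(f"{current}\t{count}")

    if __name__ == "__main__":
        mapper() if sys.argv[1:] == ["map"] else reducer()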
Procedia PDF Downloads 413
3817 Pricing European Options under Jump Diffusion Models with Fast L-stable Padé Scheme
Authors: Salah Alrabeei, Mohammad Yousuf
Abstract:
The goal of option pricing theory is to help investors manage their money, enhance returns and control their financial future by theoretically valuing their options. Modeling option pricing with Black-Scholes models with jumps guarantees that market movement is taken into account. However, this model can only be solved by numerical methods. Furthermore, not all numerical methods are efficient for solving these models, because the payoffs are nonsmooth or have discontinuous derivatives at the exercise price. In this paper, the exponential time differencing (ETD) method is applied to solve the partial integro-differential equations arising in pricing European options under Merton's and Kou's jump-diffusion models. The Fast Fourier Transform (FFT) algorithm is used as a matrix-vector multiplication solver, which reduces the complexity from O(M²) to O(M log M). A partial fraction form of Padé schemes is used to overcome the complexity of inverting polynomials of matrices. These two tools guarantee efficient and accurate numerical solutions. We construct a parallel and easy-to-implement version of the numerical scheme. Numerical experiments are given to show how fast and accurate our scheme is.
Keywords: integro-differential equations, L-stable methods, pricing European options, jump-diffusion model
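The O(M log M) matrix-vector product mentioned above relies on the discretized jump integral producing a Toeplitz matrix, which can be embedded in a circulant matrix and applied with FFTs. A minimal sketch of that standard trick (an assumption about the discretization, not the paper's code):

    import numpy as np

    def toeplitz_matvec(c, r, x):
        # Multiply an m-by-m Toeplitz matrix T (first column c, first row r,
        # with c[0] == r[0]) by x in O(m log m) via circulant embedding.
        m = len(x)
        col = np.concatenate([c, [0.0], r[:0:-1]])   # circulant first column
        y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(x, 2 * m))
        return y[:m]

    # Quick check against an explicitly built Toeplitz matrix.
    rng = np.random.default_rng(0)
    m = 6
    c, r = rng.normal(size=m), rng.normal(size=m)
    r[0] = c[0]
    T = np.array([[c[i - j] if i >= j else r[j - i] for j in range(m)]
                  for i in range(m)])
    x = rng.normal(size=m)
    assert np.allclose(T @ x, toeplitz_matvec(c, r, x).real)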
Procedia PDF Downloads 151
3816 Performance Evaluation of the CareSTART S1 Analyzer for Quantitative Point-Of-Care Measurement of Glucose-6-Phosphate Dehydrogenase Activity
Authors: Haiyoung Jung, Mi Joung Leem, Sun Hwa Lee
Abstract:
Background & Objective: Glucose-6-phosphate dehydrogenase (G6PD) deficiency is a genetic abnormality that results in an inadequate amount of G6PD, leading to increased susceptibility of red blood cells to reactive oxygen species and hemolysis. The present study aimed to evaluate the careSTART™ S1 analyzer for measuring the ratio of G6PD activity to hemoglobin (Hb). Methods: Precision for G6PD activity and hemoglobin measurement was evaluated using control materials at two levels on five repeated runs per day for five days. The analytic performance of the careSTART™ S1 analyzer was compared with spectrophotometry in 40 patient samples. Reference ranges suggested by the manufacturer were validated in 20 healthy males and 20 healthy females. Results: The careSTART™ S1 analyzer demonstrated precision of 6.0% for the low-level control (14~45 U/dL) and 2.7% for the high-level control (60~90 U/dL) in G6PD activity, and 1.4% in hemoglobin (7.9~16.3 g/dL). A comparison study of the G6PD-to-Hb ratio between the careSTART™ S1 analyzer and spectrophotometry showed an average difference of 29.1%, with a positive bias for the careSTART™ S1 analyzer. All normal samples from the healthy population validated the suggested reference ranges for males (≥2.19 U/g Hb) and females (≥5.83 U/g Hb). Conclusion: The careSTART™ S1 analyzer demonstrated good analytical performance and can replace the current spectrophotometric measurement of G6PD enzyme activity. From the perspective of clinical laboratory management, it is a reasonable option as a point-of-care analyzer, with minimal handling of samples and reagents, and its automatic calculation of the ratio of measured G6PD activity to Hb concentration minimizes the clerical errors involved in manual calculation.
Keywords: POCT, G6PD, performance evaluation, careSTART
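The precision figures quoted above are coefficients of variation over the 5 runs/day × 5 days protocol. A minimal sketch of the overall %CV computation, with hypothetical control values rather than the study's raw data (a full CLSI EP05-style analysis would additionally separate within-run and between-day components):

    import statistics

    def percent_cv(values):
        # Overall %CV: standard deviation as a percentage of the mean.
        return 100.0 * statistics.stdev(values) / statistics.mean(values)

    # Hypothetical low-level control results (U/dL) pooled across runs and days.
    low_control = [30.1, 28.7, 31.4, 29.8, 30.6, 28.9, 30.2, 31.0, 29.5, 30.8]
    print(f"CV = {percent_cv(low_control):.1f}%")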
Procedia PDF Downloads 64
3815 Mending Broken Fences Policing: Developing the Intelligence-Led/Community-Based Policing Model (IP-CP) and Quality/Quantity/Crime (QQC) Model
Authors: Anil Anand
Abstract:
Despite enormous strides made during the past decade, particularly with the adoption and expansion of community policing, there remains much that police leaders can do to improve police-public relations. The urgency is particularly evident in cities across the United States and Europe, where an increasing number of police interactions over the past few years have ignited large, sometimes even national, protests against police policy and strategy, highlighting a gap between what police leaders feel they have achieved in terms of public satisfaction, support, and legitimacy and the perception of bias among many marginalized communities. The decision on which policing strategy is chosen over another, how many resources are allocated, and how strenuously the policy is applied resides primarily with the police and the units and subunits tasked with its enforcement. The scope and opportunity police officers have to impact social attitudes and social policy are important elements that cannot be overstated. How do police leaders, for instance, decide when to apply one strategy, say community-based policing, over another, like intelligence-led policing? How do police leaders measure performance and success? Should these measures be based on quantitative preferences over qualitative, or should the preference be based on some other criteria? And how do police leaders define, allow, and control discretionary decision-making? Mending Broken Fences Policing provides police and security services leaders with a model based on social cohesion that incorporates intelligence-led and community policing (IP-CP), supplemented by a quality/quantity/crime (QQC) framework, to provide a four-step process for the articulable application of police intervention, performance measurement, and application of discretion.
Keywords: social cohesion, quantitative performance measurement, qualitative performance measurement, sustainable leadership
Procedia PDF Downloads 295
3814 Investigating a Modern Accident Analysis Model for Textile Building Fires through Numerical Reconstruction
Authors: Mohsin Ali Shaikh, Weiguo Song, Rehmat Karim, Muhammad Kashan Surahio, Muhammad Usman Shahid
Abstract:
Fire investigations face challenges due to the complexity of fire development, and real-world accidents lack repeatability, making it difficult to apply standardized approaches. The unpredictable nature of fires and the unique conditions of each incident contribute to this complexity, requiring innovative methods and tools for effective analysis and reconstruction. This study proposes a modern accident analysis model based on numerical reconstruction for fire investigation in textile buildings. The method employs computer simulation to enhance the overall effectiveness of textile-building investigations. The materials and evidence collected from past incidents are used to reconstruct fire occurrences, progressions, and catastrophic processes. The approach is demonstrated through a case study involving a tragic textile factory fire in Karachi, Pakistan, which claimed 257 lives. The reconstruction method proves invaluable for determining fire origins, assessing losses, establishing accountability and, significantly, providing preventive insights for complex fire incidents.
Keywords: fire investigation, numerical simulation, fire safety, fire incident, textile building
Procedia PDF Downloads 65
3813 Application Research of Stilbene Crystal for the Measurement of Accelerator Neutron Sources
Authors: Zhao Kuo, Chen Liang, Zhang Zhongbing, Ruan Jinlu, He Shiyi, Xu Mengxuan
Abstract:
Stilbene (C₁₄H₁₂) is well known as one of the most useful organic scintillators for the pulse shape discrimination (PSD) technique because of its good scintillation properties. An on-line acquisition system was developed with several CAMAC-standard plug-ins, NIM plug-ins and a neutron/γ discrimination plug-in (model 2160A), and an off-line acquisition system was built around a digital oscilloscope with a high sampling rate; both used stilbene crystals coupled to photomultiplier tube (PMT) detectors for accelerator neutron source measurements carried out at the China Institute of Atomic Energy. Pulse amplitude spectra and charge amplitude spectra were recorded in real time after good neutron/γ discrimination, with best PSD figures-of-merit (FoMs) of 1.756 for the D-D accelerator neutron source and 1.393 for the D-T accelerator neutron source. As observed by the on-line acquisition system, after subtracting the scattering background, the probability of neutron events among total events was 80% and the neutron detection efficiency was 5.21% for the D-D accelerator neutron source; the corresponding values were 50% and 1.44% for the D-T accelerator neutron source. Pulse waveform signals were acquired randomly by the off-line acquisition system while the on-line acquisition system was working. The PSD FoMs obtained by the off-line acquisition system after waveform digitization and off-line processing with the charge integration method were 2.158 for the D-D source and 1.802 for the D-T source, for just 1000 pulses. In addition, the probabilities of neutron events among total events obtained by the off-line acquisition system matched very well with those of the on-line acquisition system. The pulse information recorded by the off-line acquisition system could be reused to adjust the parameters or methods of PSD research and to obtain neutron charge amplitude spectra or pulse amplitude spectra after digital analysis with a limited number of pulses. The off-line acquisition system showed equivalent or better measurement performance compared with the on-line system with a limited number of pulses, which indicates a feasible method, based on stilbene crystal detectors, for measuring neutron sources that, like prompt accelerator neutron sources, emit a large number of neutrons in a short time.
Keywords: stilbene crystal, accelerator neutron source, neutron/γ discrimination, figure-of-merit, CAMAC, waveform digitization
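A minimal sketch of the charge integration method and the FoM computation referenced above; the integration windows and the Gaussian-FWHM assumption are illustrative choices, not the paper's calibrated settings.

    import numpy as np

    def psd_ratio(pulse, gate_start, tail_start, gate_end):
        # Charge-integration PSD: fraction of the pulse charge in the tail window.
        baseline = np.mean(pulse[:gate_start])
        p = pulse - baseline
        total = np.sum(p[gate_start:gate_end])
        tail = np.sum(p[tail_start:gate_end])
        return tail / total

    def figure_of_merit(gamma_ratios, neutron_ratios):
        # FoM = peak separation / (FWHM_gamma + FWHM_neutron),
        # assuming roughly Gaussian ratio distributions (FWHM ≈ 2.355 σ).
        fwhm = lambda x: 2.355 * np.std(x)
        sep = abs(np.mean(neutron_ratios) - np.mean(gamma_ratios))
        return sep / (fwhm(gamma_ratios) + fwhm(neutron_ratios))

Neutron pulses in stilbene carry relatively more light in the slow tail than γ pulses, so the ratio distributions separate into two peaks and the FoM quantifies that separation.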
Procedia PDF Downloads 187
3812 Analysis of the Acoustic Performance of Vertical Internal Seals with PET Wool per NBR 15.575-4 in the Green Towers Building, DF
Authors: Lucas Aerre, Wallesson Faria, Roberto Pimentel, Juliana Santos
Abstract:
Noise is an extremely disturbing and irritating element in the lives of people and organizations, and its consequences are closely connected with human health as well as with financial and economic aspects. In order to improve the efficiency of buildings in Brazil in general, a performance standard, NBR 15.575, was created, in which all buildings are viewed in a more systemic and particular way while following the requirements of the standard. The acoustic performance of these buildings is one such requirement. Based on this, the present work was elaborated with the objective of evaluating, through acoustic measurements, the acoustic performance of vertical internal seals subjected to airborne noise in a building in the city of Brasília-DF. A short theoretical basis is given, after which the measurement procedures are described following the control method established by the standard, and the results are evaluated according to its parameters. The measurement performed between rooms of the same unit presented a standardized sound pressure level difference (DnT,w) equal to 40 dB, thus being classified within the minimum performance required by the standard in question.
Keywords: airborne noise, performance standard, soundproofing, vertical seal
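For reference, the standardized level difference behind the DnT,w figure quoted above is computed per frequency band before the single-number weighting. A minimal sketch with hypothetical levels (the ISO 717-1 curve-fitting step that produces the weighted DnT,w is not shown):

    import math

    def standardized_level_difference(l_source, l_receiving, rev_time, t0=0.5):
        # DnT = (L1 - L2) + 10*log10(T/T0) for one frequency band,
        # with T0 = 0.5 s the reference reverberation time.
        return (l_source - l_receiving) + 10.0 * math.log10(rev_time / t0)

    # Hypothetical single-band example: 62 dB source room, 30 dB receiving
    # room, measured reverberation time 0.6 s.
    print(f"DnT = {standardized_level_difference(62.0, 30.0, 0.6):.1f} dB")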
Procedia PDF Downloads 297
3811 Temperament and Psychopathology in Children of Patients Suffering from Schizophrenia
Authors: Rushi Naaz, Diksha Suchdeva
Abstract:
Background: Temperament is a very important aspect of functioning that needs to be understood in children of patients suffering from schizophrenia. Children of parents with a mental disorder have a substantially increased risk of psychiatric illness and may exhibit a range of problems, from minor variations in temperament and adjustment to manifest psychiatric disorder. Method: A case-control study was conducted to examine the temperament characteristics and psychopathology of children of patients suffering from schizophrenia as compared to those of healthy controls. Both groups were evaluated on the Temperament Measurement Schedule and the Childhood Psychopathology Measurement Schedule. Results: The results showed that children of patients suffering from schizophrenia were withdrawing, less adaptable, less sociable and had a lower activity level than children of healthy parents. However, on the measure of psychopathology, no significant difference was found. Conclusion: Since temperament can be identified at an early age, children at risk for the disorder could be identified early enough for possible primary intervention.
Keywords: children, childhood psychopathology, parental psychopathology, psychiatric disorders, schizophrenia, temperament
Procedia PDF Downloads 372
3810 An Approach on Intelligent Tolerancing of Car Body Parts Based on Historical Measurement Data
Authors: Kai Warsoenke, Maik Mackiewicz
Abstract:
To achieve a high quality of assembled car body structures, tolerancing is used to ensure the geometric accuracy of the single car body parts. There are two main techniques to determine the required tolerances. The first is tolerance analysis, which describes the influence of individually toleranced input values on a required target value. The second is tolerance synthesis, which determines the location of individual tolerances to achieve a target value. Both techniques are based on classical statistical methods, which assume certain probability distributions. To ensure competitiveness in both saturated and dynamic markets, production processes in vehicle manufacturing must be flexible and efficient. The dimensional specifications selected for the individual body components and the resulting assemblies have a major influence on the quality of the process, for example, in the manufacturing of forming tools as operating equipment, or at the higher level of car body assembly. As part of metrological process monitoring, manufactured individual parts and assemblies are recorded, and the measurement results are stored in databases. They serve as information for the temporary adjustment of the production processes and are interpreted by experts in order to derive suitable adjustment measures. In the production of forming tools, this means that time-consuming and costly changes to the tool surface have to be made, while in the body shop, uncertainties that are difficult to control result in cost-intensive rework. The stored measurement results are not used to intelligently design tolerances in future processes or to support temporary decisions based on real-world geometric data. They offer potential to extend tolerancing methods through data analysis and machine learning models. The purpose of this paper is to examine real-world measurement data from individual car body components, as well as assemblies, in order to develop an approach for using the data in short-term actions and future projects. For this reason, the measurement data are first analyzed descriptively in order to characterize their behavior and to determine possible correlations. Following this, a database is created that is suitable for developing machine learning models. The objective is to create an intelligent way to determine the position and number of measurement points as well as the local tolerance range. For this, a number of different model types are compared and evaluated, and the models with the best results are used to optimize equally distributed measuring points on unknown car body part geometries and to assign tolerance ranges to them. This investigation is still in progress. However, there are areas of the car body parts which behave more sensitively compared to the overall part, indicating that intelligent tolerancing is useful here in order to design and control preceding and succeeding processes more efficiently.
Keywords: automotive production, machine learning, process optimization, smart tolerancing
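A minimal sketch of the model-comparison step described above, with entirely hypothetical features (synthetic geometric descriptors of a measurement point) standing in for the real measurement database:

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    # Hypothetical training set: geometric features of a measurement point
    # against the observed local deviation spread used to set a tolerance range.
    rng = np.random.default_rng(42)
    X = rng.normal(size=(200, 3))
    y = 0.4 * np.abs(X[:, 0]) + 0.2 * X[:, 1] ** 2 + rng.normal(0.0, 0.05, 200)

    for model in (LinearRegression(), RandomForestRegressor(random_state=0)):
        scores = cross_val_score(model, X, y, cv=5, scoring="r2")
        print(type(model).__name__, round(scores.mean(), 3))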
Procedia PDF Downloads 116
3809 The Material-Process Perspective: Design and Engineering
Authors: Lars Andersen
Abstract:
The development of design and engineering in large construction projects is characterized by an increasing flattening out of formal structures, extended use of parallel and integrated processes ('Integrated Concurrent Engineering'), and an increased number of expert disciplines. The integration process is based on ongoing collaborations, dialogues, intercommunication and comments on each other's work (iterations). This process, based on reciprocal communication between actors and disciplines, triggers value creation. However, communication between equals is not in itself sufficient to create effective decision-making. The complexity of the process and time pressure contribute to an increased risk of a deficit of decisions and loss of process control. The paper refers to a study that aims at developing a resilient decision-making system that does not come into conflict with communication processes based on equality between the disciplines in the process. The study includes the construction of a hospital, following the phases of design, engineering and physical building. The research method is a combination of formative process research, process tracking and phenomenological analyses. The study tracked challenges and problems in the building process back to the projection substrates (drawings and models) and further to the organization of the engineering and design phase. A comparative analysis of traditional and new ways of organizing the projecting made it possible to uncover an implicit material order or structure in the process. This uncovering implied the development of a material-process perspective. According to this perspective, the complexity of the process is rooted in material-functional differentiation. This differentiation presupposes a structuring material (the skeleton of the building) that coordinates the other types of material. Each expert discipline's competence is related to one or a set of materials. The architect, the consulting structural engineer, etc., have their competencies related to the structuring material and, inherent in this, coordination competence. When dialogues between the disciplines concerning the coordination between them do not result in agreement, the disciplines with responsibility for the structuring material decide the interface issues. Based on these premises, this paper develops a self-organized, expert-driven, interdisciplinary decision-making system.
Keywords: collaboration, complexity, design, engineering, materiality
Procedia PDF Downloads 221
3808 On Board Measurement of Real Exhaust Emission of Light-Duty Vehicles in Algeria
Authors: R. Kerbachi, S. Chikhi, M. Boughedaoui
Abstract:
The study presents an analysis of the Algerian vehicle fleet and the resulting emissions. Emission measurements of air pollutants emitted by road transportation (CO, THC, NOx and CO2) were conducted on 17 light-duty vehicles in real traffic. This sample is representative of Algerian light vehicles in terms of fuel quality (gasoline, diesel and liquefied petroleum gas) and technology (injection system and emission control). The experimental methodology for measuring unit emissions of vehicles in real traffic is based on the use of a mini constant volume sampler (mini-CVS) for gas sampling and a set of gas analyzers for CO2, CO, NOx and THC, with instrumentation to measure kinematics, gas temperature and pressure. The apparatus is also equipped with data logging and data transfer instruments. The results were compared with the database of European light vehicles (Artemis). It was shown that liquefied petroleum gas (LPG) injection technology has a significant impact on air pollutant emissions. With the exception of nitrogen oxide compounds, uncatalyzed LPG vehicles are more effective in reducing unit emissions of air pollutants than uncatalyzed gasoline vehicles. LPG performance seems to be lower under real driving conditions than expected on a chassis dynamometer. On the other hand, the results show that uncatalyzed gasoline vehicles emit high levels of carbon monoxide and nitrogen oxides. Overall, and in the absence of standards in Algeria, unit emissions are much higher than Euro 3. The enforcement of pollutant emission standards in developing countries is an important step towards introducing cleaner technology and reducing vehicular emissions.
Keywords: on-board measurements of unit emissions of CO, HC, NOx and CO2, light vehicles, mini-CVS, LPG fuel, Artemis, Algeria
Procedia PDF Downloads 275
3807 A Systems Approach to Modelling Emergent Behaviour in Maritime Control Systems Using the Composition, Environment, Structure, and Mechanisms Metamodel
Authors: Odd Ivar Haugen
Abstract:
Society increasingly relies on complex systems whose behaviour is determined not by the properties of each part, but by the interactions between them. The behaviour of such systems is emergent. Modelling emergent system behaviour requires a systems approach that incorporates the concepts necessary to determine such behaviour. The CESM metamodel is a model of system models. A set of system models needs to address the elements of CESM at different levels of abstraction to be able to model the behaviour of a complex system. Modern ships contain a great deal of sophisticated equipment, often accompanied by a local safety system to protect its integrity. These control systems are then connected into a larger integrated system in order to achieve the ship's objective or mission. The integrated system becomes what is commonly known as a system of systems, which can be termed a complex system. Examples of such complex systems are the dynamic positioning system and the power management system. Three ship accidents are provided as examples of how system complexity may contribute to accidents. The three accidents are then discussed in terms of how the Multi-Level/Multi-Model Safety Analysis might catch scenarios such as those leading to the accidents described.
Keywords: emergent properties, CESM metamodel, multi-level/multi-model safety analysis, safety, system complexity, system models, systems thinking
Procedia PDF Downloads 5
3806 Evaluation of Egg Quality Parameters in the Isa Brown Line in Intensive Production Systems in the Ocaña Region, Norte de Santander
Authors: Meza-Quintero Myriam, Lobo Torrado Katty Andrea, Sanchez Picon Yesenia, Hurtado-Lugo Naudin
Abstract:
The objective of the study was to evaluate the internal and external quality of eggs in three production housing systems for laying birds of the Isa Brown line (floor, cage, and grazing) in the laying period between weeks 35 and 41. 135 hens distributed in 3 treatments of 45 birds per repetition were used (the replicas were the seven weeks of the trial). The feed supplied in the floor and cage systems contained 114 g/bird/day; in the grazing system, 14 grams less concentrate was provided. Nine eggs were collected to be studied and analyzed in the animal nutrition laboratory (3 eggs per housing system). A random statistical model was implemented: for the statistical analysis of the data, IBM® SPSS statistical software version 23 was used. The evaluation and follow-up instruments were a vernier caliper for measurements in millimeters, a YolkFan™ 16 from Roche DSM for the evaluation of egg yolk pigmentation, a digital scale for measurements in grams, a micrometer for measurements in millimeters, and laboratory evaluation using dry matter, ashes, and ethereal extract. The results showed no significant statistical differences (P-value > 0.05) for egg size (0.04 ± 3.55), shell thickness (0.46 ± 3.55), albumen weight (0.18 ± 3.55), albumen height (0.38 ± 3.55), yolk weight (0.64 ± 3.55), yolk height (0.54 ± 3.55), or yolk pigmentation (1.23 ± 3.55). It was concluded that the hens in the three production systems (floor, cage, and grazing) did not show significant statistical differences in the internal and external quality of the egg for the parameters studied.
Keywords: biological, territories, genetic resource, egg
Procedia PDF Downloads 80
3805 Enhancing Disaster Response Capabilities in Asia-Pacific: An Explorative Study Applied to Decision Support Tools for Logistics Network Design
Authors: Giuseppe Timperio, Robert de Souza
Abstract:
Logistics operations in the context of disaster response are characterized by a high degree of complexity due to the combined effect of a large number of stakeholders involved, time pressure, uncertainties at various levels, massive deployment of goods and personnel, and a gigantic financial flow to be managed. It also requires several autonomous parties, such as government agencies, militaries, NGOs, UN agencies, and the private sector, to name a few, to adopt a highly collaborative approach, especially in the critical phase of the immediate response. This is particularly true in the context of L3 emergencies, the most severe, large-scale humanitarian crises. Decision-making processes in disaster management are thus extremely difficult due to the presence of multiple decision-makers and the complexity of the tasks being tackled. Hence, in this paper, we look at applying ICT-based solutions to enable speedy and effective decision-making in the golden window of humanitarian operations. A high-level view of ICT-based solutions in the context of logistics operations for humanitarian response in Southeast Asia is presented, and their viability in a real-life case about logistics network design is explored.
Keywords: decision support, disaster preparedness, humanitarian logistics, network design
Procedia PDF Downloads 167
3804 The Impact of ISO 9001 Certification on Brazilian Firms' Performance: Insights from Multiple Case Studies
Authors: Matheus Borges Carneiro, Fabiane Leticia Lizarelli, José Carlos De Toledo
Abstract:
The evolution of quality management by companies was strongly enabled by, among other things, ISO 9001 certification, which is considered a crucial requirement by several customers. Likewise, performance measurement provides useful insights for companies to identify the reflection of their decision-making process in their improvement. One of the most widely used performance measurement models is the balanced scorecard (BSC), which uses four perspectives to address a firm's performance: financial, internal process, customer satisfaction, and learning and growth. Studies related to ISO 9001 and business performance have mostly adopted a quantitative approach to identify the standard's causal effect on a firm's performance. However, to verify how this influence may occur, an in-depth analysis within a qualitative approach is required. Therefore, this paper aims to verify the impact of ISO 9001:2015 on Brazilian firms' performance from the balanced scorecard perspective. Nine certified companies located in the Southeast region of Brazil were studied through a multiple case study approach. Within this study, it was possible to identify the positive impact of ISO 9001 on firms' overall performance, and four critical success factors (CSFs) were identified as relevant to the linkage between ISO 9001 and firms' performance: employee involvement, top management, process management, and customer focus. Due to the COVID-19 pandemic, the interviews were limited to the quality manager specialists, and the sample was limited since several companies were closed during the period of the study. This study thus presents an in-depth analysis of the relationship between ISO 9001 certification and firms' performance in a developing country.
Keywords: balanced scorecard, Brazilian firms' performance, critical success factors, ISO 9001 certification, performance measurement
Procedia PDF Downloads 198
3803 A Mixed Integer Programming Model for Optimizing the Layout of an Emergency Department
Authors: Farhood Rismanchian, Seong Hyeon Park, Young Hoon Lee
Abstract:
In recent years, demand for healthcare services has dramatically increased. As the demand for healthcare services increases, so does the necessity of constructing new healthcare buildings and redesigning and renovating existing ones. Increasing demands necessitate the use of optimization techniques to improve the overall service efficiency in healthcare settings. However, the high complexity of care processes remains the major challenge to accomplishing this goal. This study proposes a method based on process mining results to address the high complexity of care processes and to find the optimal layout of the various medical centers in an emergency department. The ProM framework is used to discover clinical pathway patterns and relationships between activities. The sequence clustering plug-in is used to remove infrequent events and to derive the process model in the form of a Markov chain. The process mining results serve as input for the next phase, which consists of the development of the optimization model. A comparison of the current ED design with the one obtained from the proposed method indicated that a carefully designed layout can significantly decrease the distances that patients must travel.
Keywords: mixed integer programming, facility layout problem, process mining, healthcare operations management
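To illustrate the kind of process-mining output that feeds the layout model, here is a minimal sketch estimating a first-order Markov chain of patient flow between ED activities from event-log traces; the trace data and activity names are hypothetical, and the study itself uses ProM's sequence clustering rather than this simplified count-and-normalize step.

    from collections import defaultdict

    def markov_transitions(traces):
        # Count activity-to-activity transitions across all patient traces,
        # then normalize each row into transition probabilities.
        counts = defaultdict(lambda: defaultdict(int))
        for trace in traces:
            for a, b in zip(trace, trace[1:]):
                counts[a][b] += 1
        probs = {}
        for a, nxt in counts.items():
            total = sum(nxt.values())
            probs[a] = {b: n / total for b, n in nxt.items()}
        return probs

    # Hypothetical pathways through ED treatment areas.
    traces = [["triage", "exam", "xray", "exam", "discharge"],
              ["triage", "exam", "discharge"],
              ["triage", "lab", "exam", "discharge"]]
    print(markov_transitions(traces)["exam"])   # {'xray': 0.25, 'discharge': 0.75}

Weighted by patient volumes, such flow probabilities can serve as the flow matrix in a distance-times-flow objective of a facility layout MIP.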
Procedia PDF Downloads 339
3802 The Effects of Three Levels of Contextual Interference among Adult Athletes
Authors: Abdulaziz Almustafa
Abstract:
Considering the critical role permanence has in predictions related to the contextual interference effect in laboratory and field research, this study sought to determine whether the paradigm of the effect depends on the complexity of the skill during the acquisition and transfer phases. The purpose of the present study was to investigate the effects of contextual interference (CI) by extending previous laboratory and field research with adult athletes through the acquisition and transfer phases. Male athletes (n = 60), aged 18-22 years, were chosen randomly from Eastern Province clubs. They were assigned to complete blocked, random, or serial practice. Multivariate analysis of variance with repeated measures (MANOVA) indicated that the results did not support the notion of CI. There were no significant differences in the acquisition phase between the blocked, serial and random practice groups. During the transfer phase, there were no major differences between the practice groups. Apparently, due to the task complexity, participants were probably confused and unable to exploit the advantages of contextual interference. This is another result contradicting contextual interference effects in the acquisition and transfer phases in sport settings. One major factor that can influence the effect of contextual interference is task characteristics, such as the level of difficulty inherent in a sport-related skill.
Keywords: contextual interference, acquisition, transfer, task difficulty
Procedia PDF Downloads 466
3801 The Effect of Bath Composition on Hot-Dip Aluminizing of AISI 4140 Steel
Authors: Aptullah Karakas, Murat Baydogan
Abstract:
Hot-dip aluminizing (HDA) is one of several aluminizing methods used to form wear-, corrosion- and oxidation-resistant aluminide layers on the surface. In this method, the substrate is dipped into a molten aluminum bath, held in the bath for several minutes, and cooled to room temperature in air. A subsequent annealing after the HDA process is generally performed. The main advantage of HDA is its very low investment cost in comparison with other aluminizing methods such as chemical vapor deposition (CVD), pack aluminizing and metalizing. In the HDA process, Al or Al-Si molten baths are mostly used. In this study, however, three different Al alloys, Al4043 (Al-Si), Al5356 (Al-Mg) and Al7020 (Al-Zn), were used as the molten bath in order to see their effects on the morphological and mechanical properties of the resulting aluminide layers. AISI 4140 low-alloy steel was used as the substrate. The parameters of the HDA process were bath composition, bath temperature, and dipping time. These parameters were considered within a Taguchi L9 orthogonal array. After the HDA process and subsequent diffusion annealing, coating thickness measurement, microstructural analysis and hardness measurement of the aluminide layers were conducted. The optimum process parameters were evaluated according to coating morphology, such as cracks, Kirkendall porosity, and the hardness of the coatings. According to the results, a smooth and clean aluminide layer with less Kirkendall porosity and fewer cracks was observed on the sample that was aluminized in the molten Al7020 bath at 700 °C for 10 minutes and subsequently diffusion annealed at 750 °C. The hardness of the aluminide layer was between 1100 and 1300 HV, and the coating thickness was approximately 400 µm. The results were promising in that a hard and thick aluminide layer with little Kirkendall porosity and few cracks could be formed. It is therefore concluded that the Al7020 bath may be used in the HDA process for AISI 4140 steel substrates.
Keywords: hot-dip aluminizing, microstructure, hardness measurement, diffusion annealing
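Three factors at three levels fit a standard Taguchi L9 plan of nine runs instead of the 27 of a full factorial. A minimal sketch enumerating such a plan; the temperature and time levels here are hypothetical placeholders except for the 700 °C / 10 min optimum quoted above.

    # Standard L9 orthogonal array for three factors at three levels
    # (rows = experimental runs, entries = factor level indices 0..2).
    L9 = [(0, 0, 0), (0, 1, 1), (0, 2, 2),
          (1, 0, 1), (1, 1, 2), (1, 2, 0),
          (2, 0, 2), (2, 1, 0), (2, 2, 1)]

    bath = ["Al4043", "Al5356", "Al7020"]
    temp_c = [660, 700, 740]        # hypothetical bath temperatures
    time_min = [5, 10, 15]          # hypothetical dipping times

    for b, t, d in L9:
        print(f"run: {bath[b]}, {temp_c[t]} °C, {time_min[d]} min")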
Procedia PDF Downloads 76
3800 Characterising Stable Model by Extended Labelled Dependency Graph
Authors: Asraful Islam
Abstract:
The extended dependency graph (EDG) is a state-of-the-art isomorphic graph representation of normal logic programs (NLPs) that can characterize the consistency of NLPs by graph analysis. To construct the vertices and arcs of an EDG, additional renaming atoms and rules besides those the given program provides are used, resulting in higher space complexity compared to the corresponding traditional dependency graph (TDG). In this article, we propose an extended labelled dependency graph (ELDG) to represent an NLP that shares an equal number of nodes and arcs with the TDG, and we prove that it is isomorphic to the domain program. The numbers of nodes and arcs used in the underlying dependency graphs are formulated to compare the space complexity. Results show that the ELDG uses less memory to store nodes, arcs, and cycles than the EDG. To exhibit the desirability of the ELDG: firstly, the stable models of the kernel form of an NLP are characterized by the admissible coloring of the ELDG; secondly, a relation is established between the stable models of a kernel program and the handles of the minimal odd cycles appearing in the corresponding ELDG; thirdly, to the best of our knowledge, for the first time an inverse transformation from a dependency graph to the represented NLP w.r.t. the ELDG has been defined, which enables transferring analytical results from the graph to the program straightforwardly.
Keywords: normal logic program, isomorphism of graphs, extended labelled dependency graph, inverse graph transformation, graph colouring
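For orientation, here is a minimal sketch of the labelled dependency graph construction that the TDG and ELDG both build on: nodes are atoms, and each arc records whether the body atom occurs positively or under negation. The program encoding is an illustrative assumption, not the paper's notation.

    def dependency_graph(program):
        # program: dict mapping a head atom to a list of rule bodies, each
        # given as (positive_body_atoms, negated_body_atoms).
        # Arcs are (body_atom, head, sign): '+' positive, '-' negated.
        nodes, arcs = set(), set()
        for head, rules in program.items():
            nodes.add(head)
            for pos, neg in rules:
                for b in pos:
                    nodes.add(b); arcs.add((b, head, "+"))
                for b in neg:
                    nodes.add(b); arcs.add((b, head, "-"))
        return nodes, arcs

    # p :- not q.   q :- not p.   (two stable models: {p} and {q})
    nodes, arcs = dependency_graph({"p": [([], ["q"])], "q": [([], ["p"])]})
    print(sorted(arcs))   # [('p', 'q', '-'), ('q', 'p', '-')]

The even negative cycle between p and q in this example is exactly the kind of structure whose coloring and odd/even cycle analysis the graph-based characterizations of stable models exploit.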
Procedia PDF Downloads 212
3799 Improving Student Programming Skills in Introductory Computer and Data Science Courses Using Generative AI
Authors: Genady Grabarnik, Serge Yaskolko
Abstract:
Generative Artificial Intelligence (AI) has significantly expanded its applicability with the incorporation of Large Language Models (LLMs) and has become a technology with promise to automate some areas that were very difficult to automate before. The paper describes the introduction of generative AI into introductory computer and data science courses and an analysis of the effect of this introduction. Generative AI is incorporated into the educational process in two ways. For the instructors, we create prompt templates for the generation of tasks and the grading of students' work, including feedback on the submitted assignments. For the students, we introduce them to basic prompt engineering, which in turn is used for generating test cases based on descriptions of the problems, generating code snippets for single-block programming problems, and partitioning average-complexity programming problems into such blocks. The above-mentioned classes are run using Large Language Models, and feedback from instructors and students as well as course outcomes are collected. The analysis shows a statistically significant positive effect and a preference among both groups of stakeholders.
Keywords: introductory computer and data science education, generative AI, large language models, application of LLMs to computer and data science education
Procedia PDF Downloads 58
3798 Estimation of Relative Subsidence of Collapsible Soils Using Electromagnetic Measurements
Authors: Henok Hailemariam, Frank Wuttke
Abstract:
Collapsible soils are weak soils that appear to be stable in their natural state, normally a dry condition, but rapidly deform under saturation (wetting), thus generating large and unexpected settlements which often yield disastrous consequences for structures unwittingly built on such deposits. In this study, a prediction model for the relative subsidence of stressed collapsible soils based on dielectric permittivity measurement is presented. Unlike most existing methods for soil subsidence prediction, this model does not require moisture content as an input parameter, thus providing the opportunity to obtain an accurate estimate of the relative subsidence of collapsible soils using dielectric measurement only. The prediction model is developed based on an existing relative subsidence prediction model (which is dependent on soil moisture condition) and an advanced theoretical frequency- and temperature-dependent electromagnetic mixing equation (which effectively removes the moisture content dependence of the original relative subsidence prediction model). For large-scale sub-surface soil exploration purposes, spatial sub-surface soil dielectric data over wide areas and great depths of weak (collapsible) soil deposits can be obtained using non-destructive high-frequency electromagnetic (HF-EM) measurement techniques such as ground penetrating radar (GPR). For laboratory or small-scale in-situ measurements, techniques such as an open-ended coaxial line with widely applicable time domain reflectometry (TDR) or vector network analysers (VNAs) are usually employed to obtain the soil dielectric data. By using soil dielectric data obtained from small- or large-scale non-destructive HF-EM investigations, the new model can effectively predict the relative subsidence of weak soils without the need to extract samples for moisture content measurement. Among the resulting benefits are the preservation of the undisturbed nature of the soil as well as a reduction in the investigation costs and analysis time in the identification of weak (problematic) soils. The accuracy of prediction of the presented model is assessed by conducting relative subsidence tests on a collapsible soil at various initial soil conditions, and a good match between the model predictions and experimental results is obtained.
Keywords: collapsible soil, dielectric permittivity, moisture content, relative subsidence
Procedia PDF Downloads 363
3797 Psychometric Properties of the EQ-5D-3L and EQ-5D-5L Instruments for Health-Related Quality of Life Measurement in the Indonesian Population
Authors: Dwi Endarti, Susi a Kristina, Rizki Noorizzati, Akbar E Nugraha, Fera Maharani, Kika a Putri, Asninda H Azizah, Sausanzahra Angganisaputri, Yunisa Yustikarini
Abstract:
Cost-utility analysis is the most recommended pharmacoeconomic method since it allows wide comparison of cost-effectiveness results from different interventions. The method uses the outcome of quality-adjusted life years (QALYs) or disability-adjusted life years (DALYs). Measurement of QALYs requires data on utility and life years gained. Utility is measured with an instrument for quality of life measurement such as the EQ-5D. The EQ-5D is now available in two versions, the EQ-5D-3L and the EQ-5D-5L. This study aimed to compare the EQ-5D-3L and EQ-5D-5L to examine the most suitable version for the Indonesian population. This was an observational study employing a cross-sectional approach. Data on quality of life measured with the EQ-5D-3L and EQ-5D-5L were collected from several population groups: respondents with chronic diseases, respondents with acute diseases, and respondents from the general population (without illness) in Yogyakarta Municipality, Indonesia. Convenience samples of hypertension patients (83), diabetes mellitus patients (80), osteoarthritis patients (47), acute respiratory tract infection patients (81), cephalgia patients (43), dyspepsia patients (42), and respondents from the general population (293) were recruited in this study. Responses on the 3L and 5L versions of the EQ-5D were compared by examining the psychometric properties, including agreement, internal consistency, ceiling effect, and convergent validity. Based on these psychometric tests, the EQ-5D-5L tended to have better psychometric properties than the EQ-5D-3L. Future health-related quality of life (HRQOL) measurements for pharmacoeconomic studies in Indonesia should apply the EQ-5D-5L.
Keywords: EQ-5D, health-related quality of life, Indonesian population, psychometric properties
Procedia PDF Downloads 477
3796 DIAL Measurements of Vertical Distribution of Ozone at the Siberian Lidar Station in Tomsk
Authors: Oleg A. Romanovskii, Vladimir D. Burlakov, Sergey I. Dolgii, Olga V. Kharchenko, Alexey A. Nevzorov, Alexey V. Nevzorov
Abstract:
The paper presents the results of DIAL measurements of the vertical ozone distribution. The ozone lidar operates as part of the measurement complex at the Siberian Lidar Station (SLS) of the V.E. Zuev Institute of Atmospheric Optics SB RAS, Tomsk (56.5ºN; 85.0ºE), and is designed for the study of the vertical ozone distribution in the upper troposphere–lower stratosphere. The most suitable wavelengths for measurements of ozone profiles are selected. We present an algorithm for the retrieval of the vertical distribution of ozone with temperature and aerosol correction during DIAL lidar sounding of the atmosphere. The temperature correction of the ozone absorption coefficients is introduced in the software to reduce the retrieval errors. Results of lidar measurements at wavelengths of 299 and 341 nm agree with model estimates, which points to acceptable accuracy of ozone sounding in the 6-18 km altitude range.
Keywords: lidar, ozone distribution, atmosphere, DIAL
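At its core, DIAL retrieval differentiates the log-ratio of the signals at the absorbed ('on', here 299 nm) and reference ('off', here 341 nm) wavelengths. A minimal sketch of that basic step, before the temperature and aerosol corrections the paper adds:

    import numpy as np

    def dial_ozone_density(z, p_on, p_off, delta_sigma):
        # n(z) = 1/(2*Δσ) * d/dz ln(P_off(z)/P_on(z))
        # z: altitude grid (m); p_on/p_off: range-corrected lidar signals;
        # delta_sigma: differential absorption cross-section (m^2).
        log_ratio = np.log(p_off / p_on)
        return np.gradient(log_ratio, z) / (2.0 * delta_sigma)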
Procedia PDF Downloads 497
3795 Nonlinear Aerodynamic Parameter Estimation of a Supersonic Air to Air Missile by Using Artificial Neural Networks
Authors: Tugba Bayoglu
Abstract:
Aerodynamic parameter estimation is crucial in the missile design phase, since an accurate, high-fidelity aerodynamic model is required for designing a high-performance and robust control system, developing high-fidelity flight simulations, and verifying computational and wind tunnel test results. However, there are few missile aerodynamic parameter identification studies in the literature, for three main reasons: (1) most air-to-air missiles cannot fly at constant speed, (2) the number and duration of missile flight tests are much smaller than those of fixed-wing aircraft, and (3) the variation of missile aerodynamic parameters with respect to Mach number is higher than that of fixed-wing aircraft. In addition to these challenges, identification of aerodynamic parameters at high wind angles by using classical estimation techniques brings another difficulty to the estimation process. The reason is that most estimation techniques require employing polynomials or splines to model the behavior of the aerodynamics. However, for missiles with a large variation of aerodynamic parameters with respect to flight variables, the order of the proposed model increases, which brings computational burden and complexity. Therefore, this study aims to solve the nonlinear aerodynamic parameter identification problem for a supersonic air-to-air missile by using artificial neural networks. The proposed method will be tested using simulated data generated with a six-degree-of-freedom missile model involving a nonlinear aerodynamic database. The data will be corrupted by adding noise to the measurement model. Then, using the flight variables and measurements, the parameters will be estimated. Finally, the prediction accuracy will be investigated.
Keywords: air-to-air missile, artificial neural networks, open-loop simulation, parameter identification
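A minimal sketch of the idea: a small feed-forward network regressing an aerodynamic coefficient from flight variables, trained on noisy synthetic data. The input set, the toy coefficient model, and the network size are assumptions for illustration, not the paper's setup.

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Synthetic training set: flight variables (Mach, angle of attack [deg],
    # sideslip [deg], fin deflection [deg]) mapped to a normal-force coefficient.
    rng = np.random.default_rng(1)
    X = rng.uniform([0.8, -20.0, -10.0, -15.0], [3.0, 20.0, 10.0, 15.0],
                    size=(5000, 4))
    cn = 0.08 * X[:, 1] + 0.002 * X[:, 1] * np.abs(X[:, 1]) + 0.05 * X[:, 3]
    y = cn + rng.normal(0.0, 0.01, len(cn))     # added measurement noise

    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(32, 32),
                                       max_iter=2000, random_state=0))
    model.fit(X, y)
    print("R^2:", round(model.score(X, y), 3))

Unlike a global polynomial fit, the network needs no pre-chosen basis order, which is the motivation the abstract gives for preferring it when parameters vary strongly with the flight variables.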
Procedia PDF Downloads 279
3794 An Improved Data Aided Channel Estimation Technique Using Genetic Algorithm for Massive Multiple-Input Multiple-Output
Authors: M. Kislu Noman, Syed Mohammed Shamsul Islam, Shahriar Hassan, Raihana Pervin
Abstract:
With the increasing number of wireless devices and high-bandwidth operations, wireless networks and communications are becoming overcrowded. To cope with such a crowded and congested situation, massive MIMO is designed to work with hundreds of low-cost serving antennas at a time while also improving spectral efficiency. TDD is used to enable beamforming, a major part of massive MIMO, by transmitting and receiving pilot sequences. All these benefits are only possible if the channel state information, i.e., the channel estimate, is obtained properly. The common methods used so far to estimate the channel matrix are LS, MMSE, and a linear version of MMSE that has also been proposed in many research works. We have optimized these methods using a genetic algorithm (GA) to minimize the mean squared error and find the best channel matrix among the existing algorithms with less computational complexity. Our simulation results show that the GA works well on the existing algorithms in a Rayleigh slow-fading channel in the presence of additive white Gaussian noise. We found that the GA-optimized LS is better than the existing algorithms, as the GA provides a near-optimal result in a few iterations in terms of MSE with respect to SNR and computational complexity.
Keywords: channel estimation, LMMSE, LS, MIMO, MMSE
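A minimal sketch, under simplified assumptions (known pilots, Rayleigh channel, AWGN), of pilot-based LS estimation followed by a small GA-style refinement whose fitness is the residual fit to the received pilots. The dimensions and GA settings are illustrative, and whether the refinement beats plain LS depends on the noise level and mutation schedule.

    import numpy as np
    rng = np.random.default_rng(7)

    M, K, P = 16, 4, 8             # BS antennas, users, pilot length
    H = (rng.normal(size=(M, K)) + 1j * rng.normal(size=(M, K))) / np.sqrt(2)
    X = (rng.normal(size=(K, P)) + 1j * rng.normal(size=(K, P))) / np.sqrt(2)
    N = 0.1 * (rng.normal(size=(M, P)) + 1j * rng.normal(size=(M, P)))
    Y = H @ X + N                  # received pilots

    H_ls = Y @ X.conj().T @ np.linalg.inv(X @ X.conj().T)   # classical LS

    def fitness(Hc):               # residual fit to the received pilots
        return -np.linalg.norm(Y - Hc @ X) ** 2

    # Tiny GA: population of perturbed LS estimates, selection + Gaussian mutation.
    pop = [H_ls + 0.05 * (rng.normal(size=H.shape) + 1j * rng.normal(size=H.shape))
           for _ in range(20)]
    for _ in range(50):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:10]
        pop = parents + [p + 0.01 * (rng.normal(size=H.shape) +
                                     1j * rng.normal(size=H.shape)) for p in parents]
    best = max(pop, key=fitness)
    print("LS MSE :", np.mean(np.abs(H - H_ls) ** 2))
    print("GA MSE :", np.mean(np.abs(H - best) ** 2))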
Procedia PDF Downloads 191
3793 A Generalized Sparse Bayesian Learning Algorithm for Near-Field Synthetic Aperture Radar Imaging: By Exploiting Impropriety and Noncircularity
Authors: Pan Long, Bi Dongjie, Li Xifeng, Xie Yongle
Abstract:
Near-field synthetic aperture radar (SAR) imaging is an advanced nondestructive testing and evaluation (NDT&E) technique. This paper investigates the complex-valued signal processing related to the near-field SAR imaging system, where the measurement data turn out to be noncircular and improper, meaning that the complex-valued data are correlated with their complex conjugate. Furthermore, we discover that the degree of impropriety of the measurement data and that of the target image can be highly correlated in near-field SAR imaging. Based on these observations, a modified generalized sparse Bayesian learning algorithm is proposed, taking impropriety and noncircularity into account. Numerical results show that the proposed algorithm provides a performance gain with the help of the noncircularity assumption on the signals.
Keywords: complex-valued signal processing, synthetic aperture radar, 2-D radar imaging, compressive sensing, sparse Bayesian learning
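The "degree of impropriety" mentioned above can be quantified by the circularity quotient, the ratio of the pseudo-variance E[z²] to the variance E[|z|²]; its magnitude is 0 for proper (circular) data and 1 for maximally improper data. A minimal sketch:

    import numpy as np

    def circularity_quotient(z):
        # rho = E[z^2] / E[|z|^2] for zero-mean complex data.
        z = np.asarray(z) - np.mean(z)
        return np.mean(z ** 2) / np.mean(np.abs(z) ** 2)

    rng = np.random.default_rng(3)
    proper = rng.normal(size=4096) + 1j * rng.normal(size=4096)
    improper = rng.normal(size=4096) * (1 + 0.8j)   # real and imag fully correlated
    print(abs(circularity_quotient(proper)))        # ≈ 0
    print(abs(circularity_quotient(improper)))      # ≈ 1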
Procedia PDF Downloads 131
3792 Factors Impacting Geostatistical Modeling Accuracy and Modeling Strategy of Fluvial Facies Models
Authors: Benbiao Song, Yan Gao, Zhuo Liu
Abstract:
Geostatistical modeling is the key technique for reservoir characterization, and the quality of geological models greatly influences the prediction of reservoir performance, but few studies have been done to quantify the factors impacting geostatistical reservoir modeling accuracy. In this study, 16 fluvial prototype models were established to represent different degrees of geological complexity, and 6 cases ranging from 16 to 361 wells were defined to reproduce all 16 prototype models by different methodologies, including SIS, object-based and MPFS algorithms, together with different constraint parameters. A modeling accuracy ratio was defined to quantify the influence of each factor, and ten realizations were averaged to represent each accuracy ratio under the same modeling conditions and parameter associations. In total, 5760 simulations were run to quantify the relative contribution of each factor to the simulation accuracy, and the results can be used as a strategy guide for facies modeling under similar conditions. It was found that data density, geological trend and geological complexity have a great impact on modeling accuracy. Modeling accuracy may reach up to 90% when channel sand width reaches 1.5 times the well spacing, under any condition, with the SIS and MPFS methods. When well density is low, a geological trend may increase the modeling accuracy from 40% to 70%, while the use of a proper variogram may make only a very limited contribution for the SIS method. This implies that when well data are dense enough to cover simple geobodies, little effort is needed to construct an acceptable model; when geobodies are complex and data are insufficient, it is better to construct a set of robust geological trends than to rely on a variogram function alone. For the object-based method, the modeling accuracy does not increase with data density as obviously as for the SIS method, but it keeps a rational appearance when data density is low. The MPFS method shows a trend similar to the SIS method, but the use of a proper geological trend together with a rational variogram may achieve better modeling accuracy than the MPFS method alone. This implies that the geological modeling strategy for a real reservoir case needs to be optimized by evaluating the dataset, the geological complexity, the geological constraint information and the modeling objective.
Keywords: fluvial facies, geostatistics, geological trend, modeling strategy, modeling accuracy, variogram
Procedia PDF Downloads 264
3791 Software-Defined Radio Based Channel Measurement System of Wideband HF Communication System in Low-Latitude Region
Authors: P. H. Mukti, I. Kurniawati, F. Oktaviansyah, A. D. Adhitya, N. Rachmadani, R. Corputty, G. Hendrantoro, T. Fukusako
Abstract:
HF communication systems are an attractive field among many researchers since they can reach long-distance areas at low cost. This long-distance communication can be achieved by exploiting the ionosphere as a transmission medium for the HF radio wave. However, due to the dynamic nature of the ionosphere, the channel characteristics of HF communication have to be investigated in order to give better performance. Many techniques to characterize the HF channel are available in the literature. However, none of those techniques describes the HF channel characteristics in low-latitude regions, especially equatorial areas. Since the ionosphere around the equatorial region exhibits the equatorial spread-F (ESF) phenomenon, characterizing the wideband HF channel in the low-latitude region becomes an important investigation. On the other hand, the emergence of software-defined radio has attracted the interest of many researchers. Accordingly, in this paper an SDR-based channel measurement system is proposed to be used for characterizing the HF channel in the low-latitude region.
Keywords: channel characteristic, HF communication system, LabVIEW, software-defined radio, universal software radio peripheral
Procedia PDF Downloads 486
3790 System Security Impact on the Dynamic Characteristics of Measurement Sensors in Smart Grids
Authors: Yiyang Su, Jörg Neumann, Jan Wetzlich, Florian Thiel
Abstract:
Smart grid is a term used to describe the next generation of power grid. New challenges, such as the integration of renewable and decentralized energy sources, the requirement for continuous grid estimation and optimization, and the use of two-way flows of energy, have been brought to the power grid. In order to achieve efficient, reliable, sustainable, and secure delivery of electric power, more and more information and communication technologies are used for the monitoring and control of power grids. Consequently, the need for cybersecurity has increased dramatically and has converged into several standards, which are presented here. These standards for the smart grid must be designed to satisfy both performance and reliability requirements. An in-depth investigation of the effect of retrospectively embedded security in existing grids on their dynamic behavior is required. Therefore, a retrofitting plan for existing meters is offered, and its performance in a low-voltage test microgrid is investigated. As a result, integration of security measures into the measurement architectures of smart grids at the design phase is strongly recommended.
Keywords: cyber security, performance, protocols, security standards, smart grid
Procedia PDF Downloads 323
3789 Skew Planar Wheel Antenna for First Person View of Unmanned Aerial Vehicle
Authors: Raymond Yudhi Purba, Levy Olivia Nur, Radial Anwar
Abstract:
This research presents the design and measurement of a skew planar wheel antenna used for the first-person-view perspective of unmanned aerial vehicles. The antenna was designed using CST Studio Suite 2019 to have a voltage standing wave ratio (VSWR) ≤ 2, return loss ≤ -10 dB, and bandwidth ≥ 100 MHz, covering the outdoor access point band from 5.725 to 5.825 GHz, with an omnidirectional radiation pattern and elliptical polarization. The dimensions of the skew planar wheel antenna were modified using a parameter sweep technique to provide good performance. The simulation results give a VSWR of 1.231, a return loss of -19.693 dB, a bandwidth of 828.8 MHz, a gain of 3.292 dB, and an axial ratio of 9.229 dB. Meanwhile, the measurement results give a VSWR of 1.237, a return loss of -19.476 dB, a bandwidth of 790.5 MHz, a gain of 3.2034 dB, and an axial ratio of 4.12 dB.
Keywords: skew planar wheel, cloverleaf, first-person view, unmanned aerial vehicle, parameter sweep
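The VSWR and return-loss pairs quoted above are two views of the same reflection coefficient, related by |Γ| = (VSWR - 1)/(VSWR + 1) and RL = -20·log10|Γ|. A quick consistency check:

    import math

    def return_loss_db(vswr):
        # Return loss from VSWR via the reflection coefficient magnitude.
        gamma = (vswr - 1.0) / (vswr + 1.0)
        return -20.0 * math.log10(gamma)

    print(return_loss_db(1.231))   # ≈ 19.7 dB, matching the simulated -19.693 dB
    print(return_loss_db(1.237))   # ≈ 19.5 dB, matching the measured -19.476 dB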
Procedia PDF Downloads 216