Search results for: time series feature extraction
19561 Developing Research Involving Different Species: Opportunities and Empirical Foundations
Authors: A. V. Varfolomeeva, N. S. Tkachenko, A. G. Tishchenko
Abstract:
The problem of violated internal validity in studies of psychological structures is considered. The role of researchers' epistemological attitudes in planning research within the methodology of the system-evolutionary approach is assessed. Alternative programs of psychological research involving representatives of different biological species are presented. Using the results of two research series as examples, possible solutions to the problem are discussed.
Keywords: epistemological attitudes, experimental design, validity, psychological structure, learning
Procedia PDF Downloads 114
19560 Spatio-Temporal Changes of Rainfall in São Paulo, Brazil (1973-2012): A Gamma Distribution and Cluster Analysis
Authors: Guilherme Henrique Gabriel, Lucí Hidalgo Nunes
Abstract:
An important feature of rainfall regimes is the variability, which is subject to the atmosphere’s general and regional dynamics, geographical position and relief. Despite being inherent to the climate system, it can harshly impact virtually all human activities. In turn, global climate change has the ability to significantly affect smaller-scale rainfall regimes by altering their current variability patterns. In this regard, it is useful to know if regional climates are changing over time and whether it is possible to link these variations to climate change trends observed globally. This study is part of an international project (Metropole-FAPESP, Proc. 2012/51876-0 and Proc. 2015/11035-5) and the objective was to identify and evaluate possible changes in rainfall behavior in the state of São Paulo, southeastern Brazil, using rainfall data from 79 rain gauges for the last forty years. Cluster analysis and gamma distribution parameters were used for evaluating spatial and temporal trends, and the outcomes are presented by means of geographic information systems tools. Results show remarkable changes in rainfall distribution patterns in São Paulo over the years: changes in shape and scale parameters of gamma distribution indicate both an increase in the irregularity of rainfall distribution and the probability of occurrence of extreme events. Additionally, the spatial outcome of cluster analysis along with the gamma distribution parameters suggest that changes occurred simultaneously over the whole area, indicating that they could be related to remote causes beyond the local and regional ones, especially in a current global climate change scenario.
Keywords: climate change, cluster analysis, gamma distribution, rainfall
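The shape and scale parameters of the gamma distribution mentioned above can be estimated from a rainfall series in several ways; a minimal method-of-moments sketch (the abstract does not state which estimator the study used) is:

```python
from statistics import mean, variance

def gamma_moments(rainfall):
    """Method-of-moments estimates of the gamma shape (k) and scale (theta).
    Uses the sample variance (n-1 denominator)."""
    m = mean(rainfall)
    v = variance(rainfall)
    k = m * m / v        # shape: a lower k means more irregular rainfall
    theta = v / m        # scale: stretches the distribution toward extremes
    return k, theta
```

A decrease in the shape parameter over time, as the study reports, corresponds to an increasingly irregular rainfall distribution.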
Procedia PDF Downloads 318
19559 Asymmetrical Informative Estimation for Macroeconomic Model: Special Case in the Tourism Sector of Thailand
Authors: Chukiat Chaiboonsri, Satawat Wannapan
Abstract:
This paper applied an asymmetric information concept to the estimation of a macroeconomic model of the tourism sector in Thailand. The variables analyzed statistically are Thailand's international and domestic tourism revenues, the expenditures of foreign and domestic tourists, service investments by private sectors, service investments by the government of Thailand, Thailand's service imports and exports, and net service income transfers. All data are time-series indices observed between 2002 and 2015. Empirically, the tourism multiplier and accelerator were estimated by two statistical approaches. The first was the Generalized Method of Moments (GMM) model, based on the assumption that the tourism market in Thailand had perfect information (symmetrical data). The second was the Maximum Entropy Bootstrapping (MEboot) approach, based on a process that attempts to deal with imperfect information and reduce uncertainty in data observations (asymmetrical data). In addition, tourism leakages were investigated by a simple model based on the injections-and-leakages concept. The empirical findings show that the parameters computed by the MEboot approach differ from those of the GMM method. However, both the MEboot and GMM estimations suggest that Thailand's tourism sectors are in a period capable of stimulating the economy.
Keywords: Thailand tourism, Maximum Entropy Bootstrapping approach, macroeconomic model, asymmetric information
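The maximum entropy bootstrap step can be sketched as follows. This is a simplified reading of Vinod's meboot algorithm (the full method also adjusts the endpoint intervals with trimmed means and preserves the sample mean, which this sketch omits):

```python
import random

def meboot_replicate(series, seed=0):
    """One maximum-entropy bootstrap replicate (simplified sketch of Vinod's
    meboot): draw from a piecewise-uniform density built on the sorted data,
    then restore the original time ordering so the replicate keeps the
    series' rank structure instead of shuffling it like an i.i.d. bootstrap."""
    rng = random.Random(seed)
    n = len(series)
    order = sorted(range(n), key=lambda i: series[i])   # time indices by rank
    x = [series[i] for i in order]                      # order statistics
    # intermediate points between successive order statistics
    z = [x[0]] + [(x[i] + x[i + 1]) / 2 for i in range(n - 1)] + [x[-1]]
    draws = sorted(rng.uniform(z[j], z[j + 1]) for j in range(n))
    rep = [0.0] * n
    for rank, i in enumerate(order):                    # back to time order
        rep[i] = draws[rank]
    return rep
```

Unlike a plain resample, each replicate is a slightly perturbed copy of the series with the same rank ordering, which is what makes the method usable on non-stationary time series.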
Procedia PDF Downloads 293
19558 Rapid Detection of MBL Genes by SYBR Green Based Real-Time PCR
Authors: Taru Singh, Shukla Das, V. G. Ramachandran
Abstract:
Objectives: To develop a SYBR green based real-time PCR assay to detect carbapenemase (NDM, IMP) genes in E. coli. Methods: A total of 40 E. coli isolates from stool samples were tested. Six were previously characterized as resistant to carbapenems and documented by PCR. The remaining 34 isolates had previously tested susceptible to carbapenems and were negative for these genes. Bacterial RNA was extracted using a manual method. The real-time PCR was performed using the Light Cycler III 480 instrument (Roche), and specific primers for each carbapenemase target were used. Results: Each of the two carbapenemase genes tested presented a different melting curve after PCR amplification. The melting temperature (Tm) analysis of the amplicons identified was as follows: blaIMP type (Tm 82.18°C), blaNDM-1 (Tm 78.8°C). No amplification was detected among the negative samples. The results showed 100% concordance with the genotypes previously identified. Conclusions: The new assay was able to detect the presence of two different carbapenemase gene types by real-time PCR.
Keywords: resistance, β-lactamases, E. coli, real-time PCR
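A melting-curve call like the one described reduces to comparing an observed Tm against the reference values reported above; the ±0.5°C tolerance below is an illustrative assumption, not a figure from the paper:

```python
def classify_carbapenemase(tm_celsius, tolerance=0.5):
    """Assign an observed melting temperature to a carbapenemase gene target.
    Reference Tm values are those reported in the study; the tolerance window
    is an assumption for illustration."""
    targets = {"blaIMP": 82.18, "blaNDM-1": 78.8}
    for gene, ref_tm in targets.items():
        if abs(tm_celsius - ref_tm) <= tolerance:
            return gene
    return "negative"
```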
Procedia PDF Downloads 407
19557 Modelling of Cavity Growth in Underground Coal Gasification
Authors: Preeti Aghalayam, Jay Shah
Abstract:
Underground coal gasification (UCG) is the in-situ gasification of unmineable coals to produce syngas. In UCG, gasifying agents are injected into the coal seam, and a reactive cavity is formed as coal is consumed. The cavity formed is typically hemispherical, and this paper presents a MATLAB model of the UCG cavity to predict the composition of the output gases. The model comprises seven radial and two time-variant ODEs. A MATLAB solver (ode15s) is used to solve the radial ODEs. Two for-loops are implemented in the model, one for time variation and another for radial variation. In the time loop, the radial ODEs are solved using the MATLAB solver. The radial loop is nested inside the time loop, and the density ODEs are solved numerically using the Euler method. The model is validated by comparing it with literature results from laboratory-scale experiments. The model predicts the radial and time variation of the product gases inside the cavity.
Keywords: gasification agent, MATLAB model, syngas, underground coal gasification (UCG)
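The nested-loop solver structure described above can be sketched in Python (the paper uses MATLAB with ode15s; here the radial transport and the consumption rate law are illustrative placeholders, with an explicit Euler step for the density equations as in the abstract):

```python
def simulate_cavity(n_t=50, n_r=20, dt=0.1, dr=0.05, k=0.8):
    """Toy sketch of the solver structure in the abstract: an outer time loop,
    an inner radial sweep for gasifying-agent concentration, and an explicit
    Euler update of coal density. The rate law k*c*rho is illustrative only."""
    rho = [1.0] * n_r                     # normalized coal density profile
    c = [0.0] * n_r                       # gasifying-agent concentration
    history = []
    for _ in range(n_t):                  # time loop
        c[0] = 1.0                        # agent injected at the cavity centre
        for i in range(1, n_r):           # crude upwind radial transport
            c[i] += dt / dr * 0.1 * (c[i - 1] - c[i])
        for i in range(n_r):              # Euler step for density consumption
            rho[i] -= dt * k * c[i] * rho[i]
            rho[i] = max(rho[i], 0.0)
        history.append(rho[0])            # track density at the injection point
    return rho, history
```

The inner radial sweep stands in for the ode15s call in the MATLAB model; the Euler density update mirrors the abstract's description.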
Procedia PDF Downloads 202
19556 Chemical Composition of Volatiles Emitted from Ziziphus jujuba Miller Collected during Different Growth Stages
Authors: Rose Vanessa Bandeira Reidel, Bernardo Melai, Pier Luigi Cioni, Luisa Pistelli
Abstract:
Ziziphus jujuba Miller is a common species of the Ziziphus genus (Rhamnaceae family) native to the tropics and subtropics, known for its edible fruits, consumed fresh or used in healthy food, as flavoring and sweetener. Many phytochemicals and biological activities are described for this species. In this work, the aroma profiles emitted in vivo by whole fresh organs (leaf, flower bud, flower, green and red fruits) were analyzed separately by means of solid phase micro-extraction (SPME) coupled with gas chromatography-mass spectrometry (GC-MS). The emitted volatiles from different plant parts were analysed using a Supelco SPME device coated with polydimethylsiloxane (PDMS, 100 µm). Fresh plant material was introduced separately into a glass conical flask and allowed to equilibrate for 20 min. After the equilibration time, the fibre was exposed to the headspace for 15 min at room temperature; the fibre was then re-inserted into the needle and transferred to the injector of the GC and GC-MS system, where the fibre was desorbed. All the data were submitted to multivariate statistical analysis, evidencing many differences among the selected plant parts and their developmental stages. A total of 144 compounds were identified, corresponding to 94.6-99.4% of the whole aroma profile of the jujube samples. Sesquiterpene hydrocarbons were the main chemical class of compounds in leaves, also present in similar percentages in flowers and flower buds, with (E,E)-α-farnesene as the main constituent in all cited plant parts. This behavior may be due to a protection mechanism against pathogens and herbivores as well as resistance to abiotic factors. The aroma of green fruits was characterized by a high amount of perillene, while the red fruits released a volatile blend mainly constituted by different monoterpenes. The terpenoid emission of fleshy fruits has an important function in the interaction with animals, including the attraction of seed dispersers, and is related to good fruit quality.
This study provides for the first time the chemical composition of the volatile emission from different Ziziphus jujuba organs. The SPME analyses of the collected samples showed different patterns of emission and can contribute to understanding their ecological interactions and fruit production management.
Keywords: Rhamnaceae, aroma profile, jujube organs, HS-SPME, GC-MS
Procedia PDF Downloads 255
19555 Edge Enhancement Visual Methodology for Fat Amount and Distribution Assessment in Dry-Cured Ham Slices
Authors: Silvia Grassi, Stefano Schiavon, Ernestina Casiraghi, Cristina Alamprese
Abstract:
Dry-cured ham is an uncooked meat product particularly appreciated for its peculiar sensory traits, among which the lipid component plays a key role in defining quality and, consequently, consumers' acceptability. Usually, fat content and distribution are chemically determined by expensive, time-consuming, and destructive analyses. Moreover, different sensory techniques are applied to assess product conformity to desired standards. In this context, visual systems are gaining a foothold in the meat market, promising more reliable and time-saving assessment of food quality traits. The present work aims at developing a simple but systematic and objective visual methodology to assess the fat amount of dry-cured ham slices, in terms of total, intermuscular, and intramuscular fractions. To this aim, 160 slices from 80 PDO dry-cured hams were evaluated by digital image analysis and Soxhlet extraction. RGB images were captured by a flatbed scanner, converted to grey-scale images, and segmented based on intensity histograms as well as on a multi-stage algorithm aimed at edge enhancement. The latter was performed by applying the Canny algorithm, which consists of image noise reduction, calculation of the intensity gradient for each image, spurious response removal, actual thresholding on corrected images, and confirmation of strong edge boundaries. The approach allowed for the automatic calculation of total, intermuscular, and intramuscular fat fractions as percentages of the total slice area. Linear regression models were run to estimate the relationships between the image analysis results and the chemical data, thus allowing for the prediction of the total, intermuscular, and intramuscular fat content from the dry-cured ham images. The goodness of fit of the obtained models was confirmed in terms of coefficient of determination (R²), hypothesis testing, and pattern of residuals.
Good regression models were obtained, with R² values of 0.73, 0.82, and 0.73 for the total fat, the sum of intermuscular and intramuscular fat, and the intermuscular fraction, respectively. In conclusion, the edge enhancement visual procedure yielded good fat segmentation, making this visual approach for quantifying the different fat fractions in dry-cured ham slices simple, accurate, and precise. The presented image analysis approach steers towards the development of instruments that can overcome destructive, tedious, and time-consuming chemical determinations. As future perspectives, the results of the proposed image analysis methodology will be compared with those of sensory tests in order to develop a fast grading method for dry-cured hams based on fat distribution. The system will thus be able not only to predict the actual fat content but also to reflect the visual appearance of samples as perceived by consumers.
Keywords: dry-cured ham, edge detection algorithm, fat content, image analysis
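The intensity-histogram segmentation step can be illustrated with a plain Otsu threshold over the grey-level histogram. This is a stand-in only: the paper's pipeline additionally applies Canny edge enhancement before computing fat fractions as percentages of the slice area:

```python
def otsu_threshold(gray):
    """Otsu's histogram threshold for 8-bit grey values (0-255), maximizing
    between-class variance; a stand-in for the intensity-histogram step."""
    hist = [0] * 256
    for v in gray:
        hist[v] += 1
    total = len(gray)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var, w_b, sum_b = 0, -1.0, 0, 0.0
    for t in range(256):
        w_b += hist[t]                     # background pixel count
        if w_b == 0:
            continue
        w_f = total - w_b                  # foreground pixel count
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b
        m_f = (total_sum - sum_b) / w_f
        var_between = w_b * w_f * (m_b - m_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def fat_fraction(gray):
    """Percent of pixels brighter than the Otsu threshold (fat is light, lean is dark)."""
    t = otsu_threshold(gray)
    return 100.0 * sum(v > t for v in gray) / len(gray)
```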
Procedia PDF Downloads 173
19554 Barriers to Participation in Sport for Children without Disability: A Systematic Review
Authors: S. Somerset, D. J. Hoare
Abstract:
Participation in sport is linked to better mental and physical health in children and adults. Studies have shown that children who participate in sports benefit from improved social skills, self-confidence, communication skills, and a better quality of life. Children who participate in sports from a young age are also more likely to continue to have active lifestyles during adulthood. This is an important consideration in a nation where physical activity levels are declining and the incidence of obesity is rising. Getting children active and keeping them active can provide long-term health benefits to the individual, but also a potential reduction in health costs in the future. This systematic review aims to identify the barriers to participation in sport for children aged up to 18 years and encompasses both qualitative and quantitative studies. The bibliographic databases EMBASE, Medline, CINAHL, and SPORTDiscus were searched. Additional hand searches were carried out on review articles found in the searches to identify any studies that may have been missed. Studies involving children up to 18 years without additional needs, focusing on barriers to participation in sport, were included. Randomised control trials, policy guidelines, studies with sport as an intervention, and studies focusing on the female athlete triad, tobacco abuse, alcohol abuse, drug abuse, pre-exercise testing, and cardiovascular disease were excluded. Abstract review, full paper review, and quality appraisal were conducted by two researchers. A consensus meeting took place to resolve any differences at the abstract, full-text, and data extraction / quality appraisal stages. The CASP qualitative studies appraisal tool and the CASP cohort studies tool (excluding questions 3 and 4, which refer to interventions) were used for quality appraisal in this review. The review identified several salient barriers to participation in sport for children.
These barriers ranged from the uniform worn during school physical education lessons to the weather during participation in sport. The most commonly identified barriers in the review include parental support, time allocation, location of the activity, and the cost of the activity. Therefore, it would be beneficial for greater provision to be made within the school environment for children to participate in sport. This could reduce the cost and time commitment required from parents to encourage participation, and would help to increase children's activity levels.
Keywords: barrier, children, participation, sport
Procedia PDF Downloads 361
19553 Analyzing Music Theory in Different Countries: Compare with Greece and China
Authors: Baoshan Wang
Abstract:
The present study investigates how music theory has developed differently across countries due to their diverse histories, religions, and cultures. It is not well understood how these factors contribute to differences in music theory across countries. Therefore, we examine the differences between China and Greece, which have developed unique music theories over time. Specifically, our analysis looks at musical notation and scales. Tonal music, for example, originates from Greece and harbors quite complex notation and scaling: each scale contains seven notes, arranged in seven modes. Each mode of the diatonic scale has a unique temperament, two of which are most commonly used in modern music. In contrast, Chinese music has only five notes in its scales. Interestingly, a unique feature of Chinese music theory is that there is no half-step, resulting in a highly divergent and culture-specific sound. These differences may arise from the contrasting ways that Western and Eastern musicians perceive music. While Western musicians tend to believe in music “without borders,” Eastern musicians generally embrace differing perspectives. Yet, the vast majority of colleges and music conservatories teach only the theory of Western music, which renders the music educational system incomplete. This is critically important because learning music is not simply a profession for musicians. Rather, it is an intermediary to facilitate understanding and appreciation of different countries’ cultures and religions. Education is undoubtedly the optimal way to promote different countries’ music theory so people across the world can learn more about music and, in turn, each other.
Even though Western music theory is predominantly taught, it is crucial that we also pursue an understanding of other countries' music, because their unique aspects contribute to the systematic completeness of music theory in its entirety.
Keywords: culture, development, music theory, music history, religion, western music
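The seven-note diatonic versus five-note "no half-step" contrast discussed above can be made concrete by listing the semitone steps of each scale (using the common gong-mode pentatonic as the Chinese example; other Chinese modes rotate the same step pattern):

```python
# Semitone steps of the Western major (diatonic) scale and the Chinese
# anhemitonic pentatonic (gong) scale; both span an octave (12 semitones).
MAJOR_STEPS = [2, 2, 1, 2, 2, 2, 1]   # whole/half pattern, e.g. C D E F G A B C
GONG_STEPS = [2, 2, 3, 2, 3]          # e.g. C D E G A C, no 1-semitone step

def has_half_step(steps):
    """True if any adjacent scale degrees are a half-step (1 semitone) apart."""
    return any(s == 1 for s in steps)
```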
Procedia PDF Downloads 91
19552 Recursion, Merge and Event Sequence: A Bio-Mathematical Perspective
Authors: Noury Bakrim
Abstract:
Formalization is indeed foundational to Mathematical Linguistics, as demonstrated by the pioneering works. While dialoguing with this frame, we nonetheless propose, in our approach to language as a real object, a mathematical linguistics/biosemiotics defined as a dialectical synthesis between induction and computational deduction. Therefore, relying on the parametric interaction of cycles, rules, and features giving way to a sub-hypothetic biological point of view, we first hypothesize a factorial equation as an explanatory principle within Category Mathematics of the Ergobrain: our computation proposal of Universal Grammar rules per cycle or a scalar determination (multiplying right/left columns of the determinant matrix and right/left columns of the logarithmic matrix) of the transformable matrix for rule addition/deletion and cycles within representational mapping/cycle heredity, based on the factorial example, being the logarithmic exponent or power of rule deletion/addition. It enables us to propose an extension of minimalist merge/label notions to a Language Merge (as a computing principle) within cycle recursion, relying on combinatorial mapping of rule hierarchies on the external Entax of the Event Sequence.
Therefore, to define combinatorial maps as the language merge of features and combinatorial hierarchical restrictions (governing, commanding, and other rules), we secondly hypothesize from our results feature/hierarchy exponentiation on a graph representation deriving from Gromov's Symbolic Dynamics, where combinatorial vertices from Fe are set to combinatorial vertices of Hie, and edges run from Fe to Hie, such that for every combinatorial group there are restriction maps representing different derivational levels that are subgraphs: the intersection on I defines pullbacks and deletion rules (under restriction maps), then under disjunction edges H, such that for the combinatorial map P belonging to Hie exponentiation by intersection there are pullbacks and projections that are equal to restriction maps RM₁ and RM₂. The model will draw on experimental biomathematics as well as structural frames, with a focus on Amazigh and English (cases from phonology/micro-semantics and syntax) and a shift from structure to event (especially the Amazigh formant principle resolving its morphological heterogeneity).
Keywords: rule/cycle addition/deletion, bio-mathematical methodology, general merge calculation, feature exponentiation, combinatorial maps, event sequence
Procedia PDF Downloads 125
19551 CMMI Key Process Areas and FDD Practices
Authors: Rituraj Deka, Nomi Baruah
Abstract:
The development of information technology over the past few years has resulted in the design of increasingly complex software. The outsourcing of software development places higher demands on the management of software development projects. Various software enterprises follow various paths in their pursuit of excellence, applying various principles, methods, and techniques along the way. Recent research shows that CMMI and Agile methodologies can complement each other, and organizations using both methods together have the potential to dramatically improve business performance. The paper describes a mapping between CMMI key process areas (KPAs) and the Feature-Driven Development (FDD) communication perspective, so as to increase the understanding of how improvements can be made in the software development process.
Keywords: Agile, CMMI, FDD, KPAs
Procedia PDF Downloads 456
19550 An Experimental Study of Diffuser-Enhanced Propeller Hydrokinetic Turbines
Authors: Matheus Nunes, Rafael Mendes, Taygoara Felamingo Oliveira, Antonio Brasil Junior
Abstract:
Wind tunnel experiments on horizontal-axis propeller hydrokinetic turbine models were carried out in order to determine the performance behavior for different configurations and operational ranges. The present experiments introduce the use of two different geometries of rear diffusers to enhance the performance of the free-flow machine. The present paper reports an increase in the power coefficient of about 50%-80%. This represents an important feature that has to be taken into account in the design of this kind of machine.
Keywords: diffuser-enhanced turbines, hydrokinetic turbine, wind tunnel experiments, micro hydro
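The power coefficient whose 50%-80% increase is reported above is conventionally defined against the kinetic power of the free stream through the rotor area; a minimal sketch (all numbers illustrative, not the study's measurements):

```python
def power_coefficient(power_w, rho_kg_m3, area_m2, speed_ms):
    """Cp = P / (0.5 * rho * A * v^3): extracted power over the kinetic power
    of the free stream passing through the rotor area."""
    return power_w / (0.5 * rho_kg_m3 * area_m2 * speed_ms ** 3)

# Illustrative only: a bare rotor at Cp = 0.30 improved by 50%-80% with a
# rear diffuser, matching the range reported in the abstract.
bare_cp = 0.30
diffuser_cp_range = (bare_cp * 1.5, bare_cp * 1.8)
```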
Procedia PDF Downloads 276
19549 Embedded Test Framework: A Solution Accelerator for Embedded Hardware Testing
Authors: Arjun Kumar Rath, Titus Dhanasingh
Abstract:
Embedded product development requires software to test hardware functionality during development and to find issues during manufacturing in larger quantities. As components become more integrated, devices are tested for their full functionality using advanced software tools, and benchmarking tools are used to measure and compare the performance of product features. At present, these tests are based on a variety of methods involving varying hardware and software platforms. Typically, these tests are custom built for every product and remain unusable for other variants. A majority of the tests go undocumented, are not updated, and become unusable once the product is released. To bridge this gap, a solution accelerator in the form of a framework can address these issues by running all these tests from one place, using an off-the-shelf test library in a continuous integration environment. There are many open-source test frameworks and tools (Fuego, LAVA, Autotest, KernelCI, etc.) designed for testing embedded system devices. Each has several unique good features, but no single tool or framework satisfies all of the testing needs for embedded systems; hence the need for an extensible framework that integrates multiple tools. Embedded product testing includes board bring-up testing, testing during manufacturing, firmware testing, application testing, and assembly testing. Traditional test methods involve developing test libraries and support components for every new hardware platform that belongs to the same domain with identical hardware architecture. This approach has drawbacks: platform-specific libraries cannot be reused, source infrastructure must be maintained for individual hardware platforms, and, most importantly, time is spent re-developing test cases for new hardware platforms. These limitations create challenges in test environment setup, scalability, and maintenance.
A desirable strategy is certainly one that is focused on maximizing reusability, continuous integration, and leveraging artifacts across the complete development cycle, during phases of testing and across a family of products. To overcome the stated challenges of the conventional method and deliver the benefits of embedded testing, an embedded test framework (ETF), a solution accelerator, is designed, which can be deployed in embedded-system-related products with minimal customization and maintenance to accelerate hardware testing. The embedded test framework supports testing different hardware, including microprocessors and microcontrollers. It offers benefits such as (1) time-to-market: it accelerates board bring-up time with prepackaged test suites supporting all necessary peripherals, which can speed up the design and development stages (board bring-up, manufacturing, and device drivers); (2) reusability: framework components isolated from platform-specific hardware initialization and configuration make the adaptation of test cases across various platforms quick and simple; (3) an effective build and test infrastructure with multiple test interface options, pre-integrated with the Fuego framework; (4) continuous integration: pre-integration with Jenkins enables continuous testing and an automated software update feature. Applying the embedded test framework accelerator throughout the design and development phases enables the development of well-tested systems before functional verification and improves time to market to a large extent.
Keywords: board diagnostics software, embedded system, hardware testing, test frameworks
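The reusability idea, test logic registered once and platform-specific hardware initialization isolated behind a single hook, can be sketched as follows. The class and method names here are illustrative, not the actual ETF API:

```python
class EmbeddedTestFramework:
    """Minimal sketch of the registry/runner idea: board tests are registered
    once and reused across hardware platforms, while platform-specific setup
    is isolated behind a single callable hook."""

    def __init__(self, platform_setup):
        self.platform_setup = platform_setup   # HW init, isolated from tests
        self.tests = []

    def register(self, name, func):
        """Add a test; func receives the initialized board handle."""
        self.tests.append((name, func))

    def run(self):
        """Initialize the platform once, then run every registered test."""
        board = self.platform_setup()
        return {name: func(board) for name, func in self.tests}
```

Porting to a new board then means swapping only the `platform_setup` hook; the registered test suite is reused unchanged.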
Procedia PDF Downloads 143
19548 Determining Coordinates of Ultra-Light Drones Based on the Time Difference of Arrival (TDOA) Method
Authors: Nguyen Huy Hoang, Do Thanh Quan, Tran Vu Kien
Abstract:
The use of active radar to measure the coordinates of ultra-light drones is frequently difficult due to long distances, extremely small radar cross-sections (RCS), and obstacles. Since ultra-light drones are usually controlled by radio frequency (RF) signals, the paper proposes a method to measure the coordinates of ultra-light drones in space based on the arrival times of the signal at the receiving antennas and the time difference of arrival (TDOA). The experimental results demonstrate that the proposed method is promising and highly accurate.
Keywords: ultra-light drone, TDOA, radar cross-section (RCS), RF
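A standard way to turn TDOA measurements into coordinates, not necessarily the paper's exact solver, is a Gauss-Newton least-squares fit of the range-difference equations; a 2-D sketch:

```python
import math

def tdoa_locate(receivers, tdoas, c=3e8, guess=(1.0, 1.0), iters=50):
    """Gauss-Newton fit of an emitter position from TDOAs measured against
    receiver 0. receivers: list of (x, y); tdoas: arrival-time differences
    for receivers 1..n-1 relative to receiver 0; c: propagation speed."""
    x, y = guess
    for _ in range(iters):
        d = [math.hypot(x - rx, y - ry) for rx, ry in receivers]
        # residuals and Jacobian rows for receivers 1..n-1
        J, r = [], []
        for i in range(1, len(receivers)):
            r.append((d[i] - d[0]) - c * tdoas[i - 1])
            J.append(((x - receivers[i][0]) / d[i] - (x - receivers[0][0]) / d[0],
                      (y - receivers[i][1]) / d[i] - (y - receivers[0][1]) / d[0]))
        # normal equations (2x2): (J^T J) delta = -J^T r
        a = sum(j[0] * j[0] for j in J)
        b = sum(j[0] * j[1] for j in J)
        cc = sum(j[1] * j[1] for j in J)
        g0 = sum(j[0] * ri for j, ri in zip(J, r))
        g1 = sum(j[1] * ri for j, ri in zip(J, r))
        det = a * cc - b * b
        if abs(det) < 1e-12:
            break
        x += (-cc * g0 + b * g1) / det
        y += (b * g0 - a * g1) / det
    return x, y
```

Each TDOA constrains the emitter to a hyperbola with two receivers as foci; with three or more independent TDOAs the least-squares fit pins down the intersection.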
Procedia PDF Downloads 206
19547 Classification of Digital Chest Radiographs Using Image Processing Techniques to Aid in Diagnosis of Pulmonary Tuberculosis
Authors: A. J. S. P. Nileema, S. Kulatunga , S. H. Palihawadana
Abstract:
A computer aided detection (CAD) system was developed for the diagnosis of pulmonary tuberculosis using digital chest X-rays and MATLAB image processing techniques with a statistical approach. The study comprised 200 digital chest radiographs collected from the National Hospital for Respiratory Diseases - Welisara, Sri Lanka. Pre-processing was done to remove identification details. Lung fields were segmented and then divided into four quadrants (right upper, left upper, right lower, and left lower) using image processing techniques in MATLAB. Contrast, correlation, homogeneity, energy, entropy, and maximum probability texture features were extracted using the gray level co-occurrence matrix method. Descriptive statistics and normal distribution analysis were performed using SPSS. Based on the radiologists' interpretation, chest radiographs were classified manually into PTB-positive (PTBP) and PTB-negative (PTBN) classes. Features with a standard normal distribution were analyzed using an independent-sample T-test for PTBP and PTBN chest radiographs. Among the six features tested, the contrast, correlation, energy, entropy, and maximum probability features showed a statistically significant difference between the two classes at the 95% confidence interval and could therefore be used in the classification of chest radiographs for PTB diagnosis.
With the resulting value ranges of the five texture features with normal distribution, a classification algorithm was then defined to recognize and classify the quadrant images. If the texture feature values of the quadrant image being tested fall within the defined region, it is identified as a PTBP (abnormal) quadrant and labeled 'Abnormal' in red, with its border highlighted in red; if the texture feature values fall outside the defined value range, it is identified as PTBN (normal) and labeled 'Normal' in blue, with no changes to the image outline. The developed classification algorithm showed a high sensitivity of 92%, which makes it an efficient CAD system, along with a modest specificity of 70%.
Keywords: chest radiographs, computer aided detection, image processing, pulmonary tuberculosis
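The six texture features named above come from the gray level co-occurrence matrix (GLCM); a small pure-Python sketch for a horizontal, distance-1 neighbour (the study's offset and quantization settings are not given in the abstract):

```python
import math

def glcm_features(img, levels=4):
    """GLCM for the horizontal distance-1 offset, plus the six Haralick-style
    features used in the study. Pixel values must be integers < levels."""
    glcm = [[0.0] * levels for _ in range(levels)]
    for row in img:
        for a, b in zip(row, row[1:]):      # co-occurring horizontal pairs
            glcm[a][b] += 1
    total = sum(sum(r) for r in glcm)
    p = [[v / total for v in r] for r in glcm]   # normalize to probabilities
    mu_i = sum(i * pij for i, r in enumerate(p) for pij in r)
    mu_j = sum(j * pij for r in p for j, pij in enumerate(r))
    sd_i = math.sqrt(sum((i - mu_i) ** 2 * pij for i, r in enumerate(p) for pij in r))
    sd_j = math.sqrt(sum((j - mu_j) ** 2 * pij for r in p for j, pij in enumerate(r)))
    feats = {
        "contrast": sum((i - j) ** 2 * p[i][j]
                        for i in range(levels) for j in range(levels)),
        "homogeneity": sum(p[i][j] / (1 + abs(i - j))
                           for i in range(levels) for j in range(levels)),
        "energy": sum(pij ** 2 for r in p for pij in r),
        "entropy": -sum(pij * math.log(pij) for r in p for pij in r if pij > 0),
        "max_probability": max(max(r) for r in p),
    }
    # correlation is undefined for a constant image (zero variance); use 1.0 then
    feats["correlation"] = (sum((i - mu_i) * (j - mu_j) * p[i][j]
                                for i in range(levels) for j in range(levels))
                            / (sd_i * sd_j)) if sd_i and sd_j else 1.0
    return feats
```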
Procedia PDF Downloads 125
19546 Designing User Interfaces for Just in Time Enterprise Solution
Authors: Romi Dey
Abstract:
Introduction: One of the most important criteria for a technology to sustain and grow is an elaborate and intuitive design methodology and design thinking. Designing enterprise applications that cater to Just in Time technology is one of the most challenging and detailed processes any user experience designer will come across. Description: Applying the basic principles of design to these technologies creates an immense challenge, leading to a set of redefined and revised design principles that can be applied to designing any Just in Time manufacturing solution. Findings: The thorough process of understanding the end users, the pain points they have faced in the real world, their responsibilities and expectations, their core needs, and, last but not least, their demands shapes the design methodologies for Just in Time solutions. With respect to the business aspect, design and design principles play a strong role in any form of innovation. Conclusion: Innovation and knowledge of the latest technologies are the keywords in the manufacturing industry. It is crucial for the product development team to be precise in its understanding of the technology and sure of end users' expectations.
Keywords: design thinking, enterprise application, Just in Time, user experience design
Procedia PDF Downloads 168
19545 Airy Wave Packet for a Particle in a Time-Dependant Linear Potential
Authors: M. Berrehail, F. Benamira
Abstract:
We study the quantum motion of a particle in the presence of a time-dependent linear potential, using an operator invariant that is quadratic in p and linear in q, within the framework of the Lewis-Riesenfeld invariant theory. The special invariant operator proposed in this work is demonstrated to be a Hermitian operator that has an Airy wave packet as its eigenfunction.
Keywords: Airy wave packet, invariant, time-dependent linear potential, unitary transformation
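For context, a Lewis-Riesenfeld invariant $I(t)$ for a Hamiltonian with a time-dependent linear potential is defined by the standard invariance condition (the specific quadratic-in-$p$, linear-in-$q$ form proposed in the paper is not reproduced here):

```latex
\frac{dI}{dt} \equiv \frac{\partial I}{\partial t} + \frac{1}{i\hbar}\,[I, H] = 0,
\qquad H = \frac{p^{2}}{2m} + f(t)\,q .
```

Solutions of the Schrödinger equation are then eigenfunctions of $I(t)$ with time-independent eigenvalues, multiplied by the Lewis-Riesenfeld phase $\alpha_n(t)$, which satisfies

```latex
\hbar\,\frac{d\alpha_{n}}{dt}
= \Big\langle \phi_{n} \Big|\, i\hbar\,\frac{\partial}{\partial t} - H \,\Big| \phi_{n} \Big\rangle .
```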
Procedia PDF Downloads 491
19544 Developing a Machine Learning-based Cost Prediction Model for Construction Projects using Particle Swarm Optimization
Authors: Soheila Sadeghi
Abstract:
Accurate cost prediction is essential for effective project management and decision-making in the construction industry. This study aims to develop a cost prediction model for construction projects using Machine Learning techniques and Particle Swarm Optimization (PSO). The research utilizes a comprehensive dataset containing project cost estimates, actual costs, resource details, and project performance metrics from a road reconstruction project. The methodology involves data preprocessing, feature selection, and the development of an Artificial Neural Network (ANN) model optimized using PSO. The study investigates the impact of various input features, including cost estimates, resource allocation, and project progress, on the accuracy of cost predictions. The performance of the optimized ANN model is evaluated using metrics such as Mean Squared Error (MSE), Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), and R-squared. The results demonstrate the effectiveness of the proposed approach in predicting project costs, outperforming traditional benchmark models. The feature selection process identifies the most influential variables contributing to cost variations, providing valuable insights for project managers. However, this study has several limitations. Firstly, the model's performance may be influenced by the quality and quantity of the dataset used. A larger and more diverse dataset covering different types of construction projects would enhance the model's generalizability. Secondly, the study focuses on a specific optimization technique (PSO) and a single Machine Learning algorithm (ANN). Exploring other optimization methods and comparing the performance of various ML algorithms could provide a more comprehensive understanding of the cost prediction problem. Future research should focus on several key areas. 
Firstly, expanding the dataset to include a wider range of construction projects, such as residential buildings, commercial complexes, and infrastructure projects, would improve the model's applicability. Secondly, investigating the integration of additional data sources, such as economic indicators, weather data, and supplier information, could enhance the predictive power of the model. Thirdly, exploring the potential of ensemble learning techniques, which combine multiple ML algorithms, may further improve cost prediction accuracy. Additionally, developing user-friendly interfaces and tools to facilitate the adoption of the proposed cost prediction model in real-world construction projects would be a valuable contribution to the industry. The findings of this study have significant implications for construction project management, enabling proactive cost estimation, resource allocation, budget planning, and risk assessment, ultimately leading to improved project performance and cost control. This research contributes to the advancement of cost prediction techniques in the construction industry and highlights the potential of Machine Learning and PSO in addressing this critical challenge. However, further research is needed to address the limitations and explore the identified future research directions to fully realize the potential of ML-based cost prediction models in the construction domain.Keywords: cost prediction, construction projects, machine learning, artificial neural networks, particle swarm optimization, project management, feature selection, road reconstruction
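As a toy illustration of the PSO component described above, the sketch below uses a particle swarm to fit a two-parameter cost model (a stand-in for the paper's PSO-tuned ANN) by minimizing mean squared error. The synthetic estimate-versus-actual-cost data, the linear model, and all swarm settings are illustrative assumptions, not the study's configuration.

```python
import random

random.seed(42)

# Hypothetical stand-in for the study's data: actual cost grows roughly
# linearly with the initial estimate (invented relationship, invented numbers).
data = [(x, 1.15 * x + 20.0) for x in range(10, 110, 10)]

def mse(params):
    """Mean squared prediction error of a two-parameter cost model."""
    w, b = params
    return sum((w * x + b - y) ** 2 for x, y in data) / len(data)

def pso(cost, dim=2, n_particles=30, iters=200,
        inertia=0.729, c1=1.494, c2=1.494):
    """Minimal global-best particle swarm optimizer."""
    pos = [[random.uniform(-30.0, 30.0) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (inertia * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = cost(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best, best_err = pso(mse)
```

In the study itself, a particle's position would encode the ANN's weights or hyperparameters, and the cost function would be the validation error of the trained network rather than this linear toy.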
Procedia PDF Downloads 55
19543 Climate Changes Impact on Artificial Wetlands
Authors: Carla Idely Palencia-Aguilar
Abstract:
Artificial wetlands play an important role in the Guasca Municipality in Colombia, not only because they are used for agroindustry, but also because more than 45 species were found there, some of which are endemic and migratory birds. Remote sensing with Aster and Modis images from different time periods was used to determine changes in the area of the artificial wetlands occupied by water. Evapotranspiration was also determined by three methods: the Surface Energy Balance System (SEBS, Su) algorithm, the Surface Energy Balance (SEBAL, Bastiaanssen) algorithm, and the FAO potential evapotranspiration method. Empirical equations were developed to relate the Normalized Difference Vegetation Index (NDVI) to net radiation, ambient temperature and rain, with an R2 of 0.83. Daily groundwater level fluctuations were studied as well. Data from a piezometer placed next to the wetland were fitted against rainfall records (from two weather stations located in the proximity of the wetlands) by means of multiple regression and time series analysis; the R2 between calculated and measured values was higher than 0.98. The nearby weather stations also provided data for ordinary kriging, as well as for the Digital Elevation Model (DEM) developed using PCI software. Standard models (exponential, spherical, circular, Gaussian, linear) describing spatial variation were tested. Ordinary cokriging of the height and rain variables was also tested to determine whether the accuracy of the interpolation would increase. The results showed no significant differences, given that ordinary kriging of the rain samples with the spherical function yielded a mean of 58.06 and a standard deviation of 18.06.
Cokriging, using a spherical function for the rain variable, a power function for the height variable and a spherical function for the cross variable (rain and height), gave a mean of 57.58 and a standard deviation of 18.36. Threats of eutrophication were also studied, given the lack of awareness among neighbours and governmental shortcomings. Water quality was monitored over the years; different parameters were studied to determine the chemical characteristics of the water. In addition, 600 pesticides were screened by gas and liquid chromatography. Results showed that coliforms, nitrogen, phosphorus and prochloraz were the most significant contaminants.Keywords: DEM, evapotranspiration, geostatistics, NDVI
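The standard semivariogram models named above can be written down directly; a fitted model of this kind supplies the weights used in ordinary kriging. A minimal sketch of three of them, with hypothetical nugget/sill/range values chosen only to exercise the functions:

```python
import math

def spherical(h, nugget, sill, rng):
    """Spherical semivariogram: rises to nugget + sill at the range, flat beyond."""
    if h >= rng:
        return nugget + sill
    return nugget + sill * (1.5 * h / rng - 0.5 * (h / rng) ** 3)

def exponential(h, nugget, sill, rng):
    """Exponential model, approaching the sill asymptotically."""
    return nugget + sill * (1.0 - math.exp(-3.0 * h / rng))

def gaussian(h, nugget, sill, rng):
    """Gaussian model: parabolic behaviour near the origin."""
    return nugget + sill * (1.0 - math.exp(-3.0 * (h / rng) ** 2))

# Hypothetical rain-gauge parameters: nugget 2, partial sill 16, range 40 km
gamma_at_range = spherical(40.0, 2.0, 16.0, 40.0)   # nugget + sill
gamma_at_zero = spherical(0.0, 2.0, 16.0, 40.0)     # nugget only
```

In practice the model is fitted to the empirical semivariogram of the station data, and the fitted γ(h) values fill the kriging system that yields interpolation weights.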
Procedia PDF Downloads 118
19542 Quantum Chemical Prediction of Standard Formation Enthalpies of Uranyl Nitrates and Its Degradation Products
Authors: Mohamad Saab, Florent Real, Francois Virot, Laurent Cantrel, Valerie Vallet
Abstract:
All spent nuclear fuel reprocessing plants use the PUREX process (Plutonium Uranium Refining by Extraction), a liquid-liquid extraction method. The organic extracting solvent is a mixture of tri-n-butyl phosphate (TBP) and a hydrocarbon solvent such as hydrogenated tetra-propylene (TPH). By chemical complexation, uranium and plutonium (from spent fuel dissolved in nitric acid solution) are separated from fission products and minor actinides. During a normal extraction operation, uranium is extracted into the organic phase as the UO₂(NO₃)₂(TBP)₂ complex. The TBP solvent can form an explosive mixture called red oil when it comes into contact with nitric acid. The formation of this unstable organic phase originates from the reaction between TBP and its degradation products on the one hand, and nitric acid, its derivatives and heavy metal nitrate complexes on the other. The decomposition of red oil can lead to violent explosive thermal runaway. These hazards are at the origin of several accidents, such as the two in the United States in 1953 and 1975 (Savannah River) and, more recently, the one in Russia in 1993 (Tomsk). This raises the question of the exothermicity of reactions that involve TBP and all other degradation products, and calls for a better knowledge of the underlying chemical phenomena. A simulation tool (Alambic) is currently being developed at IRSN that integrates thermal and kinetic functions related to the deterioration of uranyl nitrates in organic and aqueous phases, but not of the n-butyl phosphates. To include them in the modeling scheme, there is an urgent need to obtain the thermodynamic and kinetic functions governing the deterioration processes in the liquid phase. However, little is known about the thermodynamic properties, such as standard enthalpies of formation, of the n-butyl phosphate molecules and of the UO₂(NO₃)₂(TBP)₂, UO₂(NO₃)₂(HDBP)(TBP) and UO₂(NO₃)₂(HDBP)₂ complexes.
In this work, we propose to estimate these thermodynamic properties with quantum chemical methods (QM). In the first part of our project, we focused on the mono-, di-, and tri-butyl complexes. Quantum chemical calculations have been performed to study several reactions leading to the formation of mono-(H₂MBP), di-(HDBP), and TBP in the gas and liquid phases. The structures of all species were optimized in the gas phase using the B3LYP density functional with triple-ζ def2-TZVP basis sets for all atoms, and the corresponding harmonic frequencies were used without scaling to compute the vibrational partition functions at 298.15 K and 0.1 MPa. Accurate single-point energies were calculated using the efficient localized LCCSD(T) method, extrapolated to the complete basis set limit. Whenever species in the liquid phase are considered, solvent effects are included with the COSMO-RS continuum model. The standard enthalpies of formation of TBP, HDBP and H₂MBP are finally predicted with an uncertainty of about 15 kJ mol⁻¹. In the second part of this project, we investigated the fundamental properties of the three organic species that contribute most to the thermal runaway: UO₂(NO₃)₂(TBP)₂, UO₂(NO₃)₂(HDBP)(TBP), and UO₂(NO₃)₂(HDBP)₂, using the same quantum chemical methods that were used for TBP and its derivatives in both the gas and the liquid phase. We will discuss the structures and thermodynamic properties of all these species.Keywords: PUREX process, red oils, quantum chemical methods, hydrolysis
Procedia PDF Downloads 187
19541 Characterization on Molecular Weight of Polyamic Acids Using GPC Coupled with Multiple Detectors
Authors: Mei Hong, Wei Liu, Xuemin Dai, Yanxiong Pan, Xiangling Ji
Abstract:
Polyamic acid (PAA) is the precursor of polyimide (PI) prepared by a two-step method; its molecular weight and molecular weight distribution not only play an important role during preparation and processing, but also influence the final performance of the PI. However, precise characterization of the molecular weight of PAA is still a challenge because of the very complicated interactions in the solution system, including electrostatic, hydrogen-bond and dipole-dipole interactions. Thus, it is necessary to establish a suitable strategy that can completely suppress these complex effects and give reasonable molecular weight data. Herein, gel permeation chromatography (GPC) coupled with differential refractive index (RI) and multi-angle laser light scattering (MALLS) detectors was applied to measure the molecular weight of (6FDA-DMB) PAA using different mobile phases: LiBr/DMF, LiBr/H3PO4/THF/DMF, LiBr/HAc/THF/DMF, and LiBr/HAc/DMF. It was found that the combination of LiBr with HAc can shield the above-mentioned complex interactions and is more conducive to the separation of PAA than the addition of LiBr alone in DMF. LiBr/HAc/DMF was employed for the first time as a mild mobile phase to effectively separate PAA and determine its molecular weight. After a series of conditional experiments, 0.02 M LiBr/0.2 M HAc/DMF was fixed as the optimized mobile phase to measure the relative and absolute molecular weights of the prepared (6FDA-DMB) PAA, and the Mw obtained from GPC-MALLS and GPC-RI were 35,300 g/mol and 125,000 g/mol, respectively. Notably, such a mobile phase is also applicable to other PAA samples with different structures, and the final molecular weight results are reproducible.Keywords: Polyamic acids, Polyelectrolyte effects, Gel permeation chromatography, Mobile phase, Molecular weight
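For context on the two detector readouts above, GPC molecular-weight averages are computed from slice data in a standard way: each elution slice contributes a weight fraction (from the RI trace) and an assigned molar mass. A minimal sketch with invented slice data:

```python
# Each (M_i, w_i) pair: slice molar mass (g/mol) and weight fraction.
# The values below are invented for illustration, not PAA data.
slices = [(5e4, 0.1), (1e5, 0.3), (2e5, 0.4), (4e5, 0.2)]

def number_average(slices):
    # Mn = sum(w_i) / sum(w_i / M_i)
    return sum(w for m, w in slices) / sum(w / m for m, w in slices)

def weight_average(slices):
    # Mw = sum(w_i * M_i) / sum(w_i)
    return sum(w * m for m, w in slices) / sum(w for m, w in slices)

mn = number_average(slices)
mw = weight_average(slices)
pdi = mw / mn   # dispersity; always >= 1
```

A MALLS detector instead measures absolute Mw per slice from scattered light, which is why the abstract can quote both a relative (RI, calibration-based) and an absolute (MALLS) value.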
Procedia PDF Downloads 52
19540 Revisiting Classic Triad of Japanese Spotted Fever: A Case Series of Forty-Three Patients
Authors: Y. Kunitani, Y. Nakashima, S. Yamauchi, Y. Ishigami, K. Naito, K. Numata, M. Mizobe, Y. Homma, J. Takahashi, T. Inoue, T. Shiga, H. Funakoshi
Abstract:
Background: Japanese Spotted Fever (JSF) is one of the Rickettsial infections, caused by Rickettsia japonica, which is transmitted by ticks. JSF occurs in limited areas, such as Japan and South Korea. Its clinical triad is rash, eschar and fever. It often shows leukocytopenia, thrombocytopenia, elevated transaminases and high C-reactive protein (CRP). It can sometimes be life-threatening due to disseminated intravascular coagulation or multiple organ failure. Study Aim: The aim of this study is to describe the features of JSF, as this unique infection is rapidly growing in Japan. Methods: This is a case series of JSF from 2009 to 2016 at Mie Prefectural Hospital in Japan. We collected JSF cases diagnosed by polymerase chain reaction (PCR) of the skin or blood serum, or by the elevation of the antibody titer of paired blood samples. Results: There were 43 JSF patients (19 male, 24 female) with a median age of 71 years [IQR: 65-80]. The median body temperature was 38.1°C [IQR: 37.5-39.0]. 95% had a rash, 67% had an eschar and 50% had fever. The median WBC count was 6,700 [IQR: 5,750-8,200], and leukocytopenia was observed in only 7%. The median platelet count was 14x10⁴ [IQR: 10x10⁴-17x10⁴]; thrombocytopenia was observed in 65%. The median aspartate transaminase (AST) was 53 IU/L [IQR: 41-93]; the median alanine aminotransferase (ALT) was 34 IU/L [IQR: 24-54]; the median CRP was 10.4 mg/dL [IQR: 7.2-13.9]; the median lactate dehydrogenase (LDH) was 352 IU/L [IQR: 282-451]. CRP and LDH were elevated in almost all of the patients. The median length of stay in hospital was 8 days [IQR: 6-11]. All patients were treated with tetracycline and quinolone on the day of presentation. There were no fatalities from JSF. Conclusion: Patients with JSF classically present with eschar, rash and fever. However, in this study, half of the patients were afebrile.
Although JSF is not a common infectious disease worldwide, JSF should be considered as a potential diagnosis in a patient who has previously visited Japan or South Korea and presents with rash and eschar, with or without fever.Keywords: infectious disease, Japanese spotted fever, Rickettsial disease, Rickettsia japonica
Procedia PDF Downloads 228
19539 Strength Evaluation by Finite Element Analysis of Mesoscale Concrete Models Developed from CT Scan Images of Concrete Cube
Authors: Nirjhar Dhang, S. Vinay Kumar
Abstract:
Concrete is a non-homogeneous mix of coarse aggregates, sand, cement, air voids and the interfacial transition zone (ITZ) around aggregates. Adopting these complex structures and material properties in numerical simulation would lead to a better understanding and design of concrete. In this work, the mesoscale model of concrete has been prepared from X-ray computerized tomography (CT) images. These images are converted into a computer model and numerically simulated using commercially available finite element software. The mesoscale models are simulated under compressive displacement. The effects of the shape and distribution of aggregates, continuous and discrete ITZ thickness, voids, and variation of mortar strength have been investigated. The CT scan of a concrete cube consists of a series of two-dimensional slices. In total, 49 slices are obtained from a 150 mm cube, with a slice interval of approximately 3 mm. Since CT scanning is non-destructive, the same cube can later be tested under compression in a universal testing machine (UTM) to find its strength. The image processing and extraction of mortar and aggregates from the CT scan slices are performed by programming in Python. The digital colour image consists of red, green and blue (RGB) pixels. The RGB image is converted to a black and white (BW) image, and mesoscale constituents are identified by assigning values between 0 and 255. A pixel matrix is created for modeling the mortar, aggregates and ITZ. Pixels are normalized to a 0-9 scale reflecting relative strength: zero is assigned to voids, 4-6 to mortar and 7-9 to aggregates, while values between 1-3 identify the boundary between aggregates and mortar. In the next step, triangular and quadrilateral elements for plane stress and plane strain models are generated, depending on the option given.
Material properties, boundary conditions and the analysis scheme are specified in this module. Responses such as displacement, stresses and damage are evaluated by importing the input file into ABAQUS. This simulation evaluates the compressive strengths of the 49 slices of the cube. The model is meshed with more than sixty thousand elements. The effects of the shape and distribution of aggregates, the inclusion of voids and the variation of ITZ layer thickness on load-carrying capacity, stress-strain response and strain localization of concrete have been studied. The plane strain condition carried more load than the plane stress condition due to confinement. The CT scan technique can be used to obtain slices from concrete cores taken from an actual structure, and digital image processing can be used to find the shape and content of aggregates in the concrete. This may be further compared with test results of concrete cores and can be used as an important tool for strength evaluation of concrete.Keywords: concrete, image processing, plane strain, interfacial transition zone
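The 0-9 normalization and phase assignment described above can be sketched as a small mapping. The `normalize` scaling rule below is an assumption (the abstract does not give its exact formula), while the class boundaries follow the text:

```python
def normalize(gray):
    """Rescale an 8-bit grayscale value (0-255) to the 0-9 strength scale.
    Illustrative rule; the paper does not state its exact mapping."""
    return gray * 9 // 255

def classify_pixel(value):
    """Map a normalized 0-9 value to a mesoscale phase, per the text:
    0 = void, 1-3 = aggregate/mortar boundary (ITZ), 4-6 = mortar, 7-9 = aggregate."""
    if value == 0:
        return "void"
    if 1 <= value <= 3:
        return "ITZ"
    if 4 <= value <= 6:
        return "mortar"
    return "aggregate"

def classify_slice(grid):
    """Classify every pixel of one CT slice (a list of rows)."""
    return [[classify_pixel(v) for v in row] for row in grid]

phases = classify_slice([[0, 2, 5], [8, 9, 4]])
```

Each classified pixel then seeds a triangular or quadrilateral element with the material properties of its phase before export to the finite element solver.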
Procedia PDF Downloads 238
19538 Fast Terminal Synergetic Converter Control
Authors: Z. Bouchama, N. Essounbouli, A. Hamzaoui, M. N. Harmas
Abstract:
A new robust finite-time synergetic controller is presented, based on the recently developed synergetic control methodology and a terminal attractor technique. A Fast Terminal Synergetic Control (FTSC) scheme is proposed for controlling a DC-DC buck converter. Unlike Synergetic Control (SC) and sliding mode control, the proposed scheme offers finite-time convergence and is free of chattering. Simulations of stabilization and reference tracking for buck converter systems illustrate the effectiveness of the approach; stability is assured in the Lyapunov sense, and converse Lyapunov results involving scalar differential inequalities are given for finite-time stability.Keywords: dc-dc buck converter, synergetic control, finite time convergence, terminal synergetic control, fast terminal synergetic control, Lyapunov
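The finite-time behavior that distinguishes a terminal attractor from ordinary linear feedback can be seen on a scalar toy system dx/dt = u. This is only a sketch of the terminal-attractor idea, not the paper's FTSC law for the buck converter; the gains and the 1/3 exponent are arbitrary choices:

```python
def settle_time(control, x0=1.0, dt=1e-3, t_max=10.0, eps=1e-3):
    """Euler-integrate dx/dt = u(x) and return the first time |x| < eps."""
    x, t = x0, 0.0
    while t < t_max:
        if abs(x) < eps:
            return t
        x += dt * control(x)
        t += dt
    return t_max

def sign(v):
    return (v > 0) - (v < 0)

# Plain linear feedback: exponential, hence only asymptotic, convergence.
linear = lambda x: -2.0 * x
# Adding a terminal term ~|x|^(1/3) keeps the rate high near the origin,
# which yields convergence in finite time.
terminal = lambda x: -2.0 * x - 2.0 * sign(x) * abs(x) ** (1.0 / 3.0)

t_lin = settle_time(linear)
t_term = settle_time(terminal)
```

The terminal controller reaches the tolerance band well before the purely linear one, mirroring the finite-time convergence claimed for FTSC over classical SC.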
Procedia PDF Downloads 457
19537 Integrating Knowledge Distillation of Multiple Strategies
Authors: Min Jindong, Wang Mingxia
Abstract:
With the widespread use of artificial intelligence in everyday life, computer vision, especially deep convolutional neural network models, has developed rapidly. As real visual target detection tasks grow more complex and recognition accuracy improves, target detection network models have also become very large. Huge deep neural network models are not conducive to deployment on edge devices with limited resources, and the timeliness of network model inference is poor. In this paper, knowledge distillation is used to compress a huge and complex deep neural network model, and the knowledge contained in the complex network model is comprehensively transferred to another lightweight network model. Different from traditional knowledge distillation methods, we propose a novel knowledge distillation that incorporates multi-faceted features, called M-KD. When training and optimizing the deep neural network model for target detection, the soft target output of the teacher network, the relationships between the layers of the teacher network, and the feature attention maps of the teacher network's hidden layers are all transferred to the student network as knowledge. At the same time, we introduce an intermediate transition layer, that is, an intermediate guidance layer, between the teacher network and the student network to bridge the huge gap between them. Finally, this paper adds an exploration module to the traditional knowledge distillation teacher-student network model, so that the student network model not only inherits the knowledge of the teacher network but also explores some new knowledge and characteristics.
Comprehensive experiments using different distillation parameter configurations across multiple datasets and convolutional neural network models demonstrate that our proposed network model achieves substantial improvements in both speed and accuracy.Keywords: object detection, knowledge distillation, convolutional network, model compression
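The soft-target component mentioned above (one of the several knowledge sources M-KD combines) reduces to a temperature-scaled KL divergence between teacher and student outputs, in the style of Hinton-type distillation. A minimal sketch, with made-up logits:

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; higher T yields softer target distributions."""
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kd_loss(teacher_logits, student_logits, T=4.0):
    """KL(teacher_T || student_T), scaled by T^2 so gradients keep their
    magnitude as T grows. Soft-target term only; M-KD adds further terms."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return T * T * kl

loss_same = kd_loss([3.0, 1.0, 0.2], [3.0, 1.0, 0.2])   # matched outputs
loss_diff = kd_loss([3.0, 1.0, 0.2], [0.2, 1.0, 3.0])   # mismatched outputs
```

In full training this term is added to the hard-label detection loss and, in M-KD, to the inter-layer relationship and attention-map matching terms.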
Procedia PDF Downloads 276
19536 Effect of Built in Polarization on Thermal Properties of InGaN/GaN Heterostructures
Authors: Bijay Kumar Sahoo
Abstract:
An important feature of InₓGa₁-ₓN/GaN heterostructures is the strong built-in polarization (BIP) electric field at the hetero-interface due to spontaneous (sp) and piezoelectric (pz) polarizations. The intensity of this electric field reaches several MV/cm, and it has a profound impact on optical, electrical and thermal properties. In this work, the effect of the BIP field on the thermal conductivity of the InₓGa₁-ₓN/GaN heterostructure has been investigated theoretically. The interaction between the elastic strain and the built-in electric field induces an additional electric polarization, which contributes to the elastic constant of the InₓGa₁-ₓN alloy and in turn modifies its material parameters. The BIP mechanism enhances the elastic constant, phonon velocity and Debye temperature and their bowing constants in the InₓGa₁-ₓN alloy. These enhanced thermal parameters increase the phonon mean free path, which boosts the thermal conduction process. The thermal conductivity (k) of the InₓGa₁-ₓN alloy has been estimated for x = 0, 0.1, 0.3 and 0.9. The computations find that, irrespective of In content, the room-temperature k of the InₓGa₁-ₓN/GaN heterostructure is enhanced by the BIP mechanism. Our analysis shows that at a certain temperature the k curves with and without BIP cross over. Below this temperature, k with the BIP field is lower than k without it; above this temperature, the BIP mechanism contributes significantly, making k with the BIP field higher than k without it. The crossover temperature is the primary pyroelectric transition temperature, which has been predicted for the InₓGa₁-ₓN alloy for different x. This signature of pyroelectric behavior suggests that thermal conductivity can reveal pyroelectricity in the InₓGa₁-ₓN alloy. The composition-dependent room-temperature k for x = 0.1 and 0.3 is in line with prior experimental studies.
The result can be used to minimize the self-heating effect in InₓGa₁-ₓN/GaN heterostructures.Keywords: built-in polarization, phonon relaxation time, thermal properties of InₓGa₁-ₓN /GaN heterostructure, self-heating
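The qualitative argument above — a stiffer lattice raises phonon velocity and mean free path, which raises k — can be illustrated with the kinetic-theory estimate k = (1/3)·C·v·l. The numbers below are illustrative GaN-like values, not the paper's computed results:

```python
def thermal_conductivity(c_vol, v_phonon, mfp):
    """Kinetic-theory estimate of lattice thermal conductivity:
    k = (1/3) * C * v * l, with C the volumetric heat capacity (J m^-3 K^-1),
    v the average phonon velocity (m s^-1), l the phonon mean free path (m)."""
    return c_vol * v_phonon * mfp / 3.0

# Illustrative room-temperature numbers for a GaN-like crystal (assumed):
k_no_bip = thermal_conductivity(2.6e6, 5000.0, 3.0e-8)   # W m^-1 K^-1
# BIP stiffens the lattice: a slightly higher phonon velocity and longer
# mean free path raise k, qualitatively matching the enhancement above.
k_with_bip = thermal_conductivity(2.6e6, 5200.0, 3.3e-8)
```

The paper's actual calculation uses full phonon relaxation-time expressions, but this single-formula estimate captures why the BIP-enhanced parameters push k upward.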
Procedia PDF Downloads 407
19535 Modeling of the Fermentation Process of Enzymatically Extracted Annona muricata L. Juice
Authors: Calister Wingang Makebe, Wilson Agwanande Ambindei, Zangue Steve Carly Desobgo, Abraham Billu, Emmanuel Jong Nso, P. Nisha
Abstract:
Traditional liquid-state fermentation of Annona muricata L. juice can result in fluctuating product quality and quantity due to difficulties in control and scale-up. This work describes a laboratory-scale batch fermentation process to produce a probiotic from enzymatically extracted Annona muricata L. juice, modeled using the Doehlert design with incubation time, temperature and enzyme concentration as independent extraction factors. It aimed at a better understanding of the traditional process as an initial step toward future optimization. Annona muricata L. juice was fermented with L. acidophilus (NCDC 291) (LA), L. casei (NCDC 17) (LC), and a blend of LA and LC (LCA) for 72 h at 37 °C. Experimental data were fitted to mathematical models (the Monod, logistic, and Luedeking-Piret models) using MATLAB software to describe biomass growth, sugar utilization and organic acid production. The optimal fermentation time, based on cell viability, was 24 h for LC and 36 h for LA and LCA. The model was particularly effective in estimating biomass growth, reducing sugar consumption and lactic acid production. The values of the determination coefficient R2 were 0.9946, 0.9913 and 0.9946, while the residual sums of squared errors (SSE) were 0.2876, 0.1738 and 0.1589 for LC, LA and LCA, respectively. The growth kinetic parameters included the maximum specific growth rate µm, which was 0.2876 h⁻¹, 0.1738 h⁻¹ and 0.1589 h⁻¹, and the substrate saturation constant Ks, which was 9.0680 g/L, 9.9337 g/L and 9.0709 g/L, respectively, for LC, LA and LCA. For the stoichiometric parameters, the yield of biomass on utilized substrate (YXS) was 50.7932, 3.3940 and 61.0202, and the yield of product on utilized substrate (YPS) was 2.4524, 0.2307 and 0.7415 for LC, LA and LCA, respectively. In addition, the maintenance energy parameter (ms) was 0.0128, 0.0001 and 0.0004 for LC, LA and LCA, respectively.
With the kinetic model proposed by Luedeking and Piret for the lactic acid production rate, the growth-associated and non-growth-associated coefficients were determined as 1.0028 and 0.0109, respectively. The model was demonstrated for batch growth of LA, LC and LCA in Annona muricata L. juice. The present investigation validates the potential of an Annona muricata L.-based medium for the economical production of a probiotic product.Keywords: L. acidophilus, L. casei, fermentation, modelling, kinetics
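The coupled growth/product kinetics above can be sketched with a logistic biomass model feeding the Luedeking-Piret rate law. The α and β values below are the paper's reported coefficients and µm its LC value; the initial and maximum biomass concentrations are invented for illustration:

```python
def ferment(x0=0.1, x_max=5.0, mu_max=0.2876,
            alpha=1.0028, beta=0.0109, dt=0.01, hours=48.0):
    """Euler integration of logistic biomass growth (g/L) coupled to the
    Luedeking-Piret product law: dP/dt = alpha*dX/dt + beta*X.
    x0 and x_max are illustrative, not fitted values from the study."""
    x, p, t = x0, 0.0, 0.0
    while t < hours:
        dx = mu_max * x * (1.0 - x / x_max)   # logistic growth rate
        p += dt * (alpha * dx + beta * x)     # growth- + non-growth-associated
        x += dt * dx
        t += dt
    return x, p

biomass, lactic_acid = ferment()
```

With α near 1 and β small, lactic acid tracks biomass almost one-to-one during growth and then keeps accumulating slowly through the β (non-growth-associated) term once the culture plateaus.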
Procedia PDF Downloads 64
19534 Comparative Study between the Absorbed Dose of 67Ga-ECC and 68Ga-ECC
Authors: H. Yousefnia, S. Zolghadri, S. Shanesazzadeh, A.Lahooti, A. R. Jalilian
Abstract:
In this study, 68Ga-ECC and 67Ga-ECC were both prepared with a radiochemical purity higher than 97% in less than 30 min. The biodistribution data for 68Ga-ECC showed excretion of most of the activity through the urinary tract. The absorbed dose was estimated from biodistribution data in mice by the medical internal radiation dose (MIRD) method. Comparison of the human absorbed dose estimates for the two agents indicated values approximately ten-fold higher in most organs after injection of 67Ga-ECC than of 68Ga-ECC. The results showed that 68Ga-ECC can be considered a more promising agent for renal imaging than 67Ga-ECC.Keywords: effective absorbed dose, ethylenecysteamine cysteine, Ga-67, Ga-68
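For context, the MIRD scheme estimates absorbed dose as the cumulated activity (the time-integral of the organ's activity curve) multiplied by an S-value. A sketch with hypothetical kidney time-activity points and an invented S-value — none of these numbers come from the study:

```python
import math

def cumulated_activity(samples, half_life_h):
    """samples: list of (time_h, activity_MBq) biodistribution points.
    Trapezoidal area under the curve, plus an analytic tail after the last
    point assuming physical decay only. Returns MBq*h."""
    lam = math.log(2) / half_life_h
    area = 0.0
    for (t1, a1), (t2, a2) in zip(samples, samples[1:]):
        area += 0.5 * (a1 + a2) * (t2 - t1)
    area += samples[-1][1] / lam          # integral of the exponential tail
    return area

# Hypothetical kidney points for a Ga-68 tracer (physical t1/2 ~ 1.13 h)
samples = [(0.25, 8.0), (0.5, 6.0), (1.0, 3.5), (2.0, 1.2)]
a_tilde = cumulated_activity(samples, half_life_h=1.13)

# MIRD: absorbed dose = cumulated activity x S-value (S here is invented)
S_kidney = 0.05                            # mGy per MBq*h, illustrative only
dose_mgy = a_tilde * S_kidney
```

The much longer half-life of 67Ga (about 3.26 days versus 68 minutes for 68Ga) inflates the decay tail of the cumulated activity, which is consistent with the roughly ten-fold higher organ doses reported above for 67Ga-ECC.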
Procedia PDF Downloads 468
19533 A Reflective Investigation on the Course Design and Coaching Strategy for Creating a Trans-Disciplinary Learning Environment
Authors: Min-Feng Hsieh
Abstract:
Nowadays, we face a highly competitive environment in which survival has become more critical than ever before. The challenges we confront can no longer be dealt with by a single system of knowledge; the abilities we urgently need to acquire are those that let us cross the boundaries between different disciplines and take us to a neutral ground that gathers and integrates the powers and intelligence that surround us. This paper discusses how a trans-disciplinary design course organized by the College of Design at Chaoyang University responds to this modern challenge. By orchestrating an experimental course format and developing a series of coaching strategies, a trans-disciplinary learning environment has been created and practiced in which students selected from five different departments, including Architecture, Interior Design, Visual Design, Industrial Design, and Landscape and Urban Design, are encouraged to think outside their familiar knowledge pool and to learn with and from each other. While implementing this program, parallel research was conducted by adopting the theory and principles of Action Research, a methodology that provides the course organizer with emergent, responsive, action-oriented, participative and critically reflective insights for immediate changes and amendments to improve the teaching and learning experience. In the conclusion, it is pointed out how the learning and teaching experience of this trans-disciplinary design studio offers observations that help us reflect upon the constraints and divisions caused by the subject-based curriculum.
The outcome of this experimental course exemplifies an alternative approach that we could adopt in remedying the problematic issues of current educational practice.Keywords: course design, coaching strategy, subject-based curriculum, trans-disciplinary
Procedia PDF Downloads 202
19532 A Real-time Classification of Lying Bodies for Care Application of Elderly Patients
Authors: E. Vazquez-Santacruz, M. Gamboa-Zuniga
Abstract:
In this paper, we show a methodology for classifying lying bodies in real time using HOG descriptors and pressure sensors positioned in matrix form (14 x 32 sensors) on the surface where bodies lie. Our system is embedded in a care robot that can assist elderly patients and the surrounding medical staff, helping achieve a better quality of life in and out of hospitals. Due to current technology, a limited number of sensors is used, which results in a low-resolution data array that is treated as an image of 14 x 32 pixels. Our work considers the problem of human posture classification with little information (few sensors), applying digital processing to expand the original sensor data and thus obtain more significant data for classification; this is done with low-cost algorithms to ensure real-time execution.Keywords: real-time classification, sensors, robots, health care, elderly patients, artificial intelligence
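The HOG building block referred to above is a gradient-orientation histogram accumulated over a cell of the pressure image. A simplified sketch (single cell, no block normalization) on a toy pressure map; the cell size and bin count are conventional defaults, not necessarily the paper's settings:

```python
import math

def gradient_histogram(img, bins=9):
    """Unsigned gradient-orientation histogram (0-180 degrees) over one cell,
    the building block of a HOG descriptor. Border pixels are skipped for
    simplicity; magnitudes vote into orientation bins."""
    h, w = len(img), len(img[0])
    hist = [0.0] * bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]   # central differences
            gy = img[y + 1][x] - img[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0
            hist[int(ang / (180.0 / bins)) % bins] += mag
    return hist

# Toy patch of a 14 x 32-style pressure map: a vertical pressure edge
# produces purely horizontal gradients, i.e. votes in the 0-degree bin.
cell = [[0, 0, 9, 9]] * 4
hist = gradient_histogram(cell)
```

Concatenating such per-cell histograms over the whole 14 x 32 map yields the feature vector that the posture classifier consumes.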
Procedia PDF Downloads 864