Search results for: multivariate time series data
38233 Incidence and Predictors of Mortality Among HIV Positive Children on Art in Public Hospitals of Harer Town, Enrolled From 2011 to 2021
Authors: Getahun Nigusie
Abstract:
Background: Antiretroviral treatment reduces HIV-related morbidity and prolongs the survival of patients; however, there is a lack of up-to-date information concerning the treatment's long-term effect on the survival of HIV-positive children, especially in the study area. Objective: To assess the incidence and predictors of mortality among HIV-positive children on ART in public hospitals of Harer town who were enrolled from 2011 to 2021. Methodology: An institution-based retrospective cohort study was conducted among 429 HIV-positive children enrolled in the ART clinic from January 1st, 2011 to December 30th, 2021. Data were collected from medical cards using a data extraction form. Descriptive analyses were used to summarize the results, and a life table was used to estimate survival probability at specific points in time after the introduction of ART. The Kaplan-Meier survival curve together with the log-rank test was used to compare survival between different categories of covariates, and a multivariate Cox proportional hazards regression model was used to estimate adjusted hazard rates. Variables with p-values ≤ 0.25 in the bivariable analysis were candidates for the multivariable analysis. Finally, variables with p-values < 0.05 were considered significant. Results: The study participants were followed for a total of 2549.6 child-years (30596 child-months), with an overall mortality rate of 1.5 (95% CI: 1.1, 2.04) per 100 child-years. Their median survival time was 112 months (95% CI: 101–117). There were 38 children with unknown outcome, 39 deaths, and 55 children transferred out to different facilities. The overall survival at 6, 12, 24, and 48 months was 98%, 96%, 95%, and 94%, respectively.
Being in WHO clinical stage four (AHR=4.55, 95% CI: 1.36, 15.24), having anemia (AHR=2.56, 95% CI: 1.11, 5.93), a low baseline absolute CD4 count (AHR=2.95, 95% CI: 1.22, 7.12), stunting (AHR=4.1, 95% CI: 1.11, 15.42), wasting (AHR=4.93, 95% CI: 1.31, 18.76), poor adherence to treatment (AHR=3.37, 95% CI: 1.25, 9.11), having TB infection at enrollment (AHR=3.26, 95% CI: 1.25, 8.49), and no history of regimen change (AHR=7.1, 95% CI: 2.74, 18.24) were independent predictors of death. Conclusion: More than half of the deaths occurred within 2 years. Prevalent tuberculosis, anemia, wasting and stunting nutritional status, socioeconomic factors, and baseline opportunistic infection were independent predictors of death. Increased early screening and management of those predictors are required.
Keywords: human immunodeficiency virus-positive children, anti-retroviral therapy, survival, Ethiopia
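The life-table and Kaplan-Meier survival estimates described above can be sketched with a minimal product-limit estimator; the follow-up times and event flags below are hypothetical, not the study's data.

```python
# Minimal Kaplan-Meier (product-limit) estimator. Follow-up times are in
# months; event = 1 means a death was observed, 0 means the child was
# censored (transferred out or lost to follow-up). Data are illustrative.

def kaplan_meier(times, events):
    """Return a list of (t, S(t)) pairs at each distinct event time."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    survival = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for tt, e in data[i:] if tt == t and e == 1)
        ties = sum(1 for tt, _ in data[i:] if tt == t)
        if deaths:
            survival *= 1.0 - deaths / n_at_risk
            curve.append((t, survival))
        n_at_risk -= ties
        i += ties
    return curve

# Toy cohort: deaths at months 1, 2, 3; censoring at months 2 and 4.
curve = kaplan_meier([1, 2, 2, 3, 4], [1, 1, 0, 1, 0])
```

Each factor (1 - deaths/at-risk) discounts the survival curve only at observed event times, which is exactly why censored children still contribute person-time without counting as deaths.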
Procedia PDF Downloads 22
38232 Prediction of Scour Profile Caused by Submerged Three-Dimensional Wall Jets
Authors: Abdullah Al Faruque, Ram Balachandar
Abstract:
A series of laboratory tests was carried out to study the extent of scour caused by a three-dimensional wall jet exiting from a square cross-section nozzle into a non-cohesive sand bed. Previous observations have indicated that the effect of the tailwater depth is significant for densimetric Froude numbers greater than ten. However, the present results indicate that the cut-off value could be lower, depending on the value of the grain size-to-nozzle width ratio. A number of equations are derived for better scaling of the various scour parameters. Empirical predictions are also suggested for the scour centerline profile and the plan view of the scour profile at any particular time.
Keywords: densimetric Froude number, jets, nozzle, sand, scour, tailwater, time
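The densimetric Froude number that governs the tailwater effect is commonly defined in the wall-jet scour literature as U0/sqrt(g' d50), with g' the reduced gravity of the sediment; a small sketch under that assumed definition, with illustrative jet and sand values:

```python
import math

def densimetric_froude(u0, d50, rho_s=2650.0, rho=1000.0, g=9.81):
    """Densimetric Froude number F0 = U0 / sqrt(g' d50),
    where g' = g * (rho_s - rho) / rho is the reduced gravity of the grains."""
    g_prime = g * (rho_s - rho) / rho
    return u0 / math.sqrt(g_prime * d50)

# Illustrative case: 2 m/s jet exit velocity over 1 mm quartz sand.
fr = densimetric_froude(u0=2.0, d50=0.001)
```

With these values the jet sits above the threshold of ten mentioned in the abstract, so the tailwater depth would be expected to matter.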
Procedia PDF Downloads 435
38231 Futuristic Black Box Design Considerations and Global Networking for Real Time Monitoring of Flight Performance Parameters
Authors: K. Parandhama Gowd
Abstract:
The aim of this research paper is to conceptualize, discuss, analyze, and propose alternate design methodologies for a futuristic black box for flight safety. The proposal also includes global networking concepts for real-time surveillance and monitoring of flight performance parameters, including GPS parameters. It is expected that this proposal will serve as a failsafe real-time diagnostic tool for accident investigation and for locating debris in real time. In this paper, an attempt is made to improve the existing flight data recording techniques and the design considerations for a futuristic FDR, to overcome the trauma of not being able to locate the black box. Since modern-day communications and information technologies with large bandwidth are available, coupled with faster computer processing techniques, the failsafe recording technique developed in this paper is feasible. Furthermore, data fusion/data warehousing technologies are available for exploitation.
Keywords: flight data recorder (FDR), black box, diagnostic tool, global networking, cockpit voice and data recorder (CVDR), air traffic control (ATC), air traffic, telemetry, tracking and control centers (ATTTCC)
Procedia PDF Downloads 572
38230 Long-Term Indoor Air Monitoring for Students with Emphasis on Particulate Matter (PM2.5) Exposure
Authors: Seyedtaghi Mirmohammadi, Jamshid Yazdani, Syavash Etemadi Nejad
Abstract:
One of the main indoor air parameters in classrooms is dust pollution, and its effect depends on the particle size and exposure duration. However, there is a lack of data about the exposure level to PM2.5 concentrations in rural-area classrooms. The objective of the current study was exposure assessment of PM2.5 for students in classrooms. One year of monitoring was carried out in fifteen schools by time-series sampling to evaluate the indoor air PM2.5 in the rural district of Sari city, Iran. A hygrometer and a thermometer were used to measure psychrometric parameters (temperature, relative humidity, and wind speed), and a real-time dust monitor (MicroDust Pro, Casella, UK) was used to monitor the particulate matter (PM2.5) concentration. The results show that the mean indoor PM2.5 concentration in the studied classrooms was 135 µg/m³. The regression model indicated a positive correlation between the indoor PM2.5 concentration and relative humidity, as well as with distance from the city center and classroom size. Meanwhile, the regression model revealed that the indoor PM2.5 concentration, the relative humidity, and the dry-bulb temperature were significant at the 0.05, 0.035, and 0.05 levels, respectively. A statistical predictive model for the indoor PM2.5 concentration as a function of the indoor psychrometric conditions was obtained from multiple regression modeling.
Keywords: classrooms, concentration, humidity, particulate matters, regression
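The multiple-regression step can be sketched as an ordinary least-squares fit of PM2.5 on psychrometric predictors; the data below are synthetic, not the study's measurements.

```python
import numpy as np

# Synthetic stand-in for the monitoring data: PM2.5 generated from a known
# linear relation with humidity and dry-bulb temperature plus noise, so the
# fit can be checked against the true coefficients.
rng = np.random.default_rng(0)
n = 200
humidity = rng.uniform(30, 90, n)        # relative humidity, %
temperature = rng.uniform(15, 30, n)     # dry-bulb temperature, deg C
pm25 = 20 + 1.2 * humidity - 0.8 * temperature + rng.normal(0, 1, n)

# Ordinary least squares via the design matrix [1, RH, T].
X = np.column_stack([np.ones(n), humidity, temperature])
coef, *_ = np.linalg.lstsq(X, pm25, rcond=None)
intercept, b_humidity, b_temperature = coef
```

The recovered slope on humidity is positive, matching the positive correlation the abstract reports between indoor PM2.5 and relative humidity.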
Procedia PDF Downloads 335
38229 The Development of E-Commerce in Mexico: An Econometric Analysis
Authors: Alma Lucero Ortiz, Mario Gomez
Abstract:
Technological advances contribute to the well-being of humanity by allowing people to perform tasks more efficiently. Technology offers tangible advantages to countries through the adoption of information technologies, communication, and the Internet in all social and productive sectors. The Internet is a networking infrastructure that allows people throughout the world to communicate, exceeding the limits of time and space. Nowadays, the Internet has changed the way of doing business, leading to a digital economy. In this context, e-commerce has emerged as commercial transactions conducted over the Internet. For this inquiry, e-commerce is seen as a source of economic growth for the country. Thereby, this research aims to answer the research question of which variables have mainly affected the development of e-commerce in Mexico. The research covers a period of study from 1990 to 2017 and aims to get insight into how the independent variables influence e-commerce development. The independent variables are information infrastructure construction, urbanization level, economic level, technology level, human capital level, educational level, standard of living, and price index. The results suggest that the independent variables have an impact on the development of e-commerce in Mexico. The present study is carried out in five parts. After the introduction, the second part presents a literature review of the main qualitative and quantitative studies measuring the variables subject to study. Then, an empirical study is applied to time-series data, and an econometric model is performed to process the data. In the fourth part, the analysis and discussion of results are presented, and finally, some conclusions are included.
Keywords: digital economy, e-commerce, econometric model, economic growth, internet
Procedia PDF Downloads 239
38228 Execution Time Optimization of Workflow Network with Activity Lead-Time
Authors: Xiaoping Qiu, Binci You, Yue Hu
Abstract:
The execution time of a workflow network has an important effect on the efficiency of the business process. In this paper, the activity execution time is divided into service time and waiting time, and the lead time is then extracted from the waiting time. The execution time formulas of the three basic structures in the workflow network are deduced based on the activity lead time. Taking the process of e-commerce logistics as an example, appropriate lead times are inserted for key activities using a Petri net, and an execution time optimization model is built to minimize the waiting time under time-cost constraints. A solution program written in VC++ 6.0 is then compiled to obtain the optimal solution, which reduces the waiting time of key activities in the workflow and verifies the role of lead time in the timeliness of e-commerce logistics.
Keywords: electronic business, execution time, lead time, optimization model, Petri net, time workflow network
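The execution-time formulas for the basic workflow structures (sequence, parallel split/join, exclusive choice) can be sketched as follows; the structure composition, branch probabilities, and (service, waiting) times are illustrative assumptions, not the paper's model.

```python
# Execution-time formulas for three basic workflow structures. Each activity
# is a (service_time, waiting_time) pair; the lead time the paper discusses
# would be carved out of the waiting component.

def sequence_time(activities):
    """Sequential routing: times add up."""
    return sum(service + waiting for service, waiting in activities)

def parallel_time(branches):
    """AND-split / AND-join: the slowest branch dominates."""
    return max(sequence_time(b) for b in branches)

def choice_time(branches):
    """XOR-split: expected time over branch probabilities (p, activities)."""
    return sum(p * sequence_time(b) for p, b in branches)

# Toy e-commerce logistics fragment, times in hours.
pick_pack = [(2.0, 1.0), (1.0, 0.5)]
ship_air = [(5.0, 2.0)]
ship_road = [(9.0, 1.0)]

total = sequence_time(pick_pack) + choice_time([(0.3, ship_air),
                                                (0.7, ship_road)])
```

Inserting lead time for a key activity amounts to shrinking its waiting term, which lowers `total` directly in the sequential formula.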
Procedia PDF Downloads 176
38227 Numerical Simulation of Different Configurations for a Combined Gasification/Carbonization Reactors
Authors: Mahmoud Amer, Ibrahim El-Sharkawy, Shinichi Ookawara, Ahmed Elwardany
Abstract:
Gasification and carbonization are two of the most common ways of biomass utilization. Both processes consume part of the waste to sustain themselves, through incomplete combustion and through heating, for gasification and carbonization, respectively. The focus of this paper is to minimize the part of the waste that is used for heating the biomass for gasification and carbonization. This is achieved by combining gasifiers and carbonization reactors in a single unit, so that the heat in the product biogas is utilized to heat the wastes in the carbonization reactors. Three different designs are proposed for the combined gasification/carbonization (CGC) reactor. These include a parallel combination of two gasifiers and a carbonizer; a carbonizer and combustion chamber; and one gasifier, carbonizer, and combustion chamber. They are tested numerically using ANSYS Fluent computational fluid dynamics software to ensure homogeneity of the temperature distribution inside the carbonization part of the CGC reactor. 2D simulations are performed for the three cases after obtaining mesh-size- and time-step-independent solutions. The carbonization part is common among the three cases; the difference among them is how this carbonization reactor is heated. The simulation results showed that the first design could provide only a partially homogeneous temperature distribution, not across the whole reactor. This means that the amount of carbonized biomass produced will be reduced, as it will only fill a specified height of the reactor. To keep the carbonized product output high, a series combination is proposed. This series configuration resulted in a uniform temperature distribution across the whole reactor, as it has only one heat source and no temperature gradient on any surface of the carbonization section. The simulations provided a satisfactory result: either the parallel combination of gasifier and carbonization reactor can be used with a reduced carbonized amount, or a series configuration can be used to keep the production rate high.
Keywords: numerical simulation, carbonization, gasification, biomass, reactor
Procedia PDF Downloads 102
38226 Harmonic Data Preparation for Clustering and Classification
Authors: Ali Asheibi
Abstract:
The rapid increase in the size of the databases required to store power quality monitoring data has demanded new techniques for analysing and understanding the data. One suggested technique to assist in analysis is data mining. Preparing raw data to be ready for data mining exploration takes up most of the effort and time spent in the whole data mining process. Clustering is an important technique in data mining and machine learning in which underlying and meaningful groups of data are discovered. Large amounts of harmonic data have been collected from an actual harmonic monitoring system in a distribution system in Australia over three years. This amount of acquired data makes it difficult to identify operational events that significantly impact the harmonics generated in the system. In this paper, harmonic data preparation processes for better understanding of the data are presented. The underlying classes in these data have then been identified using a clustering technique based on the Minimum Message Length (MML) method. The underlying operational information contained within the clusters can be rapidly visualised by engineers. The C5.0 algorithm was used for classification and interpretation of the generated clusters.
Keywords: data mining, harmonic data, clustering, classification
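As a loose stand-in for the MML-based clustering step (the MML criterion itself is not shown here), a plain k-means pass over synthetic one-dimensional harmonic-magnitude readings illustrates how underlying operating groups emerge from prepared data:

```python
# Plain 1-D k-means, a generic substitute for the paper's MML clustering.
# The readings are invented harmonic-distortion magnitudes (%), with two
# operating regimes baked in at roughly 1% and 5%.

def kmeans_1d(values, centroids, iters=20):
    for _ in range(iters):
        groups = [[] for _ in centroids]
        for v in values:
            idx = min(range(len(centroids)), key=lambda i: abs(v - centroids[i]))
            groups[idx].append(v)
        centroids = [sum(g) / len(g) if g else c
                     for g, c in zip(groups, centroids)]
    return centroids, groups

readings = [0.9, 1.0, 1.1, 1.2, 4.8, 5.0, 5.1, 5.3]
centroids, groups = kmeans_1d(readings, centroids=[0.0, 10.0])
```

Unlike MML, k-means needs the number of clusters up front; the MML criterion's appeal is precisely that it trades model complexity against fit to choose that number automatically.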
Procedia PDF Downloads 248
38225 Finite Element Analysis of Hollow Structural Shape (HSS) Steel Brace with Infill Reinforcement under Cyclic Loading
Authors: Chui-Hsin Chen, Yu-Ting Chen
Abstract:
Special concentrically braced frames are one of the seismic load-resisting systems; they dissipate seismic energy when bracing members within the frames undergo yielding and buckling while sustaining their axial tension and compression load capacities. Most of the inelastic deformation of a buckling bracing member concentrates in the mid-length region. While experiencing cyclic loading, this region dissipates most of the seismic energy input into the frame. Such a concentration makes the braces vulnerable to failure modes associated with low-cycle fatigue. In this research, a strategy to improve the cyclic behavior of the conventional steel bracing member is proposed by filling the Hollow Structural Shape (HSS) member with reinforcement, which prevents the local section from concentrating large plastic deformation caused by cyclic loading. The infill helps spread the plastic hinge region over a wider area and hence postpones the initiation of local buckling or even the rupture of the braces. The finite element method is introduced to simulate the complicated bracing member behavior and the member-versus-infill interaction under cyclic loading. Fifteen 3-D-element-based models are built with ABAQUS software. The FEM model is verified against cyclic test data of unreinforced (UR) HSS bracing members and bending test data of aluminum honeycomb plates. The numerical models include UR and filled HSS bracing members with various compactness ratios based on the specifications of AISC-2016 and AISC-1989. The primary variables investigated are the relative bending stiffness and the material of the filling reinforcement. The distributions of von Mises stress and equivalent plastic strain (PEEQ) are used as indices to identify the strengths and shortcomings of each model. The results indicate that changing the relative bending stiffness of the infill is much more influential than changing the material in use for increasing the energy dissipation capacity. Strengthening the relative bending stiffness of the reinforcement results in additional energy dissipation capacity to the extent of 24% and 46% in the models based on AISC-2016 (16-series) and AISC-1989 (89-series), respectively. HSS members with infill show growth in η_local buckling, the normalized energy accumulated until the occurrence of local buckling, compared to UR bracing members. The 89-series infill-reinforced members have 117% to 166% more energy dissipation capacity than unreinforced 16-series members. The flexural rigidity of the infill should be less than 29% and 13% of the member section itself for 16-series and 89-series bracing members, respectively, thereby guaranteeing the spread of the plastic hinge and its occurrence within the reinforced section. If the parameters are properly configured, the ductility, energy dissipation capacity, and fatigue life of HSS SCBF bracing members can be improved prominently by the infill-reinforcement method.
Keywords: special concentrically braced frames, HSS, cyclic loading, infill reinforcement, finite element analysis, PEEQ
Procedia PDF Downloads 93
38224 Geochemistry of Nutrients in the South Lagoon of Tunis, Northeast of Tunisia, Using Multivariable Methods
Authors: Abidi Myriam, Ben Amor Rim, Gueddari Moncef
Abstract:
Understanding ecosystem response to a restoration project is essential to assess its rehabilitation. Indeed, the time elapsed after restoration is a critical indicator of the real success of the restoration. In this regard, the south lagoon of Tunis, a shallow Mediterranean coastal area, had witnessed severe pollution. To resolve this environmental problem, a large restoration project of the lagoon was undertaken. Within these restoration works, the main changes were the decrease of the residence time of the lagoon water and of the nutrient concentrations. In this paper, we attempt to evaluate the trophic state of the lagoon water to assess the risk of eutrophication almost 16 years after the restoration. To attain this objective, water quality monitoring was undertaken. Geochemical methods and multivariate statistical tools were used to identify and analyze the natural and anthropogenic factors governing the nutrient concentrations of the lagoon water. Results show that the nutrients have dual sources due to the discharge of municipal wastewater from Megrine City on the south side of the lagoon. The Carlson index shows that the south lagoon of Tunis is eutrophic and may show limited summer anoxia.
Keywords: geochemistry, nutrients, statistical analysis, the south lagoon of Tunis, trophic state
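The Carlson trophic state index (TSI) mentioned in the conclusion is commonly computed from Secchi depth, chlorophyll-a, or total phosphorus; a sketch using those widely cited formulas, with an illustrative input rather than an actual lagoon measurement:

```python
import math

# Carlson TSI as commonly given for Secchi depth (m), chlorophyll-a (ug/L)
# and total phosphorus (ug/L). The chlorophyll value below is illustrative.

def tsi_secchi(sd_m):
    return 60.0 - 14.41 * math.log(sd_m)

def tsi_chlorophyll(chl_ugl):
    return 9.81 * math.log(chl_ugl) + 30.6

def tsi_phosphorus(tp_ugl):
    return 14.42 * math.log(tp_ugl) + 4.15

def trophic_class(tsi):
    """Usual interpretation bands for Carlson's index."""
    if tsi < 40:
        return "oligotrophic"
    if tsi < 50:
        return "mesotrophic"
    if tsi < 70:
        return "eutrophic"
    return "hypereutrophic"

tsi = tsi_chlorophyll(20.0)   # a chlorophyll-a level typical of eutrophy
```

A TSI between 50 and 70, as produced here, corresponds to the eutrophic classification the abstract reports for the lagoon.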
Procedia PDF Downloads 187
38223 Survey on Big Data Stream Classification by Decision Tree
Authors: Mansoureh Ghiasabadi Farahani, Samira Kalantary, Sara Taghi-Pour, Mahboubeh Shamsi
Abstract:
Nowadays, developments in computer technology and its recent applications provide access to new types of data that have not been considered by traditional data analysts. Two particularly interesting characteristics of such data sets are their huge size and streaming nature. Incremental learning techniques have been used extensively to address the data stream classification problem. This paper presents a concise survey of the obstacles and requirements of classifying data streams using decision trees. The most important issue is to maintain a balance between accuracy and efficiency: the algorithm should provide good classification performance with a reasonable response time.
Keywords: big data, data streams, classification, decision tree
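The accuracy-versus-efficiency balance is typically handled in stream decision trees (e.g. Hoeffding trees such as VFDT) by splitting a node only when the observed information-gain advantage exceeds the Hoeffding bound; a sketch of that decision rule with invented numbers:

```python
import math

# Hoeffding-bound split rule used by streaming decision-tree learners.
# After n observations, the true mean of a quantity with range R lies within
# epsilon of the sample mean with probability 1 - delta.

def hoeffding_bound(value_range, delta, n):
    return math.sqrt(value_range ** 2 * math.log(1.0 / delta) / (2.0 * n))

def should_split(gain_best, gain_second, value_range, delta, n):
    """Split when the best attribute beats the runner-up by more than epsilon."""
    return (gain_best - gain_second) > hoeffding_bound(value_range, delta, n)

eps = hoeffding_bound(value_range=1.0, delta=1e-7, n=1000)
```

The bound shrinks as 1/sqrt(n), so a node simply waits for more stream examples when two attributes look too close to call, which is how accuracy is preserved without revisiting old data.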
Procedia PDF Downloads 521
38222 The Trajectory of the Ball in Football Game
Authors: Mahdi Motahari, Mojtaba Farzaneh, Ebrahim Sepidbar
Abstract:
Tracking of moving and flying targets is one of the most important issues in image processing. Estimating the trajectory of a desired object on short-term and long-term scales is even more important than the tracking itself. In this paper, a new way of identifying and estimating the future trajectory of a moving ball on a long-term scale is presented, using the synthesis and interaction of image processing algorithms, including noise removal and image segmentation; a Kalman filter algorithm for estimating the trajectory of the ball in a football game on a short-term scale; and an intelligent adaptive neuro-fuzzy algorithm based on the time series of traversed distance. The proposed system attains more than 96% identification accuracy using the aforesaid methods and algorithms together with a video database, combined through this synthesis and interaction. Although the present method has high precision, it is time-consuming. Comparing this method with others demonstrates its accuracy and efficiency.
Keywords: tracking, signal processing, moving and flying targets, artificial intelligent systems, estimating of trajectory, Kalman filter
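The short-term Kalman-filter stage can be sketched as a one-dimensional constant-velocity tracker; the matrices, noise levels, and measurements below are illustrative, not the paper's configuration.

```python
import numpy as np

# Constant-velocity Kalman filter over noisy 1-D position measurements
# (e.g. the ball's x coordinate in pixels, one reading per frame).

def kalman_track(measurements, dt=1.0, q=1e-4, r=0.25):
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition: (pos, vel)
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q = q * np.eye(2)                       # process noise covariance
    R = np.array([[r]])                     # measurement noise covariance
    x = np.zeros((2, 1))
    P = np.eye(2)
    estimates = []
    for z in measurements:
        # Predict.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with the new measurement.
        y = np.array([[z]]) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        estimates.append(float(x[0, 0]))
    return estimates, x

# Ball moving at roughly 2 px/frame, with small measurement noise.
zs = [2.1, 3.9, 6.2, 8.0, 9.9, 12.1, 14.0, 16.1, 17.9, 20.0]
estimates, state = kalman_track(zs)
```

After a few frames the velocity component of the state settles near the true 2 px/frame, which is what makes the short-term position prediction usable as input to the longer-term neuro-fuzzy stage.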
Procedia PDF Downloads 461
38221 Development of a Serial Signal Monitoring Program for Educational Purposes
Authors: Jungho Moon, Lae-Jeong Park
Abstract:
This paper introduces a signal monitoring program developed with a view to helping electrical engineering students get familiar with sensors with digital output. Because the output of digital sensors cannot simply be monitored by a measuring instrument such as an oscilloscope, students tend to have a hard time dealing with them. The monitoring program runs on a PC and communicates with an MCU that reads the output of digital sensors via an asynchronous communication interface. Receiving the sensor data from the MCU, the monitoring program shows time- and/or frequency-domain plots of the data in real time. In addition, the monitoring program provides a serial terminal that enables the user to exchange text information with the MCU while the received data are plotted. The user can easily observe the output of digital sensors and configure them in real time, which helps students who do not have enough experience with digital sensors. Although the monitoring program was written in the MATLAB programming language, it runs without MATLAB because it was compiled as a standalone executable.
Keywords: digital sensor, MATLAB, MCU, signal monitoring program
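The host side of such a monitoring program has to parse the asynchronous serial stream before plotting. Assuming a hypothetical one-sample-per-line "timestamp,value" framing (the abstract does not specify the MCU's protocol), the parsing step might look like:

```python
# Host-side parsing of a line-oriented serial sensor stream. The framing
# ("timestamp,value\r\n") is an assumed convention for illustration; a real
# MCU firmware would define its own.

def parse_sensor_line(line):
    """Parse one raw line into (timestamp, value); return None for bad frames."""
    try:
        t_str, v_str = line.decode("ascii").strip().split(",")
        return int(t_str), float(v_str)
    except (ValueError, UnicodeDecodeError):
        return None

raw_stream = (b"0,0.00\r\n", b"10,0.31\r\n", b"garbage\r\n", b"20,0.59\r\n")
samples = [s for s in map(parse_sensor_line, raw_stream) if s is not None]
```

Discarding malformed frames instead of raising keeps the live plot running when the serial link drops bytes, which is common with asynchronous interfaces.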
Procedia PDF Downloads 496
38220 Single Phase Fluid Flow in Series of Microchannel Connected via Converging-Diverging Section with or without Throat
Authors: Abhishek Kumar Chandra, Kaushal Kishor, Wasim Khan, Dhananjay Singh, M. S. Alam
Abstract:
Single-phase fluid flow through a series of uniform microchannels connected via transition sections (converging-diverging sections with or without a throat) was analytically and numerically studied to characterize the flow within the channels and in the transition sections. Three sets of microchannels, of diameters 100, 184, and 249 μm, were considered for investigation. Each set contains 10 microchannels of length 20 mm, connected to each other in series via transition sections. Each transition section consists of a converging-diverging section either with or without a throat. The effect of non-uniformity in the microchannels on pressure drop was determined by passing water or air through the set of channels for Reynolds numbers from 50 to 1000. Compressibility and rarefaction effects in the transition sections were also tested analytically and numerically for air flow. The analytical and numerical results show that these configurations can be used to enhance transport processes. However, the converging-diverging section without a throat shows superior performance over the configuration with a throat.
Keywords: contraction-expansion flow, integrated microchannel, microchannel network, single phase flow
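For the uniform channel segments, a laminar fully developed baseline pressure drop follows from the Hagen-Poiseuille relation, against which the extra transition-section losses can be compared; a sketch with illustrative water-flow values (not the study's cases):

```python
import math

# Laminar baseline for one uniform circular microchannel: Reynolds number
# and Hagen-Poiseuille pressure drop. Water properties near 20 C.

def reynolds(rho, u, d, mu):
    return rho * u * d / mu

def hagen_poiseuille_dp(mu, length, q, d):
    """Pressure drop (Pa) for volumetric flow q (m^3/s) in a tube of diameter d (m)."""
    return 128.0 * mu * length * q / (math.pi * d ** 4)

d = 100e-6                     # 100 um channel diameter
u = 1.0                        # mean velocity, m/s
q = u * math.pi * d ** 2 / 4.0 # volumetric flow rate
re = reynolds(rho=998.0, u=u, d=d, mu=1.0e-3)
dp = hagen_poiseuille_dp(mu=1.0e-3, length=0.02, q=q, d=d)
```

The D^4 in the denominator is why the smallest (100 μm) set dominates the series pressure budget; at Re near 100 the flow is comfortably inside the laminar range studied.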
Procedia PDF Downloads 280
38219 Automated End-to-End Pipeline Processing Solution for Autonomous Driving
Authors: Ashish Kumar, Munesh Raghuraj Varma, Nisarg Joshi, Gujjula Vishwa Teja, Srikanth Sambi, Arpit Awasthi
Abstract:
Autonomous driving vehicles are revolutionizing the transportation system of the 21st century. This has been possible due to intensive research put into making a robust, reliable, and intelligent program that can perceive and understand its environment and make decisions based on that understanding. It is a very data-intensive task, with data coming from multiple sensors, and the amount of data directly reflects on the performance of the system. Researchers have to design the preprocessing pipeline for different datasets with different sensor orientations and alignments before a dataset can be fed to the model. This paper proposes a solution that provides a method to unify all the data from different sources into a uniform format, using the intrinsic and extrinsic parameters of the sensors used to capture the data, allowing the same pipeline to use data from multiple sources at a time. This also means easy adoption of new datasets or in-house generated datasets. The solution also automates the complete deep learning pipeline, from preprocessing to post-processing, for various tasks, allowing researchers to design multiple custom end-to-end pipelines. Thus, the solution takes care of input and output data handling, saving the time and effort spent on it and allowing more time for model improvement.
Keywords: augmentation, autonomous driving, camera, custom end-to-end pipeline, data unification, lidar, post-processing, preprocessing
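The unification step rests on each sensor's extrinsic pose (sensor frame to a shared reference frame) and, for cameras, the intrinsic matrix (3-D to pixels). A minimal sketch with made-up calibration values; the solution's actual API is not described in the abstract:

```python
import numpy as np

# Extrinsics: rigid transform from a sensor frame into the vehicle/reference
# frame. Intrinsics: pinhole projection of camera-frame points to pixels.
# R, t and the focal/principal-point values below are invented calibrations.

def to_reference_frame(points, rotation, translation):
    """x_ref = R @ x_sensor + t, applied to an (N, 3) array of points."""
    return points @ rotation.T + translation

def project(points_cam, fx, fy, cx, cy):
    """Pinhole intrinsics: project (N, 3) camera-frame points to (N, 2) pixels."""
    z = points_cam[:, 2:3]
    return points_cam[:, :2] / z * np.array([fx, fy]) + np.array([cx, cy])

# Toy extrinsic: 90-degree yaw plus a 1 m forward offset for a lidar.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([1.0, 0.0, 0.0])
lidar_pts = np.array([[0.0, 2.0, 0.0]])
ref_pts = to_reference_frame(lidar_pts, R, t)

pixels = project(np.array([[0.5, 0.25, 2.0]]), fx=800, fy=800, cx=640, cy=360)
```

Once every dataset's points land in the same reference frame, one preprocessing pipeline can serve them all, which is the core of the unification the paper proposes.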
Procedia PDF Downloads 123
38218 Graph Based Traffic Analysis and Delay Prediction Using a Custom Built Dataset
Authors: Gabriele Borg, Alexei Debono, Charlie Abela
Abstract:
There is a constant rise in the availability of high volumes of data gathered from multiple sources, resulting in an abundance of unprocessed information that can be used to monitor patterns and trends in user behaviour. Similarly, year after year, Malta is experiencing ongoing population growth and an increase in mobilization demand. This research takes advantage of data that is continuously being sourced and converts it into useful information related to the traffic problem on the Maltese roads. The scope of this paper is to provide a methodology to create a custom dataset (MalTra - Malta Traffic), compiled from multiple participants at various locations across the island, to identify the most common routes taken and expose the main areas of activity. This use of big data is seen in various technologies referred to as ITSs (Intelligent Transportation Systems), and it has been concluded that there is significant potential in utilising such sources of data on a nationwide scale. Furthermore, a series of traffic-prediction graph neural network models are run to compare MalTra to large-scale traffic datasets.
Keywords: graph neural networks, traffic management, big data, mobile data patterns
Procedia PDF Downloads 131
38217 Multinomial Dirichlet Gaussian Process Model for Classification of Multidimensional Data
Authors: Wanhyun Cho, Soonja Kang, Sanggoon Kim, Soonyoung Park
Abstract:
We present a probabilistic multinomial Dirichlet classification model for multidimensional data with Gaussian process priors. Here, we have considered an efficient computational method that can be used to obtain the approximate posteriors for the latent variables and parameters needed to define the multiclass Gaussian process classification model. We first investigated the process of inducing a posterior distribution for the various parameters and the latent function by using variational Bayesian approximations and an importance sampling method, and next we derived the predictive distribution of the latent function needed to classify new samples. The proposed model is applied to classify a synthetic multivariate dataset in order to verify its performance. Experimental results show that our model is more accurate than the other approximation methods.
Keywords: multinomial Dirichlet classification model, Gaussian process priors, variational Bayesian approximation, importance sampling, approximate posterior distribution, marginal likelihood evidence
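The importance-sampling ingredient can be sketched in its self-normalised form: estimate an expectation under a target density by sampling from a tractable proposal and reweighting. The Gaussian target and proposal here are chosen purely for illustration, not taken from the paper's model.

```python
import numpy as np

# Self-normalised importance sampling: approximate E_p[x] for a target
# p = N(1, 1) using samples from a wider proposal q = N(0, 2).

rng = np.random.default_rng(42)

def log_gauss(x, mean, std):
    return -0.5 * ((x - mean) / std) ** 2 - np.log(std * np.sqrt(2 * np.pi))

n = 200_000
x = rng.normal(0.0, 2.0, n)                     # draw from the proposal q
log_w = log_gauss(x, 1.0, 1.0) - log_gauss(x, 0.0, 2.0)
w = np.exp(log_w - log_w.max())                 # stabilised in log space
w /= w.sum()                                    # self-normalised weights
estimate = np.sum(w * x)                        # approximates E_p[x] = 1
```

Working with log-weights before exponentiating is the standard trick for numerical stability, and matters more in the high-dimensional latent-function posteriors the paper targets than in this 1-D toy.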
Procedia PDF Downloads 444
38216 Ventriculo-Gallbladder Shunt: Case Series and Literature Review
Authors: Sandrieli Afornali, Adriano Keijiro Maeda, Renato Fedatto Beraldo, Carlos Alberto Mattozo, Ricardo Nascimento Brito
Abstract:
BACKGROUND: The most used variety in hydrocephalus treatment is the ventriculoperitoneal shunt (VPS). However, it may fail in 20 to 70% of cases, making it necessary to have alternative cavities for the implantation of the distal catheter. Ventriculo-atrial shunting (VAS) is described as the second option. To our knowledge, there were 121 reported cases of ventriculo-gallbladder (VGB) shunt in children up to 2020, with a highly variable success rate, from 25 to 100%, and an average of 63% of patients presenting good long-term results. Our goal is to evaluate the epidemiological profile of patients submitted to VGB shunt and, through a review of the literature, to compare our results with other series. METHODS: A retrospective cross-sectional observational study of a case series of nine patients. The medical records of all patients who underwent VGB shunt at the Hospital Pequeno Príncipe in Curitiba, Paraná, Brazil, from January 2014 to October 2022 were reviewed. The inclusion criteria were: patients under 17 years of age with hydrocephalus of any etiology, currently or previously using a VGB shunt. RESULTS: There were 6 (66.7%) male and 3 (33.3%) female patients, with an average age of 73.6 months (6.1 years) at the time of surgery. They had undergone an average of 5.1 VPS revisions prior to the VGB shunt. Five (55.5%) had complications of the VGB shunt: infection (11.1%), atony (11.1%), hypodrainage due to kinking of the distal catheter (11.1%), and ventriculoenteric fistula (22.2%); all these patients were cured at surgical reapproach, and in 2 of them the VGB shunt was reimplanted. Two patients died (22.2%), and five (55.5%) patients maintained the use of the VGB shunt in the follow-up period; in 4 (44.4%) there was never a need for revision. CONCLUSION: The VGB shunt tends to be underestimated because it is still unconventional and little publicized in the literature. Our article shows a lower risk of death and a similar risk of complications when compared to other alternative shunts. We emphasize the VGB shunt as a safe procedure and a second option when the VPS fails or is contraindicated.
Keywords: hydrocephalus, ventricular-gallbladder shunt, VGB shunt, VPS, ventriculoperitoneal shunt, ventriculoatrial shunt
Procedia PDF Downloads 72
38215 Protecting the Cloud Computing Data Through the Data Backups
Authors: Abdullah Alsaeed
Abstract:
Virtualized computing and cloud computing infrastructures are no longer buzzwords or marketing terms. They are a core reality in today's corporate Information Technology (IT) organizations. Hence, developing effective and efficient methodologies for data backup and data recovery is required more than ever. The purpose of data backup and recovery techniques is to assist organizations in strategizing their business continuity and disaster recovery approaches. In order to accomplish this strategic objective, a variety of mechanisms have been proposed in recent years. This research paper will explore and examine the latest techniques and solutions for providing data backup and restoration for cloud computing platforms.
Keywords: data backup, data recovery, cloud computing, business continuity, disaster recovery, cost-effective, data encryption
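One basic backup mechanism of the kind surveyed is copy-then-verify: duplicate the data and compare cryptographic digests, so silent corruption is caught before the backup is relied on for recovery. A minimal sketch with placeholder file names:

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

# Copy-then-verify backup step. The file name and contents are placeholders.

def sha256_of(path):
    """Stream a file through SHA-256 and return its hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def backup_and_verify(source, backup):
    """Copy source to backup and confirm both digests match."""
    shutil.copy2(source, backup)
    return sha256_of(source) == sha256_of(backup)

workdir = Path(tempfile.mkdtemp())
src = workdir / "records.db"
src.write_bytes(b"customer rows" * 1000)
ok = backup_and_verify(src, workdir / "records.db.bak")
```

In a cloud setting the same digest comparison also lets a client verify an object uploaded to remote storage without downloading it again, provided the provider exposes the object's checksum.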
Procedia PDF Downloads 87
38214 Identification and Classification of Fiber-Fortified Semolina by Near-Infrared Spectroscopy (NIR)
Authors: Amanda T. Badaró, Douglas F. Barbin, Sofia T. Garcia, Maria Teresa P. S. Clerici, Amanda R. Ferreira
Abstract:
Food fortification is the intentional addition of a nutrient to a food matrix and has been widely used to overcome a lack of nutrients in the diet or to increase the nutritional value of food. Fortified food must meet the demands of the population, taking into account their habits and the risks these foods may pose. Wheat and its by-products, such as semolina, have been strongly indicated for use as a food vehicle, since they are widely consumed and used in the production of other foods. These products have been strategically used to add some nutrients, such as fibers. Methods of analysis and quantification of these kinds of components are destructive and require lengthy sample preparation and analysis. Therefore, the industry has searched for faster and less invasive methods, such as Near-Infrared Spectroscopy (NIR). NIR is a rapid and cost-effective method; however, it is based on indirect measurements, yielding a large amount of data. Therefore, NIR spectroscopy requires calibration with mathematical and statistical tools (chemometrics) to extract analytical information from the corresponding spectra, such as Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA). PCA is well suited to NIR, since it can handle many spectra at a time and be used for unsupervised classification. An advantage of PCA, which is also a data-reduction technique, is that it reduces the spectra to a smaller number of latent variables for further interpretation. On the other hand, LDA is a supervised method that searches for the canonical variables (CV) with the maximum separation among different categories. In LDA, the first CV is the direction of the maximum ratio between inter- and intra-class variances. The present work used a portable near-infrared (NIR) spectrometer for identification and classification of pure and fiber-fortified semolina samples.
The fiber was added to semolina in two different concentrations, and after spectra acquisition, the data were used for PCA and LDA to identify and discriminate the samples. The results showed that NIR spectroscopy associated with PCA was very effective in identifying pure and fiber-fortified semolina. Additionally, the classification rates of the samples using LDA were between 78.3% and 95% for calibration and between 75% and 95% for cross-validation. Thus, after multivariate analyses such as PCA and LDA, it was possible to verify that NIR associated with chemometric methods is able to identify and classify the different samples in a fast and non-destructive way.
Keywords: Chemometrics, fiber, linear discriminant analysis, near-infrared spectroscopy, principal component analysis, semolina
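The PCA-then-LDA pipeline described in this abstract can be sketched on synthetic spectra. The data, class structure, band shape, and component counts below are illustrative assumptions, not the study's measurements:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_per_class, n_wavelengths = 30, 100

# Synthetic NIR-like spectra: a shared baseline plus a class-dependent
# absorption band (a hypothetical stand-in for the fiber signal).
baseline = np.sin(np.linspace(0, 3, n_wavelengths))
spectra, labels = [], []
for cls, shift in enumerate([0.0, 0.3, 0.6]):  # pure, low-fiber, high-fiber
    band = shift * np.exp(-0.5 * ((np.arange(n_wavelengths) - 60) / 8) ** 2)
    spectra.append(baseline + band
                   + 0.05 * rng.standard_normal((n_per_class, n_wavelengths)))
    labels += [cls] * n_per_class
X, y = np.vstack(spectra), np.array(labels)

# Unsupervised step: compress the spectra to a few latent variables.
scores = PCA(n_components=5).fit_transform(X)

# Supervised step: LDA on the PCA scores, assessed by cross-validation.
acc = cross_val_score(LinearDiscriminantAnalysis(), scores, y, cv=5).mean()
print(acc)
```

Running PCA first keeps LDA well conditioned: with 100 correlated wavelengths and only 90 spectra, discriminating directly in wavelength space would be unstable.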
38213 3D Numerical Study of Tsunami Loading and Inundation in a Model Urban Area
Authors: A. Bahmanpour, I. Eames, C. Klettner, A. Dimakopoulos
Abstract:
We develop a new set of diagnostic tools to analyze inundation into a model district using three-dimensional CFD simulations, with a view to generating a database against which to test simpler models. A three-dimensional model of Oregon city with different-sized groups of buildings next to the coastline is used to run calculations of the movement of a long-period wave on the shore. The initial and boundary conditions of the off-shore water are set using a nonlinear inverse method based on Eulerian spatial information matching experimental Eulerian time series measurements of water height. The water movement is followed in time, and this enables the pressure distribution on every surface of each building to be followed in a temporal manner. The three-dimensional numerical data set is validated against published experimental work. In the first instance, we use the dataset as a basis to understand how successfully reduced models - including 2D shallow water models and reduced 1D models - predict water heights, flow velocities and forces. This is because models based on the shallow water equations are known to underestimate drag forces after the initial surge of water. The second component is to identify critical flow features, such as hydraulic jumps and choked states, which are flow regions where dissipation occurs and drag forces are large. Finally, we describe how future tsunami inundation models should be modified to account for the complex effects of buildings through drag and blocking. Financial support from UCL and HR Wallingford is greatly appreciated. The authors would like to thank Professor Daniel Cox and Dr. Hyoungsu Park for providing the data on the Seaside Oregon experiment.
Keywords: computational fluid dynamics, extreme events, loading, tsunami
38212 Development of Structural Deterioration Models for Flexible Pavement Using Traffic Speed Deflectometer Data
Authors: Sittampalam Manoharan, Gary Chai, Sanaul Chowdhury, Andrew Golding
Abstract:
The primary objective of this paper is to present a simplified approach to developing a structural deterioration model for flexible pavements using traffic speed deflectometer data. Maintaining assets to meet functional performance is not economical or sustainable in the long term; it would end up requiring much more investment from road agencies and extra costs for road users. Performance models with both structural and functional predictive capabilities have to be included in order to assess needs and their time frame. As such, structural modelling plays a vital role in the prediction of pavement performance. Structural condition is important for predicting the remaining life and overall health of a road network and is also a major influence on the valuation of road pavement. Therefore, the structural deterioration model is a critical input into a pavement management system for accurately predicting pavement rehabilitation needs. The Traffic Speed Deflectometer (TSD) is a vehicle-mounted Doppler laser system that is capable of continuously measuring the structural bearing capacity of a pavement whilst moving at traffic speeds. The device’s high accuracy, high speed, and continuous deflection profiles are useful for network-level applications such as predicting road rehabilitation needs and remaining structural service life. The methodology adopted in this study utilizes time series of TSD maximum deflection (D0) data in conjunction with rutting, rutting progression, pavement age, subgrade strength and equivalent standard axle (ESA) data. Regression analyses were then undertaken to establish a correlation equation for structural deterioration as a function of rutting, pavement age, seal age and equivalent standard axles (ESA). This study developed a simple structural deterioration model which will enable road agencies to incorporate available TSD structural data in a pavement management system for developing network-level pavement investment strategies.
Therefore, the available funding can be used effectively to minimize the whole-of-life cost of the road asset and also improve pavement performance. This study will contribute to narrowing the knowledge gap in structural data usage in network-level investment analysis and provide a simple methodology for using structural data effectively in the investment decision-making process for road agencies managing aging road assets.
Keywords: adjusted structural number (SNP), maximum deflection (D0), equivalent standard axle (ESA), traffic speed deflectometer (TSD)
38211 Curve Fitting by Cubic Bezier Curves Using Migrating Birds Optimization Algorithm
Authors: Mitat Uysal
Abstract:
A new metaheuristic optimization algorithm called Migrating Birds Optimization is used for curve fitting by rational cubic Bezier curves. This requires solving a complicated multivariate optimization problem. In this study, the solution of this optimization problem is achieved by the Migrating Birds Optimization algorithm, a powerful metaheuristic nature-inspired algorithm well suited to optimization. The results of this study show that the proposed method performs very well, fitting the data points to cubic Bezier curves with a high degree of accuracy.
Keywords: algorithms, Bezier curves, heuristic optimization, migrating birds optimization
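A full Migrating Birds Optimization implementation is beyond a short sketch, but the underlying fitting problem is easy to illustrate: for a fixed parameterization, recovering the control points of a (non-rational) cubic Bezier curve from sample points reduces to linear least squares. The metaheuristic becomes necessary when the parameterization and the rational weights must be optimized too. A sketch of the linear baseline:

```python
import numpy as np

def bezier_basis(t):
    """Cubic Bernstein basis evaluated at parameter values t (shape (n, 4))."""
    t = np.asarray(t)
    return np.column_stack([(1 - t) ** 3,
                            3 * t * (1 - t) ** 2,
                            3 * t ** 2 * (1 - t),
                            t ** 3])

# Sample points from a known cubic Bezier curve (2D control points).
true_ctrl = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 2.0], [4.0, 0.0]])
t = np.linspace(0, 1, 50)
points = bezier_basis(t) @ true_ctrl

# With the parameterization fixed, recovering the control points is a
# linear least-squares problem (one solve covers both coordinates).
ctrl_fit, *_ = np.linalg.lstsq(bezier_basis(t), points, rcond=None)
print(np.allclose(ctrl_fit, true_ctrl, atol=1e-8))
```

On noise-free data the least-squares solve recovers the control points exactly, which makes it a useful correctness check for any metaheuristic fitter.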
38210 Breakfast Skipping and Health Status Among University Professionals in Bangladesh
Authors: Shatabdi Goon
Abstract:
OBJECTIVE: To determine the prevalence of breakfast skipping and its associations with health status among university professionals in Bangladesh. DESIGN: A cross-sectional descriptive study was performed using information on respondents’ sociodemographic status and eating behavior. Factors associated with breakfast skipping were identified using multivariate regression models. SETTING: Data were obtained from a representative sample (n = 120) of university professionals randomly selected from two distinct universities in Dhaka city, Bangladesh. SUBJECTS: A total of one hundred and twenty university professionals with a mean age of 29 years. RESULTS: Approximately 35.8% of the sample skipped breakfast. Gender was the only statistically significant sociodemographic variable, with females skipping at over two times the rate of males (OR 1.9; 95% CI: 0.90-4.13). The reasons given for skipping breakfast were almost exclusively habit (39.5%), work pressure (23.2%) and lack of time (16.2%). Skippers were significantly more likely to be obese (OR 2.4; 95% CI 1.02-5.7), less energetic (OR 3.5; 95% CI 1.5-8.6), to report health problems (OR 4.3; 95% CI 1.8-10.17) and to tend to eat fast food (OR 2.5; 95% CI 1.13-5.5). Gastric problems and heartburn (χ²=4.19, p<0.05) and high blood pressure (χ²=5.027, p<0.05) were detected among 34.9% and 27.9% of those employees identified as breakfast skippers, respectively, and showed significantly high prevalence. CONCLUSION: Breakfast skipping is highly prevalent among university professionals in Bangladesh, with significant associations with different health problems. Health promotion strategies should be used to encourage all adults to eat breakfast regularly.
Keywords: breakfast, healthy lifestyle, breakfast skipping, health status, university professionals
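Odds ratios like those reported above come from 2x2 exposure-outcome tables. A minimal sketch with hypothetical counts (not the study's data) showing the ratio and its approximate Wald confidence interval:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and ~95% Wald confidence interval from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: obesity among breakfast skippers vs non-skippers
# in a sample of 120 (illustrative only).
or_, lo, hi = odds_ratio_ci(a=18, b=25, c=20, d=57)
print(round(or_, 2), round(lo, 2), round(hi, 2))  # → 2.05 0.93 4.53
```

Note that when the interval crosses 1.0, as here, the association is not statistically significant at the 5% level, which is also why the gender OR reported above (CI 0.90-4.13) is borderline.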
38209 Assessing Two Protocols for Positive Reinforcement Training in Captive Olive Baboons (Papio anubis)
Authors: H. Cano, P. Ferrer, N. Garcia, M. Popovic, J. Zapata
Abstract:
Positive reinforcement training is a well-known methodology that has frequently been reported to be used with captive non-human primates. As a matter of fact, it is an invaluable tool for different purposes related to animal welfare, such as primate husbandry and environmental enrichment. It is also essential for performing some cognitive experiments. The main purpose of this pilot study was to establish an efficient protocol to train captive olive baboons (Papio anubis). This protocol seems to be vital in the context of a larger research program in which it will be necessary to train a complete population of around 40 baboons. Baboons were studied at the Veterinary Research Farm of the University of Murcia. Temporarily isolated animals were trained to perform three basic tasks. Firstly, they were required to take food prizes directly from the researchers’ hands. Then a clicker sound, or bridge stimulus, was added each time the animal accessed the reinforcement. Finally, they were trained to touch a target, consisting of a whip with a red ball at its end, with their hands or their nose. When the subject completed this task correctly, it was also exposed to the bridge stimulus and rewarded with a food prize, such as a portion of banana, orange, apple or peach, or a raisin. Two protocols were tested during this experiment. In both of them, there were 6 series of 2 min training periods each day. However, in the first protocol, each series consisted of 3 trials, whereas in the second, each series consisted of 5 trials. A reliable performance was obtained with only 6 days of training in the case of the 5-trial protocol, whereas with the 3-trial one, 26 days of training were needed. As a result, the 5-trial protocol seems to be more effective than the 3-trial one for teaching these three basic tasks to olive baboons. In consequence, it will be used to train the rest of the colony.
Keywords: captive primates, olive baboon, positive reinforcement training, Papio anubis, training
38208 Spatial Data Mining by Decision Trees
Authors: Sihem Oujdi, Hafida Belbachir
Abstract:
Existing methods of data mining cannot be applied directly to spatial data, because spatial data require consideration of spatial specificities, such as spatial relationships. This paper focuses on classification with decision trees, one of the main data mining techniques. We propose an extension of the C4.5 algorithm for spatial data, based on two different approaches: join materialization and querying the different tables on the fly. Similar works have been done on these two main approaches; the first, join materialization, favors processing time at the expense of memory space, whereas the second, querying the different tables on the fly, favors memory space at the expense of processing time. The modified C4.5 algorithm requires three input tables: a target table, a neighbor table, and a spatial join index that contains the possible spatial relationships between the objects in the target table and those in the neighbor table. The proposed algorithms are then applied to spatial data in the accidentology domain. A comparative study of our approach with other works on classification by spatial decision trees is also detailed.
Keywords: C4.5 algorithm, decision trees, S-CART, spatial data mining
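The join-materialization approach described above can be sketched with three hypothetical tables; scikit-learn's CART-style tree stands in for the modified C4.5 (the table contents, columns, and relation labels below are invented for illustration):

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

# Hypothetical tables: accident sites (target), nearby road features
# (neighbor), and a precomputed spatial join index linking them.
target = pd.DataFrame({"site_id": [1, 2, 3, 4],
                       "speed_limit": [50, 90, 50, 110],
                       "severe": [0, 1, 0, 1]})
neighbor = pd.DataFrame({"feature_id": [10, 11, 12, 13],
                         "feature_type": ["school", "junction",
                                          "school", "highway_ramp"]})
join_index = pd.DataFrame({"site_id": [1, 2, 3, 4],
                           "feature_id": [10, 11, 12, 13],
                           "relation": ["within_500m"] * 4})

# "Join materialization": flatten the spatial relationships into one
# table before learning, trading memory space for processing time.
flat = target.merge(join_index, on="site_id").merge(neighbor, on="feature_id")
X = pd.get_dummies(flat[["speed_limit", "feature_type"]])
tree = DecisionTreeClassifier(max_depth=2).fit(X, flat["severe"])
print(tree.score(X, flat["severe"]))
```

The "querying on the fly" alternative would skip building `flat` and instead look up each site's neighbors inside the tree-induction loop, which is why it saves memory at the cost of repeated query time.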
38207 Promoting Teaching and Learning Structures Based on Innovation and Entrepreneurship in Valahia University of Targoviste
Authors: Gabriela Teodorescu, Ioana Daniela Dulama
Abstract:
In an ever-changing society, the education system needs to evolve constantly to meet market demands. During its 30 years of existence, Valahia University of Targoviste (VUT) has tried to offer its students a series of teaching-learning schemes that would prepare them for a remarkable career. In VUT, the achievement of performance through innovation can be analyzed by reference to several key indicators (i.e., university climate, university resources, and innovative methods applied in classes), but it is possible to differentiate between activities in the classic format: participation in courses; interactive seminars and tutorials; laboratories, workshops, and project-based learning; entrepreneurial activities through simulated enterprises; and mentoring activities. Thus, VUT has implemented over time a series of schemes and projects based on innovation and entrepreneurship, and in this paper, some of them will be briefly presented. All these schemes were implemented by facilitating an effective dialogue with students and the opportunity to listen to their views at all levels of the university and in all fields of study, as well as by developing a partnership with students to set out priority areas. VUT demonstrates innovation and entrepreneurial capacity through its new activities for higher education, which will attract more partnerships and projects dedicated to students.
Keywords: Romania, project-based learning, entrepreneurial activities, simulated enterprises
38206 Data Monetisation by E-commerce Companies: A Need for a Regulatory Framework in India
Authors: Anushtha Saxena
Abstract:
This paper examines the process of data monetisation by e-commerce companies operating in India. Data monetisation is the collection, storage, and analysis of consumers’ data in order to put the data that is generated to further use for profits, revenue, etc. Data monetisation enables e-commerce companies to obtain better business opportunities, innovative products and services, a competitive edge over others, and millions in revenue. This paper analyses the issues and challenges that arise from the process of data monetisation. Some of the issues highlighted in the paper pertain to the right to privacy and the protection of the data of e-commerce consumers. At the same time, data monetisation cannot be prohibited, but it can be regulated and monitored by stringent laws and regulations. The right to privacy is a fundamental right guaranteed to the citizens of India through Article 21 of the Constitution of India. The Supreme Court of India recognized the right to privacy as a fundamental right in the landmark judgment of Justice K.S. Puttaswamy (Retd) and Another v. Union of India. This paper highlights the legal issue of how e-commerce businesses violate individuals’ right to privacy by using the data they collect and store for economic gain and monetisation, and the issue of data protection. The researcher has mainly focused on e-commerce companies such as online shopping websites to analyse the legal issue of data monetisation. In the age of the Internet of Things and the digital era, people have shifted to online shopping as it is convenient, easy, flexible, comfortable, time-saving, etc. But at the same time, e-commerce companies store the data of their consumers and use it by selling it to third parties or by generating more data from the data stored with them. This violates individuals’ right to privacy, because consumers do not know what happens to the data they provide online. Many times, data are also collected without the consent of individuals.
Data can be structured, unstructured, etc., and are used by analytics for monetisation. Indian legislation such as the Information Technology Act, 2000 does not effectively protect e-consumers with respect to their data and how it is used by e-commerce businesses to monetise and generate revenue. The paper also examines the draft Data Protection Bill, 2021, pending in the Parliament of India, and how this Bill could make a huge impact on data monetisation. This paper also aims to study the European Union General Data Protection Regulation and how this legislation could be helpful in the Indian scenario concerning e-commerce businesses with respect to data monetisation.
Keywords: data monetization, e-commerce companies, regulatory framework, GDPR
38205 Dynamic EEG Desynchronization in Response to Vicarious Pain
Authors: Justin Durham, Chanda Rooney, Robert Mather, Mickie Vanhoy
Abstract:
The psychological construct of empathy is to understand a person’s cognitive perspective and experience the other person’s emotional state. Deciphering emotional states is conducive to interpreting vicarious pain. Observing others' physical pain activates neural networks related to the actual experience of pain itself. The study addresses empathy as a nonlinear dynamic process of simulation by which individuals understand the mental states of others and experience vicarious pain, exhibiting self-organized criticality. Such criticality follows from a combination of neural networks with an excitatory feedback loop generating bistability to resonate permutated empathy. Cortical networks exhibit diverse patterns of activity, including oscillations, synchrony and waves; however, the temporal dynamics of the neurophysiological activities underlying empathic processes remain poorly understood. Mu rhythms are EEG oscillations with dominant frequencies of 8-13 Hz that become synchronized when the body is relaxed with eyes open and the sensorimotor system is idle; thus, mu rhythm synchrony is expected to be highest in baseline conditions. When the sensorimotor system is activated, either by performing or simulating action, mu rhythms become suppressed or desynchronized; thus, they should be suppressed while observing video clips of painful injuries if previous research on mirror system activation holds. Twelve undergraduates contributed EEG data and survey responses on empathy and psychopathy scales while watching consecutive video clips of sports injuries. Participants watched a blank, black image on a computer monitor before and after observing a video of consecutive sports injury incidents. Each video condition lasted five minutes. A BIOPAC MP150 recorded EEG signals from sensorimotor and thalamocortical regions related to a complex neural network called the ‘pain matrix’.
Both physical and social pain activate this network, producing vicarious pain responses relevant to empathic processing. Five single-electrode EEG locations were applied over regions measuring sensorimotor electrical activity in microvolts (μV) to monitor mu rhythms. EEG signals were sampled at a rate of 200 Hz. Mu rhythm desynchronization was measured in the 8-13 Hz band at electrode sites F3 and F4. Data for each participant’s mu rhythms were analyzed via Fast Fourier Transformation (FFT) and multifractal time series analysis.
Keywords: desynchronization, dynamical systems theory, electroencephalography (EEG), empathy, multifractal time series analysis, mu waveform, neurophysiology, pain simulation, social cognition
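Extracting mu-band (8-13 Hz) power via FFT at the study's 200 Hz sampling rate can be sketched on a synthetic single-channel signal; the 10 Hz component amplitude and noise level below are illustrative assumptions:

```python
import numpy as np

fs = 200                       # sampling rate used in the study (Hz)
t = np.arange(0, 5, 1 / fs)    # a 5 s epoch

# Synthetic signal: a 10 Hz mu component plus broadband noise.
rng = np.random.default_rng(2)
signal = 4.0 * np.sin(2 * np.pi * 10 * t) + rng.standard_normal(t.size)

# Band power in the mu range via the FFT; desynchronization would appear
# as a drop in this quantity relative to a baseline (eyes-open, idle) epoch.
freqs = np.fft.rfftfreq(t.size, 1 / fs)
power = np.abs(np.fft.rfft(signal)) ** 2 / t.size
mu_mask = (freqs >= 8) & (freqs <= 13)
mu_power = power[mu_mask].sum()
rest_power = power[~mu_mask].sum()
print(mu_power > rest_power)
```

In an analysis like the one described, the ratio of mu power during injury-video epochs to mu power during the baseline epochs would index desynchronization at each electrode.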
38204 Reducing the Imbalance Penalty Through Artificial Intelligence Methods Geothermal Production Forecasting: A Case Study for Turkey
Authors: Hayriye Anıl, Görkem Kar
Abstract:
In addition to being rich in renewable energy resources, Turkey is one of the countries with great potential in geothermal energy production, given its high installed capacity, low cost, and sustainability. Increasing imbalance penalties become an economic burden for organizations, since geothermal generation plants cannot maintain the balance of supply and demand owing to the inadequacy of the production forecasts submitted to the day-ahead market. A better production forecast reduces the imbalance penalties of market participants and provides better balance in the day-ahead market. In this study, using machine learning, deep learning, and time series methods, the total generation of the power plants belonging to Zorlu Natural Electricity Generation, which has a high installed geothermal capacity, was estimated for the first one and two weeks of March; the imbalance penalties were then calculated from these estimates and compared with the real values. These modeling operations were carried out on two datasets: the basic dataset, and a dataset created by extracting new features from it through feature engineering. According to the results, Support Vector Regression outperformed the other traditional machine learning models and exhibited the best performance. In addition, the estimation results on the feature-engineered dataset showed lower error rates than on the basic dataset. It was concluded that the estimated imbalance penalty calculated for the selected organization is lower than the actual imbalance penalty, yielding an optimal and profitable outcome.
Keywords: machine learning, deep learning, time series models, feature engineering, geothermal energy production forecasting
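The forecast-then-penalty workflow can be sketched with SVR on a synthetic generation series. The lag features, flat penalty tariff, and series shape below are illustrative assumptions, not the study's data or the Turkish market's settlement rules:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)

# Synthetic hourly generation series (MWh) with a daily cycle -- a
# hypothetical stand-in for the plant data used in the study.
hours = np.arange(24 * 60)
gen = 40 + 5 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 0.5, hours.size)

# Feature engineering: lagged values plus hour-of-day as predictors.
def make_features(series, lags=(1, 2, 24)):
    rows = []
    for i in range(max(lags), len(series)):
        rows.append([series[i - l] for l in lags] + [i % 24])
    return np.array(rows), series[max(lags):]

X, y = make_features(gen)
split = len(X) - 24 * 7            # hold out the final week
model = make_pipeline(StandardScaler(), SVR(C=10.0))
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])

# A simple imbalance-penalty proxy: a flat tariff on |forecast - actual|.
penalty_rate = 1.0                 # currency units per MWh of imbalance
penalty = penalty_rate * np.abs(pred - y[split:]).sum()
print(round(float(penalty), 1))
```

Comparing this penalty figure across models (e.g., with and without the engineered lag features) mirrors the study's comparison between the basic and feature-engineered datasets.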