Search results for: continuous data
25654 Association between Noise Levels, Particulate Matter Concentrations and Traffic Intensities in a Near-Highway Urban Area
Authors: Mohammad Javad Afroughi, Vahid Hosseini, Jason S. Olfert
Abstract:
Both traffic-generated particles and noise have been associated with the development of cardiovascular diseases, especially in near-highway environments. Although noise and particulate matter (PM) have different mechanisms of dispersion, sharing the same emission source in urban areas (road traffic) can result in a similar degree of variability in their levels. This study investigated the temporal variation of, and correlation between, noise levels, PM concentrations and traffic intensities near a major highway in Tehran, Iran. Particulate concentrations in Tehran are highly influenced by road traffic. Additionally, ultrafine particles (UFP, PM<0.1 µm) in Tehran are mostly emitted by the combustion processes of motor vehicles. This suggests a high likelihood of a strong association between traffic-related noise and UFP in near-highway environments of this megacity. Hourly averages of the equivalent continuous sound pressure level (Leq), the total number concentration of UFPs, the mass concentrations of PM2.5 and PM10, as well as traffic count and speed, were simultaneously measured over a period of three days in winter. Additionally, meteorological data including temperature, relative humidity, wind speed and direction were collected at a weather station located 3 km from the monitoring site. Noise levels showed relatively low temporal variability in near-highway environments compared to PM concentrations. The hourly average Leq ranged from 63.8 to 69.9 dB(A) (mean ~ 68 dB(A)), while hourly particle concentrations varied from 30,800 to 108,800 cm-3 for UFP (mean ~ 64,500 cm-3), 41 to 75 µg m-3 for PM2.5 (mean ~ 53 µg m-3), and 62 to 112 µg m-3 for PM10 (mean ~ 88 µg m-3). The Pearson correlation coefficient revealed a strong relationship between noise and UFP (r ~ 0.61) overall. Under downwind conditions, UFP number concentration showed the strongest association with noise level (r ~ 0.63). The coefficient decreased to a lower value under upwind conditions (r ~ 0.24) due to the significant role of wind and humidity in UFP dynamics. Furthermore, PM2.5 and PM10 correlated moderately with noise (r ~ 0.52 and 0.44, respectively). In general, traffic counts were more strongly associated with noise and PM than traffic speeds. It was concluded that noise level combined with meteorological data can be used as a proxy to estimate PM concentrations (specifically UFP number concentration) in near-highway environments of Tehran. However, it is important to measure the joint variability of noise and particles to study their health effects in epidemiological studies. Keywords: noise, particulate matter, PM10, PM2.5, ultrafine particle
Procedia PDF Downloads 194
25653 Mapping Tunnelling Parameters for Global Optimization in Big Data via Dye Laser Simulation
Authors: Sahil Imtiyaz
Abstract:
One of the biggest challenges has emerged from the ever-expanding, dynamic, and instantaneously changing space of Big Data; finding a data point and deriving wisdom from this space is a hard task. In this paper, we reduce the space of big data to a Hamiltonian formalism that is in concordance with the Ising model. For this formulation, we simulate the system using a dye laser in FORTRAN and analyse the dynamics of the data point in the energy well of the rhodium atom. After mapping the photon intensity and pulse width to energy and potential, we concluded that as the energy increases, the probability of tunnelling also increases up to a certain point, after which it starts decreasing and then shows randomizing behaviour. This is due to decoherence with the environment and hence a loss of ‘quantumness’. This is interpreted through the efficiency parameter and the extent of quantum evolution. The results are strongly encouraging in favour of the use of ‘Topological Property’ as a source of information instead of the qubit. Keywords: big data, optimization, quantum evolution, hamiltonian, dye laser, fermionic computations
Procedia PDF Downloads 194
25652 Applying Different Steganography Techniques in Cloud Computing Technology to Improve Cloud Data Privacy and Security Issues
Authors: Muhammad Muhammad Suleiman
Abstract:
Cloud Computing is a versatile concept that refers to a service that allows users to outsource their data without having to worry about local storage issues. However, the most pressing issue to be addressed is maintaining a secure and reliable data repository rather than relying on untrustworthy service providers. In this study, we look at how steganography approaches, in collaboration with digital watermarking, can greatly improve the system's effectiveness and data security when used for Cloud Computing. The main requirement of such frameworks, where data is transferred or exchanged between servers and users, is safe data management in cloud environments. Steganography in the cloud is among the most effective methods for safe communication. Steganography is a method of writing coded messages in such a way that only the sender and recipient can safely interpret and display the information hidden in the communication channel. This study presents a new text steganography method for hiding a hidden English text file in a cover English text file to ensure data protection in cloud computing. Data protection, data hiding capability, and time were all improved using the proposed technique. Keywords: cloud computing, steganography, information hiding, cloud storage, security
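As a rough illustration of the general idea of text steganography (hiding a message inside an innocent-looking cover text), the following minimal Python sketch encodes the secret as invisible zero-width characters. This is an assumed, simplified scheme for demonstration only, not the authors' actual method.

```python
# Minimal sketch of text steganography using zero-width characters.
# This is NOT the paper's scheme; it only illustrates the general idea.

ZERO = "\u200b"  # zero-width space       -> bit 0
ONE = "\u200c"   # zero-width non-joiner  -> bit 1

def embed(cover: str, secret: str) -> str:
    bits = "".join(f"{b:08b}" for b in secret.encode("utf-8"))
    payload = "".join(ONE if bit == "1" else ZERO for bit in bits)
    words = cover.split(" ")
    words[0] += payload           # append the invisible payload after the first word
    return " ".join(words)

def extract(stego: str) -> str:
    bits = "".join("1" if ch == ONE else "0" for ch in stego if ch in (ZERO, ONE))
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits) - len(bits) % 8, 8))
    return data.decode("utf-8", errors="ignore")

if __name__ == "__main__":
    stego_text = embed("The quick brown fox jumps over the lazy dog.", "cloud-key")
    print(stego_text)           # looks identical to the cover text when rendered
    print(extract(stego_text))  # -> cloud-key
```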
Procedia PDF Downloads 192
25651 Investigation on Performance of Change Point Algorithm in Time Series Dynamical Regimes and Effect of Data Characteristics
Authors: Farhad Asadi, Mohammad Javad Mollakazemi
Abstract:
In this paper, Bayesian online inference in models of data series is constructed with a change-point algorithm, which separates the observed time series into independent series and studies changes in the regime of the data together with the related statistical characteristics. Variation in the statistical characteristics of time series data often represents separate phenomena in a dynamical system, such as a change in brain state reflected in EEG signal measurements or a change in an important regime of the data in many dynamical systems. In this paper, a prediction algorithm for studying change-point locations in time series data is simulated. It is verified that the pattern of the proposed data distribution is an important factor both for simpler and smoother fluctuation of the hazard rate parameter and for better identification of change-point locations. Finally, the conditions of how the time series distribution affects the factors in this approach are explained and validated with different time series databases from several dynamical systems. Keywords: time series, fluctuation in statistical characteristics, optimal learning, change-point algorithm
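To make the idea of Bayesian online change-point detection with a hazard rate concrete, here is a hedged sketch in the spirit of the Adams–MacKay run-length recursion. It assumes Gaussian observations with known variance, a Gaussian prior on the mean, and a constant hazard rate; the paper's actual model and priors may differ.

```python
# A minimal sketch of Bayesian online change-point detection with a constant
# hazard rate (Adams & MacKay style). Gaussian data with known observation
# variance and a Gaussian prior on the mean are assumed for simplicity.
import numpy as np

def bocpd(data, hazard=1 / 100, mu0=0.0, var0=10.0, obs_var=1.0):
    T = len(data)
    # run_length_probs[t, r] = P(run length == r after seeing data[:t])
    run_length_probs = np.zeros((T + 1, T + 1))
    run_length_probs[0, 0] = 1.0
    mu, var = np.array([mu0]), np.array([var0])     # posterior params per run length
    for t, x in enumerate(data):
        pred_var = var + obs_var                    # predictive variance per hypothesis
        pred = np.exp(-0.5 * (x - mu) ** 2 / pred_var) / np.sqrt(2 * np.pi * pred_var)
        growth = run_length_probs[t, : t + 1] * pred * (1 - hazard)   # no change point
        cp = np.sum(run_length_probs[t, : t + 1] * pred * hazard)     # change point now
        run_length_probs[t + 1, 0] = cp
        run_length_probs[t + 1, 1 : t + 2] = growth
        run_length_probs[t + 1] /= run_length_probs[t + 1].sum()
        # Conjugate Gaussian update for every surviving run-length hypothesis
        new_var = 1.0 / (1.0 / var + 1.0 / obs_var)
        new_mu = new_var * (mu / var + x / obs_var)
        mu = np.concatenate(([mu0], new_mu))
        var = np.concatenate(([var0], new_var))
    return run_length_probs

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    series = np.concatenate([rng.normal(0, 1, 120), rng.normal(3, 1, 120)])
    R = bocpd(series)
    # The most likely run length at the end is ~120, i.e. the last regime change
    # occurred about 120 samples before the end of the series.
    print("most likely run length at the end:", R[-1].argmax())
```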
Procedia PDF Downloads 426
25650 Establishment of a Classifier Model for Early Prediction of Acute Delirium in Adult Intensive Care Unit Using Machine Learning
Authors: Pei Yi Lin
Abstract:
Objective: The objective of this study is to use machine learning methods to build an early prediction classifier model for acute delirium to improve the quality of medical care for intensive care patients. Background: Delirium is a common acute and sudden disturbance of consciousness in critically ill patients. Once it occurs, it tends to prolong the length of hospital stay and increase medical costs and mortality. In 2021, the incidence of delirium in the intensive care unit of internal medicine was as high as 59.78%, which indirectly prolonged the average length of hospital stay by 8.28 days, and the mortality rate has been about 2.22% over the past three years. Therefore, we aim to build a delirium prediction classifier through big data analysis and machine learning methods to detect delirium early. Method: This is a retrospective study, using an artificial intelligence big data database to extract the characteristic factors related to delirium in intensive care unit patients for machine learning. The study included patients aged over 20 years old who were admitted to the intensive care unit between May 1, 2022, and December 31, 2022, excluding patients with a GCS assessment of <4 points, admission to the ICU for less than 24 hours, and CAM-ICU evaluation. Each CAM-ICU delirium assessment, performed every 8 hours within 30 days of hospitalization, is regarded as an event, and the cumulative data from ICU admission to the prediction time point are extracted to predict the possibility of delirium occurring in the next 8 hours. A total of 63,754 case records were collected, and 12 features were selected to train the model, including age, sex, average ICU stay hours, visual and auditory abnormalities, RASS assessment score, APACHE-II score, number of indwelling invasive catheters, restraint, and sedative and hypnotic drugs. Through feature data cleaning, processing, and supplementation with the KNN interpolation method, a total of 54,595 case events were extracted for machine learning model analysis. The events from May 01 to November 30, 2022, were used as the model training data, of which 80% formed the training set for model training and 20% formed the internal validation set; the ICU events from December 01 to December 31, 2022, were used as the external validation set. Finally, model inference and performance evaluation were performed, and the model was then retrained by adjusting the model parameters. Results: In this study, XG Boost, Random Forest, Logistic Regression, and Decision Tree were used to analyze and compare four machine learning models. The average accuracy of internal validation was highest for Random Forest (AUC=0.86), the average accuracy of external validation was highest for Random Forest and XG Boost (AUC=0.86), and the average accuracy of cross-validation was highest for Random Forest (ACC=0.77). Conclusion: Clinically, medical staff usually conduct CAM-ICU assessments at the bedside of critically ill patients, but there is a lack of machine learning classification methods to assist with real-time assessment of ICU patients, which means that more objective and continuous monitoring data are not available to help clinical staff identify and predict the occurrence of delirium in patients more accurately.
It is hoped that the development and construction of predictive models through machine learning can predict delirium early and immediately, so that clinical decisions can be made at the best time and, combined with PADIS delirium care measures, individualized non-drug interventional care can be provided to maintain patient safety and thereby improve the quality of care. Keywords: critically ill patients, machine learning methods, delirium prediction, classifier model
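The kind of pipeline described above (KNN imputation, an 80/20 split, a Random Forest classifier, AUC evaluation) can be sketched in a few lines of scikit-learn. The features and synthetic data below are placeholders standing in for the study's real ICU event records.

```python
# Illustrative sketch of the described pipeline on synthetic data: KNN imputation,
# 80/20 train/validation split, Random Forest, AUC. Feature names are a subset of
# those listed in the abstract; values are randomly generated stand-ins.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import KNNImputer
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000  # synthetic stand-in for the 54,595 events
df = pd.DataFrame({
    "age": rng.integers(20, 95, n).astype(float),
    "rass_score": rng.integers(-5, 5, n).astype(float),
    "apache_ii": rng.integers(0, 40, n).astype(float),
    "catheter_count": rng.integers(0, 6, n).astype(float),
})
df.loc[rng.random(n) < 0.05, "apache_ii"] = np.nan        # simulate missing values
y = (df["apache_ii"].fillna(20) + 5 * df["catheter_count"] + rng.normal(0, 10, n) > 35).astype(int)

X = KNNImputer(n_neighbors=5).fit_transform(df)           # KNN imputation as in the abstract
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, stratify=y, random_state=42)
model = RandomForestClassifier(n_estimators=300, random_state=42).fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```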
Procedia PDF Downloads 76
25649 Determination of the Risks of Heart Attack at the First Stage as Well as Their Control and Resource Planning with the Method of Data Mining
Authors: İbrahim Kara, Seher Arslankaya
Abstract:
Frequently preferred in the field of engineering in particular, data mining has now begun to be used in the field of health as well, since the data in the health sector have reached great dimensions. With data mining, the aim is to reveal models from great amounts of raw data in line with the purpose at hand and to search for the rules and relationships which enable predictions about the future to be made from large data sets. It helps the decision-maker to find the relationships among the data that emerge at the decision-making stage. This study aims to determine the risk of heart attack at the first stage, to control it, and to plan its resources with the method of data mining. Through the early and correct diagnosis of heart attacks, the aim is to reveal the factors which affect the disease, to protect health and choose the right treatment methods, to reduce health expenditures, and to shorten the duration of patients’ hospital stays. In this way, the diagnosis and treatment costs of a heart attack will be scrutinized, which will be useful for determining the risk of the disease at the first stage, controlling it, and planning its resources. Keywords: data mining, decision support systems, heart attack, health sector
Procedia PDF Downloads 356
25648 Bayesian Borrowing Methods for Count Data: Analysis of Incontinence Episodes in Patients with Overactive Bladder
Authors: Akalu Banbeta, Emmanuel Lesaffre, Reynaldo Martina, Joost Van Rosmalen
Abstract:
Including data from previous studies (historical data) in the analysis of the current study may reduce the sample size requirement and/or increase the power of the analysis. The most common example is incorporating historical control data in the analysis of a current clinical trial. However, this only applies when the historical control data are similar enough to the current control data. Recently, several Bayesian approaches for incorporating historical data have been proposed, such as the meta-analytic-predictive (MAP) prior and the modified power prior (MPP), both for a single control arm as well as for multiple historical control arms. Here, we examine the performance of the MAP and the MPP approaches for the analysis of (over-dispersed) count data. To this end, we propose a computational method for the MPP approach for the Poisson and the negative binomial models. We conducted an extensive simulation study to assess the performance of the Bayesian approaches. Additionally, we illustrate our approaches on an overactive bladder data set. For similar data across the control arms, the MPP approach outperformed the MAP approach with respect to the statistical power. When the means across the control arms are different, the MPP yielded a slightly inflated type I error (TIE) rate, whereas the MAP did not. In contrast, when the dispersion parameters are different, the MAP gave an inflated TIE rate, whereas the MPP did not. We conclude that the MPP approach is more promising than the MAP approach for incorporating historical count data. Keywords: count data, meta-analytic prior, negative binomial, poisson
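The basic power-prior idea of down-weighting historical counts can be illustrated with a conjugate Poisson–gamma calculation. The sketch below fixes the borrowing parameter delta, whereas the modified power prior discussed above treats delta as random; it is an illustration of the borrowing mechanism, not the paper's method.

```python
# Illustrative conjugate power prior for Poisson counts: historical data enter the
# posterior down-weighted by a fixed delta in [0, 1]. The MPP of the paper instead
# places a prior on delta, which this simple sketch omits.
import numpy as np

def poisson_power_prior_posterior(current, historical, delta, a0=0.5, b0=0.001):
    """Return Gamma(shape, rate) posterior parameters for the Poisson rate."""
    shape = a0 + delta * np.sum(historical) + np.sum(current)
    rate = b0 + delta * len(historical) + len(current)
    return shape, rate

rng = np.random.default_rng(1)
hist = rng.poisson(2.0, size=200)   # historical control counts (e.g. incontinence episodes)
curr = rng.poisson(2.1, size=60)    # current control counts
for delta in (0.0, 0.5, 1.0):
    a, b = poisson_power_prior_posterior(curr, hist, delta)
    print(f"delta={delta:.1f}: posterior mean rate = {a / b:.3f}")
```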
Procedia PDF Downloads 117
25647 Strategic Citizen Participation in Applied Planning Investigations: How Planners Use Etic and Emic Community Input Perspectives to Fill-in the Gaps in Their Analysis
Authors: John Gaber
Abstract:
Planners regularly use citizen input as empirical data to help them better understand community issues they know very little about. This type of community data is based on the lived experiences of local residents and is known as "emic" data. What is becoming more common practice for planners is their use of data from local experts and stakeholders (known as "etic" data or the outsider perspective) to help them fill in the gaps in their analysis of applied planning research projects. Utilizing international Health Impact Assessment (HIA) data, I look at who planners invite to their citizen input investigations. Research presented in this paper shows that planners access a wide range of emic and etic community perspectives in their search for the “community’s view.” The paper concludes with how planners can chart out a new empirical path in their execution of emic/etic citizen participation strategies in their applied planning research projects. Keywords: citizen participation, emic data, etic data, Health Impact Assessment (HIA)
Procedia PDF Downloads 484
25646 The Effect of Fish and Krill Oil on Warfarin Control
Authors: Rebecca Pryce, Nijole Bernaitis, Andrew K. Davey, Shailendra Anoopkumar-Dukie
Abstract:
Background: Warfarin is an oral anticoagulant widely used in the prevention of strokes in patients with atrial fibrillation (AF) and in the treatment and prevention of deep vein thrombosis (DVT). Regular monitoring of the Internationalised Normalised Ratio (INR) is required to ensure therapeutic benefit, with time in therapeutic range (TTR) used to measure warfarin control. A number of factors influence TTR, including diet, concurrent illness, and drug interactions. Extensive literature exists regarding the effect of conventional medicines on warfarin control, but documented interactions relating to complementary medicines are limited. It has been postulated that fish oil and krill oil supplementation may affect warfarin due to their association with bleeding events. However, to date little is known as to whether fish and krill oil significantly alter the incidence of bleeding with warfarin or impact on warfarin control. Aim: To assess the influence of fish oil and krill oil supplementation on warfarin control in AF and DVT patients by determining the influence of these supplements on TTR and bleeding events. Methods: A retrospective cohort analysis was conducted utilising patient information from a large private pathology practice in Queensland. AF and DVT patients receiving warfarin management by the pathology practice were identified and their TTR calculated using the Rosendaal method. Concurrent medications were analysed, and patients taking no other interacting medicines were identified and divided according to users of fish oil and krill oil supplements and those taking no supplements. Study variables included TTR and the incidence of bleeding, with the exclusion criterion being less than 30 days of treatment with warfarin. Subject characteristics were reported as means and standard deviations for continuous data and numbers and percentages for nominal or categorical data. Data were analysed using GraphPad InStat Version 3, with a p value of <0.05 considered statistically significant. Results: Of the 2081 patients assessed for inclusion into this study, a total of 573 warfarin users met the inclusion criteria. Of these, 416 (72.6%) were AF patients and 157 (27.4%) were DVT patients; overall there were 316 (55.1%) male and 257 (44.9%) female patients. 145 patients were included in the fish oil/krill oil group (supplement) and 428 were included in the control group. The mean TTR of supplement users was 86.9% and for the control group 84.7%, with no significant difference between these groups. Control patients experienced 1.6 times the number of minor bleeds per person compared to supplement patients and 1.2 times the number of major bleeds per person. However, this was not statistically significant, nor was the comparison of thrombotic events. Conclusion: No significant difference was found between supplement and control patients in terms of mean TTR, the number of bleeds, and thrombotic events. Fish oil and krill oil supplements, when used concurrently with warfarin, do not significantly affect warfarin control as measured by TTR and bleeding incidence. Keywords: atrial fibrillation, deep vein thrombosis, fish oil, krill oil, warfarin
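The Rosendaal method mentioned above interpolates INR values linearly between clinic visits and counts the fraction of person-time spent in range. A minimal sketch follows; the 2.0–3.0 therapeutic range is the usual warfarin target and is an assumption here, not a detail taken from the study.

```python
# Minimal sketch of the Rosendaal linear-interpolation method for time in
# therapeutic range (TTR). The 2.0-3.0 INR range is an assumed target.
from datetime import date

def rosendaal_ttr(visits, low=2.0, high=3.0):
    """visits: list of (date, INR) sorted by date. Returns TTR as a percentage."""
    in_range_days = total_days = 0.0
    for (d0, inr0), (d1, inr1) in zip(visits, visits[1:]):
        days = (d1 - d0).days
        if days == 0:
            continue
        total_days += days
        for step in range(days):
            # linearly interpolate the INR for each day between two measurements
            inr = inr0 + (inr1 - inr0) * step / days
            if low <= inr <= high:
                in_range_days += 1
    return 100.0 * in_range_days / total_days if total_days else 0.0

visits = [(date(2016, 1, 1), 1.8), (date(2016, 1, 15), 2.6), (date(2016, 2, 12), 3.4)]
print(f"TTR = {rosendaal_ttr(visits):.1f}%")
```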
Procedia PDF Downloads 305
25645 The Internet of Things in Luxury Hotels: Generating Customized Multisensory Guest Experiences
Authors: Jean-Eric Pelet, Erhard Lick, Basma Taieb
Abstract:
Purpose: This research bridges the gap between sensory marketing and the use of the Internet of Things (IoT) in luxury hotels. We investigated how stimulating guests’ senses through IoT devices influenced their emotions, affective experiences, eudaimonism (well-being), and, ultimately, guest behavior. We examined potential moderating effects of gender. Design/methodology/approach: We adopted a mixed method approach, combining qualitative research (semi-structured interviews) to explore hotel managers’ perspectives on the potential use of IoT in luxury hotels and quantitative research (surveying hotel guests; n=357). Findings: The results showed that while the senses of smell, hearing, and sight had an impact on guests’ emotions, the senses of touch, hearing, and sight impacted guests’ affective experiences. The senses of smell and taste influenced guests’ eudaimonism. The sense of smell had a greater effect on eudaimonism and behavioral intentions among women compared to men. Originality: IoT can be applied in creating customized multi-sensory hotel experiences. For example, hotels may offer unique and diverse ambiences in their rooms and suites to improve guest experiences. Research limitations/implications: This study concentrated on luxury hotels located in Europe. Further research may explore the generalizability of the findings (e.g., in other cultures, comparison between high-end and low-end hotels). Practical implications: Context awareness and hyper-personalization, through intensive and continuous data collection (hyper-connectivity) and real time processing, are key trends in the service industry. Therefore, big data plays a crucial role in the collection of information since it allows hoteliers to retrieve, analyze, and visualize data to provide personalized services in real time. Together with their guests, hotels may co-create customized sensory experiences. For instance, if the hotel knows about the guest’s music preferences based on social media as well as their age and gender, etc., and considers the temperature and size (standard, suite, etc.) of the guest room, this may determine the playlist of the concierge-tablet made available in the guest room. Furthermore, one may record the guest’s voice to use it for voice command purposes once the guest arrives at the hotel. Based on our finding that the sense of smell has a greater impact on eudaimonism and behavioral intentions among women than men, hotels may deploy subtler scents with lower intensities, or even different scents, for female guests in comparison to male guests. Keywords: affective experience, emotional value, eudaimonism, hospitality industry, Internet of Things, sensory marketing
Procedia PDF Downloads 57
25644 On the Approximate Solution of Continuous Coefficients for Solving Third Order Ordinary Differential Equations
Authors: A. M. Sagir
Abstract:
This paper derives four new schemes which are combined in order to form an accurate and efficient block method for the parallel or sequential solution of third-order ordinary differential equations of the form y''' = f(x, y, y', y''), with y(α) = y_0, y'(α) = β, y''(α) = μ and associated initial or boundary conditions. The implementation strategies of the derived method show that the block method is consistent, zero-stable, and hence convergent. The derived schemes were tested on stiff and non-stiff ordinary differential equations, and the numerical results obtained compared favorably with the exact solution. Keywords: block method, hybrid, linear multistep, self-starting, third order ordinary differential equations
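The block method itself is not reproduced here, but the standard reduction of y''' = f(x, y, y', y'') to a first-order system, against which new schemes are usually benchmarked, can be sketched as follows. The right-hand side below is an arbitrary test function, not one of the paper's test problems.

```python
# Not the authors' block method: a sketch of the usual reduction of a third-order
# ODE to a first-order system, solved with a reference integrator for comparison.
import numpy as np
from scipy.integrate import solve_ivp

def f(x, y, yp, ypp):
    return -ypp - yp - y + np.cos(x)        # arbitrary illustrative right-hand side

def system(x, state):
    y, yp, ypp = state                      # state = (y, y', y'')
    return [yp, ypp, f(x, y, yp, ypp)]

alpha, y0, beta, mu = 0.0, 1.0, 0.0, 0.0    # y(alpha)=y0, y'(alpha)=beta, y''(alpha)=mu
sol = solve_ivp(system, (alpha, 10.0), [y0, beta, mu], rtol=1e-9, atol=1e-12)
print("y(10) ≈", sol.y[0, -1])
```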
Procedia PDF Downloads 271
25643 Data Augmentation for Automatic Graphical User Interface Generation Based on Generative Adversarial Network
Authors: Xulu Yao, Moi Hoon Yap, Yanlong Zhang
Abstract:
As a branch of artificial neural networks, deep learning is widely used in the field of image recognition, but a lack of datasets leads to imperfect model learning. By analysing the data scale requirements of deep learning with the application to GUI generation in mind, we find that the collection of a GUI dataset is a time-consuming and labor-intensive project, which makes it difficult to meet the needs of current deep learning networks. To solve this problem, this paper proposes a semi-supervised deep learning model that relies on the original small-scale dataset to produce a large amount of reliable data. By combining a recurrent neural network with a generative adversarial network, the recurrent neural network can learn the sequential relationships and characteristics of the data, the generative adversarial network can generate reasonable data, and the Rico dataset can then be expanded. Relying on the network structure, the characteristics of the collected data can be well analysed, and a large amount of reasonable data can be generated according to these characteristics. After data processing, a reliable dataset for model training can be formed, which alleviates the problem of dataset shortage in deep learning. Keywords: GUI, deep learning, GAN, data augmentation
Procedia PDF Downloads 184
25642 Modelling Rainfall-Induced Shallow Landslides in the Northern New South Wales
Authors: S. Ravindran, Y. Liu, I. Gratchev, D. Jeng
Abstract:
Rainfall-induced shallow landslides are common in northern New South Wales (NSW), Australia. From 2009 to 2017, around 105 rainfall-induced landslides occurred along road corridors and caused temporary road closures in northern NSW. The rainfall causing shallow landslides has different distributions, with intensity patterns varying from uniform and normal to decreasing and increasing. The duration of rainfall varied from one day to 18 days according to historical data. The objective of this research is to analyse the slope instability of some sites in northern NSW under varying cumulative rainfall using SLOPE/W and SEEP/W, and to compare the results with field data on rainfall causing shallow landslides. Rainfall data and topographical data from public authorities and soil data obtained from laboratory tests will be used for this modelling. In accordance with field data, shallow landslides are likely if the cumulative rainfall is between 100 mm and 400 mm. Keywords: landslides, modelling, rainfall, suction
Procedia PDF Downloads 179
25641 Machine Learning-Enabled Classification of Climbing Using Small Data
Authors: Nicholas Milburn, Yu Liang, Dalei Wu
Abstract:
Athlete performance scoring within the climbing domain presents interesting challenges, as the sport does not have an objective way to assign skill. Assessing skill levels within any sport is valuable as it can be used to mark progress while training, and it can help an athlete choose appropriate climbs to attempt. Machine learning-based methods are popular for complex problems like this. The dataset available was composed of dynamic force data recorded during climbing; however, this dataset came with challenges such as data scarcity, imbalance, and temporal heterogeneity. Investigated solutions to these challenges include data augmentation, temporal normalization, conversion of time series to the spectral domain, and cross-validation strategies. The investigated solutions to the classification problem included the lightweight machine learning classifiers KNN and SVM as well as deep learning with a CNN. The best performing model had an 80% accuracy. In conclusion, there seems to be enough information within climbing force data to accurately categorize climbers by skill. Keywords: classification, climbing, data imbalance, data scarcity, machine learning, time sequence
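Two of the ideas above, converting the force time series to the spectral domain and classifying with a lightweight model (KNN), can be sketched as follows. The synthetic traces are stand-ins for real climbing force recordings, and the "skilled climbers produce smoother traces" assumption is purely illustrative.

```python
# Sketch only: spectral-domain features from force time series fed to a KNN
# classifier. Synthetic data replaces the real climbing force recordings.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

def spectral_features(signal, n_bins=16):
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    bins = np.array_split(spectrum, n_bins)
    return np.array([b.mean() for b in bins])      # coarse band energies

def make_trace(skilled):
    # Assumed toy model: less skilled climbers produce jerkier force traces.
    t = np.linspace(0, 10, 500)
    base = np.sin(2 * np.pi * 0.5 * t)
    jitter = rng.normal(0, 0.1 if skilled else 0.5, t.size)
    return base + jitter

X = np.array([spectral_features(make_trace(s)) for s in ([True] * 40 + [False] * 40)])
y = np.array([1] * 40 + [0] * 40)
print("CV accuracy:", cross_val_score(KNeighborsClassifier(n_neighbors=5), X, y, cv=5).mean())
```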
Procedia PDF Downloads 143
25640 Price Effect Estimation of Tobacco on Low-wage Male Smokers: A Causal Mediation Analysis
Authors: Kawsar Ahmed, Hong Wang
Abstract:
The study's goal was to estimate the causal mediation impact of tobacco tax before and after price hikes among low-income male smokers, with a particular emphasis on the pathway framework for estimating effects for continuous and dichotomous variables. From July to December 2021, cross-sectional observational data (n=739) were collected from Bangladeshi low-wage smokers. A quasi-Bayesian technique, a binomial probit model, and a simulation-based sensitivity analysis using the computational tools of the R mediation package were used to estimate the effects. After a price rise for tobacco products, the average number of cigarette or bidi sticks consumed decreased from 6.7 to 4.56. Rising tobacco product prices had a direct effect on low-income people's decisions to quit or lessen their daily smoking habits, with an Average Causal Mediation Effect (ACME) [effect=2.31, 95% confidence interval (C.I.) = (4.71-0.00), p<0.01], an Average Direct Effect (ADE) [effect=8.6, 95% C.I. = (6.8-0.11), p<0.001], and overall significant effects (p<0.001). The tobacco smoking choice is described by the mediated proportion of the income effect, which was 26.1% lower following the price rise. In the sensitivity analysis, the curves of ACME and ADE are based on observed values of the coefficients of determination that assess the hypothesized model of the substantial consequences after price rises. To reduce smoking behaviors, price increases through taxation show a positive causal mediation with income that affects the decision to limit tobacco use and can inform healthcare policy for low-income men. Keywords: causal mediation analysis, directed acyclic graphs, tobacco price policy, sensitivity analysis, pathway estimation
Procedia PDF Downloads 112
25639 Analysis of Expression Data Using Unsupervised Techniques
Authors: M. A. I Perera, C. R. Wijesinghe, A. R. Weerasinghe
Abstract:
This study was conducted to review and identify the unsupervised techniques that can be employed to analyze gene expression data in order to identify better subtypes of tumors. Identifying subtypes of cancer helps in improving the efficacy and reducing the toxicity of treatments by providing clues for finding targeted therapeutics. The process of gene expression data analysis is described in three steps: preprocessing, clustering, and cluster validation. Feature selection is important since genomic data are high-dimensional, with a large number of features compared to the number of samples. Hierarchical clustering and K-means are often used in the analysis of gene expression data. There are several cluster validation techniques used in validating the clusters. Heatmaps are an effective external validation method that allows the identified classes to be compared with clinical variables and analyzed visually. Keywords: cancer subtypes, gene expression data analysis, clustering, cluster validation
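The three-step flow described above (preprocessing with feature selection, clustering, cluster validation) can be sketched compactly; the synthetic expression matrix below stands in for real, normalized gene expression data.

```python
# Compact sketch of preprocessing, clustering, and internal cluster validation on
# a synthetic expression matrix with two artificial "subtypes".
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
expr = rng.normal(0, 1, (60, 2000))     # 60 samples x 2000 genes
expr[:30, :50] += 2.5                   # first 30 samples form a distinct subtype

# Preprocessing / feature selection: keep the most variable genes
top = np.argsort(expr.var(axis=0))[-200:]
X = expr[:, top]

# Clustering: K-means and Ward hierarchical clustering
km_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
hc_labels = fcluster(linkage(X, method="ward"), t=2, criterion="maxclust")

# Internal cluster validation
print("K-means silhouette:     ", silhouette_score(X, km_labels))
print("Hierarchical silhouette:", silhouette_score(X, hc_labels))
```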
Procedia PDF Downloads 149
25638 Learning Analytics in a HiFlex Learning Environment
Authors: Matthew Montebello
Abstract:
Student engagement within a virtual learning environment generates masses of data points that can significantly contribute to the learning analytics that lead to decision support. Ideally, similar data is collected during student interaction with a physical learning space, and as a consequence, data is present at a large scale, even in relatively small classes. In this paper, we report on such an occurrence during classes held in a HiFlex modality as we investigate the advantages of adopting such a methodology. We plan to take full advantage of the learner-generated data in an attempt to further enhance the effectiveness of the adopted learning environment. This could shed crucial light on operating modalities that higher education institutions around the world will switch to in a post-COVID era. Keywords: HiFlex, big data in higher education, learning analytics, virtual learning environment
Procedia PDF Downloads 201
25637 Li-Fi Technology: Data Transmission through Visible Light
Authors: Shahzad Hassan, Kamran Saeed
Abstract:
People are always in search of Wi-Fi hotspots because the Internet is a major demand nowadays. But like all other technologies, there is still room for improvement in Wi-Fi technology with regard to the speed and quality of connectivity. In order to address these aspects, Harald Haas, a professor at the University of Edinburgh, proposed what we know as Li-Fi (Light Fidelity). Li-Fi is a new technology in the field of wireless communication to provide connectivity within a network environment. It is a two-way mode of wireless communication using light. Basically, the data is transmitted through Light Emitting Diodes, which can vary the intensity of light very fast, even faster than the blink of an eye. From the research and experiments conducted so far, it can be said that Li-Fi can increase the speed and reliability of the transfer of data. This paper pays particular attention to the assessment of the performance of this technology. In other words, it is a 5G technology which uses LEDs as the medium of data transfer. For coverage within buildings, Wi-Fi is good, but Li-Fi can be considered favorable in situations where large amounts of data are to be transferred in areas with electromagnetic interference. It brings many data-related qualities, such as efficiency, security, and large throughput, to the table of wireless communication. All in all, it can be said that Li-Fi is going to be a future phenomenon where the presence of light will mean access to the Internet as well as speedy data transfer. Keywords: communication, LED, Li-Fi, Wi-Fi
Procedia PDF Downloads 347
25636 An Analysis of Humanitarian Data Management of Polish Non-Governmental Organizations in Ukraine Since February 2022 and Its Relevance for Ukrainian Humanitarian Data Ecosystem
Authors: Renata Kurpiewska-Korbut
Abstract:
On the assumption that the use and sharing of data generated in humanitarian action constitute a core function of humanitarian organizations, the paper analyzes the position of the largest Polish humanitarian non-governmental organizations in the humanitarian data ecosystem in Ukraine and their approach to non-personal and personal data management since February of 2022. Both expert interviews and document analysis of the non-profit organizations providing a direct response in the Ukrainian crisis context (i.e., Polish Humanitarian Action, Caritas, the Polish Medical Mission, the Polish Red Cross, and the Polish Center for International Aid), together with the theoretical perspective of contingency theory – whose central point is that the context or specific set of conditions determines the way of behavior and the choice of methods of action – help to examine the significance of data complexity and of an adaptive approach to data management by relief organizations in the humanitarian supply chain network. The purpose of this study is to determine how well-established and accurate internal procedures and good practices of using and sharing data (including safeguards for sensitive data) by the surveyed organizations, which have comparable human and technological capabilities, are implemented and adjusted to Ukrainian humanitarian settings and data infrastructure. The study also poses a fundamental question of whether this crisis experience will have a determining effect on their future performance. The obtained findings indicate that Polish humanitarian organizations in Ukraine, which have their own unique codes of conduct and effective managerial data practices determined by contingencies, have limited influence on improving the situational awareness of other assistance providers in the data ecosystem despite their attempts to undertake interagency work in the area of data sharing. Keywords: humanitarian data ecosystem, humanitarian data management, polish NGOs, Ukraine
Procedia PDF Downloads 92
25635 An Approach for Estimation in Hierarchical Clustered Data Applicable to Rare Diseases
Authors: Daniel C. Bonzo
Abstract:
Practical considerations lead to the use of units of analysis within subjects, e.g., bleeding episodes or treatment-related adverse events, in rare disease settings. This is coupled with data augmentation techniques such as extrapolation to enlarge the subject base. In general, one can think about extrapolation of data as extending information and conclusions from one estimand to another estimand. This approach induces hierarchical clustered data with varying cluster sizes. Extrapolation of clinical trial data is being accepted increasingly by regulatory agencies as a means of generating data in diverse situations during the drug development process. Under certain circumstances, data can be extrapolated to a different population, a different but related indication, or a different but similar product. We consider here the problem of estimation (point and interval) using a mixed-models approach under an extrapolation. It is proposed that estimators (point and interval) be constructed using weighting schemes for the clusters, e.g., equally weighted and with weights proportional to cluster size. Simulated data generated under varying scenarios are then used to evaluate the performance of this approach. In conclusion, the evaluation results showed that the approach is a useful means for improving statistical inference in rare disease settings and thus aids not only signal detection but risk-benefit evaluation as well. Keywords: clustered data, estimand, extrapolation, mixed model
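The two weighting schemes mentioned above can be compared in a small simulation of clustered count data with varying cluster sizes. This sketch only contrasts equal cluster weights with size-proportional weights; it is not the mixed-model machinery of the paper.

```python
# Small simulation sketch: equal cluster weights vs. size-proportional weights
# for hierarchical clustered count data (clusters = subjects, units = events).
import numpy as np

rng = np.random.default_rng(2)

def simulate_subject(n_units, true_mean=1.0, subject_sd=0.5):
    subject_effect = rng.normal(0, subject_sd)                      # random subject effect
    return rng.poisson(np.exp(np.log(true_mean) + subject_effect), size=n_units)

subjects = [simulate_subject(n) for n in rng.integers(1, 12, size=40)]  # varying cluster sizes

cluster_means = np.array([s.mean() for s in subjects])
cluster_sizes = np.array([len(s) for s in subjects])

equal_weighted = cluster_means.mean()
size_weighted = np.average(cluster_means, weights=cluster_sizes)
print(f"equal weights: {equal_weighted:.3f}   size-proportional: {size_weighted:.3f}")
```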
Procedia PDF Downloads 136
25634 Seasonal Lambing in Crossbred of Katahdin Ewes in Tropical Regions of Chiapas, Mexico
Authors: Juan C. Martínez-Alfaro, Aracely Zúñiga, Fernando Ruíz-Zarate
Abstract:
In recent years, the Katahdin sheep breed has been one of the breeds with the greatest acceptance among sheep farmers in southwestern Mexico. The Hair Sheep breeds from tropical latitudes (16° to 21° North Latitude) show low estrus activity from January to May. By contrast, these breeds of sheep exhibit high estrus activity from August to December. However, the reproductive management of Hair Sheep crossbreds is very limited, independently of the socioeconomic level of the sheep farmers. Thus, in Hair Sheep crossbreds, the occurrence of lambing is greater in autumn (84%) than in spring (16%). In this sense, the aim of this study was to determine the lambing of crossbred Katahdin sheep during different seasons of the year. The hypothesis was that in crossbred Katahdin sheep, the lambing period behaves seasonally in southwestern Mexico. The study design consisted of evaluating the lambing proportion in one herd of crossbred Katahdin ewes during one year (October 1st, 2015 to October 1st, 2016). The study was carried out on a farm located in the municipality of Jiquipilas, in the State of Chiapas, Mexico (16° North Latitude). A total of 40 female sheep, homogeneous in terms of physical condition, age, and physiological state, were selected; they were fed under continuous grazing, mainly with African star grass (Cynodon nlemfuensis), were provided with water and mineral salts ad libitum, and during the dry season the ewes were supplemented with a diet of maize and sorghum; the reproductive management was continuous mating. The lambing proportion was analyzed by chi-squared test, using SAS statistical software. The proportion of crossbred Katahdin ewes that lambed during the study period was high (100%; 40/40), and the prolificacy was 1.42 (lambs/lambing). The proportion of lambing was higher (P<0.05) in autumn (67.5%; 27/40) than in winter, spring, and summer (32.5%; 13/40; 0%; 0/40; 0%; 0/40, respectively). The proportion of lambing was greater (P<0.05) in November (50%; 20/40) compared to October, December, and January (2.5%; 1/40; 27.5%; 11/40; 20%; 8/40, respectively). The results are consistent with the fact that in the Hair Sheep breeds, lambing appears to behave seasonally. The most important finding is that the lambing period in crossbred Katahdin sheep is similar to that of crossbred Hair Sheep in tropical regions of Mexico. Therefore, the period of greater sexual activity occurs in the spring season. In conclusion, the lambing period in crossbred Katahdin ewes appears to behave seasonally. Further research to assess ovarian activity in different breeds of hair ewes is under way. Keywords: Katahdin ewes, lambing, prolificacy, seasonality
Procedia PDF Downloads 263
25633 The Rehabilitation of Drug Addiction by Thai Indigenous Knowledge: A Case Study of Thamkrabok Monastery
Authors: Wanwimon Mekwimon
Abstract:
Drug addiction is a serious problem in Thailand which has occurred continuously and repeatedly, enormously impacting the health and economy of drug users. Indigenous wisdom and folk medicine are an attractive alternative, especially in the detoxification and rehabilitation periods. There are two objectives: the first is to study the rehabilitation process and the treatment of drug users, and the second is to investigate the effectiveness of the treatment and rehabilitation process based on indigenous wisdom at Thamkrabok monastery, Pra-Puttabat district, Saraburi province. The main informants were 10 healers, 15 patients, and 17 individuals one year after rehabilitation. In the process, a semi-structured questionnaire was administered, and the data were analyzed and verified by triangulation. The treatment and rehabilitation process, which uses herbal remedies, has a duration of 15 days (5 days for detoxification and 10 days for recovery), during which occupational training and self-awareness awakening were delivered. The follow-up process includes twice-a-month recall for 6 months, follow-up letters, and in-depth interviews with their families. The outcome at 1 year post-treatment was 94% (16 of 17) remaining drug-free. There are many reasons for not relapsing: the recovering patients have drawn on their inner strength, self-awareness, and coping skills, as well as their family and social support during the rehabilitation process, despite difficulties in contacting family members. They can keep themselves away from high-risk situations for relapse. Recommendations: the follow-up system should be improved for continuous quality improvement, there should be a qualification standard for the herbal remedies, and a comparison between the Thamkrabok rehabilitation process and other methods should serve as a guideline for further development. Keywords: rehabilitation, drug addiction, Thai indigenous knowledge, herbal remedies
Procedia PDF Downloads 244
25632 Detection of Patient Roll-Over Using High-Sensitivity Pressure Sensors
Authors: Keita Nishio, Takashi Kaburagi, Yosuke Kurihara
Abstract:
Recent advances in medical technology have served to enhance average life expectancy. However, the total time for which patients are prescribed complete bedrest has also increased. Since patients required to maintain a constant lying posture are at risk of pressure ulcers (bedsores), the development of a system to detect patient roll-over becomes imperative. For this purpose, extant studies have proposed the use of cameras, and favorable results have been reported. Continuous on-camera monitoring, however, tends to violate patient privacy. We have previously proposed an unconstrained bio-signal measurement system that can detect body motion during sleep without violating patient privacy. Therefore, in this study, we propose a roll-over detection method based on the data obtained from the bio-signal measurement system. Signals recorded by the sensor were assumed to comprise respiration, pulse, body-motion, and noise components. Compared with the respiration and pulse components, the body-motion component generates large vibrations during roll-over. Thus, analysis of the body-motion component facilitates detection of the roll-over tendency. The large vibration associated with the roll-over motion has a great effect on the Root Mean Square (RMS) value of the time series of the body-motion component calculated over short 10 s segments. After calculation, the RMS value of each segment was compared to a threshold value set in advance. If the RMS value in any segment exceeded the threshold, the corresponding data were considered to indicate the occurrence of a roll-over. In order to validate the proposed method, we conducted an experiment. A bi-directional microphone was adopted as a high-sensitivity pressure sensor and was placed between the mattress and bedframe. Recorded signals passed through an analog Band-pass Filter (BPF) operating over the 0.16-16 Hz bandwidth. The BPF allowed the respiration, pulse, and body-motion components to pass whilst removing the noise component. The output from the BPF was A/D converted at a sampling frequency of 100 Hz, and the measurement time was 480 seconds. The numbers of subjects and recordings were 5 and 10, respectively. Subjects lay on a mattress in the supine position. During data measurement, subjects were asked, upon the investigator's instruction, to roll over into four different positions: supine to left lateral, left lateral to prone, prone to right lateral, and right lateral to supine. The recorded data were divided into 48 segments of 10 s each, and the corresponding RMS value for each segment was calculated. The system was evaluated by the agreement between the investigator's instruction and the detected segment. As a result, an accuracy of 100% was achieved. While reviewing the time series of recorded data, segments indicating roll-over tendencies were observed to demonstrate a large amplitude. However, clear differences between decubitus and the roll-over motion could not be confirmed. Extant research approaches have a disadvantage in terms of patient privacy. The proposed study, however, demonstrates more precise detection of patient roll-over tendencies without violating their privacy. As a future prospect, decubitus estimation before and after roll-over could be attempted. Since clear differences between decubitus and the roll-over motion could not be confirmed in this paper, future studies could be based on utilization of the respiration and pulse components. Keywords: bedsore, high-sensitivity pressure sensor, roll-over, unconstrained bio-signal measurement
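The detection logic described above (band-pass filtering of the 0.16-16 Hz band, RMS over 10 s segments, threshold comparison) can be sketched as follows. The study used an analog filter; a digital Butterworth filter stands in for it here, and the threshold and synthetic signal are illustrative assumptions.

```python
# Sketch of the segment-RMS roll-over detector: band-pass (0.16-16 Hz), 10-second
# RMS, threshold. Digital Butterworth filter and threshold value are stand-ins.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 100                    # sampling frequency (Hz)
SEGMENT = 10 * FS           # 10-second segments

def detect_rollover(raw, threshold=3.0):
    b, a = butter(2, [0.16, 16], btype="bandpass", fs=FS)
    body = filtfilt(b, a, raw)
    flags = []
    for start in range(0, len(body) - SEGMENT + 1, SEGMENT):
        seg = body[start:start + SEGMENT]
        rms = np.sqrt(np.mean(seg ** 2))
        flags.append(rms > threshold)
    return flags

# Synthetic 480-second recording with a large burst ("roll-over") around t = 200 s
rng = np.random.default_rng(0)
signal = rng.normal(0, 1, 480 * FS)
signal[200 * FS:203 * FS] += 30 * np.sin(2 * np.pi * 2 * np.arange(3 * FS) / FS)
print([i for i, f in enumerate(detect_rollover(signal)) if f])   # -> segment index 20
```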
Procedia PDF Downloads 121
25631 The Development of Encrypted Near Field Communication Data Exchange Format Transmission in an NFC Passive Tag for Checking the Genuine Product
Authors: Tanawat Hongthai, Dusit Thanapatay
Abstract:
This paper presents the development of encrypted near field communication (NFC) data exchange format transmission in an NFC passive tag to assess the feasibility of implementing genuine product authentication. We organize the research on encryption and genuine-product checking into four major categories: concept, infrastructure, development, and applications. The results show that a passive NFC Forum Type 2 tag can be configured to be compatible with the NFC data exchange format (NDEF) and can have its data automatically and partially updated when an NFC field is present. Keywords: near field communication, NFC data exchange format, checking the genuine product, encrypted NFC
Procedia PDF Downloads 280
25630 Training Isolated Respiration in Rehabilitation
Authors: Marketa Kotova, Jana Kolarova, Ludek Zalud, Petr Dobsak
Abstract:
A game for training of breath (TRABR) provides continuous monitoring of pulmonary ventilation during patients’ therapy, focusing especially on monitoring their ventilation processes. It is necessary to detect, monitor, and differentiate abdominal and thoracic breathing during the therapy. It is a fun form of rehabilitation where the patient plays while also practicing isolated breathing. Finally, the breath training game was designed to evaluate whether or not the patient uses the two types of breathing. Keywords: pulmonary ventilation, thoracic breathing, abdominal breathing, breath monitoring using pressure sensors, game TRABR (TRAining of BReath)
Procedia PDF Downloads 491
25629 Data Hiding by Vector Quantization in Color Image
Authors: Yung Gi Wu
Abstract:
With the growth of computers and networks, digital data can be spread anywhere in the world quickly. In addition, digital data can be copied or tampered with easily, so security becomes an important topic in the protection of digital data. A digital watermark is a method to protect the ownership of digital data. Embedding the watermark inevitably influences image quality. In this paper, Vector Quantization (VQ) is used to embed the watermark into the image to fulfill the goal of data hiding. This kind of watermarking is invisible, which means that users will not be conscious of the existence of the embedded watermark even though the embedded image has tiny differences compared to the original image. Meanwhile, VQ carries a large computational burden, so we adopt a fast VQ encoding scheme based on partial distortion search (PDS) and a mean approximation scheme to speed up the data hiding process. The watermarks we hide in the image can be gray-level, bi-level, or color images. Text can also be regarded as a watermark to embed. In order to test the robustness of the system, we use Photoshop to apply sharpening, cropping, and alteration to check whether the extracted watermark is still recognizable. Experimental results demonstrate that the proposed system can resist the above three kinds of tampering in general cases. Keywords: data hiding, vector quantization, watermark, color image
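The fast encoding step mentioned above, partial distortion search, abandons a codeword as soon as its accumulated distortion exceeds the best distortion found so far. A minimal sketch of that step (the watermark-embedding stage itself is not shown) follows; the codebook here is random for illustration rather than trained.

```python
# Sketch of VQ encoding with partial distortion search (PDS): early termination
# of the distortion accumulation once the current best can no longer be beaten.
import numpy as np

def vq_encode_pds(block, codebook):
    """Return the index of the nearest codeword to `block` using PDS."""
    best_index, best_dist = 0, np.inf
    for i, codeword in enumerate(codebook):
        dist = 0.0
        for b, c in zip(block, codeword):
            dist += (b - c) ** 2
            if dist >= best_dist:          # early termination
                break
        else:
            best_index, best_dist = i, dist
    return best_index

rng = np.random.default_rng(0)
codebook = rng.integers(0, 256, size=(256, 16)).astype(float)   # 256 codewords, 4x4 blocks
block = rng.integers(0, 256, size=16).astype(float)
print("nearest codeword index:", vq_encode_pds(block, codebook))
```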
Procedia PDF Downloads 364
25628 Dynamic Changes of Shifting Cultivation: Past, Present and Future Perspective of an Agroforestry System from Sri Lanka
Authors: Thavananthan Sivananthawerl
Abstract:
Shifting cultivation (chena, slash and burn) is a cultivation method for raising primarily food crops (mainly annual) in which an area of land is cleared of its vegetation, cultivated for a period, and then abandoned (left fallow) for its fertility to be naturally restored. Although this is the oldest (more than 5000 years) farming system, it is still practiced by indigenous communities of several countries such as Sri Lanka, India, Indonesia, Malaysia, and Myanmar, and in West and Central Africa and the Amazon rainforest area. In Sri Lanka, shifting cultivation is mainly practiced during the North-East monsoon (called the Maha season, from September to December) with no irrigation. The traditional system allows farmers to cultivate for a short period and leave a long fallow period. This was facilitated mainly by the availability of land under a smaller population. In addition, in the old system, cultivation practices were mostly related to religious and spiritual practices (astrology, dynamic farming, etc.). At present, the majority of the shifting cultivators (SCs) are cultivating on government lands, and most of them are adopting new technology (seeds, agrochemicals, machinery). Due to local demand, almost 70% of the SCs grow maize as a mono-crop, and the rest grow mixed crops such as groundnut, cowpea, millet, and vegetables. To ensure continuous cultivation and reduce moisture stress, they established dug wells and used pumps to lift water from nearby sources. Due to this, the fallow period has been reduced drastically to 1-2 years. For the future prosperity of the system, farmers should be educated so that they understand the harmful effects of shifting cultivation, and new policies and a framework are required for converting the land use pattern towards high economic returns (new crop varieties, maintaining soil fertility, reducing soil erosion) while protecting the natural forests. The practice of agroforestry should be encouraged, in which both the crops and the tall trees are cared for by farmers simultaneously. To facilitate continuous cultivation, the system needs water harvesting, water-conserving technologies, and scientific water management for the limited rainy season. Even though several options are available, the solutions vary from region to region. Therefore, only the government and the cultivators together can find solutions to the problems of specific areas. Keywords: shifting cultivation, agroforestry, fallow, economic returns, government, Sri Lanka
Procedia PDF Downloads 94
25627 Anomaly Detection in a Data Center with a Reconstruction Method Using a Multi-Autoencoders Model
Authors: Victor Breux, Jérôme Boutet, Alain Goret, Viviane Cattin
Abstract:
Early detection of anomalies in data centers is important to reduce downtimes and the costs of periodic maintenance. However, there is little research on this topic and even less on the fusion of sensor data for the detection of abnormal events. The goal of this paper is to propose a method for anomaly detection in data centers by combining sensor data (temperature, humidity, power) and deep learning models. The model described in the paper uses one autoencoder per sensor to reconstruct the inputs. The autoencoders contain Long Short-Term Memory (LSTM) layers and are trained using the normal samples of the relevant sensors selected by correlation analysis. The difference signal between the input and its reconstruction is then used to classify the samples using feature extraction and a random forest classifier. The data measured by the sensors of a data center between January 2019 and May 2020 are used to train the model, while the data between June 2020 and May 2021 are used to assess it. The performance of the model is assessed a posteriori through the F1-score by comparing detected anomalies with the data center’s history. The proposed model outperforms the state-of-the-art reconstruction method, which uses only one autoencoder taking multivariate sequences and detects an anomaly with a threshold on the reconstruction error, with an F1-score of 83.60% compared to 24.16%. Keywords: anomaly detection, autoencoder, data centers, deep learning
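One of the per-sensor LSTM autoencoders described above can be sketched in Keras: it is trained on normal windows only, and the reconstruction error of each new window then becomes a feature for the downstream random-forest classifier. Window length, layer sizes, and the synthetic temperature data are illustrative assumptions.

```python
# Condensed sketch of a single per-sensor LSTM autoencoder; the paper trains one
# such model per sensor and feeds reconstruction errors to a random forest.
import numpy as np
from tensorflow.keras import Model, layers

WINDOW, FEATURES = 60, 1                  # 60 time steps of a single sensor

inputs = layers.Input(shape=(WINDOW, FEATURES))
encoded = layers.LSTM(32)(inputs)                               # encoder
repeated = layers.RepeatVector(WINDOW)(encoded)
decoded = layers.LSTM(32, return_sequences=True)(repeated)      # decoder
outputs = layers.TimeDistributed(layers.Dense(FEATURES))(decoded)

autoencoder = Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")

normal_windows = np.random.normal(22.0, 0.3, size=(500, WINDOW, FEATURES))   # e.g. temperature
autoencoder.fit(normal_windows, normal_windows, epochs=5, batch_size=32, verbose=0)

test_windows = np.random.normal(22.0, 0.3, size=(10, WINDOW, FEATURES))
reconstruction_error = np.mean((autoencoder.predict(test_windows) - test_windows) ** 2, axis=(1, 2))
print(reconstruction_error)   # high values would flag candidate anomalies
```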
Procedia PDF Downloads 194
25626 Integration Process and Analytic Interface of different Environmental Open Data Sets with Java/Oracle and R
Authors: Pavel H. Llamocca, Victoria Lopez
Abstract:
The main objective of our work is the comparative analysis of environmental data from Open Data bases belonging to different governments. This means integrating data from various different sources. Nowadays, many governments intend to publish thousands of data sets for people and organizations to use. As a result, the number of applications based on Open Data is increasing. However, each government has its own procedures for publishing its data, and this causes a variety of data set formats because there are no international standards specifying the formats of the data sets from Open Data bases. Due to this variety of formats, we must build a data integration process that is able to put together all kinds of formats. Some software tools have been developed to support the integration process, e.g., Data Tamer and Data Wrangler. The problem with these tools is that they need a data scientist to take part in the integration process as a final step. In our case, we don’t want to depend on a data scientist, because environmental data are usually similar and these processes can be automated by programming. The main idea of our tool is to build Hadoop procedures adapted to the data sources of each government in order to achieve automated integration. Our work focuses on environmental data such as temperature, energy consumption, air quality, solar radiation, wind speed, etc. For the past two years, the government of Madrid has been publishing its Open Data bases relating to environmental indicators in real time. In the same way, other governments (like Andalucia or Bilbao) have published Open Data sets relating to the environment. But all of those data sets have different formats; our solution is able to integrate all of them and, furthermore, allows the user to perform and visualize analyses over the real-time data. Once the integration task is done, all the data from any government have the same format and the analysis process can be initiated in a computationally better way. So the tool presented in this work has two goals: 1. Integration process; and 2. Graphic and analytic interface. As a first approach, the integration process was developed using Java and Oracle, and the graphic and analytic interface with Java (jsp). However, in order to open up our software tool, as a second approach, we also developed an implementation in the R language as a mature open source technology. R is a really powerful open source programming language that allows us to process and analyze a huge amount of data with high performance. There are also R libraries for building a graphic interface, like shiny. A performance comparison between both implementations was made, and no significant differences were found. In addition, our work provides an Official Real-Time Integrated Data Set about Environment Data in Spain to any developer so that they can build their own applications. Keywords: open data, R language, data integration, environmental data
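The core integration idea, one small adapter per data source mapping its particular format onto a shared schema so all records can be analysed together, can be sketched with pandas. File names, column names, and formats below are placeholders, not the real Madrid or Bilbao feeds, and this is not the authors' Java/Oracle or Hadoop implementation.

```python
# Toy sketch of per-source adapters mapping heterogeneous open-data files onto a
# common schema. Column names and files are hypothetical placeholders.
import pandas as pd

COMMON_COLUMNS = ["timestamp", "station", "indicator", "value", "source"]

def adapt_madrid(path):
    df = pd.read_csv(path, sep=";")
    return pd.DataFrame({
        "timestamp": pd.to_datetime(df["FECHA"] + " " + df["HORA"], dayfirst=True),
        "station": df["ESTACION"],
        "indicator": df["MAGNITUD"],
        "value": df["VALOR"],
        "source": "madrid",
    })[COMMON_COLUMNS]

def adapt_bilbao(path):
    df = pd.read_json(path)
    return pd.DataFrame({
        "timestamp": pd.to_datetime(df["date"]),
        "station": df["sensorId"],
        "indicator": df["measure"],
        "value": df["reading"],
        "source": "bilbao",
    })[COMMON_COLUMNS]

# Usage (with real files in place):
# integrated = pd.concat([adapt_madrid("madrid_air.csv"), adapt_bilbao("bilbao_air.json")])
# integrated.groupby(["source", "indicator"])["value"].describe()
```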
Procedia PDF Downloads 315
25625 OptiBaha: Design of a Web Based Analytical Tool for Enhancing Quality of Education at AlBaha University
Authors: Nadeem Hassan, Farooq Ahmad
Abstract:
The quality of education has a direct impact on the individual, the family, society, the economy in general, and mankind as a whole. Because of that, thousands of research papers and articles have been written on the quality of education, and billions of dollars have been spent, and continue to be spent, on research and on enhancing the quality of education. Academic program accreditation agencies define the various criteria for the quality of education; academic institutions obtain accreditation from these agencies to ensure that the degree programs offered at their institution meet international standards. This R&D aims to build a web-based analytical tool (OptiBaha) that finds the gaps in the AlBaha University education system by taking input from stakeholders, including students, faculty, staff, and management. The input/online data collected by this tool will be analyzed against core areas of education as proposed by accreditation agencies, the CAC of ABET and the NCAAA of KSA, including student background, language, culture, motivation, curriculum, teaching methodology, assessment and evaluation, performance and progress, facilities, availability of teaching materials, faculty qualifications, monitoring, policies and procedures, and more. Based on different analytical reports, gaps will be highlighted, and remedial actions will be proposed. If the tool is implemented and made available through a continuous process, the quality of education at AlBaha University can be enhanced; it will also help in fulfilling the criteria of accreditation agencies. The tool will be generic in nature and ultimately can be used by any academic institution. Keywords: academic quality, accreditation agencies, higher education, policies and procedures
Procedia PDF Downloads 301