Search results for: continuous data
22853 Relational Attention Shift on Images Using Bu-Td Architecture and Sequential Structure Revealing
Authors: Alona Faktor
Abstract:
In this work, we present a neural network-based computational model that can perform attention shifts according to a high-level instruction. The instruction specifies the type of attentional shift using an explicit geometrical relation. The instruction can also be of a cognitive nature, specifying more complex human-human, human-object, or object-object interactions. Applying this approach sequentially allows a structural description of an image to be obtained. A novel dataset of interacting humans and objects is constructed using a computer graphics engine. Using these data, we perform a systematic study of relational segmentation shifts.
Keywords: cognitive science, attention, deep learning, generalization
Procedia PDF Downloads 199
22852 Emergence of Information Centric Networking and Web Content Mining: A Future Efficient Internet Architecture
Authors: Sajjad Akbar, Rabia Bashir
Abstract:
With the growth in the number of users, Internet usage has evolved. Due to its key design principle, the Internet has expanded enormously in size. This tremendous growth has brought new applications (mobile video and cloud computing) as well as new user requirements, i.e., content distribution, mobility, ubiquity, security, and trust. Users are more interested in contents than in their communicating peer nodes. The current Internet architecture follows a host-centric networking approach, which is not well suited to these types of applications. With the growing use of multiple interactive applications, the host-centric approach is considered less efficient because it depends on physical location; for this reason, Information Centric Networking (ICN) is considered a potential future Internet architecture. It is an approach that introduces uniquely named data as a core Internet principle. It uses a receiver-oriented approach rather than a sender-oriented one and introduces a name-based information system at the network layer. Although ICN is considered a future Internet architecture, it has drawn considerable criticism, mainly concerning how it will manage the most relevant content. Web Content Mining (WCM) approaches can help with the appropriate data management of ICN. To address this issue, this paper contributes by (i) discussing multiple ICN approaches, (ii) analyzing different Web Content Mining approaches, and (iii) creating a new Internet architecture by merging ICN and WCM to solve the data management issues of ICN. From ICN, Content-Centric Networking (CCN) is selected for the new architecture, whereas the agent-based approach from Web Content Mining is selected to find the most appropriate data.
Keywords: agent based web content mining, content centric networking, information centric networking
Procedia PDF Downloads 475
22851 Delineating Concern Ground in Block Caving – Underground Mine Using Ground Penetrating Radar
Authors: Eric Sitorus, Septian Prahastudhi, Turgod Nainggolan, Erwin Riyanto
Abstract:
Mining by block or panel caving is a mining method that takes advantage of fractures within an ore body, coupled with gravity, to extract material from a predetermined column of ore. The caving column is weakened from beneath through the use of undercutting, after which the ore breaks up and is extracted from below in a continuous cycle. The nature of this method induces cyclical stresses on the pillars of excavations as stress is built up and released over time, which has a detrimental effect on both the installed ground support and the rock mass itself. Ground support capacity, especially on the production level, where the excavation void ratio is highest, is subjected to heavy loading. Strain beyond the elongation threshold of the support capacity can cause yielding, resulting in damage to excavations. Geotechnical engineers must evaluate not only the remnant capacity of ground support systems but also investigate the depth of rock mass yield within pillars, backs, and floors. Ground Penetrating Radar (GPR) is a geophysical method that can evaluate rock mass damage using electromagnetic waves. This paper illustrates a case study from the Grasberg mining complex where non-invasive information on the depth of damage and the condition of the remaining rock mass was required. GPR with a 100 MHz antenna was used to obtain images of the subsurface to determine rehabilitation requirements prior to recommencing production activities. The GPR surveys were used to calibrate the reflection coefficient response of varying rock mass conditions to known Rock Quality Designation (RQD) parameters observed at the mine. The calibrated GPR survey allowed site engineers to map subsurface conditions and plan rehabilitation accordingly.
Keywords: block caving, ground penetrating radar, reflectivity, RQD
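The calibration step described above can be illustrated with a minimal sketch, assuming hypothetical reflection-coefficient/RQD pairs rather than the Grasberg survey data: a least-squares line maps picked reflectivity amplitudes to RQD, which is then used to band new traces.

```python
import numpy as np

# Hypothetical calibration pairs: |reflection coefficient| picked from GPR
# traces at depths where RQD was logged from drill core (assumed values).
refl = np.array([0.05, 0.12, 0.18, 0.25, 0.31, 0.40])
rqd = np.array([92.0, 78.0, 65.0, 51.0, 40.0, 22.0])  # percent

# Least-squares linear calibration: RQD ~ a*refl + b
a, b = np.polyfit(refl, rqd, deg=1)

def rqd_from_reflectivity(r):
    """Map a reflection-coefficient amplitude to an estimated RQD."""
    return a * r + b

# Classify a new trace sample using common RQD condition bands
r_new = 0.22
est = rqd_from_reflectivity(r_new)
band = "poor" if est < 50 else "fair" if est < 75 else "good"
print(f"estimated RQD = {est:.1f}% -> {band} rock mass")
```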
Procedia PDF Downloads 135
22850 One-Class Classification Approach Using Fukunaga-Koontz Transform and Selective Multiple Kernel Learning
Authors: Abdullah Bal
Abstract:
This paper presents a one-class classification (OCC) technique based on the Fukunaga-Koontz Transform (FKT) for binary classification problems. The FKT is originally a powerful tool for feature selection and ordering in two-class problems. To utilize the standard FKT for the data domain description problem (i.e., one-class classification), in this paper, a set of non-class samples lying outside the boundary of the positive (target) class, formed with limited training data, has been constructed synthetically. The tunnel-like decision boundary around the upper and lower borders of the target class samples has been designed using the statistical properties of the feature vectors belonging to the training data. To capture higher-order statistics of the data and increase discrimination ability, the proposed method, termed one-class FKT (OC-FKT), has been extended to its nonlinear version via kernel machines and is referred to as OC-KFKT for short. Multiple kernel learning (MKL) is a family of machine learning methods that tries to find an optimal combination of a set of sub-kernels to achieve a better result. However, the discriminative ability of some of the base kernels may be low, and an OC-KFKT designed with such kernels leads to unsatisfactory classification performance. To address this problem, the quality of the sub-kernels should be evaluated, and the weak kernels must be discarded before the final decision-making process. MKL/OC-FKT and selective MKL/OC-FKT frameworks have been designed, inspired by ensemble learning (EL), to weight and then select the sub-classifiers using the discriminability and diversity measured by eigenvalue ratios. The eigenvalue ratios have been assessed based on their regions in the FKT subspaces. The comparative experiments, performed on various low- and high-dimensional data against state-of-the-art algorithms, confirm the effectiveness of our techniques, especially under small sample size (SSS) conditions.
Keywords: ensemble methods, fukunaga-koontz transform, kernel-based methods, multiple kernel learning, one-class classification
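For readers unfamiliar with the FKT, the following is a minimal sketch of the linear two-class transform on synthetic data; the kernelized OC-KFKT, the tunnel-like boundary construction, and the MKL weighting from the abstract are not reproduced, and all data values are assumed.

```python
import numpy as np

def fukunaga_koontz(X1, X2):
    """Basic two-class FKT: returns the shared eigenbasis and the class-1
    eigenvalues (the class-2 eigenvalues are 1 minus these)."""
    S1 = np.cov(X1, rowvar=False)
    S2 = np.cov(X2, rowvar=False)
    # Whiten the summed scatter matrix P = S1 + S2
    evals, Phi = np.linalg.eigh(S1 + S2)
    keep = evals > 1e-10                       # drop null directions
    W = Phi[:, keep] / np.sqrt(evals[keep])
    # In the whitened space, S1' and S2' share eigenvectors, with
    # eigenvalues pairing as lam (class 1) and 1 - lam (class 2).
    lam, V = np.linalg.eigh(W.T @ S1 @ W)
    return W @ V, lam

rng = np.random.default_rng(0)
X_target = rng.normal(0.0, 1.0, size=(200, 5))    # positive (target) class
X_outside = rng.normal(0.0, 3.0, size=(200, 5))   # synthetic non-class samples
basis, lam = fukunaga_koontz(X_target, X_outside)
print("class-1 eigenvalue ratios:", np.round(lam, 3))
```

Directions with eigenvalues near 1 best represent the target class, and those near 0 the non-class samples, which is what makes the eigenvalue ratios usable for ranking sub-classifiers.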
Procedia PDF Downloads 24
22849 A Simple Algorithm for Real-Time 3D Capturing of an Interior Scene Using a Linear Voxel Octree and a Floating Origin Camera
Authors: Vangelis Drosos, Dimitrios Tsoukalos, Dimitrios Tsolis
Abstract:
We present a simple algorithm for capturing a 3D scene (focused on the use of mobile device cameras in the context of augmented/mixed reality) by using a floating origin camera solution and storing the resulting information in a linear voxel octree. Data are derived from point clouds captured by a mobile device camera. For the purposes of this paper, we assume a scene of fixed size (known to us or determined beforehand) and a fixed voxel resolution. The resulting data are stored in a linear voxel octree using a hashtable. We commence by briefly discussing the logic behind floating origin approaches and the use of linear voxel octrees for efficient storage. Following that, we present the algorithm for translating captured feature points into voxel data in the context of a fixed-origin world and storing them. Finally, we discuss potential applications and areas of future development and improvement to the efficiency of our solution.
Keywords: voxel, octree, computer vision, XR, floating origin
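A minimal sketch of the storage scheme described here: feature points given as offsets from a floating camera origin are translated into a fixed-origin world, quantized at a fixed voxel resolution, and stored in a hashtable keyed by Morton codes, which is the usual way to linearize an octree. The scene size, depth, and sample points are assumptions.

```python
import numpy as np

SCENE_SIZE = 8.0    # metres, fixed and known beforehand (assumed)
DEPTH = 8           # octree depth -> (2**DEPTH)^3 voxel grid
RES = SCENE_SIZE / (1 << DEPTH)

def morton3d(ix, iy, iz):
    """Interleave the bits of three voxel indices into one Morton key,
    which serves as the linear-octree address."""
    key = 0
    for b in range(DEPTH):
        key |= ((int(ix) >> b) & 1) << (3 * b)
        key |= ((int(iy) >> b) & 1) << (3 * b + 1)
        key |= ((int(iz) >> b) & 1) << (3 * b + 2)
    return key

voxels = {}  # hashtable: Morton key -> hit count

def insert_point(p, origin):
    """Translate a captured feature point into the fixed-origin world
    and mark the voxel containing it."""
    q = np.asarray(p) + np.asarray(origin)   # undo the floating origin
    idx = np.clip((q / RES).astype(int), 0, (1 << DEPTH) - 1)
    key = morton3d(*idx)
    voxels[key] = voxels.get(key, 0) + 1

# Feature points expressed as offsets from the camera's floating origin
camera_origin = (1.0, 0.5, 2.0)
for pt in [(0.10, 0.20, 0.05), (0.11, 0.21, 0.05), (2.0, 1.0, 1.5)]:
    insert_point(pt, camera_origin)
print(f"occupied voxels: {len(voxels)}")
```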
Procedia PDF Downloads 133
22848 Levels of Microcystin in the Coastal Waters of Nigeria
Authors: Medina Kadiri
Abstract:
Blue-green algae, otherwise called cyanobacteria, produce an array of biotoxins grouped into five categories, notably hepatotoxins, neurotoxins, cytotoxins, dermatotoxins, and irritant toxins. Microcystins, which are examples of hepatotoxins produced by blue-green algae, comprise the most common group of the cyanobacterial toxins. Blue-green algae flourish in aquatic environments, whether marine, brackish, or freshwater, producing blooms in different forms, such as microscopic forms, mats, or unsightly odoriferous scums. Microcystin biotoxins cause a plethora of animal and human hazards such as liver damage/cirrhosis and cancer, kidney damage, dermatitis, tinnitus, gastroenteritis, sore throat, nausea, myalgia, neurological problems, respiratory irritation, and death. Water samples were collected from coastal regions of Nigeria in March 2014, June 2014, October 2014, and January 2015 and analyzed with Enzyme Linked Immunosorbent Assay (ELISA) kits. Microcystin biotoxin was recorded at all sites during both dry and wet seasons. The range of microcystins found started at 0.000041. There was a seasonal trend of increasing microcystin concentrations from March until October and a decrease thereafter. Generally, in the oceanic waters, microcystin levels were highest at Cross Rivers in March and January, at Barbeach in June, and at Lekki in October. In the adjoining riverine ecosystems, on the other hand, the highest concentrations of microcystin were observed at Akwa Ibom in March, June, and October and in Bayelsa in January. Continuous monitoring and screening of coastal water bodies are suggested to minimize the health risks of cyanobacterial biotoxins to the coastal communities of Nigeria.
Keywords: biotoxins, harmful algae, marine, microcystin, Nigeria
Procedia PDF Downloads 284
22847 Effects of Zinc and Vitamin A Supplementation on Prognostic Markers and Treatment Outcomes of Adults with Pulmonary Tuberculosis: A Systematic Review and Meta-Analysis
Authors: Fasil Wagnew, Kefyalew Addis Alene, Setegn Eshetie, Tom Wingfield, Matthew Kelly, Darren Gray
Abstract:
Introduction: Undernutrition is a major and under-appreciated risk factor for TB, estimated to be responsible for 1.9 million TB cases per year globally. The effectiveness of micronutrient supplementation on TB treatment outcomes and its prognostic markers, such as sputum conversion and serum zinc, retinol, and hemoglobin levels, has been poorly understood. This systematic review and meta-analysis aimed to determine the association between zinc and vitamin A supplementation and TB treatment outcomes and its prognostic markers. Methods: A systematic literature search for randomized controlled trials (RCTs) was performed in the PubMed, Embase, and Scopus databases. Meta-analysis with a random-effects model was performed to estimate the risk ratio (RR) and mean difference (MD), with a 95% confidence interval (CI), for dichotomous and continuous outcomes, respectively. Results: Our search identified 2,195 records. Of these, nine RCTs consisting of 1,375 participants were included in the final analyses. Among adults with pulmonary TB, zinc (RR: 0.94, 95% CI: 0.86, 1.03), vitamin A (RR: 0.90, 95% CI: 0.80, 1.01), and combined zinc and vitamin A (RR: 0.98, 95% CI: 0.89, 1.08) supplementation were not significantly associated with TB treatment success. Combined zinc and vitamin A supplementation was significantly associated with increased sputum smear conversion at 2 months (RR: 1.16, 95% CI: 1.03, 1.32), serum zinc levels at 2 months (MD: 0.86 µmol/L, 95% CI: 0.14, 1.57), serum retinol levels at 2 months (MD: 0.06 µmol/L, 95% CI: 0.04, 0.08) and 6 months (MD: 0.12 µmol/L, 95% CI: 0.10, 0.14), and serum hemoglobin levels at 6 months (MD: 0.29 µg/dL, 95% CI: 0.08, 0.51) among adults with TB. Conclusions: Providing zinc and vitamin A supplementation to adults with pulmonary TB during treatment may increase early sputum smear conversion and serum zinc, retinol, and hemoglobin levels. However, the use of zinc, vitamin A, or both was not associated with TB treatment success.
Keywords: zinc and vitamin A supplementation, tuberculosis, treatment outcomes, meta-analysis, RCT
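As a sketch of the pooling step, the following implements the standard DerSimonian-Laird random-effects estimator for log risk ratios; the study inputs are hypothetical placeholders, not the nine RCTs analyzed in the paper.

```python
import numpy as np

def pool_log_rr(log_rr, se):
    """DerSimonian-Laird random-effects pooling of study log risk ratios."""
    log_rr, se = np.asarray(log_rr), np.asarray(se)
    w = 1.0 / se**2                                  # fixed-effect weights
    mu_fe = np.sum(w * log_rr) / np.sum(w)
    q = np.sum(w * (log_rr - mu_fe) ** 2)            # Cochran's Q
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(log_rr) - 1)) / c)     # between-study variance
    w_re = 1.0 / (se**2 + tau2)                      # random-effects weights
    mu = np.sum(w_re * log_rr) / np.sum(w_re)
    se_mu = np.sqrt(1.0 / np.sum(w_re))
    lo, hi = mu - 1.96 * se_mu, mu + 1.96 * se_mu
    return np.exp([mu, lo, hi])

# Hypothetical per-study log(RR) values and standard errors
rr, lo, hi = pool_log_rr([-0.05, -0.12, 0.02], [0.06, 0.09, 0.07])
print(f"pooled RR = {rr:.2f} (95% CI {lo:.2f}, {hi:.2f})")
```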
Procedia PDF Downloads 173
22846 The Effect of Excel on Undergraduate Students’ Understanding of Statistics and the Normal Distribution
Authors: Masomeh Jamshid Nejad
Abstract:
Nowadays, statistical literacy is no longer merely a desirable skill but an essential one, with broad applications across diverse fields, especially in operational decision areas such as business management, finance, and economics. As such, learning and a deep understanding of statistical concepts are essential in the context of business studies. One of the crucial topics in statistical theory and its application is the normal distribution, often called the bell-shaped curve. To interpret data and conduct hypothesis tests, comprehending the properties of the normal distribution (the mean and standard deviation) is essential for business students. This requires undergraduate students in the fields of economics and business management to visualize and work with data following a normal distribution. Since technology is interconnected with education these days, it is important to teach statistics topics to undergraduate students in the context of Python, RStudio, and Microsoft Excel. This research endeavours to shed light on the effect of Excel-based instruction on learners’ knowledge of statistics, specifically the central concept of the normal distribution. Two groups of undergraduate students (from the Business Management program) were compared in this research study: one group underwent Excel-based instruction, and the other relied only on traditional teaching methods. We analyzed experimental data and BBA participants’ responses to statistics-related questions focusing on the normal distribution, including its key attributes, such as the mean and standard deviation. The results of our study indicate that exposing students to Excel-based learning supports them in comprehending statistical concepts more effectively than the other group of learners (taught with the traditional method). In addition, students who received Excel-based instruction showed greater ability in visualizing and interpreting data concentrated on the normal distribution.
Keywords: statistics, excel-based instruction, data visualization, pedagogy
Procedia PDF Downloads 55
22845 Novel Recommender Systems Using Hybrid CF and Social Network Information
Authors: Kyoung-Jae Kim
Abstract:
Collaborative Filtering (CF) is a popular technique for personalization in the E-commerce domain to reduce information overload. In general, CF provides a list of recommended items based on similar users’ preferences from the user-item matrix and uses them to predict the focal user’s preference for particular items. Many real-world recommender systems use CF techniques because of their excellent accuracy and robustness. However, CF has some limitations, including sparsity problems and the high dimensionality of the user-item matrix. In addition, traditional CF does not consider the emotional interaction between users. In this study, we propose recommender systems using social network information and singular value decomposition (SVD) to alleviate some of these limitations. The purpose of this study is to reduce the dimensionality of the dataset using SVD and to improve the performance of CF by using emotional information from the social network data of the focal user. We test the usability of the hybrid CF, SVD, and social network information model using real-world data. The experimental results show that the proposed model outperforms conventional CF models.
Keywords: recommender systems, collaborative filtering, social network information, singular value decomposition
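A minimal sketch of the idea, assuming toy data: a rank-k SVD of the mean-centred user-item matrix supplies the dimensionality-reduced CF prediction, which is then blended with a simple social signal. The friends list and the blend weight alpha are assumptions, and the paper's emotional-information model is only approximated here by the friend-mean term.

```python
import numpy as np

# Toy user-item rating matrix (0 = unrated); rows are users
R = np.array([[5, 4, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)

# Mean-centre the observed ratings, then take a rank-k SVD
mask = R > 0
user_mean = np.where(mask.sum(1) > 0, R.sum(1) / mask.sum(1), 0.0)
Rc = np.where(mask, R - user_mean[:, None], 0.0)
U, s, Vt = np.linalg.svd(Rc, full_matrices=False)
k = 2                                           # reduced dimensionality
R_hat = user_mean[:, None] + (U[:, :k] * s[:k]) @ Vt[:k]

# Simple stand-in for the social/emotional signal: blend the model
# prediction with the mean prediction for the focal user's friends.
friends_of = {0: [1, 2]}                        # hypothetical social ties
u, i, alpha = 0, 2, 0.7
social = np.mean([R_hat[f, i] for f in friends_of[u]])
pred = alpha * R_hat[u, i] + (1 - alpha) * social
print(f"predicted rating of user {u} for item {i}: {pred:.2f}")
```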
Procedia PDF Downloads 292
22844 Application of Customized Bioaugmentation Inocula to Alleviate Ammonia Toxicity in CSTR Anaerobic Digesters
Authors: Yixin Yan, Miao Yan, Irini Angelidaki, Ioannis Fotidis
Abstract:
Ammonia, which derives from the degradation of urea and protein substrates, is the major toxicant of commercial anaerobic digestion reactors, causing losses of up to one-third of their practical biogas production, which reflects directly on the overall revenue of the plants. The current experimental work aims to alleviate ammonia inhibition in the anaerobic digestion (AD) process by developing an innovative bioaugmentation method based on ammonia-tolerant methanogenic consortia. The ammonia-tolerant consortia were cultured in batch reactors and immobilized together with biochar in agar (customized inocula). Three continuous stirred-tank reactors (CSTR), fed with the organic fraction of municipal solid waste at a hydraulic retention time of 15 days and operated under thermophilic (55°C) conditions, were assessed. After an ammonia shock of 4 g NH4+-N L-1, the customized inocula were bioaugmented into the CSTR reactors to alleviate the ammonia toxicity effect on the AD process. The recovery rates of methane production and methanogenic activity will be assessed to evaluate the bioaugmentation performance, while 16S rRNA gene sequencing will be used to reveal the microbial community changes brought about by bioaugmentation. At the microbial level, the community structures of the reactors will be analysed to find the mechanism of bioaugmentation. Changes in hydrogen formation potential will be used to predict direct interspecies electron transfer (DIET) between ammonia-tolerant methanogens and syntrophic bacteria. This experimental work is expected to create bioaugmentation inocula that will be easy to obtain, transport, handle, and apply in AD reactors to efficiently alleviate ammonia toxicity, without altering any of the other operational parameters, including the ammonia-rich feedstocks.
Keywords: artisanal fishing waste, acidogenesis, volatile fatty acids, pH, inoculum/substrate ratio
Procedia PDF Downloads 128
22843 Comparison of Developed Statokinesigram and Marker Data Signals by Model Approach
Authors: Boris Barbolyas, Kristina Buckova, Tomas Volensky, Cyril Belavy, Ladislav Dedik
Abstract:
Background: Human balance control is often studied on the basis of the statokinesigram. In this study, the approach to human postural reaction analysis is based on combining the stabilometry output signal with retroreflective marker data signal processing, analysis, and interpretation. The study also shows another original application of the Method of Developed Statokinesigram Trajectory (MDST). Methods: In this study, the participants maintained quiet bipedal standing for 10 s on a stabilometry platform. Subsequently, bilateral vibration stimuli were applied to the Achilles tendons over a 20 s interval. The vibration stimuli caused the human postural system to assume a new pseudo-steady state. The vibration frequencies were 20, 60, and 80 Hz. The participants' body segments - head, shoulders, hips, knees, ankles, and little fingers - were marked by 12 retroreflective markers. Marker positions were scanned by the six-camera BTS SMART DX system. Registration of the postural reaction lasted 60 s, with a sampling frequency of 100 Hz. The MDST was used to process the measured data. Regression analysis of the developed statokinesigram trajectory (DST) data and the retroreflective marker developed trajectory (DMT) data was used to find out which marker trajectories correlate most with the stabilometry platform output signals. Scaling coefficients (λ) between DST and DMT were also evaluated by linear regression analysis. Results: Scaling coefficients for the marker trajectories were identified for all body segments. Head marker trajectories reached the maximal value of the scaling coefficient, and ankle marker trajectories the minimal value. The hip, knee, and ankle markers were approximately symmetrical in terms of the scaling coefficient. Notable differences in the scaling coefficient were detected in the head and shoulder marker trajectories, which were not symmetrical. The model of postural system behavior was identified by the MDST. Conclusion: The value of the scaling factor identifies which body segment is predisposed to postural instability. Hypothetically, if the statokinesigram represents the overall response of the human postural system to vibration stimuli, then the marker data represent the particular postural responses. It can be assumed that the cumulative sum of the particular marker postural responses equals the statokinesigram.
Keywords: center of pressure (CoP), method of developed statokinesigram trajectory (MDST), model of postural system behavior, retroreflective marker data
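The scaling coefficient λ between a developed statokinesigram trajectory (DST) and a marker developed trajectory (DMT) reduces to a no-intercept least-squares regression, as the sketch below shows on synthetic signals (the measured 100 Hz data are not reproduced; the signals and noise levels are assumptions).

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(0, 60, 0.01)                 # 60 s registration at 100 Hz

# Stand-ins for the developed trajectories: DST from the platform and
# one DMT per marker (synthetic random-walk signals, not measured data).
dst = np.cumsum(rng.normal(size=t.size))
dmt_head = 1.8 * dst + rng.normal(scale=5.0, size=t.size)
dmt_ankle = 0.3 * dst + rng.normal(scale=5.0, size=t.size)

def scaling_coefficient(dmt, dst):
    """lambda minimizing ||dmt - lambda*dst||^2 (no-intercept regression)."""
    return float(np.dot(dst, dmt) / np.dot(dst, dst))

for name, dmt in [("head", dmt_head), ("ankle", dmt_ankle)]:
    lam = scaling_coefficient(dmt, dst)
    print(f"{name}: lambda = {lam:.2f}")
```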
Procedia PDF Downloads 351
22842 Improved Wearable Monitoring and Treatment System for Parkinson’s Disease
Authors: Bulcha Belay Etana, Benny Malengier, Janarthanan Krishnamoorthy, Timothy Kwa, Lieva VanLangenhove
Abstract:
Electromyography measures the electrical activity of muscles using surface or needle electrodes to monitor various disease conditions. Recent developments in the acquisition of electromyogram signals using textile electrodes facilitate wearable devices, enabling patients to monitor and control their health status outside of healthcare facilities. Here, we have developed and tested wearable textile electrodes to acquire electromyography signals from patients suffering from Parkinson’s disease and incorporated a feedback-control system to relieve muscle cramping through a thermal stimulus. In brief, stainless-steel textile electrodes were knitted into a fabric sleeve, and their electrical characteristics, such as the signal-to-noise ratio, were compared with those of traditional electrodes. To relieve muscle cramping, a heating element made of stainless-steel conductive yarn sewn onto cotton fabric, coupled with a vibration system, was developed. The system integrated a microcontroller and a MyoWare muscle sensor to activate the heating element as well as the vibration motor when cramping occurs and to deactivate them when the cramping subsides. An optimum therapeutic temperature of 35.5 °C is maintained by continuous temperature monitoring, which deactivates the heating system when this threshold is reached. The textile electrode exhibited a signal-to-noise ratio of 6.38 dB, comparable to the traditional electrode's 7.05 dB. For a given 9 V power supply, the developed heating element took about 6 minutes to reach the optimum temperature.
Keywords: smart textile system, wearable electronic textile, electromyography, heating textile, vibration therapy, Parkinson’s disease
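A minimal sketch of the feedback logic described above, written as a plain Python simulation rather than microcontroller firmware: the EMG threshold and sensor readings are assumptions, while the 35.5 °C ceiling is the paper's stated therapeutic temperature.

```python
EMG_THRESHOLD = 0.6      # normalized cramp-detection level (assumed)
TEMP_LIMIT = 35.5        # deg C, therapeutic ceiling from the paper

def control_step(emg_level, temp_c):
    """One pass of the feedback loop: heat and vibrate while cramping,
    stop heating at the temperature limit, stop both when the cramp subsides."""
    cramping = emg_level > EMG_THRESHOLD
    heater = cramping and temp_c < TEMP_LIMIT
    vibration = cramping
    return heater, vibration

# Hypothetical sensor readings: cramp starts, heat builds, cramp subsides
for emg, temp in [(0.2, 30.0), (0.8, 30.5), (0.9, 35.6), (0.3, 35.0)]:
    heater, vibration = control_step(emg, temp)
    print(f"EMG={emg:.1f} T={temp:.1f}C -> heater={heater}, vibration={vibration}")
```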
Procedia PDF Downloads 108
22841 Multi-Model Super Ensemble Based Advanced Approaches for Monsoon Rainfall Prediction
Authors: Swati Bhomia, C. M. Kishtawal, Neeru Jaiswal
Abstract:
Traditionally, monsoon forecasts have encountered many difficulties that stem from numerous issues such as a lack of adequate upper-air observations, the mesoscale nature of convection, proper resolution, radiative interactions, planetary boundary layer physics, mesoscale air-sea fluxes, the representation of orography, etc. Uncertainties in any of these areas lead to large systematic errors. Global circulation models (GCMs), which are developed independently at different institutes, each carrying a somewhat different representation of the above processes, can be combined to reduce the collective local biases in space, time, and different variables across models. This is the basic concept behind the multi-model superensemble, which comprises a training and a forecast phase. The training phase learns from the recent past performance of the models and is used to determine statistical weights from a least-squares minimization via a simple multiple regression. These weights are then used in the forecast phase. The superensemble forecasts carry the highest skill compared to the simple ensemble mean, the bias-corrected ensemble mean, and the best of the participating member models. This approach is a powerful post-processing method for the estimation of weather forecast parameters, reducing the direct model output errors. Although it can be applied successfully to continuous parameters like temperature, humidity, wind speed, and mean sea level pressure, in this paper the approach is applied to rainfall, a parameter quite difficult to handle with standard post-processing methods due to its high temporal and spatial variability. The present study aims at the development of advanced superensemble schemes comprising 1-5 day daily precipitation forecasts from five state-of-the-art global circulation models (GCMs), i.e., the European Centre for Medium-Range Weather Forecasts (Europe), the National Centers for Environmental Prediction (USA), the China Meteorological Administration (China), the Canadian Meteorological Centre (Canada), and the U.K. Meteorological Office (U.K.), obtained from the THORPEX Interactive Grand Global Ensemble (TIGGE), which is one of the most complete data sets available. The novel approaches include a dynamical model selection approach, in which the superior models are selected from the participating member models at each grid point and for each forecast step in the training period. A multi-model superensemble based on training using similar conditions is also discussed; it rests on the assumption that training with similar types of conditions may provide better forecasts than the sequential training used in conventional multi-model ensemble (MME) approaches. Further, a variety of methods from the literature that incorporate a 'neighborhood' around each grid point to allow for spatial error or uncertainty have also been tested with the above-mentioned approaches. The comparison of these schemes against observations verifies that the newly developed approaches provide a more unified and skillful prediction of the summer monsoon (viz. June to September) rainfall compared to the conventional multi-model approach and the member models.
Keywords: multi-model superensemble, dynamical model selection, similarity criteria, neighborhood technique, rainfall prediction
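The training and forecast phases can be sketched in a few lines: member-model anomalies are regressed onto observed anomalies by least squares, and the resulting weights are applied to new forecasts. The data below are synthetic stand-ins for the TIGGE precipitation fields, and the sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_train, n_models = 120, 5                    # training days, member GCMs

# Hypothetical training data: observed rainfall and member forecasts,
# where each model carries its own systematic bias.
obs = rng.gamma(2.0, 4.0, n_train)
fcst = obs[:, None] + rng.normal(0, 3, (n_train, n_models)) \
       + rng.normal(0, 2, n_models)

# Training phase: least-squares weights on anomalies (multiple regression)
obs_mean, fcst_mean = obs.mean(), fcst.mean(axis=0)
A = fcst - fcst_mean
w, *_ = np.linalg.lstsq(A, obs - obs_mean, rcond=None)

def superensemble(new_fcst):
    """Forecast phase: weighted sum of member anomalies plus observed mean."""
    return obs_mean + (np.atleast_2d(new_fcst) - fcst_mean) @ w

new_day = obs_mean + rng.normal(0, 3, n_models)
print("simple ensemble mean :", float(np.mean(new_day)))
print("superensemble        :", float(superensemble(new_day)[0]))
```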
Procedia PDF Downloads 139
22840 Text Emotion Recognition by Multi-Head Attention based Bidirectional LSTM Utilizing Multi-Level Classification
Authors: Vishwanath Pethri Kamath, Jayantha Gowda Sarapanahalli, Vishal Mishra, Siddhesh Balwant Bandgar
Abstract:
Recognition of emotional information is essential in any form of communication. The growth of HCI (Human-Computer Interaction) in recent times underlines the importance of understanding the emotions expressed, which becomes crucial for improving the system or the interaction itself. In this research work, textual data are used for emotion recognition. Text, being the least expressive of the multimodal resources, poses various challenges, such as capturing contextual information and handling the sequential nature of language construction. In this work, we propose a neural architecture that resolves no fewer than 8 emotions from textual data derived from multiple datasets, using Google's pre-trained word2vec word embeddings and a multi-head attention-based bidirectional LSTM model with one-vs-all multi-level classification. The emotions targeted in this research are Anger, Disgust, Fear, Guilt, Joy, Sadness, Shame, and Surprise. Textual data from multiple datasets, such as ISEAR, GoEmotions, and Affect, were used to create the emotions dataset. Overlapping or conflicting data samples were handled with careful preprocessing. Our results show a significant improvement with this modeling architecture, with up to a 10-point improvement in recognizing some emotions.
Keywords: text emotion recognition, bidirectional LSTM, multi-head attention, multi-level classification, google word2vec word embeddings
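A minimal Keras sketch of the named components follows, assuming the layer sizes, head count, sequence length, and a randomly initialized embedding (the paper uses pre-trained word2vec vectors); the one-vs-all multi-level decision logic applied on top of the sigmoid outputs is not reproduced.

```python
import tensorflow as tf
from tensorflow.keras import layers

VOCAB, EMB_DIM, MAX_LEN, N_EMOTIONS = 20000, 300, 60, 8

inputs = layers.Input(shape=(MAX_LEN,), dtype="int32")
# In the paper the embedding matrix comes from pre-trained word2vec;
# here it is randomly initialized for brevity.
x = layers.Embedding(VOCAB, EMB_DIM)(inputs)
x = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(x)
# Multi-head self-attention over the BiLSTM hidden states
x = layers.MultiHeadAttention(num_heads=4, key_dim=64)(x, x)
x = layers.GlobalMaxPooling1D()(x)
x = layers.Dropout(0.3)(x)
# Sigmoid head: one independent (one-vs-all) output per emotion
outputs = layers.Dense(N_EMOTIONS, activation="sigmoid")(x)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```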
Procedia PDF Downloads 174
22839 The Mass Attenuation Coefficients, Effective Atomic Cross Sections, Effective Atomic Numbers and Electron Densities of Some Halides
Authors: Shivalinge Gowda
Abstract:
The total mass attenuation coefficients μ/ρ of some halides, such as NaCl, KCl, CuCl, NaBr, KBr, RbCl, AgCl, NaI, KI, AgBr, CsI, HgCl2, CdI2, and HgI2, were determined at photon energies of 279.2, 320.07, 514.0, 661.6, 1115.5, 1173.2, and 1332.5 keV in a well-collimated, narrow-beam, good-geometry set-up using a high-resolution, hyper-pure germanium detector. The mass attenuation coefficients and the effective atomic cross sections are found to be in good agreement with the XCOM values. From these mass attenuation coefficients, the effective atomic cross sections σa of the compounds were determined. These effective atomic cross section σa data were then used to compute the effective atomic numbers Zeff. For this, the interpolation of total attenuation cross sections of photons of energy E in elements of atomic number Z was performed using logarithmic regression analysis of the data measured by the authors and reported earlier for the above energies, along with XCOM data for standard energies. The best-fit coefficients in the photon energy ranges of 250 to 350 keV, 350 to 500 keV, 500 to 700 keV, 700 to 1000 keV, and 1000 to 1500 keV, obtained by a piecewise interpolation method, were then used to find the Zeff of the compounds from their effective atomic cross sections σa. Using these Zeff values, the electron densities Nel of the halides were also determined. The present Zeff and Nel values of the halides are found to be in good agreement with the values calculated from XCOM data and other available published values.
Keywords: mass attenuation coefficient, atomic cross-section, effective atomic number, electron density
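The interpolation step can be sketched as follows: a logarithmic regression ln σ = a + b ln Z is fitted over one energy's elemental cross sections and inverted to read off Zeff from a compound's measured σa. The numbers below are synthetic placeholders for the XCOM-based values, and the real procedure is piecewise over the five energy ranges rather than a single fit.

```python
import numpy as np

# Hypothetical elemental cross sections sigma(Z) at one photon energy,
# standing in for the XCOM-based values (barns/atom).
Z = np.array([11, 17, 19, 29, 35, 37, 47, 48, 53, 55, 80])
sigma = 0.08 * Z**1.9            # synthetic smooth trend for illustration

# Logarithmic regression: ln(sigma) = a + b*ln(Z)
b, a = np.polyfit(np.log(Z), np.log(sigma), deg=1)

def z_eff(sigma_a):
    """Invert the fit to get the effective atomic number corresponding
    to a compound's measured effective atomic cross section."""
    return float(np.exp((np.log(sigma_a) - a) / b))

sigma_measured = 55.0            # example effective cross section
print(f"Zeff ~ {z_eff(sigma_measured):.1f}")
```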
Procedia PDF Downloads 377
22838 Developing a Culturally Acceptable End of Life Survey (the VOICES-ESRD/Thai Questionnaire) for Evaluation Health Services Provision of Older Persons with End-Stage Renal Disease (ESRD) in Thailand
Authors: W. Pungchompoo, A. Richardson, L. Brindle
Abstract:
Background: The development of a culturally acceptable end-of-life survey (the VOICES-ESRD/Thai questionnaire) provides an essential instrument for evaluating health services provision for older persons with ESRD in Thailand. The focus of the questionnaire was on symptoms, symptom control, and the health care needs of older people with ESRD who are managed without dialysis. Objective: The objective of this study was to develop and adapt VOICES to make it suitable for use in a population survey in Thailand. Methods: A mixed-methods exploratory sequential design was used, focused on modifying the instrument. Data collection: A cognitive interviewing technique was implemented, using two cycles of data collection with a sample of 10 bereaved carers and a prototype of the Thai VOICES questionnaire. The qualitative findings were used to modify the developing questionnaire. Data analysis: The data were analysed using content analysis. Results: Revisions to the prototype questionnaire were made, and the results were used to adapt the VOICES questionnaire for use in a population-based survey with older ESRD patients in Thailand. Conclusions: A culturally specific questionnaire was generated during this second phase, and issues with the questionnaire design were rectified.
Keywords: VOICES-ESRD/Thai questionnaire, cognitive interviewing, end of life survey, health services provision, older persons with ESRD
Procedia PDF Downloads 286
22837 Cost Sensitive Feature Selection in Decision-Theoretic Rough Set Models for Customer Churn Prediction: The Case of Telecommunication Sector Customers
Authors: Emel Kızılkaya Aydogan, Mihrimah Ozmen, Yılmaz Delice
Abstract:
In recent years, the telecommunications sector has been undergoing change and continuous development in the global market. In this sector, churn analysis techniques are commonly used for analysing why some customers terminate their service subscriptions prematurely. Customer churn is of utmost significance in this sector since it causes considerable business loss, and many companies conduct various studies to prevent losses while increasing customer loyalty. Although a large quantity of accumulated data is available in this sector, its usefulness is limited by data quality and relevance. In this paper, a cost-sensitive feature selection framework is developed with the aim of obtaining the feature reducts to predict customer churn. The framework is a cost-based, optional pre-processing stage that removes redundant features for churn management. This cost-based feature selection algorithm is applied in a telecommunication company in Turkey, and the results obtained with the algorithm are presented.
Keywords: churn prediction, data mining, decision-theoretic rough set, feature selection
Procedia PDF Downloads 449
22836 Applying Pre-Accident Observational Methods for Accident Assessment and Prediction at Intersections in Norrkoping City in Sweden
Authors: Ghazwan Al-Haji, Adeyemi Adedokun
Abstract:
Traffic safety at intersections is of major concern, given the fact that accidents occur randomly in time and space. It is necessary to judge whether an intersection is dangerous or not based on short-term observations, rather than waiting many years to assess historical accident data. There are active and pro-active road infrastructure safety methods for assessing safety at intersections. This study aims to investigate the use of quantitative and qualitative pre-accident observational methods as best practice for accident prediction, future black spot identification, and treatment. Historical accident data from STRADA (the Swedish Traffic Accident Data Acquisition) were used for Norrkoping city in Sweden. The ADT (Average Daily Traffic), capacity, and speed were used to predict accident rates. Locations with the highest accident records and predicted accident counts were identified and then audited qualitatively using a Street Audit. The results from these quantitative and qualitative methods were analyzed, validated, and compared. The paper provides recommendations on the methods used as well as on how to reduce accident occurrence at the chosen intersections.
Keywords: intersections, traffic conflict, traffic safety, street audit, accidents predictions
Procedia PDF Downloads 235
22835 Two-Stage Estimation of Tropical Cyclone Intensity Based on Fusion of Coarse and Fine-Grained Features from Satellite Microwave Data
Authors: Huinan Zhang, Wenjie Jiang
Abstract:
Accurate estimation of tropical cyclone intensity is of great importance for disaster prevention and mitigation. Existing techniques are largely based on satellite imagery data, and the research and utilization of the inner thermal core structure characteristics of tropical cyclones still pose challenges. This paper presents a two-stage tropical cyclone intensity estimation network based on the fusion of coarse- and fine-grained features from microwave brightness temperature data. The data used in this network are obtained from the thermal core structure of tropical cyclones through Advanced Technology Microwave Sounder (ATMS) inversion. Firstly, the thermal core information in the pressure direction is comprehensively expressed through the maximal intensity projection (MIP) method, constructing coarse-grained thermal core images that represent the tropical cyclone. These images provide a first-stage, coarse-grained wind speed estimate. Then, based on this result, fine-grained features are extracted by combining thermal core information from multiple view profiles with a distributed network and fused with the coarse-grained features from the first stage to obtain the final two-stage wind speed estimate. Furthermore, to better capture the long-tail distribution characteristics of tropical cyclones, focal loss is used in the coarse-grained loss function of the first stage, and an ordinal regression loss is adopted in the second stage to replace traditional single-value regression. The selected tropical cyclones span 2012 to 2021 and are distributed in the North Atlantic (NA) region. The training set covers 2012 to 2017, the validation set 2018 to 2019, and the test set 2020 to 2021. Based on the Saffir-Simpson Hurricane Wind Scale (SSHS), this paper categorizes tropical cyclone levels into three major categories: pre-hurricane, minor hurricane, and major hurricane, achieving a classification accuracy of 86.18% and an intensity estimation error of 4.01 m/s for NA. The results indicate that thermal core data can effectively represent the level and intensity of tropical cyclones, warranting further exploration of tropical cyclone attributes with these data.
Keywords: artificial intelligence, deep learning, data mining, remote sensing
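As a sketch of the first-stage loss choice, the following is a standard multi-class focal loss (the paper's exact formulation and the second-stage ordinal regression loss are not reproduced); gamma and alpha take their common default values.

```python
import tensorflow as tf

def focal_loss(gamma=2.0, alpha=0.25):
    """Multi-class focal loss: down-weights easy examples so that rare,
    high-intensity classes in a long-tailed set contribute more."""
    def loss(y_true, y_pred):
        y_pred = tf.clip_by_value(y_pred, 1e-7, 1.0 - 1e-7)
        ce = -y_true * tf.math.log(y_pred)          # one-hot cross-entropy
        weight = alpha * tf.pow(1.0 - y_pred, gamma)
        return tf.reduce_sum(weight * ce, axis=-1)
    return loss

# Toy check: an easy example is penalized far less than a hard one
y_true = tf.constant([[0.0, 1.0, 0.0]])
easy = tf.constant([[0.05, 0.90, 0.05]])
hard = tf.constant([[0.60, 0.20, 0.20]])
fl = focal_loss()
print("easy:", float(fl(y_true, easy)[0]), "hard:", float(fl(y_true, hard)[0]))
```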
Procedia PDF Downloads 63
22834 An Approach for Coagulant Dosage Optimization Using Soft Jar Test: A Case Study of Bangkhen Water Treatment Plant
Authors: Ninlawat Phuangchoke, Waraporn Viyanon, Setta Sasananan
Abstract:
The most important stage of the water treatment process is coagulation using alum and poly-aluminum chloride (PACL), whose daily usage is worth hundreds of thousands of baht. Determining the dosages of alum and PACL is therefore the most important factor to be prescribed, so that water production remains economical and valuable. This research applies an artificial neural network (ANN) using the Levenberg-Marquardt algorithm to create a mathematical model (Soft Jar Test) for predicting the chemical doses used in coagulation, such as alum and PACL. The input data consist of the turbidity, pH, alkalinity, conductivity, and oxygen consumption (OC) of the Bangkhen water treatment plant (BKWTP), Metropolitan Waterworks Authority. The data, collected from 1 January 2019 to 31 December 2019, cover the changing seasons of Thailand. The ANN input data are divided into three groups: a training set, a test set, and a validation set. The best model performance, in terms of the coefficient of determination and mean absolute error, is 0.73 and 3.18 for alum and 0.59 and 3.21 for PACL, respectively.
Keywords: soft jar test, jar test, water treatment plant process, artificial neural network
Procedia PDF Downloads 167
22833 Drought Detection and Water Stress Impact on Vegetation Cover Sustainability Using Radar Data
Authors: E. Farg, M. M. El-Sharkawy, M. S. Mostafa, S. M. Arafat
Abstract:
Mapping water stress provides important baseline data for sustainable agriculture. Recent developments in Sentinel-1 allow the acquisition of high-resolution images with varied polarization capabilities. This study was conducted to detect and quantify vegetation water content from canopy backscatter, extracting spatial information to support drought mapping activities throughout the newly reclaimed sandy soils in the western Nile Delta, Egypt. The performance of radar imagery in agriculture strongly depends on the sensor's polarization capability. The dual-mode capabilities of Sentinel-1 improve the ability to detect water stress, and the backscatter from structural components improves the identification and separation of vegetation types with various canopy structures from other features. The fieldwork data allowed the identification of water stress zones based on land cover structure; those classes were used to produce a harmonized water stress map. The analysis techniques used and the results show the high capability of active sensor data for water stress mapping and monitoring, especially when integrated with multi-spectral, medium-resolution images. Areas cropped with subsoil drip irrigation systems also show less drought and water stress than those under center-pivot sprinkler irrigation systems, reflecting the high level of evaporation from the soil surface in the initial growth stages. The results show a strong relationship between vegetation indices, such as the Normalized Difference Vegetation Index (NDVI), and the observed radar backscatter. In addition, observational evidence showed that the radar backscatter is highly sensitive to vegetation water stress and has substantial potential for monitoring and detecting drought in vegetative cover.
Keywords: canopy backscatter, drought, polarization, NDVI
Procedia PDF Downloads 146
22832 New Technique of Estimation of Charge Carrier Density of Nanomaterials from Thermionic Emission Data
Authors: Dilip K. De, Olukunle C. Olawole, Emmanuel S. Joel, Moses Emetere
Abstract:
A good number of electronic properties, such as the electrical and thermal conductivities, depend on the charge carrier densities of nanomaterials. By controlling the charge carrier densities during the fabrication (or growth) processes, the physical properties can be tuned. In this paper, we discuss a new technique for estimating the charge carrier densities of nanomaterials from thermionic emission data using the newly modified Richardson-Dushman equation. We find that the technique yields excellent results for graphene and carbon nanotubes.
Keywords: charge carrier density, nanomaterials, new technique, thermionic emission
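The authors' modified equation is not given in the abstract, so the sketch below uses the standard Richardson-Dushman form J = A0·T²·exp(−W/kBT) to show the usual fitting route for thermionic emission data: linearize ln(J/T²) against 1/T and recover the emission parameters. The J(T) values are synthetic.

```python
import numpy as np

KB = 8.617333e-5          # Boltzmann constant, eV/K
A0 = 1.20173e6            # Richardson constant, A m^-2 K^-2

# Hypothetical emission-current data J(T) generated from the standard
# equation with a work function W = 4.6 eV
T = np.linspace(1500, 2000, 6)
J = A0 * T**2 * np.exp(-4.6 / (KB * T))

# Linearize: ln(J/T^2) = ln(A0) - W/(kB*T); fit against 1/T
slope, intercept = np.polyfit(1.0 / T, np.log(J / T**2), deg=1)
W_fit = -slope * KB
print(f"recovered work function: {W_fit:.2f} eV")
print(f"recovered Richardson constant: {np.exp(intercept):.3e} A m^-2 K^-2")
```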
Procedia PDF Downloads 321
22831 Construction of Microbial Fuel Cells from Local Benthic Zones
Authors: Maria Luiza D. Ramiento, Maria Lissette D. Lucas
Abstract:
Electricity is said to serve as the backbone of modern technology, and electricity consumption has grown dynamically due to continuous demand. Alternative ways of producing electrical energy must therefore be given attention. The microbial fuel cell represents a new method of renewable energy recovery: the direct conversion of organic matter to electricity using bacteria. Electricity is produced as fuel, or new food, is given to the bacteria. This study concentrated on determining the feasibility of electricity production from local benthic zones. Microbial fuel cells were constructed to harvest the available electricity and to test for the presence of electricity-producing microorganisms. Soil samples were gathered from the Calumpang River, the Palawan Mangrove Forest, the Rosario River, and Batangas Port. Eleven modules were constructed for the different trials of the soil samples. These modules were made of cathode and anode chambers connected by a salt bridge. The harvested voltage was measured daily for 85 days. No additive was introduced for the first 24 days. For the next 61 days, acetic acid was included in the first and second trials of the modules. Each of the trials of the soil samples gave a positive result for electricity production, showing that there are electricity-producing microbes in local benthic zones. It was observed that the higher the organic content of the soil sample, the higher the electricity harvested from it. It is recommended to identify the specific species of electricity-producing microorganisms present in the local benthic zone. Complementary experiments are encouraged, such as determining the kind of soil particles, to test their effect on the amount of electricity that can be harvested. Pursuing the development of microbial fuel cells by building a closed circuit into them is also suggested.
Keywords: microbial fuel cell, benthic zone, electricity, reduction-oxidation reaction, bacteria
Procedia PDF Downloads 400
22830 Reviewing Image Recognition and Anomaly Detection Methods Utilizing GANs
Authors: Agastya Pratap Singh
Abstract:
This review paper examines the emerging applications of generative adversarial networks (GANs) in the fields of image recognition and anomaly detection. With the rapid growth of digital image data, the need for efficient and accurate methodologies to identify and classify images has become increasingly critical. GANs, known for their ability to generate realistic data, have gained significant attention for their potential to enhance traditional image recognition systems and improve anomaly detection performance. The paper systematically analyzes various GAN architectures and their modifications tailored for image recognition tasks, highlighting their strengths and limitations. Additionally, it delves into the effectiveness of GANs in detecting anomalies in diverse datasets, including medical imaging, industrial inspection, and surveillance. The review also discusses the challenges faced in training GANs, such as mode collapse and stability issues, and presents recent advancements aimed at overcoming these obstacles.
Keywords: generative adversarial networks, image recognition, anomaly detection, synthetic data generation, deep learning, computer vision, unsupervised learning, pattern recognition, model evaluation, machine learning applications
Procedia PDF Downloads 32
22829 Use of Artificial Neural Networks to Estimate Evapotranspiration for Efficient Irrigation Management
Authors: Adriana Postal, Silvio C. Sampaio, Marcio A. Villas Boas, Josué P. Castro
Abstract:
This study deals with the estimation of reference evapotranspiration (ET₀) in an agricultural context, focusing on efficient irrigation management to meet the growing interest in the sustainable management of water resources. Given the importance of water in agriculture and its scarcity in many regions, efficient use of this resource is essential to ensure food security and environmental sustainability. The methodology involved the application of artificial intelligence techniques, specifically Multilayer Perceptron (MLP) Artificial Neural Networks (ANNs), to predict ET₀ in the state of Paraná, Brazil. The models were trained and validated with meteorological data from the Brazilian National Institute of Meteorology (INMET), together with data obtained from a producer's weather station in the western region of Paraná. Two optimizers (SGD and Adam) and different meteorological variables, such as temperature, humidity, solar radiation, and wind speed, were explored as inputs to the models. Nineteen configurations with different input variables were tested; among them, configuration 9, with 8 input variables, was identified as the most efficient of all. Configuration 10, with 4 input variables, was considered the most effective given its smaller number of variables. The main conclusions of this study show that MLP ANNs are capable of accurately estimating ET₀, providing a valuable tool for irrigation management in agriculture. Both configurations (9 and 10) showed promising performance in predicting ET₀. The validation of the models with cultivator data underlined the practical relevance of these tools and confirmed their ability to generalize to different field conditions. The results of the statistical metrics, including the Mean Absolute Error (MAE), Mean Squared Error (MSE), Root Mean Squared Error (RMSE), and Coefficient of Determination (R²), showed excellent agreement between the model predictions and the observed data, with MAE as low as 0.01 mm/day and 0.03 mm/day, respectively. In addition, the models achieved an R² between 0.99 and 1, indicating a satisfactory fit to the real data. This agreement was also confirmed by the Kolmogorov-Smirnov test, which evaluates the agreement of the predictions with the statistical behavior of the real data and yields values between 0.02 and 0.04 for the producer data. The results of this study suggest that the developed technique can be applied to other locations by using site-specific data to further improve ET₀ predictions and thus contribute to sustainable irrigation management in different agricultural regions. The study has some limitations, such as the use of a single ANN architecture and two optimizers, validation with data from only one producer, and the possible underestimation of the influence of seasonality and local climate variability. An irrigation management application using the most efficient models from this study is already under development. Future research can explore different ANN architectures and optimization techniques, validate models with data from multiple producers and regions, and investigate the model's response to different seasonal and climatic conditions.
Keywords: agricultural technology, neural networks in agriculture, water efficiency, water use optimization
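As a sketch of the modelling setup, the following trains an MLP on synthetic stand-ins for the four weather inputs of configuration 10 and reports the paper's metrics; scikit-learn's MLPRegressor with the Adam solver (one of the two optimizers explored) is used here, and the network size and data are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

rng = np.random.default_rng(3)
n = 1000
# Hypothetical daily weather inputs (4 variables, as in configuration 10):
# temperature, humidity, solar radiation, wind speed
X = np.column_stack([rng.uniform(10, 35, n), rng.uniform(30, 95, n),
                     rng.uniform(5, 30, n), rng.uniform(0, 6, n)])
# Synthetic ET0 target standing in for reference evapotranspiration values
et0 = 0.08*X[:, 0] - 0.02*X[:, 1] + 0.12*X[:, 2] + 0.3*X[:, 3] \
      + rng.normal(0, 0.2, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, et0, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(32, 16), solver="adam",
                     max_iter=2000, random_state=0).fit(X_tr, y_tr)

pred = model.predict(X_te)
mse = mean_squared_error(y_te, pred)
print(f"MAE  = {mean_absolute_error(y_te, pred):.3f} mm/day")
print(f"RMSE = {np.sqrt(mse):.3f} mm/day, R2 = {r2_score(y_te, pred):.3f}")
```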
Procedia PDF Downloads 51
22828 Need of Trained Clinical Research Professionals Globally to Conduct Clinical Trials
Authors: Tambe Daniel Atem
Abstract:
Background: Clinical research is organized research on human beings intended to provide adequate information on the safety and efficacy of a drug used as a therapeutic agent. The significance of the study is to educate global health and life science graduates in clinical research in depth so that they perform better, as the work involves testing drugs on human beings. Objectives: to provide an overall understanding of the scientific approach to the evaluation of new and existing medical interventions, and to apply ethical and regulatory principles appropriate to any individual research. Methodology: The study is based on primary and secondary data analysis. Primary data analysis: a survey was conducted with a questionnaire to interview clinical research professionals about the need for training to perform clinical trials globally. The questionnaire covered details of the professionals working in the field. It also included the areas of clinical research that require intensive training before entering the core clinical research domain. Secondary data analysis: the collection of data from journals, the internet, and other online sources. Results: The clinical trials market worldwide is worth over USD 26 billion, and the industry employs an estimated 210,000 people in the US and over 70,000 in the U.K., forming one-third of the total research and development staff. There are more than 250,000 vacant positions globally, with regional salary variations, for clinical research coordinators. The R&D cost of new drug development is estimated at US$ 70-85 billion, and the cost of conducting clinical trials for a new drug is US$ 200-250 million. Due to an increase in trained clinical research professionals, India has emerged as a global hub for clinical research. The global clinical trial outsourcing opportunity for India in the pharmaceutical industry increased to more than $2 billion in 2014 due to increased outsourcing from the U.S. and Europe to India. Conclusion: Assessment of training needs is recommended for newer clinical research professionals and trial sites, especially prior to the conduct of larger confirmatory clinical trials.
Keywords: clinical research, clinical trials, clinical research professionals
Procedia PDF Downloads 454
22827 Smart Brain Wave Sensor for Paralyzed- a Real Time Implementation
Authors: U.B Mahadevswamy UBM, Siraj Ahmed Siraj
Abstract:
As the title indicates, this paper presents a smart brain wave sensor system for paralyzed patients, implemented in real time and based on brainwaves, their frequencies, and related parameters. Brain wave sensing detects a person's mental status; its purpose here is to enable exact treatment for paralyzed patients. The data, or signal, are obtained from a brainwave sensing band and converted into object files using Visual Basic. The processed data are then sent to an Arduino, which relates them to human behavioral aspects such as emotions, sensations, feelings, and desires. The proposed device can sense human brainwaves and detect the percentage of paralysis from which the person is suffering. The contribution of this paper is a real-time smart sensor device that provides paralyzed patients with a paralysis percentage for their exact treatment.
Keywords: brainwave sensor, BMI, brain scan, EEG, MCH
Procedia PDF Downloads 156
22826 A Study of Student Satisfaction of the University TV Station
Authors: Prapoj Na Bangchang
Abstract:
This research aimed to study the satisfaction of university students with the Suan Sunandha Rajabhat University television station. The sample comprised 250 undergraduate students from Year 1 to Year 4. The tool used to collect data was a questionnaire. The statistics used in the data analysis were percentage, mean, and standard deviation. The results showed that the location of the university's television station received the highest satisfaction score, followed by the number of devices, while the content presented received the lowest score. Most students want the program content to be improved, especially the entertainment content, followed by the sports content.
Keywords: student satisfaction, university TV channel, media, broadcasting
Procedia PDF Downloads 386
22825 Self-Organizing Maps for Exploration of Partially Observed Data and Imputation of Missing Values in the Context of the Manufacture of Aircraft Engines
Authors: Sara Rejeb, Catherine Duveau, Tabea Rebafka
Abstract:
To monitor the production process of turbofan aircraft engines, multiple measurements of various geometrical parameters are systematically recorded on manufactured parts. Engine parts are subject to extremely high standards as they can impact the performance of the engine. Therefore, it is essential to analyze these databases to better understand the influence of the different parameters on the engine's performance. Self-organizing maps are unsupervised neural networks which achieve two tasks simultaneously: they visualize high-dimensional data by projection onto a 2-dimensional map and provide clustering of the data. This technique has become very popular for data exploration since it provides easily interpretable results and a meaningful global view of the data. As such, self-organizing maps are usually applied to aircraft engine condition monitoring. As databases in this field are huge and complex, they naturally contain multiple missing entries for various reasons. The classical Kohonen algorithm to compute self-organizing maps is conceived for complete data only. A naive approach to deal with partially observed data consists in deleting items or variables with missing entries. However, this requires a sufficient number of complete individuals to be fairly representative of the population; otherwise, deletion leads to a considerable loss of information. Moreover, deletion can also induce bias in the analysis results. Alternatively, one can first apply a common imputation method to create a complete dataset and then apply the Kohonen algorithm. However, the choice of the imputation method may have a strong impact on the resulting self-organizing map. Our approach is to address simultaneously the two problems of computing a self-organizing map and imputing missing values, as these tasks are not independent. In this work, we propose an extension of self-organizing maps for partially observed data, referred to as missSOM. First, we introduce a criterion to be optimized that aims at simultaneously defining the best self-organizing map and the best imputations for the missing entries. As such, missSOM is also an imputation method for missing values. To minimize the criterion, we propose an iterative algorithm that alternates the learning of a self-organizing map and the imputation of missing values. Moreover, we develop an accelerated version of the algorithm by entwining the iterations of the Kohonen algorithm with the updates of the imputed values. This method is efficiently implemented in R and will soon be released on CRAN. Compared to the standard Kohonen algorithm, it does not come with any additional cost in terms of computing time. Numerical experiments illustrate that missSOM performs well in terms of both clustering and imputation compared to the state of the art. In particular, it turns out that missSOM is robust to the missingness mechanism, which is in contrast to many imputation methods that are appropriate for only a single mechanism. This is an important property of missSOM as, in practice, the missingness mechanism is often unknown. An application to measurements on one type of part is also provided and shows the practical interest of missSOM.
Keywords: imputation method of missing data, partially observed data, robustness to missingness mechanism, self-organizing maps
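The alternating idea can be sketched compactly: best-matching units and prototype updates use only the observed coordinates of each item, and missing entries are finally imputed from the winning prototype. This is a simplified stand-in for missSOM (which optimizes an explicit joint criterion and is implemented in R), with all data and hyperparameters assumed.

```python
import numpy as np

rng = np.random.default_rng(4)

def som_with_missing(X, grid=(5, 5), epochs=30, sigma0=2.0, lr0=0.5):
    """Kohonen-style SOM for partially observed data: distances and
    updates use observed entries only; missing entries are imputed
    from the best-matching unit."""
    n, d = X.shape
    obs = ~np.isnan(X)
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])])
    W = rng.normal(size=(len(coords), d))
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)
        sigma = sigma0 * (1 - t / epochs) + 0.5
        for i in rng.permutation(n):
            m = obs[i]
            # BMU computed from the observed coordinates only
            bmu = np.argmin(((W[:, m] - X[i, m]) ** 2).sum(axis=1))
            h = np.exp(-((coords - coords[bmu]) ** 2).sum(1) / (2 * sigma**2))
            # update prototypes only along the observed dimensions
            W[:, m] += lr * h[:, None] * (X[i, m] - W[:, m])
    # impute each missing entry from its final BMU prototype
    X_imp = X.copy()
    for i in range(n):
        m = obs[i]
        bmu = np.argmin(((W[:, m] - X[i, m]) ** 2).sum(axis=1))
        X_imp[i, ~m] = W[bmu, ~m]
    return X_imp, W

X = rng.normal(size=(200, 4))
X[rng.random(X.shape) < 0.15] = np.nan   # 15% missing completely at random
X_imp, W = som_with_missing(X)
print("remaining NaNs:", int(np.isnan(X_imp).sum()))
```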
Procedia PDF Downloads 153
22824 Problems of Drought and Its Management in Yobe State, Nigeria
Authors: Hassan Gana Abdullahi, Michael A. Fullen, David Oloke
Abstract:
Drought poses an enormous global threat to sustainable development and is expected to increase with global climate change. Drought and desertification are major problems in Yobe State (north-east Nigeria). This investigation aims to develop a workable framework and management tool for drought mitigation in Yobe State. Mixed methods were employed during the study, and additional qualitative information was gathered through Focus Group Discussions (FGD). Data on the socio-economic impacts of drought were thus collected via both questionnaire surveys and FGD. In all, 1,040 questionnaires were distributed to farmers in the State and 721 were completed, representing a return rate of 69.3%. Data analysis showed that 97.9% of respondents considered themselves to be drought victims, whilst 69.3% of the respondents were unemployed and had no other means of income except through rain-fed farming. Developing a viable and holistic approach to drought mitigation is crucial to arrest and hopefully reverse environmental degradation. The analysed data will be used to develop an integrated framework for drought mitigation and management in Yobe State. This paper introduces the socio-economic and environmental effects of drought in Yobe State.
Keywords: drought, climate change, mitigation, management, Yobe State
Procedia PDF Downloads 372