Search results for: 99.95% IoT data transmission savings
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 26847

22647 Mobi-DiQ: A Pervasive Sensing System for Delirium Risk Assessment in Intensive Care Unit

Authors: Subhash Nerella, Ziyuan Guan, Azra Bihorac, Parisa Rashidi

Abstract:

Intensive care units (ICUs) provide care to critically ill patients in severe and life-threatening conditions. However, patient monitoring in the ICU is limited by the time and resource constraints imposed on healthcare providers. Many critical care indices, such as mobility, are still manually assessed, which can be subjective, prone to human error, and lacking in granularity. Other important aspects, such as environmental factors, are not monitored at all. For example, critically ill patients often experience circadian disruptions due to the absence of effective environmental “timekeepers” such as the light/dark cycle and the systemic effect of acute illness on chronobiologic markers. Although the occurrence of delirium is associated with circadian disruption risk factors, these factors are not routinely monitored in the ICU. Hence, there is a critical unmet need to develop systems for precise and real-time assessment through novel enabling technologies. We have developed the mobility and circadian disruption quantification system (Mobi-DiQ) by augmenting biomarker and clinical data with pervasive sensing data to generate mobility and circadian cues related to mobility, nightly disruptions, and light and noise exposure. We hypothesize that Mobi-DiQ can provide accurate mobility and circadian cues that correlate with bedside clinical mobility assessments and circadian biomarkers, ultimately important for delirium risk assessment and prevention. The collected multimodal dataset consists of depth images, electromyography (EMG) data, patient extremity movement captured by accelerometers, ambient light levels, sound pressure level (SPL), and indoor air quality measured by volatile organic compounds and equivalent CO₂ concentration. For delirium risk assessment, the system recognizes mobility cues (axial body movement features and body key points) and circadian cues, including nightly disruptions, ambient SPL, and light intensity, as well as other environmental factors such as indoor air quality. The Mobi-DiQ system consists of three major components: the pervasive sensing system, a data storage and analysis server, and a data annotation system. For data collection, six local pervasive sensing systems were deployed, each including a local computer and sensors. A video recording tool with a graphical user interface (GUI), developed in Python, was used to capture depth image frames for analyzing patient mobility. All sensor data are encrypted and then automatically uploaded to the Mobi-DiQ server through a secured VPN connection. Several data pipelines were developed to automate data transfer, curation, and preparation for annotation and model training. Data curation and post-processing are performed on the server. A custom secure annotation tool with a GUI was developed to annotate depth activity data. The annotation tool is linked to a MongoDB database to record the data annotations and to provide summarization. Docker containers are also utilized to manage services and pipelines running on the server in an isolated manner. The processed clinical data and annotations are used to train and develop real-time pervasive sensing systems to augment clinical decision-making and promote targeted interventions. In the future, we intend to evaluate our system in a clinical implementation trial, as well as to refine and validate it using other data sources, including neurological data obtained through continuous electroencephalography (EEG).
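
A minimal sketch of one plausible step of such a pipeline, not the authors' code: encrypting a chunk of sensor data before upload and recording an annotation in MongoDB. The collection and field names here are illustrative assumptions, and a MongoDB instance on localhost is assumed.

```python
# Hedged sketch of an encrypt-then-annotate step (collection/field names are
# illustrative assumptions, not Mobi-DiQ's actual schema).
from datetime import datetime, timezone
from cryptography.fernet import Fernet
from pymongo import MongoClient

key = Fernet.generate_key()            # in practice, a persistent site key
fernet = Fernet(key)

payload = b"\x00" * 1024               # stand-in for a chunk of depth-frame data
ciphertext = fernet.encrypt(payload)   # this ciphertext would go over the VPN
with open("depth_frames_0001.bin.enc", "wb") as f:
    f.write(ciphertext)

client = MongoClient("mongodb://localhost:27017")  # assumed annotation DB URI
client.mobidiq.annotations.insert_one({
    "file": "depth_frames_0001.bin.enc",
    "label": "axial_body_movement",    # hypothetical mobility-cue label
    "annotator": "rater_01",
    "timestamp": datetime.now(timezone.utc),
})
```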

Keywords: deep learning, delirium, healthcare, pervasive sensing

Procedia PDF Downloads 93
22646 Delineation of the Geoelectric and Geovelocity Parameters in the Basement Complex of Northwestern Nigeria

Authors: M. D. Dogara, G. C. Afuwai, O. O. Esther, A. M. Dawai

Abstract:

The geology of Northern Nigeria is under intense investigation, particularly that of the northwest, believed to be of the basement complex. The lithology is highly variable, hence the need for a close-range study. It is in view of the above that two geophysical techniques, vertical electrical sounding employing the Schlumberger array and the seismic refraction method, were used to delineate the geoelectric and geovelocity parameters of the basement complex of northwestern Nigeria. A total area of 400,000 m² was covered, with sixty geoelectric stations established and sixty sets of seismic refraction data collected using the forward and reverse method. The interpretation of the resistivity data suggests that the area is underlain by not more than five geoelectric layers of varying thicknesses and resistivities when a maximum half electrode spread of 100 m was used. The interpreted seismic data revealed two geovelocity layers, with velocities ranging from 478 m/s to 1666 m/s for the first layer and from 1166 m/s to 7141 m/s for the second layer. The results of the two techniques suggest that the study area has an undulating bedrock topography, with geoelectric and geovelocity layers composed of weathered rock materials.
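
For orientation, a back-of-envelope sketch using the standard textbook formulas behind these two methods (not quoted from the paper): the Schlumberger apparent resistivity and the two-layer refraction depth from the crossover distance. Input values are illustrative only; the survey's 100 m half-spread and the reported velocity ranges merely bound them.

```python
# Standard textbook formulas, not the authors' processing code.
import math

def schlumberger_rho_a(ab_half, mn_half, dV, I):
    """Apparent resistivity: rho_a = K * dV/I with K = pi*(L^2 - l^2)/(2*l)."""
    k = math.pi * (ab_half**2 - mn_half**2) / (2.0 * mn_half)
    return k * dV / I

def refractor_depth(x_cross, v1, v2):
    """Two-layer refractor depth from the crossover distance x_c."""
    return (x_cross / 2.0) * math.sqrt((v2 - v1) / (v2 + v1))

print(schlumberger_rho_a(ab_half=100.0, mn_half=5.0, dV=0.05, I=0.1))  # ohm-m
print(refractor_depth(x_cross=40.0, v1=600.0, v2=2500.0))              # m
```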

Keywords: basement complex, delineation, geoelectric, geovelocity, Nigeria

Procedia PDF Downloads 151
22645 The Thinking of Dynamic Formulation of Rock Aging Agent Driven by Data

Authors: Longlong Zhang, Xiaohua Zhu, Ping Zhao, Yu Wang

Abstract:

The construction of mines, railways, highways, water conservancy projects, etc., has formed a large number of high steep slope wounds in China. Under the premise of slope stability and safety, repairing these wound spaces at minimum cost, in a green manner, and close to the natural state has become a new problem. Nowadays, in situ element testing and analysis, monitoring, field quantitative factor classification, and assignment evaluation produce vast amounts of data. Data processing and analysis will inevitably differentiate the morphology, mineral composition, and physicochemical properties between rock wounds, by which to dynamically match the appropriate techniques and materials for restoration. In the present research, based on a grid partition of the slope surface, we tested the content of the combined oxides of rock minerals (SiO₂, CaO, MgO, Al₂O₃, Fe₃O₄, etc.) and classified and assigned values to the hardness and breakage of the rock texture. The data on essential factors were interpolated and normalized in GIS, which formed a differential zoning map of the slope space. According to the physical and chemical properties and spatial morphology of rocks in different zones, organic acids (plant waste fruit, fruit residue, etc.), natural mineral powders (zeolite, apatite, kaolin, etc.), water-retaining agent, and plant gum (melon powder) were mixed in different proportions to form rock aging agents. Spraying aging agents with different formulas on the slopes in different sections can effectively age the fresh rock wound, providing convenience for seed implantation and reducing the transformation of heavy metals in the rocks. Through repeated engineering practice, a dynamic data platform of the rock aging agent formula system is formed, which provides materials for the restoration of different slopes. It will also provide a guideline for the mixed use of various natural materials to solve the complex, non-uniform ecological restoration problem.
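
A minimal sketch of the interpolation-and-normalization step, assuming inverse-distance weighting (IDW) as a simplified stand-in for the GIS workflow; sample points and oxide values are synthetic placeholders.

```python
# IDW interpolation of a measured oxide fraction onto a slope grid, followed
# by min-max normalization (simplified stand-in for the GIS step).
import numpy as np

rng = np.random.default_rng(0)
pts = rng.uniform(0, 50, size=(30, 2))     # sampled grid-cell centres (m)
sio2 = rng.uniform(55, 75, size=30)        # measured SiO2 content (%)

def idw(xy, pts, vals, power=2.0):
    d = np.linalg.norm(pts - xy, axis=1)
    if np.any(d < 1e-9):                   # exact hit on a sample point
        return vals[np.argmin(d)]
    w = 1.0 / d**power
    return np.sum(w * vals) / np.sum(w)

gx, gy = np.meshgrid(np.linspace(0, 50, 26), np.linspace(0, 50, 26))
surface = np.array([idw(np.array([x, y]), pts, sio2)
                    for x, y in zip(gx.ravel(), gy.ravel())])
normalized = (surface - surface.min()) / (surface.max() - surface.min())
print(normalized.reshape(gx.shape).round(2))   # zoning map input, in [0, 1]
```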

Keywords: data-driven, dynamic state, high steep slope, rock aging agent, wounds

Procedia PDF Downloads 115
22644 Adult Language Learning in the Institute of Technology Sector in the Republic of Ireland

Authors: Una Carthy

Abstract:

A recent study of third-level institutions in Ireland reveals that appropriate teaching methodologies can overcome both age and aptitude barriers in motivating second language learners. This PhD investigation gathered quantitative and qualitative data from 14 Institutes of Technology over a three-year period from 2011 to 2014. The fundamental research question was to establish the impact of institutional language policy on attitudes towards language learning. However, other related issues around second language acquisition arose in the course of the investigation. Data were collected from both lecturers and students, allowing interesting points of comparison to emerge from both datasets. Negative perceptions among lecturers regarding language provision were often associated with the view that language learning belongs to primary and secondary level and has no place in third-level education. This perception was offset by substantial data showing positive attitudes towards adult language learning. Lenneberg’s Critical Age Theory postulated that the optimum age for learning a second language is before puberty. More recently, scholars have challenged this theory in their studies, revealing that mature learners can and do succeed at learning languages. With regard to aptitude, a preoccupation among lecturers regarding poor literacy skills among students emerged and was often associated with resistance to second language acquisition. This was offset by a preponderance of qualitative data from students highlighting the crucial role which teaching approaches play in the learning process. Interestingly, the data collected regarding learning disabilities reveal that, given the appropriate learning environments, individuals can be motivated to acquire second languages, and indeed succeed at learning them. These findings are in keeping with other recent studies regarding attitudes towards second language learning among students with learning disabilities. Both sets of findings reinforce the case for language policies in the Institutes of Technology (IoTs). Supportive and positive learning environments can be created in third-level institutions to motivate adult learners, thereby overcoming perceived obstacles relating to age and aptitude.

Keywords: age, aptitude, second language acquisition, teaching methodologies

Procedia PDF Downloads 123
22643 Cloud Monitoring and Performance Optimization Ensuring High Availability

Authors: Inayat Ur Rehman, Georgia Sakellari

Abstract:

Cloud computing has evolved into a vital technology for businesses, offering scalability, flexibility, and cost-effectiveness. However, maintaining high availability and optimal performance in the cloud is crucial for reliable services. This paper explores the significance of cloud monitoring and performance optimization in sustaining the high availability of cloud-based systems. It discusses diverse monitoring tools, techniques, and best practices for continually assessing the health and performance of cloud resources. The paper also delves into performance optimization strategies, including resource allocation, load balancing, and auto-scaling, to ensure efficient resource utilization and responsiveness. Addressing potential challenges in cloud monitoring and optimization, the paper offers insights into data security and privacy considerations. Through this thorough analysis, the paper aims to underscore the importance of cloud monitoring and performance optimization for ensuring a seamless and highly available cloud computing environment.

Keywords: cloud computing, cloud monitoring, performance optimization, high availability, scalability, resource allocation, load balancing, auto-scaling, data security, data privacy

Procedia PDF Downloads 60
22642 The Use of Artificial Intelligence to Curb Corruption in Brazil

Authors: Camila Penido Gomes

Abstract:

Over the past decade, an emerging body of research has been pointing to artificial intelligence's great potential to improve the use of open data, increase transparency, and curb corruption in the public sector. Nonetheless, studies on this subject are scant and usually lack evidence to validate the effectiveness of AI-based technologies in addressing corruption, especially in developing countries. Aiming to fill this void in the literature, this paper sets out to examine how AI has been deployed by civil society to improve the use of open data and prevent congresspeople from misusing public resources in Brazil. Building on the current debates and carrying out a systematic literature review and extensive document analyses, this research reveals that AI should not be deployed as a silver bullet to fight corruption. Instead, this technology is more powerful when adopted by a multidisciplinary team as a civic tool in conjunction with other strategies. This study makes considerable contributions, bringing to the forefront of the discussion a more accurate understanding of the factors that play a decisive role in the successful implementation of AI-based technologies in anti-corruption efforts.

Keywords: artificial intelligence, civil society organization, corruption, open data, transparency

Procedia PDF Downloads 205
22641 Performance Study of Classification Algorithms for Consumer Online Shopping Attitudes and Behavior Using Data Mining

Authors: Rana Alaa El-Deen Ahmed, M. Elemam Shehab, Shereen Morsy, Nermeen Mekawie

Abstract:

With the growing popularity and acceptance of e-commerce platforms, users face an ever-increasing burden in actually choosing the right product from the large number of online offers. Thus, techniques for personalization and shopping guides are needed by users. For a pleasant and successful shopping experience, users need to know easily which products to buy with high confidence. Since selling a wide variety of products has become easier due to the popularity of online stores, online retailers are able to sell more products than a physical store. The disadvantage is that customers might not find the products they need. Recommender systems, used in some e-commerce websites, address this problem: a recommender system learns from information about customers and products and provides appropriate personalized recommendations to help customers find the needed products. In this paper, eleven classification algorithms are comparatively tested to find the classifier that best fits consumer online shopping attitudes and behavior in the experimental dataset. WEKA, an open-source data mining workbench, was used to compare the conventional classifiers and identify the best one. The results show that decision table and filtered classifier give the highest accuracy, while classification via clustering and SimpleCart give the lowest.
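
A comparable experiment can be sketched in Python with scikit-learn rather than the authors' WEKA workbench; the dataset and the four classifiers below are illustrative stand-ins for the paper's eleven, not a reproduction of its setup.

```python
# Cross-validated comparison of several classifiers (scikit-learn analogue of
# a WEKA classifier-comparison workflow; data are synthetic).
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=12, random_state=42)
classifiers = {
    "decision_tree": DecisionTreeClassifier(random_state=42),
    "naive_bayes": GaussianNB(),
    "knn": KNeighborsClassifier(),
    "logistic_regression": LogisticRegression(max_iter=1000),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=10)   # 10-fold CV, as is common
    print(f"{name:20s} accuracy = {scores.mean():.3f} +/- {scores.std():.3f}")
```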

Keywords: classification, data mining, machine learning, online shopping, WEKA

Procedia PDF Downloads 351
22640 Learning from Small Amount of Medical Data with Noisy Labels: A Meta-Learning Approach

Authors: Gorkem Algan, Ilkay Ulusoy, Saban Gonul, Banu Turgut, Berker Bakbak

Abstract:

Computer vision systems recently made a big leap thanks to deep neural networks. However, these systems require large, correctly labeled datasets in order to be trained properly, which are very difficult to obtain for medical applications. Two main reasons for label noise in medical applications are the high complexity of the data and conflicting opinions of experts. Moreover, medical imaging datasets are commonly small, which makes each sample very important in learning. As a result, if not handled properly, label noise significantly degrades performance. Therefore, a label-noise-robust learning algorithm that makes use of the meta-learning paradigm is proposed in this article. The proposed solution is tested on a retinopathy of prematurity (ROP) dataset with a very high label noise of 68%. Results show that the proposed algorithm significantly improves the classification algorithm's performance in the presence of noisy labels.

Keywords: deep learning, label noise, robust learning, meta-learning, retinopathy of prematurity

Procedia PDF Downloads 161
22639 Cationic Copolymer-Functionalized Nanodiamonds Stabilizes Silver Nanoparticles with Dual Antibacterial Activity and Lower Cytotoxicity

Authors: Weiwei Cao, Xiaodong Xing

Abstract:

In order to effectively resolve microbial pollution and contamination, synthetic nano-antibacterial materials are widely used in daily life. Among them, nanodiamonds (NDs) have recently been demonstrated to hold promise as useful materials in biomedical applications due to their high specific surface area and biocompatibility. In this work, the copolymer poly(4-vinylpyridine-co-2-hydroxyethyl methacrylate) was applied for the surface functionalization of NDs to produce quaternized poly(4-vinylpyridine-co-2-hydroxyethyl methacrylate)-functionalized NDs (QNDs). Then, QNDs were used as a substrate for silver nanoparticles (AgNPs) to produce a QND@Ag hybrid. The composition and morphology of the resultant nanostructures were confirmed by Fourier transform infrared spectroscopy (FT-IR), transmission electron microscopy (TEM), X-ray diffraction (XRD), and thermogravimetric analysis (TGA). The mass fraction of AgNPs in the nanocomposites was about 35.7%. The antibacterial performance of the prepared nanocomposites was evaluated with Gram-negative Escherichia coli and Gram-positive Staphylococcus aureus by minimum inhibitory concentration (MIC), inhibition zone testing, and a time-kill study. As a result, due to the synergistic antibacterial activity of QND and AgNPs, this hybrid showed substantially higher antibacterial activity than QND and polyvinylpyrrolidone (PVP)-stabilized AgNPs, and the AgNPs on QND@Ag were more stable than the AgNPs on PVP, resulting in long-term antibacterial effects. More importantly, this hybrid showed excellent water solubility and low cytotoxicity, suggesting great potential for biomedical applications. The present work provides a simple strategy that successfully turns NDs into nanosized antibiotics with simultaneously superior stability and biocompatibility, which would broaden the applications of NDs and advance the development of novel antibacterial agents.

Keywords: cationic copolymer, nanodiamonds, silver nanoparticles, dual antibacterial activity, lower cytotoxicity

Procedia PDF Downloads 130
22638 Relational Attention Shift on Images Using Bu-Td Architecture and Sequential Structure Revealing

Authors: Alona Faktor

Abstract:

In this work, we present an NN-based computational model that can perform attention shifts according to high-level instructions. An instruction specifies the type of attentional shift using an explicit geometrical relation. The instruction can also be of a cognitive nature, specifying more complex human-human, human-object, or object-object interactions. Applying this approach sequentially allows obtaining a structural description of an image. A novel dataset of interacting humans and objects is constructed using a computer graphics engine. Using this data, we perform systematic research on relational segmentation shifts.

Keywords: cognitive science, attention, deep learning, generalization

Procedia PDF Downloads 198
22637 Emergence of Information Centric Networking and Web Content Mining: A Future Efficient Internet Architecture

Authors: Sajjad Akbar, Rabia Bashir

Abstract:

With the growth of the number of users, Internet usage has evolved. Due to its key design principle, there has been an incredible expansion in its size. This tremendous growth of the Internet has brought new applications (mobile video and cloud computing) as well as new user requirements, i.e., a content distribution environment, mobility, ubiquity, security, trust, etc. Users are more interested in contents than in their communicating peer nodes. The current Internet architecture is a host-centric networking approach, which is not suitable for these types of applications. With the growing use of multiple interactive applications, the host-centric approach is considered less efficient as it depends on physical location; for this reason, Information Centric Networking (ICN) is considered a potential future Internet architecture. It is an approach that introduces uniquely named data as a core Internet principle. It uses a receiver-oriented rather than a sender-oriented approach and introduces a naming-based information system at the network layer. Although ICN is considered a future Internet architecture, there is much criticism of it, mainly concerning how ICN will manage the most relevant content. Here, Web Content Mining (WCM) approaches can help with appropriate data management in ICN. To address this issue, this paper contributes by (i) discussing multiple ICN approaches, (ii) analyzing different Web Content Mining approaches, and (iii) creating a new Internet architecture by merging ICN and WCM to solve the data management issues of ICN. From ICN, Content-Centric Networking (CCN) is selected for the new architecture, whereas an agent-based approach from Web Content Mining is selected to find the most appropriate data.

Keywords: agent based web content mining, content centric networking, information centric networking

Procedia PDF Downloads 475
22636 One-Class Classification Approach Using Fukunaga-Koontz Transform and Selective Multiple Kernel Learning

Authors: Abdullah Bal

Abstract:

This paper presents a one-class classification (OCC) technique based on the Fukunaga-Koontz Transform (FKT) for binary classification problems. The FKT is originally a powerful tool for feature selection and ordering in two-class problems. To utilize the standard FKT for the data domain description problem (i.e., one-class classification), in this paper, a set of non-class samples lying outside the boundary of the positive (target) class, which is formed from limited training data, has been constructed synthetically. The tunnel-like decision boundary around the upper and lower borders of the target class samples has been designed using statistical properties of the feature vectors belonging to the training data. To capture higher-order statistics of the data and increase discrimination ability, the proposed method, termed one-class FKT (OC-FKT), has been extended to its nonlinear version via kernel machines, referred to as OC-KFKT for short. Multiple kernel learning (MKL) is a family of machine learning methods that tries to find an optimal combination of a set of sub-kernels to achieve a better result. However, the discriminative ability of some of the base kernels may be low, and an OC-KFKT designed with such kernels leads to unsatisfactory classification performance. To address this problem, the quality of the sub-kernels should be evaluated, and the weak kernels must be discarded before the final decision-making process. MKL/OC-FKT and selective MKL/OC-FKT frameworks have been designed, inspired by ensemble learning (EL), to weight and then select the sub-classifiers using the discriminability and diversity measured by eigenvalue ratios. The eigenvalue ratios have been assessed based on their regions in the FKT subspaces. Comparative experiments, performed on various low- and high-dimensional data against state-of-the-art algorithms, confirm the effectiveness of our techniques, especially in the case of small sample size (SSS) conditions.
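
A minimal numpy sketch of the standard two-class FKT that underlies this work (the one-class synthetic-negative construction, the kernel version, and the MKL frameworks are not reproduced): after whitening the summed covariance, the eigenvalues of the transformed class-1 covariance pair as λ and 1−λ, so directions with λ near 1 or near 0 are the most discriminative.

```python
# Standard (linear) two-class Fukunaga-Koontz transform; data are synthetic.
import numpy as np

rng = np.random.default_rng(1)
X1 = rng.normal(0.0, 1.0, size=(200, 5))    # target-class samples
X2 = rng.normal(0.5, 2.0, size=(200, 5))    # (synthetic) non-class samples

R1 = np.cov(X1, rowvar=False)
R2 = np.cov(X2, rowvar=False)

# Whitening transform W such that W^T (R1 + R2) W = I
vals, vecs = np.linalg.eigh(R1 + R2)
W = vecs @ np.diag(vals**-0.5)

# Eigenvalues of the whitened R1 pair as (lam, 1 - lam) with those of R2,
# giving the FKT's feature ordering by discriminability.
lam, U = np.linalg.eigh(W.T @ R1 @ W)
print("eigenvalue pairs:\n", np.round(np.c_[lam, 1 - lam], 3))
```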

Keywords: ensemble methods, fukunaga-koontz transform, kernel-based methods, multiple kernel learning, one-class classification

Procedia PDF Downloads 21
22635 Fabrication of Ligand Coated Lipid-Based Nanoparticles for Synergistic Treatment of Autoimmune Disease

Authors: Asiya Mahtab, Sushama Talegaonkar

Abstract:

The research is aimed at developing targeted lipid-based nanocarrier systems of chondroitin sulfate (CS) to deliver an antirheumatic drug to the inflammatory site in the arthritic paw. The lipid-based nanoparticle (TEF-lipo) was prepared by using a thin-film hydration method. The coating of the prepared drug-loaded nanoparticles was done by an ionic interaction mechanism. TEF-lipo and the CS-coated lipid nanoparticle (CS-lipo) were characterized for mean droplet size, zeta potential, and surface morphology. TEF-lipo and CS-lipo were further subjected to in vitro cell line studies on RAW 264.7 murine macrophage, U937, and MG 63 cell lines. A pharmacodynamic study was performed to establish the effectiveness of the prepared lipid-based conventional and targeted nanoparticles in comparison to the pure drug. Droplet size and zeta potential of TEF-lipo were found to be 128.92 ± 5.42 nm and +12.6 ± 1.2 mV. It was observed that after coating TEF-lipo with CS, the particle size increased to 155.6 ± 2.12 nm and the zeta potential changed to -10.2 ± 1.4 mV. Transmission electron microscopic analysis revealed that the nanovesicles were uniformly dispersed and detached from each other. Formulations followed a sustained release pattern up to 24 h. Results of the cell line studies indicated that the CS-lipo formulation showed the highest cytotoxic potential, thereby proving its enhanced ability to kill RAW 264.7 murine macrophage and U937 cells when compared with the other formulations. It is clear from our in vivo pharmacodynamic results that targeted nanocarriers had a higher inhibitory effect on arthritis progression than nontargeted nanocarriers or the free drug. The results demonstrate that this approach can provide effective treatment for rheumatoid arthritis, and that CS served as a potential prophylactic against the advancement of cartilage degeneration.

Keywords: adjuvant induced arthritis, chondroitin sulfate, rheumatoid arthritis, teriflunomide

Procedia PDF Downloads 136
22634 A Simple Algorithm for Real-Time 3D Capturing of an Interior Scene Using a Linear Voxel Octree and a Floating Origin Camera

Authors: Vangelis Drosos, Dimitrios Tsoukalos, Dimitrios Tsolis

Abstract:

We present a simple algorithm for capturing a 3D scene (focused on the use of mobile device cameras in the context of augmented/mixed reality) by using a floating-origin camera solution and storing the resulting information in a linear voxel octree. Data are derived from point clouds captured by a mobile device camera. For the purposes of this paper, we assume a scene of fixed size (known to us or determined beforehand) and a fixed voxel resolution. The resulting data are stored in a linear voxel octree using a hashtable. We commence by briefly discussing the logic behind floating-origin approaches and the use of linear voxel octrees for efficient storage. Following that, we present the algorithm for translating captured feature points into voxel data in the context of a fixed-origin world and storing them. Finally, we discuss potential applications and areas of future development and improvement to the efficiency of our solution.
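
A hedged sketch of the storage idea, not the paper's implementation: voxels of a fixed-size scene keyed by their Morton code in a hash table (a plain dict here), with the floating-origin camera reduced to an offset that re-centres incoming points. Voxel size and index shift are assumptions.

```python
# Linear voxel octree as a Morton-keyed hash table; points are synthetic.
import numpy as np

VOXEL = 0.05            # assumed voxel resolution (m)
ORIGIN = np.zeros(3)    # floating origin: updated as the camera moves

def morton3d(x, y, z, bits=10):
    """Interleave the bits of three 10-bit voxel indices into one key."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (3 * i)
        key |= ((y >> i) & 1) << (3 * i + 1)
        key |= ((z >> i) & 1) << (3 * i + 2)
    return key

octree = {}             # linear octree: Morton key -> per-voxel payload

def insert_point(p):
    idx = np.floor((p - ORIGIN) / VOXEL).astype(int) + 512  # shift to >= 0
    key = morton3d(*idx)
    octree[key] = octree.get(key, 0) + 1     # payload here: point count

for p in np.random.default_rng(2).uniform(-5, 5, size=(1000, 3)):
    insert_point(p)
print(len(octree), "occupied voxels")
```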

Keywords: voxel, octree, computer vision, XR, floating origin

Procedia PDF Downloads 133
22633 The Effect of Excel on Undergraduate Students’ Understanding of Statistics and the Normal Distribution

Authors: Masomeh Jamshid Nejad

Abstract:

Nowadays, statistical literacy is no longer merely a desirable skill but an essential one, with broad applications across diverse fields, especially in operational decision areas such as business management, finance, and economics. As such, learning and a deep understanding of statistical concepts are essential in the context of business studies. One of the crucial topics in statistical theory and its application is the normal distribution, often called the bell-shaped curve. To interpret data and conduct hypothesis tests, comprehending the properties of the normal distribution (the mean and standard deviation) is essential for business students. This requires undergraduate students in the fields of economics and business management to visualize and work with data following a normal distribution. Since technology is interconnected with education these days, it is important to teach statistics topics to undergraduate students in the context of Python, RStudio, and Microsoft Excel. This research endeavours to shed light on the effect of Excel-based instruction on learners’ knowledge of statistics, specifically the central concept of the normal distribution. Two groups of undergraduate students (from the Business Management program) were compared in this research study. One group underwent Excel-based instruction and the other group relied only on traditional teaching methods. We analyzed experiential data and BBA participants’ responses to statistics-related questions focusing on the normal distribution, including its key attributes, such as the mean and standard deviation. The results of our study indicate that exposing students to Excel-based learning supports learners in comprehending statistical concepts more effectively compared with the other group of learners (taught with the traditional method). In addition, students receiving Excel-based instruction showed greater ability in picturing and interpreting data concentrated on the normal distribution.
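
The Excel side of such a lesson typically revolves around =NORM.DIST(x, mean, sd, cumulative); as a sketch (not the study's materials), the same quantities can be mirrored in Python for checking students' spreadsheet results. The parameter values are illustrative.

```python
# Normal-distribution quantities mirroring the Excel NORM.DIST lesson.
from scipy.stats import norm

mean, sd = 100.0, 15.0
x = 115.0
print(norm.pdf(x, mean, sd))   # Excel: =NORM.DIST(115,100,15,FALSE)
print(norm.cdf(x, mean, sd))   # Excel: =NORM.DIST(115,100,15,TRUE)
# The one-standard-deviation rule (~68.3% of the data):
print(norm.cdf(mean + sd, mean, sd) - norm.cdf(mean - sd, mean, sd))
```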

Keywords: statistics, excel-based instruction, data visualization, pedagogy

Procedia PDF Downloads 53
22632 Harvesting Energy from Lightning Strikes

Authors: Vaishakh Medikeri

Abstract:

Lightning, the marvelous, spectacular and awesome truth of nature, is one of the greatest energy sources left unharnessed since ages. A single lightning bolt contains energy of about 15 billion joules. This huge amount of energy cannot be harnessed completely, only partially. This paper proposes to harness the energy from lightning strikes. Across the globe, the frequency of lightning is 40-50 flashes per second, in total about 1.4 billion flashes per year, each flash carrying an average energy of about 15 billion joules. When a lightning bolt strikes the ground, a tremendous amount of energy is transferred to the earth, which propagates in the form of concentric circular energy waves. These waves have a frequency of about 7.83 Hz. Harvesting the lightning bolt directly seems impossible, but harvesting the energy waves produced by the lightning is far easier. This can be done using a tricoil energy harnesser, a new device which I have invented. We know that a lightning bolt seeks the path of minimum resistance down to the earth. For this, we can make a lightning rod about 100 meters high. The lightning rod is attached to the tricoil energy harnesser, which contains three coils whose centers are collinear and which are all parallel to the ground. The first coil has one of its ends connected to the lightning rod and the other end grounded. There is a secondary coil wound on the first coil with one of its ends grounded and the other end pointing to the ground and left unconnected, placed a little above the ground so that this end of the coil produces more intense currents, hence producing intense energy waves. The first coil produces very high magnetic fields and induces them in the second and third coils. Along with the magnetic fields induced by the first coil, the energy waves, which are currents, also flow through the second and the third coils. The second and the third coils are connected to a generator, which in turn is connected to a capacitor that stores the electrical energy. The first coil is placed in the middle of the second and the third coil. The stored energy can be used for transmission of electricity. This new technique of harnessing lightning strikes would be most efficient in places with a higher probability of lightning strikes. Since we are using a sufficiently long lightning rod, the probability of cloud-to-ground strikes is increased. If the proposed apparatus is implemented, it would be a great source of pure and clean energy.
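
As a back-of-envelope check of the abstract's own figures (not a claim about what the device could actually recover), the upper bound on the energy in all flashes per year works out as follows:

```python
# Upper-bound arithmetic from the abstract's figures, before any losses.
per_flash_J = 15e9          # ~15 billion joules per bolt
flashes_per_year = 1.4e9    # ~40-50 flashes/s globally

total_J = per_flash_J * flashes_per_year
print(f"{total_J:.2e} J/year")                           # ~2.1e19 J
print(f"{total_J / 3.6e15:,.0f} TWh/year if fully captured")  # ~5,800 TWh
```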

Keywords: generator, lightning rod, tricoil energy harnesser, harvesting energy

Procedia PDF Downloads 381
22631 Novel Recommender Systems Using Hybrid CF and Social Network Information

Authors: Kyoung-Jae Kim

Abstract:

Collaborative filtering (CF) is a popular technique for personalization in the e-commerce domain that reduces information overload. In general, CF provides a list of recommended items based on the preferences of similar users in the user-item matrix and uses them to predict the focal user's preference for particular items. Many real-world recommender systems use CF techniques because of their excellent accuracy and robustness. However, CF has some limitations, including sparsity problems and the high dimensionality of the user-item matrix. In addition, traditional CF does not consider the emotional interaction between users. In this study, we propose recommender systems using social network information and singular value decomposition (SVD) to alleviate these limitations. The purpose of this study is to reduce the dimensionality of the dataset using SVD and to improve the performance of CF by using emotional information from the focal user's social network data. We test the usability of the hybrid CF, SVD, and social network information model using real-world data. The experimental results show that the proposed model outperforms conventional CF models.
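
A minimal sketch of the SVD side of the proposal: a truncated SVD of a small user-item matrix used to predict a missing rating. The social-network weighting of the paper is not reproduced, and the rating matrix is illustrative.

```python
# Truncated-SVD rating prediction on a toy user-item matrix.
import numpy as np

R = np.array([[5, 4, 0, 1],      # 0 marks an unobserved rating
              [4, 5, 1, 0],
              [1, 0, 5, 4],
              [0, 1, 4, 5]], dtype=float)

mask = R > 0
mean = R[mask].mean()
U, s, Vt = np.linalg.svd(np.where(mask, R - mean, 0.0), full_matrices=False)

k = 2                                       # reduced dimensionality
R_hat = mean + (U[:, :k] * s[:k]) @ Vt[:k, :]
print("predicted rating for user 0, item 2:", round(R_hat[0, 2], 2))
```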

Keywords: recommender systems, collaborative filtering, social network information, singular value decomposition

Procedia PDF Downloads 289
22630 Comparison of Developed Statokinesigram and Marker Data Signals by Model Approach

Authors: Boris Barbolyas, Kristina Buckova, Tomas Volensky, Cyril Belavy, Ladislav Dedik

Abstract:

Background: Human balance control is often studied on the basis of the statokinesigram. In this study, the approach to human postural reaction analysis is based on combining the stabilometry output signal with retroreflective marker data signal processing, analysis, and interpretation. The study also shows another original application of the Method of Developed Statokinesigram Trajectory (MDST). Methods: In this study, the participants maintained quiet bipedal standing for 10 s on a stabilometry platform. Subsequently, bilateral vibration stimuli were applied to the Achilles tendons for a 20 s interval. The vibration stimuli caused the human postural system to assume a new pseudo-steady state. Vibration frequencies were 20, 60 and 80 Hz. Participants' body segments - head, shoulders, hips, knees, ankles and little fingers - were marked by 12 retroreflective markers. Marker positions were scanned by the six-camera system BTS SMART DX. Registration of the postural reaction lasted 60 s, with a sampling frequency of 100 Hz. The measured data were processed with the Method of Developed Statokinesigram Trajectory. Regression analysis of the developed statokinesigram trajectory (DST) data and the retroreflective marker developed trajectory (DMT) data was used to find out which marker trajectories correlate most with the stabilometry platform output signals. Scaling coefficients (λ) between DST and DMT were also evaluated by linear regression analysis. Results: Scaling coefficients for marker trajectories were identified for all body segments. Head marker trajectories reached the maximal value and ankle marker trajectories the minimal value of the scaling coefficient. Hip, knee and ankle markers were approximately symmetrical in terms of the scaling coefficient. Notable differences in scaling coefficient were detected in head and shoulder marker trajectories, which were not symmetrical. The model of postural system behavior was identified by MDST. Conclusion: The value of the scaling factor identifies which body segment is predisposed to postural instability. Hypothetically, if the statokinesigram represents the overall human postural system response to vibration stimuli, then the marker data represent particular postural responses. It can be assumed that the cumulative sum of the particular marker postural responses equals the statokinesigram.
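
A sketch of the scaling-coefficient estimation described above, assuming a least-squares fit through the origin is one plausible reading of the regression; the trajectories are synthetic, sized to match 60 s at 100 Hz.

```python
# Scaling coefficient lambda between a marker trajectory and the DST.
import numpy as np

rng = np.random.default_rng(3)
dst = np.cumsum(rng.normal(size=6000)) / 100        # developed CoP trajectory
dmt_head = 1.8 * dst + rng.normal(0, 0.05, 6000)    # head marker, larger sway
dmt_ankle = 0.2 * dst + rng.normal(0, 0.05, 6000)   # ankle marker, smaller sway

def scaling_coefficient(dmt, dst):
    lam, *_ = np.linalg.lstsq(dst[:, None], dmt, rcond=None)
    return float(lam[0])

print("lambda head :", round(scaling_coefficient(dmt_head, dst), 2))
print("lambda ankle:", round(scaling_coefficient(dmt_ankle, dst), 2))
```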

Keywords: center of pressure (CoP), method of developed statokinesigram trajectory (MDST), model of postural system behavior, retroreflective marker data

Procedia PDF Downloads 350
22629 SARS-CoV-2 Transmission Risk Factors among Patients from a Metropolitan Community Health Center, Puerto Rico, July 2020 to March 2022

Authors: Juan C. Reyes, Linnette Rodríguez, Héctor Villanueva, Jorge Vázquez, Ivonne Rivera

Abstract:

In July 2020, a private non-profit community health center (HealthProMed) that serves people without a medical insurance plan or with limited resources in one of the most populated areas of San Juan, Puerto Rico, implemented a COVID-19 case investigation and contact-tracing surveillance system. Nursing personnel at the health center completed a computerized case investigation form that was translated, adapted, and modified from CDC's Patient Under Investigation (PUI) form. Between July 13, 2020, and March 17, 2022, a total of 9,233 SARS-CoV-2 tests were conducted at the health center, 16.9% of which were classified as confirmed cases (positive molecular test) and 27.7% as probable cases (positive serologic test). Most of the confirmed cases were females (60.0%), under 20 years old (29.1%), and living in their homes (59.1%). In the 14 days before the onset of symptoms, 26.3% of confirmed cases reported going to the supermarket, 22.4% had contact with a known COVID-19 case, and 20.7% went to work. The symptoms most commonly reported were sore throat (33.4%), runny nose (33.3%), cough (24.9%), and headache (23.2%). The most common preexisting medical conditions among confirmed cases were hypertension (19.3%), chronic lung disease, including asthma, emphysema, and COPD (13.3%), and diabetes mellitus (12.8%). Multiple logistic regression analysis revealed that patients who used alcohol frequently during the last two weeks (OR=1.43; 95%CI: 1.15-1.77), those who were in contact with a positive case (OR=1.58; 95%CI: 1.33-1.88), and those who were obese (OR=1.82; 95%CI: 1.24-2.69) were significantly more likely to be a confirmed case after controlling for sociodemographic variables. Implementing a case investigation and contact-tracing component at community health centers can be of great value in the prevention and control of COVID-19 at the community level and could be used in future outbreaks.
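
A sketch of the reported analysis pattern, on illustrative synthetic data rather than the study's records: multiple logistic regression with odds ratios and 95% confidence intervals via statsmodels.

```python
# Odds ratios with 95% CIs from a multiple logistic regression (synthetic data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 1000
X = np.column_stack([
    rng.integers(0, 2, n),   # frequent alcohol use (last two weeks)
    rng.integers(0, 2, n),   # contact with a positive case
    rng.integers(0, 2, n),   # obesity
    rng.normal(50, 15, n),   # age (sociodemographic control)
])
logit = -1.5 + 0.36 * X[:, 0] + 0.46 * X[:, 1] + 0.60 * X[:, 2]
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

model = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
odds = np.exp(model.params)
ci = np.exp(model.conf_int())
for name, o, (lo, hi) in zip(["const", "alcohol", "contact", "obese", "age"],
                             odds, ci):
    print(f"{name:8s} OR={o:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```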

Keywords: community health center, Puerto Rico, risk factors, SARS-CoV-2

Procedia PDF Downloads 116
22628 Text Emotion Recognition by Multi-Head Attention based Bidirectional LSTM Utilizing Multi-Level Classification

Authors: Vishwanath Pethri Kamath, Jayantha Gowda Sarapanahalli, Vishal Mishra, Siddhesh Balwant Bandgar

Abstract:

Recognition of emotional information is essential in any form of communication. The growth of human-computer interaction (HCI) in recent times underlines the importance of understanding expressed emotions, which is crucial for improving the system or the interaction itself. In this research work, textual data are used for emotion recognition. Text, being the least expressive of the multimodal resources, poses various challenges, such as limited contextual information and the sequential nature of language construction. In this research work, we propose a neural architecture that resolves no fewer than 8 emotions from textual data derived from multiple datasets, using Google's pre-trained word2vec word embeddings and a multi-head attention-based bidirectional LSTM model with one-vs-all multi-level classification. The emotions targeted in this research are anger, disgust, fear, guilt, joy, sadness, shame, and surprise. Textual data from multiple datasets, such as the ISEAR, GoEmotions, and Affect datasets, were used to create the emotion dataset. Overlapping or conflicting data samples were handled with careful preprocessing. Our results show a significant improvement with this model architecture, with as much as a 10-point improvement in recognizing some emotions.
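
A minimal Keras sketch of the described architecture: a bidirectional LSTM followed by multi-head self-attention and 8 sigmoid outputs for one-vs-all classification. The hyperparameters and vocabulary size are assumptions, not the paper's values, and the embedding layer stands in for the pre-trained word2vec vectors.

```python
# BiLSTM + multi-head self-attention + one-vs-all outputs (illustrative sizes).
import tensorflow as tf
from tensorflow.keras import layers

VOCAB, MAXLEN, EMB = 20000, 64, 300   # 300-d, as with word2vec embeddings

inp = layers.Input(shape=(MAXLEN,))
x = layers.Embedding(VOCAB, EMB)(inp)            # paper: pre-trained word2vec
x = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(x)
x = layers.MultiHeadAttention(num_heads=4, key_dim=64)(x, x)  # self-attention
x = layers.GlobalAveragePooling1D()(x)
out = layers.Dense(8, activation="sigmoid")(x)   # one-vs-all over 8 emotions

model = tf.keras.Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```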

Keywords: text emotion recognition, bidirectional LSTM, multi-head attention, multi-level classification, google word2vec word embeddings

Procedia PDF Downloads 174
22627 The Mass Attenuation Coefficients, Effective Atomic Cross Sections, Effective Atomic Numbers and Electron Densities of Some Halides

Authors: Shivalinge Gowda

Abstract:

The total mass attenuation coefficients μ/ρ of some halides, such as NaCl, KCl, CuCl, NaBr, KBr, RbCl, AgCl, NaI, KI, AgBr, CsI, HgCl₂, CdI₂ and HgI₂, were determined at photon energies of 279.2, 320.07, 514.0, 661.6, 1115.5, 1173.2 and 1332.5 keV in a well-collimated, narrow-beam, good-geometry set-up using a high-resolution hyperpure germanium detector. The mass attenuation coefficients and the effective atomic cross sections are found to be in good agreement with the XCOM values. From these mass attenuation coefficients, the effective atomic cross sections σₐ of the compounds were determined. The σₐ data so obtained were then used to compute the effective atomic numbers Zeff. For this, the interpolation of total attenuation cross sections of photons of energy E in elements of atomic number Z was performed by using logarithmic regression analysis of the data measured by the authors and reported earlier for the above energies, along with XCOM data for standard energies. The best-fit coefficients in the photon energy ranges of 250 to 350 keV, 350 to 500 keV, 500 to 700 keV, 700 to 1000 keV and 1000 to 1500 keV, obtained by a piecewise interpolation method, were then used to find the Zeff of the compounds with respect to the effective atomic cross section σₐ from the relation obtained by the piecewise interpolation method. Using these Zeff values, the electron densities Nel of the halides were also determined. The present Zeff and Nel values of the halides are found to be in good agreement with the values calculated from XCOM data and other available published values.
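
A sketch of the piecewise log-log interpolation used to map a measured effective atomic cross section onto Zeff; the elemental table below is illustrative, standing in for the authors' measured and XCOM data at one energy.

```python
# Log-log interpolation of Z at a measured effective atomic cross section.
import numpy as np

Z = np.array([11, 17, 19, 29, 35, 47, 53, 55, 80])   # element Z values
sigma = np.array([2.1, 3.4, 3.9, 6.2, 7.9, 13.0,
                  15.2, 16.1, 26.0])                  # barns/atom, illustrative

def z_eff(sigma_a):
    """Interpolate Z at the compound's effective atomic cross-section."""
    return float(np.exp(np.interp(np.log(sigma_a), np.log(sigma), np.log(Z))))

print(round(z_eff(10.0), 2))
```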

Keywords: mass attenuation coefficient, atomic cross-section, effective atomic number, electron density

Procedia PDF Downloads 377
22626 Developing a Culturally Acceptable End of Life Survey (the VOICES-ESRD/Thai Questionnaire) for Evaluating Health Services Provision for Older Persons with End-Stage Renal Disease (ESRD) in Thailand

Authors: W. Pungchompoo, A. Richardson, L. Brindle

Abstract:

Background: The development of a culturally acceptable end of life survey (the VOICES-ESRD/Thai questionnaire) provides an essential instrument for evaluating health services provision for older persons with ESRD in Thailand. The focus of the questionnaire is on symptoms, symptom control, and the health care needs of older people with ESRD who are managed without dialysis. Objective: The objective of this study was to develop and adapt VOICES to make it suitable for use in a population survey in Thailand. Methods: A mixed-methods exploratory sequential design was focussed on modifying the instrument. Data collection: A cognitive interviewing technique was implemented, using two cycles of data collection with a sample of 10 bereaved carers and a prototype of the Thai VOICES questionnaire. A qualitative study was used to modify the questionnaire. Data analysis: The data were analysed by using content analysis. Results: Revisions to the prototype questionnaire were made. The results were used to adapt the VOICES questionnaire for use in a population-based survey of older ESRD patients in Thailand. Conclusions: A culturally specific questionnaire was generated during this second phase, and issues with questionnaire design were rectified.

Keywords: VOICES-ESRD/Thai questionnaire, cognitive interviewing, end of life survey, health services provision, older persons with ESRD

Procedia PDF Downloads 286
22625 Cost Sensitive Feature Selection in Decision-Theoretic Rough Set Models for Customer Churn Prediction: The Case of Telecommunication Sector Customers

Authors: Emel Kızılkaya Aydogan, Mihrimah Ozmen, Yılmaz Delice

Abstract:

In recent years, the telecommunications sector has been undergoing continual change and development in the global market. In this sector, churn analysis techniques are commonly used for analysing why some customers terminate their service subscriptions prematurely. Customer churn is of utmost significance in this sector since it causes substantial business loss. Many companies carry out various kinds of research in order to prevent losses while increasing customer loyalty. Although a large quantity of accumulated data is available in this sector, its usefulness is limited by data quality and relevance. In this paper, a cost-sensitive feature selection framework is developed with the aim of obtaining the feature reducts needed to predict customer churn. The framework is an optional cost-based pre-processing stage that removes redundant features for churn management. This cost-based feature selection algorithm was applied in a telecommunication company in Turkey, and the results obtained with the algorithm are reported.

Keywords: churn prediction, data mining, decision-theoretic rough set, feature selection

Procedia PDF Downloads 446
22624 Applying Pre-Accident Observational Methods for Accident Assessment and Prediction at Intersections in Norrkoping City in Sweden

Authors: Ghazwan Al-Haji, Adeyemi Adedokun

Abstract:

Intersections account for a large share of traffic accidents, and accidents occur randomly in time and space. It is necessary to judge whether an intersection is dangerous or not based on short-term observations, rather than waiting many years to assess historical accident data. There are active and pro-active road infrastructure safety methods for assessing safety at intersections. This study aims to investigate the use of quantitative and qualitative pre-accident observational methods as best practice for accident prediction, future black spot identification, and treatment. Historical accident data from STRADA (the Swedish Traffic Accident Data Acquisition) were used within Norrkoping city in Sweden. The ADT (Average Daily Traffic), capacity, and speed were used to predict accident rates. Locations with the highest accident records and predicted accident counts were identified and then audited qualitatively by using a street audit. The results from these quantitative and qualitative methods were analyzed, validated, and compared. The paper provides recommendations on the methods used as well as on how to reduce accident occurrence at the chosen intersections.
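
For context, the standard normalization commonly used with ADT in such studies (a textbook formula, not quoted from the paper) is the accident rate per million entering vehicles (MEV):

```python
# Accident rate per million entering vehicles: R = A * 1e6 / (365 * years * ADT).
def rate_per_mev(accidents, adt, years):
    return accidents * 1e6 / (365.0 * years * adt)

# Illustrative: 14 recorded accidents over 5 years at an intersection with
# 12,000 entering vehicles per day.
print(round(rate_per_mev(accidents=14, adt=12000, years=5), 3), "per MEV")
```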

Keywords: intersections, traffic conflict, traffic safety, street audit, accidents predictions

Procedia PDF Downloads 233
22623 Two-Stage Estimation of Tropical Cyclone Intensity Based on Fusion of Coarse and Fine-Grained Features from Satellite Microwave Data

Authors: Huinan Zhang, Wenjie Jiang

Abstract:

Accurate estimation of tropical cyclone intensity is of great importance for disaster prevention and mitigation. Existing techniques are largely based on satellite imagery data, and research on and utilization of the inner thermal core structure characteristics of tropical cyclones still pose challenges. This paper presents a two-stage tropical cyclone intensity estimation network based on the fusion of coarse- and fine-grained features from microwave brightness temperature data. The data used in this network are obtained from the thermal core structure of tropical cyclones through Advanced Technology Microwave Sounder (ATMS) inversion. Firstly, the thermal core information in the pressure direction is comprehensively expressed through the maximal intensity projection (MIP) method, constructing coarse-grained thermal core images that represent the tropical cyclone. These images provide a coarse-grained-feature wind speed estimate in the first stage. Then, based on this result, fine-grained features are extracted by combining thermal core information from multiple view profiles with a distributed network and fused with the coarse-grained features from the first stage to obtain the final two-stage network wind speed estimate. Furthermore, to better capture the long-tail distribution characteristics of tropical cyclones, focal loss is used in the coarse-grained loss function of the first stage, and ordinal regression loss is adopted in the second stage to replace traditional single-value regression. The selected tropical cyclones span 2012 to 2021 and are distributed in the North Atlantic (NA) region. The training set covers 2012 to 2017, the validation set 2018 to 2019, and the test set 2020 to 2021. Based on the Saffir-Simpson Hurricane Wind Scale (SSHS), this paper categorizes tropical cyclone levels into three major categories: pre-hurricane, minor hurricane, and major hurricane, achieving a classification accuracy of 86.18% and an intensity estimation error of 4.01 m/s for the NA region. The results indicate that thermal core data can effectively represent the level and intensity of tropical cyclones, warranting further exploration of tropical cyclone attributes with this type of data.
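
A sketch of the focal loss used in the first (coarse-grained) stage, written in PyTorch; the gamma and alpha values are common defaults, not the paper's settings.

```python
# Focal loss: down-weights easy examples to counter long-tail class imbalance.
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    ce = F.cross_entropy(logits, targets, reduction="none")
    pt = torch.exp(-ce)                  # probability assigned to the true class
    return (alpha * (1 - pt) ** gamma * ce).mean()

logits = torch.randn(8, 3)               # 3 classes: pre/minor/major hurricane
targets = torch.randint(0, 3, (8,))
print(focal_loss(logits, targets).item())
```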

Keywords: artificial intelligence, deep learning, data mining, remote sensing

Procedia PDF Downloads 63
22622 An Approach for Coagulant Dosage Optimization Using Soft Jar Test: A Case Study of Bangkhen Water Treatment Plant

Authors: Ninlawat Phuangchoke, Waraporn Viyanon, Setta Sasananan

Abstract:

The most important stage of the water treatment plant process is coagulation using alum and poly aluminum chloride (PACL), and the chemicals used cost about a hundred thousand baht per day. Therefore, determining the dosages of alum and PACL is the most important factor to be prescribed for economical and valuable water production. This research applies an artificial neural network (ANN), using the Levenberg-Marquardt algorithm, to create a mathematical model (Soft Jar Test) for predicting the chemical doses used for coagulation, i.e., alum and PACL. The input data consist of turbidity, pH, alkalinity, conductivity, and oxygen consumption (OC) of the Bangkhen water treatment plant (BKWTP) of the Metropolitan Waterworks Authority. The data were collected from 1 January 2019 to 31 December 2019, covering the changing seasons of Thailand. The input data for the ANN are divided into three groups: a training set, a test set, and a validation set. The best model achieved a coefficient of determination of 0.73 and a mean absolute error of 3.18 for alum, and 0.59 and 3.21 for PACL, respectively.
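
A minimal sketch of the Soft Jar Test idea with scikit-learn; note the paper trains with Levenberg-Marquardt, which scikit-learn does not offer, so an L-BFGS solver stands in, and the data below are synthetic placeholders with a fabricated dose relation purely for illustration.

```python
# ANN dose prediction with the same inputs and metrics as the abstract.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score, mean_absolute_error

rng = np.random.default_rng(5)
n = 800
X = np.column_stack([rng.uniform(5, 200, n),    # turbidity (NTU)
                     rng.uniform(6.5, 8.5, n),  # pH
                     rng.uniform(50, 150, n),   # alkalinity
                     rng.uniform(100, 400, n),  # conductivity
                     rng.uniform(1, 6, n)])     # oxygen consumption
alum = 5 + 0.1 * X[:, 0] + 2 * (8.0 - X[:, 1]) + rng.normal(0, 3, n)  # synthetic target

X_tr, X_te, y_tr, y_te = train_test_split(X, alum, test_size=0.2, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(10,), solver="lbfgs", max_iter=2000)
net.fit(X_tr, y_tr)
pred = net.predict(X_te)
print("R2 :", round(r2_score(y_te, pred), 2))
print("MAE:", round(mean_absolute_error(y_te, pred), 2))
```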

Keywords: soft jar test, jar test, water treatment plant process, artificial neural network

Procedia PDF Downloads 166
22621 Drought Detection and Water Stress Impact on Vegetation Cover Sustainability Using Radar Data

Authors: E. Farg, M. M. El-Sharkawy, M. S. Mostafa, S. M. Arafat

Abstract:

Mapping water stress provides important baseline data for sustainable agriculture. Recent developments in Sentinel-1 allow the acquisition of high-resolution images with varied polarization capabilities. This study was conducted to detect and quantify vegetation water content from canopy backscatter, extracting spatial information to encourage drought mapping activities throughout the newly reclaimed sandy soils of the western Nile Delta, Egypt. The performance of radar imagery in agriculture strongly depends on the sensor's polarization capability. The dual-mode capabilities of Sentinel-1 improve the ability to detect water stress, and the backscatter from structural components improves the identification and separation of vegetation types with various canopy structures from other features. The fieldwork data allowed the identification of water stress zones based on land cover structure; those classes were used to produce a consistent water stress map. The analysis techniques used and the results show the high capability of active sensor data for water stress mapping and monitoring, especially when integrated with multi-spectral medium-resolution images. Areas cropped with subsoil drip irrigation systems also show less drought and water stress than those under center-pivot sprinkler irrigation systems, which reflects the high level of evaporation from the soil surface in the initial growth stages. The results show a strong relationship between vegetation indices such as the Normalized Difference Vegetation Index (NDVI) and the observed radar backscatter. In addition, observational evidence showed that the radar backscatter is highly sensitive to vegetation water stress and has substantial potential for monitoring and detecting vegetative cover drought.
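
A sketch of the optical-radar comparison step: computing NDVI from red and near-infrared reflectance and correlating it with co-located backscatter. The values are synthetic with an assumed linear link; in practice the band arrays would come from Sentinel-2 and Sentinel-1 rasters.

```python
# NDVI vs backscatter correlation on synthetic co-located pixels.
import numpy as np

rng = np.random.default_rng(7)
red = rng.uniform(0.03, 0.15, 500)
nir = rng.uniform(0.2, 0.5, 500)
ndvi = (nir - red) / (nir + red)            # standard NDVI definition

sigma0_db = -18 + 12 * ndvi + rng.normal(0, 1.0, 500)  # VH backscatter (dB), assumed link
r = np.corrcoef(ndvi, sigma0_db)[0, 1]
print(f"correlation NDVI vs backscatter: {r:.2f}")
```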

Keywords: canopy backscatter, drought, polarization, NDVI

Procedia PDF Downloads 145
22620 New Technique of Estimation of Charge Carrier Density of Nanomaterials from Thermionic Emission Data

Authors: Dilip K. De, Olukunle C. Olawole, Emmanuel S. Joel, Moses Emetere

Abstract:

A good number of electronic properties, such as electrical and thermal conductivities, depend on the charge carrier densities of nanomaterials. By controlling the charge carrier densities during the fabrication (or growth) processes, the physical properties can be tuned. In this paper, we discuss a new technique for estimating the charge carrier densities of nanomaterials from thermionic emission data using the newly modified Richardson-Dushman equation. We find that the technique yields excellent results for graphene and carbon nanotubes.
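
For reference, the standard Richardson-Dushman law from which the authors' modified form departs (the modified equation itself and its carrier-density term are not reproduced here):

```latex
% Standard Richardson-Dushman law; a fit of measured J(T) to the modified
% version yields the material parameters from which carrier density is estimated.
\[
  J \;=\; A\,T^{2}\exp\!\left(-\frac{W}{k_{B}T}\right),
  \qquad
  A \;=\; \frac{4\pi m e k_{B}^{2}}{h^{3}} \approx 1.20\times 10^{6}\ \mathrm{A\,m^{-2}\,K^{-2}},
\]
where $J$ is the emission current density, $W$ the work function, and $T$ the
absolute temperature.
```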

Keywords: charge carrier density, nanomaterials, new technique, thermionic emission

Procedia PDF Downloads 321
22619 The Impact of City Mobility on Propagation of Infectious Diseases: Mathematical Modelling Approach

Authors: Asrat M. Belachew, Tiago Pereira

Abstract:

Infectious diseases are among the most prominent threats to human beings. They cause morbidity and mortality to individuals and can collectively collapse the social, economic, and political systems of the whole world. Mathematical models are fundamental tools that provide a comprehensive understanding of how infectious diseases spread and help design control strategies to mitigate infectious diseases in the host population. Modeling the spread of infectious diseases using a compartmental model of a homogeneous population is attractive in terms of complexity. However, the real world exhibits heterogeneity, such as ages, locations, and contact patterns of the population, which is ignored in a homogeneous setting. In this work, we study how the classical SEIR compartmental model of infectious disease spreading can be extended by incorporating the mobility of population between heterogeneous cities during an outbreak of infectious disease. We formulated an SEIR multi-city epidemic spreading model using a system of 4k ordinary differential equations to describe the disease transmission dynamics in k cities during the day and night. We have shown that the model is epidemiologically (i.e., variables have biological interpretation) and mathematically (i.e., a unique bounded solution exists for all time) well-posed. We constructed the next-generation matrix (NGM) for the model and calculated the basic reproduction number R0 for the SEIR epidemic spreading model with city mobility. R0 of the disease depends on the spectral radius of the mobility operator, and it is a threshold between asymptotic stability of the disease-free equilibrium and disease persistence. Using the eigenvalue perturbation theorem, we showed that sending a fraction of the population between cities decreases the reproduction number of the disease in interconnected cities. As a result, disease transmission decreases in the population.
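
A minimal single-city SEIR sketch with scipy for orientation; the paper's 4k-equation multi-city model with day/night mobility is not reproduced, and the rates below are illustrative.

```python
# Single-city SEIR dynamics in population fractions; R0 = beta/gamma here.
import numpy as np
from scipy.integrate import odeint

beta, sigma, gamma = 0.5, 1 / 5.2, 1 / 7   # illustrative rates (per day)

def seir(y, t):
    S, E, I, R = y
    dS = -beta * S * I
    dE = beta * S * I - sigma * E
    dI = sigma * E - gamma * I
    dR = gamma * I
    return [dS, dE, dI, dR]

t = np.linspace(0, 200, 2001)
sol = odeint(seir, [0.999, 0.0, 0.001, 0.0], t)
print("R0 =", round(beta / gamma, 2))             # basic reproduction number
print("peak infectious fraction:", round(sol[:, 2].max(), 3))
```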

Keywords: SEIR-model, mathematical model, city mobility, epidemic spreading

Procedia PDF Downloads 109
22618 Field Environment Sensing and Modeling for Pears towards Precision Agriculture

Authors: Tatsuya Yamazaki, Kazuya Miyakawa, Tomohiko Sugiyama, Toshitaka Iwatani

Abstract:

The introduction of sensor technologies into agriculture is a necessary step towards realizing Precision Agriculture. Although sensing methodologies themselves have been spreading owing to the miniaturization and falling costs of sensors, there are difficulties in analyzing and understanding the sensing data. Targeting pears ’Le Lectier’, which are particular to Niigata in Japan, cultivation environment data have been collected at pear fields by eight sorts of sensors: field temperature, field humidity, rain gauge, soil water potential, soil temperature, soil moisture, inner-bag temperature, and inner-bag humidity sensors. The inner-bag temperature and humidity sensors are used to measure the environment inside the fruit bag used for pre-harvest bagging of pears. In this experiment, three kinds of fruit bags were used for the pre-harvest bagging. After over 100 days of continuous measurement, large volumes of sensing data were collected. Firstly, correlation analysis among the sensing data measured by the respective sensors reveals that one sensor can replace another, so that more efficient and cost-saving sensing systems can be proposed to pear farmers. Secondly, differences in the characteristics and performance of the three kinds of fruit bags are clarified by the inner-bag environmental sensing measurements. Statistical analysis shows that the characteristics and performance of the inner bags differ significantly from each other. Lastly, a relational model between the sensing data and the pear outlook quality is established by use of a Structural Equation Model (SEM). Here, the pear outlook quality is related to the existence of stains, blobs, scratches, and so on, caused by physiological damage or diseases. Conceptually, SEM is a combination of exploratory factor analysis and multiple regression. By using SEM, a model is constructed to connect independent and dependent variables. The proposed SEM model relates the measured sensing data and the pear outlook quality determined on the basis of farmer judgement. In particular, it is found that the inner-bag humidity variable has a relatively strong effect on the pear outlook quality. Therefore, inner-bag humidity sensing might help farmers control the pear outlook quality. These results are supported by a large quantity of inner-bag humidity data measured over the years 2014, 2015, and 2016. The experimental and analytical results of this research contribute to spreading Precision Agriculture technologies among farmers growing ’Le Lectier’.
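
A sketch of the first analysis step named above, on synthetic stand-in data: pairwise correlations among sensor channels, where highly correlated pairs suggest sensors that could substitute for one another.

```python
# Pairwise correlation among sensor channels (synthetic field data).
import numpy as np
import pandas as pd

rng = np.random.default_rng(8)
n = 2400                                  # e.g., hourly records over 100 days
field_temp = 20 + 8 * np.sin(np.linspace(0, 40, n)) + rng.normal(0, 1, n)
df = pd.DataFrame({
    "field_temp": field_temp,
    "inner_bag_temp": field_temp + rng.normal(1.5, 0.5, n),  # tracks field temp
    "field_humidity": 70 - 1.5 * field_temp + rng.normal(0, 3, n),
    "soil_moisture": rng.uniform(0.2, 0.4, n),
})
print(df.corr().round(2))   # strong off-diagonal values flag replaceable sensors
```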

Keywords: precision agriculture, pre-harvest bagging, sensor fusion, structural equation model

Procedia PDF Downloads 314