Search results for: mobile Ad Hoc networks
497 High Capacity SnO₂/Graphene Composite Anode Materials for Li-Ion Batteries
Authors: Hilal Köse, Şeyma Dombaycıoğlu, Ali Osman Aydın, Hatem Akbulut
Abstract:
Rechargeable lithium-ion batteries (LIBs) have become promising power sources for a wide range of applications, such as mobile communication devices, portable electronic devices and electric/hybrid vehicles, due to their long cycle life, high voltage and high energy density. Graphite has been widely used as an anode material owing to its extraordinary electronic transport properties, large surface area, and high electrocatalytic activity, although its limited specific capacity (372 mAh g⁻¹) cannot fulfil the increasing demand for lithium-ion batteries with higher energy density. To address this problem, many studies have investigated new electrode materials, and metal oxide/graphene composites have emerged as promising candidates for lithium-ion batteries since their specific capacities are much higher than that of graphene. Among them, SnO₂, an n-type and wide band gap semiconductor, has attracted much attention as an anode material for new-generation lithium-ion batteries owing to its high theoretical capacity (790 mAh g⁻¹). However, it suffers from large volume changes and agglomeration associated with the Li-ion insertion and extraction processes, which bring about failure and loss of electrical contact of the anode. In addition, there is a large irreversible capacity during the first cycle due to the formation of an amorphous Li₂O matrix. To obtain high capacity anode materials, we studied the synthesis and characterization of SnO₂-graphene nanocomposites and investigated the capacity of this free-standing anode material. For this aim, graphite oxide was first obtained from graphite powder using the Hummers method. To prepare the nanocomposites as a free-standing anode, graphite oxide particles were ultrasonicated in distilled water with SnO₂ nanoparticles (1:1, w/w). After vacuum filtration, the GO-SnO₂ paper was peeled off from the PVDF membrane to obtain a flexible, free-standing GO paper. Then, the GO structure was reduced in hydrazine solution. The produced SnO₂-graphene nanocomposites were characterized by scanning electron microscopy (SEM), energy dispersive X-ray spectroscopy (EDS), and X-ray diffraction (XRD) analyses. CR2016 cells were assembled in a glove box (MBraun-Labstar). The cells were charged and discharged at 25°C between fixed voltage limits (2.5 V to 0.2 V) at a constant current density on a BST8-MA MTI model battery tester at a 0.2C charge-discharge rate. Cyclic voltammetry (CV) was performed at a scan rate of 0.1 mV s⁻¹, and electrochemical impedance spectroscopy (EIS) measurements were carried out using a Gamry instrument applying a sine wave of 10 mV amplitude over a frequency range of 1000 kHz-0.01 Hz.
Keywords: SnO₂-graphene, nanocomposite, anode, Li-ion battery
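A quick back-of-the-envelope check of the two theoretical capacities quoted above follows from Faraday's law, C = nF/(3.6 M), with C in mAh g⁻¹, F in C mol⁻¹ and M in g mol⁻¹. The sketch below is illustrative only; the SnO₂ figure assumes full reversible alloying to Li₄.₄Sn, which lands close to the quoted 790 mAh g⁻¹.

```python
# Theoretical specific capacity from Faraday's law: C = n * F / (3.6 * M),
# with C in mAh/g, F the Faraday constant (C/mol), M the molar mass (g/mol).
F = 96485.0  # C/mol

def specific_capacity(n_electrons, molar_mass_g_mol):
    """Theoretical specific capacity in mAh/g."""
    return n_electrons * F / (3.6 * molar_mass_g_mol)

# Graphite: LiC6, i.e. one electron per six carbon atoms (M = 6 * 12.011)
print(f"graphite: {specific_capacity(1.0, 6 * 12.011):.0f} mAh/g")  # ~372
# SnO2, assuming full alloying Sn + 4.4 Li <-> Li4.4Sn (M = 150.71)
print(f"SnO2:     {specific_capacity(4.4, 150.71):.0f} mAh/g")      # ~783
```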
496 Energy Loss Reduction in Oil Refineries through Flare Gas Recovery Approaches
Authors: Majid Amidpour, Parisa Karimi, Marzieh Joda
Abstract:
For the last few years, the release of burned undesirable by-products has become a challenging issue in the oil industry. Flaring, as one of the main sources of air contamination, has detrimental and long-lasting effects on human health and is considered a substantial cause of energy losses worldwide. This research studies the implications of two main flare gas recovery methods at three oil refineries, all in Iran, designated case I, case II, and case III in order of increasing production capacity. In the proposed methods, flare gases are converted into more valuable products before combustion in the flare networks. The first approach involves collecting, compressing and converting the flare gas to smokeless fuel which can be used in the fuel gas system of the refineries. The other scenario includes utilizing the flare gas as a feed into the liquefied petroleum gas (LPG) production unit already established in the refineries. The processes of these scenarios are simulated, and the capital investment is calculated for each procedure. The cumulative profits of the scenarios are evaluated using the Net Present Value method. Furthermore, a sensitivity analysis based on total propane and butane mole fraction is carried out to make a rational comparison for the LPG production approach, and the results are illustrated for different mole fractions of propane and butane. As the mole fractions of propane and butane contained in LPG differ between the summer and winter seasons, the results corresponding to the LPG scenario are demonstrated for each season. The simulation results show that the cumulative profit of the fuel gas production scenario and the LPG production rate increase with the capacity of the refineries. Moreover, the investment return time of the LPG production method experiences a decline, followed by a rising trend, with an increase in C3 and C4 content. The minimum return time occurs at propane and butane sum concentration values of 0.7, 0.6, and 0.7 in cases I, II, and III, respectively. Based on a comparison of the investment return time and the cumulative profit, fuel gas production is the superior scenario for the three case studies.
Keywords: flare gas reduction, liquefied petroleum gas, fuel gas, net present value method, sensitivity analysis
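As a rough illustration of the economic comparison described here, the following sketch evaluates Net Present Value and a simple payback year for two recovery scenarios. The cash-flow figures and discount rate are invented placeholders, not data from the study.

```python
# Minimal NPV comparison of two flare-gas recovery scenarios.
# All cash flows are hypothetical; year 0 holds the (negative) investment.

def npv(rate, cashflows):
    """Net Present Value of yearly cash flows discounted at `rate`."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

def payback_year(cashflows):
    """First year in which the cumulative undiscounted cash flow turns positive."""
    total = 0.0
    for t, cf in enumerate(cashflows):
        total += cf
        if total >= 0:
            return t
    return None

scenarios = {
    "fuel gas": [-10e6] + [2.5e6] * 10,  # capex, then yearly operating profit
    "LPG":      [-14e6] + [3.0e6] * 10,
}
for name, flows in scenarios.items():
    print(f"{name}: NPV = {npv(0.10, flows) / 1e6:.2f} M$, "
          f"payback year = {payback_year(flows)}")
```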
495 Determinants of Carbon-Certified Small-Scale Agroforestry Adoption in Rural Mount Kenya
Authors: Emmanuel Benjamin, Matthias Blum
Abstract:
Purpose – We address smallholder farmers' restricted possibilities to adopt sustainable technologies which have direct and indirect benefits. Smallholders often have little asset endowment due to small farm size and insecure property rights, and therefore experience constraints in adopting agricultural innovations. Programs involving payments for ecosystem services (PES) benefit poor smallholder farmers in developing countries in many ways and have been suggested as a means of easing smallholder farmers' financial constraints. PES may also provide an additional mainstay which can eventually result in more favorable credit contract terms due to the availability of a collateral substitute. The results of this study may help to understand the barriers, motives and incentives for smallholders' participation in PES and help in designing a strategy to foster participation in beneficial programs. Design/methodology/approach – This paper uses a random utility model and a logistic regression approach to investigate the factors that influence agroforestry adoption. We investigate non-monetary factors, such as information spillover, that influence the decision to adopt such conservation strategies. We collected original data from non-government-run agroforestry mitigation programs with PES that have been implemented in the Mount Kenya region. Preliminary findings – We find that the spread of information, existing networks and peer involvement in such programs drive participation. Conversely, participation by smallholders does not seem to be influenced by education, land or asset endowment. Contrary to some existing literature, we found weak evidence for a positive correlation between the adoption of agroforestry with PES and the age of the smallholder in the Mount Kenya region. Research implications – Poverty alleviation policies for developing countries should target social capital to increase the adoption rate of modern technologies amongst smallholders.
Keywords: agriculture innovation, agroforestry adoption, smallholders, payment for ecosystem services, Sub-Saharan Africa
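A minimal sketch of the kind of adoption model the abstract describes, a random-utility choice estimated as a logistic regression, is shown below. The predictors, coefficients and data are hypothetical stand-ins for the actual survey variables.

```python
# Logistic regression of agroforestry-with-PES adoption on candidate drivers.
# Data and variable names are fabricated for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# columns: info spillover (0/1), peer adopters (count), education (yrs), farm size (ha)
X = np.array([[1, 3, 8, 0.5], [0, 0, 6, 1.2], [0, 1, 10, 0.8], [1, 4, 7, 0.4],
              [1, 2, 12, 2.0], [0, 0, 5, 1.5], [1, 3, 9, 0.6], [0, 1, 8, 1.0],
              [1, 2, 6, 0.7], [1, 1, 11, 1.8], [0, 2, 7, 0.5], [1, 3, 8, 0.9],
              [0, 0, 10, 1.1], [1, 2, 6, 0.6], [1, 1, 9, 1.4], [0, 0, 7, 1.3]])
y = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0])  # 1 = adopted

clf = LogisticRegression().fit(X, y)
for name, coef in zip(["info", "peers", "education", "farm_size"], clf.coef_[0]):
    print(f"{name:>10}: odds ratio = {np.exp(coef):.2f}")  # >1 favours adoption
```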
494 Career Guidance System Using Machine Learning
Authors: Mane Darbinyan, Lusine Hayrapetyan, Elen Matevosyan
Abstract:
Artificial Intelligence in Education (AIED) was created to help students get ready for the workforce, and over the past 25 years it has grown significantly, offering a variety of technologies to support academic, institutional, and administrative services. However, this remains challenging, especially considering the labor market's rapid change. While choosing a career, people face various obstacles because they do not take their own preferences into consideration, which can lead to many other problems such as job shifting, work stress, occupational infirmity, reduced productivity, and manual error. Besides preferences, people should properly evaluate their technical and non-technical skills, as well as their personalities. Professional counseling has become a difficult undertaking for counselors due to the wide range of career choices brought on by changing technological trends. It is necessary to close this gap by utilizing technology that makes sophisticated predictions about a person's career goals based on their personality. Hence, there is a need to create an automated model that helps in decision-making based on user inputs. Improving career guidance can be achieved by embedding machine learning into the career consulting ecosystem. Various career guidance systems work on the same logic: classifying applicants, matching applications with appropriate departments or jobs, making predictions, and providing suitable recommendations. Methodologies like KNN, neural networks, K-means clustering, decision trees, and many other advanced algorithms are applied to user data to help predict suitable careers. Besides helping users with their career choice, these systems provide numerous opportunities which are very useful while making this hard decision. They help candidates recognize where they specifically lack sufficient skills so that they can improve those skills. They are also capable of offering an e-learning platform that takes into account the user's knowledge gaps. Furthermore, users can be provided with details on a particular job, such as the abilities required to excel in that industry.
Keywords: career guidance system, machine learning, career prediction, predictive decision, data mining, technical and non-technical skills
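As an illustration of the classification step such systems share, the sketch below fits a KNN model mapping skill scores to a suggested career track. All features, labels and scores are invented for the example.

```python
# KNN career-track recommender on toy skill/personality scores.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# rows: [technical, communication, creativity, analytical] scores in [0, 10]
X = np.array([[9, 4, 3, 8], [3, 9, 6, 4], [8, 5, 9, 6],
              [2, 8, 8, 3], [9, 3, 2, 9], [4, 9, 5, 5]])
y = np.array(["engineering", "management", "design",
              "design", "engineering", "management"])

model = KNeighborsClassifier(n_neighbors=3).fit(X, y)
new_profile = [[7, 6, 8, 5]]
print(model.predict(new_profile))        # suggested track for the new user
print(model.predict_proba(new_profile))  # share of the 3 nearest neighbours per class
```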
493 Thermodynamic Analysis of Surface Seawater under Ocean Warming: An Integrated Approach Combining Experimental Measurements, Theoretical Modeling, Machine Learning Techniques, and Molecular Dynamics Simulation for Climate Change Assessment
Authors: Nishaben Desai Dholakiya, Anirban Roy, Ranjan Dey
Abstract:
Understanding ocean thermodynamics has become increasingly critical as Earth's oceans serve as the primary planetary heat regulator, absorbing approximately 93% of the excess heat energy from anthropogenic greenhouse gas emissions. This investigation presents a comprehensive analysis of Arabian Sea surface seawater thermodynamics, focusing specifically on heat capacity (Cp) and the thermal expansion coefficient (α), parameters fundamental to global heat distribution patterns. Through high-precision experimental measurements of ultrasonic velocity and density across varying temperature (293.15-318.15 K) and salinity (0.5-35 ppt) conditions, we characterize critical thermophysical parameters including specific heat capacity, thermal expansion, and isobaric and isothermal compressibility coefficients in natural seawater systems. The study employs advanced machine learning frameworks - Random Forest, Gradient Boosting, Stacked Ensemble Machine Learning (SEML), and AdaBoost - with SEML achieving exceptional accuracy (R² > 0.99) in heat capacity predictions. The findings reveal significant temperature-dependent molecular restructuring: enhanced thermal energy disrupts hydrogen-bonded networks and ion-water interactions, manifesting as decreased heat capacity with increasing temperature (negative ∂Cp/∂T). This mechanism creates a positive feedback loop in which reduced heat absorption capacity potentially accelerates oceanic warming cycles. These quantitative insights into seawater thermodynamics provide crucial parametric inputs for climate models and evidence-based environmental policy formulation, particularly addressing the critical knowledge gap in the thermal expansion behavior of seawater under varying temperature-salinity conditions.
Keywords: climate change, Arabian Sea, thermodynamics, machine learning
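The stacked-ensemble step can be sketched with scikit-learn's StackingRegressor as below; the temperature, salinity and Cp values here are synthetic stand-ins for the measured data, and the base-learner choices simply mirror the algorithms named in the abstract.

```python
# Stacked ensemble (RF + gradient boosting + AdaBoost) for heat capacity.
import numpy as np
from sklearn.ensemble import (AdaBoostRegressor, GradientBoostingRegressor,
                              RandomForestRegressor, StackingRegressor)
from sklearn.linear_model import RidgeCV
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
T = rng.uniform(293.15, 318.15, 500)   # temperature, K
S = rng.uniform(0.5, 35.0, 500)        # salinity, ppt
cp = 4186 - 4.5 * S - 0.4 * (T - 293.15) + rng.normal(0, 2, 500)  # toy Cp, J/(kg K)

X_tr, X_te, y_tr, y_te = train_test_split(np.column_stack([T, S]), cp, random_state=0)
stack = StackingRegressor(
    estimators=[("rf", RandomForestRegressor(random_state=0)),
                ("gb", GradientBoostingRegressor(random_state=0)),
                ("ada", AdaBoostRegressor(random_state=0))],
    final_estimator=RidgeCV(),
)
stack.fit(X_tr, y_tr)
print(f"R^2 on held-out data: {r2_score(y_te, stack.predict(X_te)):.3f}")
```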
492 Mass Flux and Forensic Assessment: Informed Remediation Decision Making at One of Canada's Most Polluted Sites
Authors: Tony R. Walker, N. Devin MacAskill, Andrew Thalhiemer
Abstract:
Sydney Harbour, Nova Scotia, Canada has long been subject to effluent and atmospheric inputs of contaminants, including thousands of tons of PAHs from a large coking and steel plant which operated in Sydney for nearly a century. The contaminants comprised coal tar residues which were discharged from coking ovens into a small tidal tributary, which became known as the Sydney Tar Ponds (STPs), and subsequently discharged into Sydney Harbour. An Environmental Impact Statement concluded that mobilization of contaminated sediments posed unacceptable ecological risks; therefore, immobilizing contaminants in the STPs using solidification and stabilization was identified as a primary source control remediation option to mitigate against continued transport of contaminated sediments from the STPs into Sydney Harbour. Recent developments in contaminant mass flux techniques focus on understanding "mobile" vs. "immobile" contaminants at remediation sites. Forensic source evaluations are also increasingly used for understanding the origins of PAH contaminants in soils or sediments. Flux- and forensic-source-informed remediation decision-making uses this information to develop remediation end point goals aimed at reducing off-site exposure and managing potential ecological risk. This study included reviews of previous flux studies, calculation of current mass flux estimates and a forensic assessment using PAH fingerprint techniques during remediation of one of Canada's most polluted sites at the STPs. Historically, the STPs were thought to be the major source of PAH contamination in Sydney Harbour, with estimated discharges of nearly 800 kg/year of PAHs. However, during three years of remediation monitoring, only 17-97 kg/year of PAHs were discharged from the STPs, which was corroborated by an independent PAH flux study during the first year of remediation which estimated 119 kg/year. The estimated mass efflux of PAHs from the STPs during remediation was in stark contrast to the ~2000 kg loading thought necessary to cause a short-term increase in harbour sediment PAH concentrations. These mass flux estimates during remediation were also three to eight times lower than the PAHs discharged from the STPs a decade prior to remediation, when, at the same time, government studies demonstrated ongoing reductions in PAH concentrations in harbour sediments. The flux results were also corroborated by forensic source evaluations using PAH fingerprint techniques, which found a common source of PAHs for urban soils and marine and aquatic sediments in and around Sydney. Coal combustion (from historical coking) and coal dust transshipment (from current coal transshipment facilities) are likely the principal sources of PAHs in these media, rather than migration of PAH-laden sediments from the STPs during a large-scale remediation project.
Keywords: contaminated sediment, mass flux, forensic source evaluations, remediation
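The loading figures above come from mass-flux estimates of the form flux = concentration × discharge, integrated over the year. A minimal sketch with made-up monitoring values rather than the study's data:

```python
# Annual contaminant load (kg/yr) from paired concentration/discharge samples,
# each pair taken to represent an equal fraction of the year.
SECONDS_PER_YEAR = 365 * 24 * 3600

def annual_load_kg(conc_ug_per_L, discharge_m3_per_s):
    # 1 ug/L * 1 m3/s = 1 mg/s, so sum the products, integrate, convert mg -> kg
    dt = SECONDS_PER_YEAR / len(conc_ug_per_L)
    return sum(c * q for c, q in zip(conc_ug_per_L, discharge_m3_per_s)) * dt / 1e6

# e.g. quarterly samples: total PAH concentration (ug/L) and discharge (m3/s)
print(f"{annual_load_kg([21.0, 14.0, 9.0, 18.0], [0.05, 0.12, 0.03, 0.08]):.0f} kg/yr")  # ~35
```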
490 Investigating University Students' Attitudes towards Infertility in Terms of Socio-Demographic Variables
Authors: Yelda Kağnıcı, Seçil Seymenler, Bahar Baran, Erol Esen, Barışcan Öztürk, Ender Siyez, Diğdem M. Siyez
Abstract:
Infertility is the inability to reproduce after twelve months or longer of unprotected sexual intercourse. Although infertility is not a life-threatening illness, it is considered a serious problem for both the individual and society. At this point, examining attitudes towards infertility is critical, since negative attitudes towards infertility may postpone individuals' help-seeking behaviors. The aim of this study is to investigate university students' attitudes towards infertility in terms of socio-demographic variables (gender, age, having received sexual health education, existence of an infertile individual in the social network, plans about having children, and health behaviors). The sample of the study was 9693 university students attending 21 universities in Turkey. Of the 9693 students, 51.6% (n = 5002) were female and 48.4% (n = 4691) were male. The data were collected using the Attitudes toward Infertility Scale developed by the researchers and a Personal Information Form. In the data analysis, frequencies were first calculated; then, in order to test whether there were significant differences in university students' attitudes towards infertility in terms of socio-demographic variables, one-way ANOVA was conducted. According to the results, female students, students who had sexual health education, who have sexual relationship experience, who have an infertile individual in their social networks, who have plans to have children, who have high caffeine usage and who use alcohol regularly have more positive attitudes towards infertility. On the other hand, attitudes towards infertility did not show significant differences in terms of age and cigarette usage. When the results of the study were evaluated overall, university students' attitudes towards infertility were negative, although the attitudes of students with high caffeine and alcohol usage were more positive; it may be that these students are aware that their social habits are risky. Female students' more positive attitudes might be explained by their gender role. The results point out that, in order to reduce university students' negative attitudes towards infertility, preventive programs need to be developed in universities.
Keywords: infertility, attitudes, sex, university students
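The group comparison here is a standard one-way ANOVA; a minimal sketch with fabricated attitude scores (not the study's data):

```python
# One-way ANOVA comparing attitude scores across two groups (illustrative data).
from scipy import stats

had_sex_ed = [72, 68, 75, 80, 71, 77, 74, 69]   # attitude scores, group 1
no_sex_ed  = [61, 64, 58, 66, 63, 60, 67, 62]   # attitude scores, group 2
f_stat, p_value = stats.f_oneway(had_sex_ed, no_sex_ed)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")    # p < .05 -> group means differ
```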
489 Chemical Aging of High-Density Polyethylene (HDPE-100) in Interaction with Aggressive Environment
Authors: Berkas Khaoula, Chaoui Kamel
Abstract:
Polyethylene (PE) pipes are one of the best options for water and gas transmission networks. The main reason for such a choice is their high-quality performance under service conditions over long periods of time. PE pipes are installed in contact with different soils having various chemical compositions with confirmed aggressiveness. As a result, PE pipe surfaces undergo unwanted oxidation reactions. Usually, the polymer mixture is designed to include additives, such as antioxidants, to inhibit or reduce the degradation effects. Some other additives are intended to increase resistance to the environmental stress cracking (ESC) phenomenon associated with polymers. This situation occurs in contact with aggressive external environments following contamination of the soil, the groundwater and the transported fluids. In addition, bacterial activity and other physical or chemical factors, such as temperature and humidity, can play an enhancing role. These conditions contribute to modifying the PE pipe structure and degrade its properties during exposure. In this work, the effects of distilled water, sodium hypochlorite (bleach), diluted sulfuric acid (H₂SO₄) and a toluene-methanol (TM) mixture are studied when extruded PE samples are exposed to these environments for given periods. The chosen exposure periods are 7, 14 and 28 days at room temperature in sealed glass containers. Post-exposure observations and ISO impact tests are presented as a function of time and chemical medium. The effects of water are observed to be limited, consistent with its use in real applications, whereas the changes in the TM and acidic media are very significant. For the TM medium, the polymer toughness increased drastically (from 15.95 kJ/m² up to 32.01 kJ/m²), while sulfuric acid showed a steady increase over time. This behavior may correspond to a hardening phenomenon of PE, increasing its brittleness and its susceptibility to structural degradation because of localized oxidation reactions and changes in crystallinity.
Keywords: polyethylene, toluene-methanol mixture, environmental stress cracking, degradation, impact resistance
488 Slöjd International: Translating and Tracking Nordic Curricula for Holistic Health, 1890s-1920s
Authors: Sasha Mullally
Abstract:
This paper investigates the transnational circulation of European Nordic ideas about, and programs for, manual education and training over the decades spanning the late 19th and early 20th centuries. Based on the unexamined but voluminous English-language correspondence of Otto Salomon, an internationally famous education reformer who popularized a form of manual training called "slöjd" (anglicized as "sloyd"), this paper examines its circulation and translation across global cultures. Salomon, a multilingual promoter of a new standardized program for manual training, based his curricula on traditional handcrafts, particularly Swedish woodworking. He and his followers claimed that the integration of manual training and craft work provided primary and secondary educators with an opportunity to cultivate the mental, but also the physical, and tangentially, the spiritual, health of children. While historians have examined the networks who came together in person to train at his slöjd school for educators in western Sweden, no one has mapped the international community he cultivated over decades of letter writing. Additionally, while the circulation of his ideas in Britain and Germany, as well as the northeastern United States, has been placed in a broader narrative of "western" education reform in the Progressive or late Victorian era, no one has examined the correspondence for evidence of the program's wider international appeal beyond Europe and North America. This paper fills this gap by examining the breadth of his reach through active correspondence with educators in Asia (Japan), South America (Brazil), and Africa (South Africa and Zimbabwe). As such, this research presents an opportunity to map the international communities of education reformers active at the turn of the last century, compare and contrast their understandings and interpretations of "holistic" education, and reveal the ways manual formation was understood to be foundational to the healthy development of children.
Keywords: history of education, history of medicine and psychiatry, child health, child formation, internationalism
487 Exploring the Motivations That Drive Paper Use in Clinical Practice Post-Electronic Health Record Adoption: A Nursing Perspective
Authors: Sinead Impey, Gaye Stephens, Lucy Hederman, Declan O'Sullivan
Abstract:
Continued paper use in the clinical area post-Electronic Health Record (EHR) adoption is regularly linked to hardware and software usability challenges. Although paper is used as a workaround to circumvent challenges, including limited availability of a computer, this perspective does not consider the important role paper, such as the nurses' handover sheet, plays in practice. The purpose of this study is to confirm the hypothesis that paper use post-EHR adoption continues because paper provides both a cognitive tool (that assists with workflow) and a compensation tool (to circumvent usability challenges). Distinguishing the different motivations for continued paper use could assist future evaluations of electronic record systems. Methods: Qualitative data were collected from three clinical care environments (ICU, general ward and specialist day-care) which had used an electronic record for at least 12 months. Data were collected through semi-structured interviews with 22 nurses. Data were transcribed, themes were extracted using an inductive bottom-up coding approach, and a thematic index was constructed. Findings: All nurses interviewed continued to use paper post-EHR adoption. While two distinct motivations for paper use post-EHR adoption were confirmed by the data - paper as a cognitive tool and paper as a compensation tool - a further finding was that there was an overlap between the two uses. That is, paper used as a compensation tool could also be adapted to function as a cognitive aid due to its nature (easy to access and annotate), or vice versa. Rather than presenting paper persistence as having two distinct motivations, it is more useful to describe it as lying on a continuum, with compensation tool and cognitive tool at either pole. Paper as a cognitive tool referred to pages such as the nurses' handover sheet. These did not form part of the patient's record, although information could be transcribed from one to the other. The findings suggest that although the patient record was digitised, handover sheets did not fall within this remit. These personal pages continued to be useful post-EHR adoption for capturing personal notes or patient information and so continued to be incorporated into the nurses' work. Comparatively, paper used as a compensation tool, such as pre-printed care plans which were stored in the patient's record, appears to have been instigated in reaction to usability challenges. In these instances, it is expected that paper use could reduce or cease when the underlying problem is addressed. There is a danger that, as paper affords nurses a temporary information platform that is mobile and easy to access and annotate, its use could become embedded in clinical practice. Conclusion: Paper presents a utility to nursing, either as a cognitive or compensation tool or a combination of both. By fully understanding its utility and nuances, organisations can avoid evaluating all incidences of paper use (post-EHR adoption) as arising from usability challenges. Instead, suitable remedies for paper persistence can be targeted at the root cause.
Keywords: cognitive tool, compensation tool, electronic record, handover sheet, nurse, paper persistence
486 Computational Linguistic Implications of Gender Bias: Machines Reflect Misogyny in Society
Authors: Irene Yi
Abstract:
Machine learning, natural language processing, and neural network models of language are becoming more and more prevalent in the fields of technology and linguistics today. Training data for machines are, at best, large corpora of human literature and, at worst, a reflection of the ugliness in society. Computational linguistics is a growing field dealing with such issues of data collection for technological development. Machines have been trained on millions of human books, only to find that over the course of human history, derogatory and sexist adjectives are used significantly more frequently when describing females in history and literature than when describing males. This is extremely problematic, both as training data and as the outcome of natural language processing. As machines start to handle more responsibilities, it is crucial to ensure that they do not carry forward historical sexist and misogynistic notions. This paper gathers data and algorithms from neural network models of language dealing with syntax, semantics, sociolinguistics, and text classification. Computational analysis of such linguistic data is used to find patterns of misogyny. The results are significant in showing the existing intentional and unintentional misogynistic notions used to train machines, as well as in developing better technologies that take into account the semantics and syntax of text so as to be more mindful and reflect gender equality. Further, this paper deals with the idea of non-binary gender pronouns and how machines can process these pronouns correctly, given their semantic and syntactic context. This paper also delves into the implications of gendered grammar and its effect, cross-linguistically, on natural language processing. Languages such as French or Spanish not only have rigid gendered grammar rules but also historically patriarchal societies. The progression of society goes hand in hand not only with its language, but with how machines process those natural languages. These ideas are all vital to the development of natural language models in technology, and they must be taken into account immediately.
Keywords: computational analysis, gendered grammar, misogynistic language, neural networks
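A toy version of the kind of pattern analysis described, counting how often derogatory adjectives co-occur with female vs. male referents, might look like the following. The corpus and word lists are tiny stand-ins; real studies use large corpora and trained embeddings.

```python
# Count sentence-level co-occurrence of derogatory adjectives with gendered pronouns.
import re
from collections import Counter

corpus = ("She was shrill and hysterical. He was brilliant and logical. "
          "She was bossy. He was assertive. She seemed hysterical again.")
female, male = {"she", "her"}, {"he", "him", "his"}
derogatory = {"shrill", "hysterical", "bossy"}

counts = Counter()
for sentence in corpus.lower().split("."):
    words = set(re.findall(r"\w+", sentence))
    hits = len(words & derogatory)
    if hits and words & female:
        counts["female"] += hits
    if hits and words & male:
        counts["male"] += hits
print(counts)  # Counter({'female': 4}) for this toy corpus
```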
485 Modeling Biomass and Biodiversity across Environmental and Management Gradients in Temperate Grasslands with Deep Learning and Sentinel-1 and -2
Authors: Javier Muro, Anja Linstadter, Florian Manner, Lisa Schwarz, Stephan Wollauer, Paul Magdon, Gohar Ghazaryan, Olena Dubovyk
Abstract:
Monitoring the trade-off between biomass production and biodiversity in grasslands is critical to evaluate the effects of management practices across environmental gradients. New generations of remote sensing sensors and machine learning approaches can model grasslands' characteristics with varying accuracies. However, studies often fail to cover a sufficiently broad range of environmental conditions, and evidence suggests that prediction models might be case-specific. In this study, biomass production and biodiversity indices (species richness and Fisher's α) are modeled in 150 grassland plots at three sites across Germany. These sites represent a North-South gradient and are characterized by distinct soil types, topographic properties, climatic conditions, and management intensities. The predictors used are derived from Sentinel-1 and -2 and a set of topoedaphic variables. The transferability of the models is tested by training and validating at different sites. The performance of feed-forward deep neural networks (DNN) is compared to a random forest algorithm. While biomass predictions across gradients and sites were acceptable (r² = 0.5), predictions of biodiversity indices were poor (r² = 0.14). DNN showed higher generalization capacity than random forest when predicting biomass across gradients and sites (relative root mean squared error of 0.5 for DNN vs. 0.85 for random forest). DNN also achieved high performance when using the Sentinel-2 surface reflectance data rather than different combinations of spectral indices, Sentinel-1 data, or topoedaphic variables, reducing dimensionality. This study demonstrates the necessity of training biomass and biodiversity models using a broad range of environmental conditions and of ensuring spatial independence, so as to obtain realistic and transferable models in which plot-level information can be upscaled to the landscape scale.
Keywords: ecosystem services, grassland management, machine learning, remote sensing
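The DNN-vs-random-forest comparison scored by relative RMSE can be sketched as follows, with synthetic features standing in for the Sentinel-2 reflectance bands and scikit-learn's MLPRegressor standing in for the feed-forward network.

```python
# Feed-forward network vs. random forest for biomass, scored by relative RMSE.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (600, 10))  # 10 synthetic surface-reflectance bands
biomass = 300 * X[:, 3] - 150 * X[:, 7] + 400 + rng.normal(0, 30, 600)  # toy g/m^2

X_tr, X_te, y_tr, y_te = train_test_split(X, biomass, random_state=1)
models = {
    "feed-forward DNN": MLPRegressor(hidden_layer_sizes=(64, 32),
                                     max_iter=2000, random_state=1),
    "random forest": RandomForestRegressor(random_state=1),
}
for name, m in models.items():
    rmse = mean_squared_error(y_te, m.fit(X_tr, y_tr).predict(X_te)) ** 0.5
    print(f"{name}: relative RMSE = {rmse / y_te.mean():.3f}")
```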
484 The Digital Microscopy in Organ Transplantation: Ergonomics of the Tele-Pathological Evaluation of Renal, Liver, and Pancreatic Grafts
Authors: Constantinos S. Mammas, Andreas Lazaris, Adamantia S. Mamma-Graham, Georgia Kostopanagiotou, Chryssa Lemonidou, John Mantas, Eustratios Patsouris
Abstract:
The process of building a better safety culture, methods of error analysis, and preventive measures starts with an understanding of the effects when human factors engineering is applied to remote microscopic diagnosis in surgery, and especially in organ transplantation for the evaluation of grafts. A high percentage of solid organs arrive at recipient hospitals in the UK and are considered injured or improper for transplantation. Digital microscopy adds information on a microscopic level about the grafts (G) in organ transplantation (OT) and may lead to a change in their management. Such a method would reduce the possibility that a diseased graft arrives at the recipient hospital for implantation. Aim: The aim of this study is to analyze the ergonomics of digital microscopy (DM), based on virtual slides, on telemedicine systems (TS) for tele-pathological evaluation (TPE) of grafts (G) in organ transplantation (OT). Material and Methods: By experimental simulation, the ergonomics of DM for microscopic TPE of renal graft (RG), liver graft (LG) and pancreatic graft (PG) tissues is analyzed. In effect, this corresponded to the ergonomics of digital microscopy for TPE in OT, applying a virtual slide (VS) system for graft tissue image capture for remote diagnosis of possible microscopic inflammatory and/or neoplastic lesions. The experimentation included the development of an experimental telemedicine system (Exp.-TS), similar to an OT TS, for simulating the integrated VS-based microscopic TPE of RG, LG and PG. Simulation of DM on TS-based TPE was performed by two specialists on a total of 238 human renal graft (RG), 172 liver graft (LG) and 108 pancreatic graft (PG) digital microscopic tissue images, for inflammatory and neoplastic lesions, on the electronic spaces of the four TS used. Results: Statistical analysis of the specialists' answers about the ability to accurately diagnose the diseased RG, LG and PG tissues on the electronic space among the four TS (A, B, C, D) showed that DM on TS for TPE in OT performs best on the electronic space of a desktop, followed by the electronic space of the applied Exp.-TS. Tablet and mobile-phone electronic spaces seem significantly risky for the application of DM in OT (p<.001). Conclusion: To make the largest reduction in errors and adverse events referring to the quality of the grafts, it will take the application of human factors engineering to procurement, design, audit, and awareness-raising activities. Consequently, it will take an investment in new training, people, and other changes to management activities for DM in OT. The simulated VS-based TPE with DM of RG, LG and PG tissues after retrieval seems feasible and reliable, and depends on the size of the electronic space of the applied TS, for remote prevention of diseased grafts from being retrieved and/or sent to the recipient hospital and for post-grafting and pre-transplant planning.
Keywords: digital microscopy, organ transplantation, tele-pathology, virtual slides
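The comparison of diagnostic performance across the four electronic spaces can be illustrated with a chi-square test of independence on correct/incorrect counts; the counts below are fabricated for the example, not the study's results.

```python
# Chi-square test of correct-diagnosis rates across four electronic spaces.
import numpy as np
from scipy.stats import chi2_contingency

#                  correct  incorrect
table = np.array([[230,  8],    # desktop
                  [221, 17],    # Exp.-TS
                  [180, 58],    # tablet
                  [165, 73]])   # mobile phone
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2e}")  # small p -> rates differ
```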
483 A Next Generation Multi-Scale Modeling Theatre for in silico Oncology
Authors: Safee Chaudhary, Mahnoor Naseer Gondal, Hira Anees Awan, Abdul Rehman, Ammar Arif, Risham Hussain, Huma Khawar, Zainab Arshad, Muhammad Faizyab Ali Chaudhary, Waleed Ahmed, Muhammad Umer Sultan, Bibi Amina, Salaar Khan, Muhammad Moaz Ahmad, Osama Shiraz Shah, Hadia Hameed, Muhammad Farooq Ahmad Butt, Muhammad Ahmad, Sameer Ahmed, Fayyaz Ahmed, Omer Ishaq, Waqar Nabi, Wim Vanderbauwhede, Bilal Wajid, Huma Shehwana, Muhammad Tariq, Amir Faisal
Abstract:
Cancer is a manifestation of multifactorial deregulations in biomolecular pathways. These deregulations arise from the complex multi-scale interplay between cellular and extracellular factors. Such multifactorial aberrations at the gene, protein, and extracellular scales need to be investigated systematically towards decoding the underlying mechanisms and orchestrating therapeutic interventions for patient treatment. In this work, we propose 'TISON', a next-generation web-based multiscale modeling platform for clinical systems oncology. TISON's unique modeling abstraction allows a seamless coupling of information from biomolecular networks, cell decision circuits, extracellular environments, and tissue geometries. The platform can undertake multiscale sensitivity analysis towards in silico biomarker identification and drug evaluation on cellular phenotypes in user-defined tissue geometries. Furthermore, the integration of cancer expression databases such as The Cancer Genome Atlas (TCGA) and the Human Protein Atlas (HPA) facilitates the development of personalized therapeutics. TISON is the next evolution of multiscale cancer modeling and simulation platforms and provides a 'zero-code' model development, simulation, and analysis environment for application in clinical settings.
Keywords: systems oncology, cancer systems biology, cancer therapeutics, personalized therapeutics, cancer modelling
482 Comparison of Two Home Sleep Monitors Designed for Self-Use
Authors: Emily Wood, James K. Westphal, Itamar Lerner
Abstract:
Background: Polysomnography (PSG) recordings are regularly used in research and clinical settings to study sleep and sleep-related disorders. Typical PSG studies are conducted in professional laboratories and performed by qualified researchers. However, the number of sleep labs worldwide is disproportionate to the increasing number of individuals with sleep disorders like sleep apnea and insomnia. Consequently, there is a growing need for cheaper yet reliable means to measure sleep, preferably autonomously by subjects in their own homes. Over the last decade, a variety of devices for self-monitoring of sleep became available on the market; however, very few have been directly validated against PSG to demonstrate their ability to perform reliable automatic sleep scoring. Two popular mobile EEG-based systems that have published validation results, the DREEM 3 headband and the Z-Machine, have never been directly compared to each other by independent researchers. The current study aimed to compare the performance of the DREEM 3 and the Z-Machine to help investigators and clinicians decide which of these devices may be more suitable for their studies. Methods: 26 participants completed the study for credit or monetary compensation. Exclusion criteria included any history of sleep, neurological or psychiatric disorders. Eligible participants arrived at the lab in the afternoon and received the two devices. They then spent two consecutive nights monitoring their sleep at home. Participants were also asked to keep a sleep log, indicating the time they fell asleep, the time they woke up, and the number of awakenings occurring during the night. Data from both devices, including detailed sleep hypnograms in 30-second epochs (differentiating Wake, combined N1/N2, N3, and Rapid Eye Movement sleep), were extracted and aligned upon retrieval. For the analysis, the number of awakenings each night was defined as four or more consecutive wake epochs between sleep onset and termination. Total sleep time (TST) and the number of awakenings were compared to subjects' sleep logs to measure consistency with the subjective reports. In addition, the sleep scores from each device were compared epoch by epoch to calculate the agreement between the two devices using Cohen's kappa. All analyses were performed using Matlab 2021b and SPSS 27. Results/Conclusion: Subjects consistently reported longer times spent asleep than the times reported by each device (M = 448 minutes for the sleep logs compared to M = 406 and M = 345 minutes for the DREEM and Z-Machine, respectively; both ps < 0.05). Linear correlations between the sleep log and each device were higher for the DREEM than for the Z-Machine for both TST and the number of awakenings, and, likewise, the mean absolute bias between the sleep logs and each device was higher for the Z-Machine for both TST (p < 0.001) and awakenings (p < 0.04). There was some indication that these effects were stronger for the second night compared to the first night. Epoch-by-epoch comparisons showed that the main discrepancies between the devices were in detecting N2 and REM sleep, while N3 had high agreement. Overall, the DREEM headband seems superior for reliably scoring sleep at home.
Keywords: DREEM, EEG, sleep monitoring, Z-Machine
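Two steps of this analysis lend themselves to a short sketch: epoch-by-epoch agreement via Cohen's kappa, and the awakening count (four or more consecutive wake epochs between sleep onset and termination). The hypnograms below are short fabricated examples, not recorded data.

```python
# Epoch-by-epoch agreement and awakening count for two aligned hypnograms.
from sklearn.metrics import cohen_kappa_score

W, N12, N3, REM = "W", "N1/N2", "N3", "REM"
dreem    = [W, W, N12, N12, N3, N3, N3,  REM, REM, W, W, W, W, N12, REM, W]
zmachine = [W, W, N12, N3,  N3, N3, N12, REM, N12, W, W, W, W, N12, REM, W]
print(f"kappa = {cohen_kappa_score(dreem, zmachine):.2f}")

def count_awakenings(hypnogram, min_epochs=4):
    """Awakenings = runs of >= min_epochs wake epochs between sleep onset/offset."""
    sleep_idx = [i for i, s in enumerate(hypnogram) if s != W]
    onset, offset = sleep_idx[0], sleep_idx[-1]
    count = run = 0
    for stage in hypnogram[onset:offset + 1]:
        run = run + 1 if stage == W else 0
        if run == min_epochs:
            count += 1
    return count

print(f"awakenings (DREEM trace): {count_awakenings(dreem)}")  # 1 here
```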
481 Interactivity as a Predictor of Intent to Revisit Sports Apps
Authors: Young Ik Suh, Tywan G. Martin
Abstract:
Sports apps on a smartphone provide up-to-date information and fast, convenient access to live games. The market for sports apps has emerged as the second-fastest-growing app category worldwide. Further, many sports fans use their smartphones to check the schedule of sporting events, players' positions and bios, videos, and highlights. In recent years, a growing number of scholars and practitioners alike have emphasized the importance of interactivity with sports apps, hypothesizing that interactivity plays a significant role in enticing sports app users and that it is a key component in measuring the success of sports apps. Interactivity in sports apps focuses primarily on two functions: (1) two-way communication and (2) active user control, neither of which was applicable through traditional mass media and communication technologies. Therefore, the purpose of this study is to examine whether the interactivity function in sports apps leads to positive outcomes such as intent to revisit. More specifically, this study investigates how three major functions of interactivity (i.e., two-way communication, active user control, and real-time information) influence the attitude of sports app users and their intent to revisit the sports apps. The following hypothesis is proposed: interactivity functions will be positively associated with both attitudes toward sports apps and intent to revisit sports apps. The survey questionnaire includes four parts: (1) an interactivity scale, (2) an attitude scale, (3) a behavioral intention scale, and (4) demographic questions. Data are to be collected from ESPN app users. To examine the relationships among the observed and latent variables and determine the reliability and validity of the constructs, confirmatory factor analysis (CFA) is conducted. Structural equation modeling (SEM) is utilized to test the hypothesized relationships among the constructs. Additionally, this study compares the proposed interactivity model with a rival model to identify the role of attitude as a mediating factor. The findings of the current sports apps study provide several theoretical and practical contributions and implications by extending the research and literature associated with the important role of interactivity functions in sports apps and sports media consumption behavior. Specifically, this study may improve the theoretical understanding of whether the interactivity functions influence user attitudes and intent to revisit sports apps. Additionally, this study identifies which dimensions of interactivity are most important to sports app users. From a practitioner's perspective, the findings of this study provide significant implications. More entrepreneurs and investors in the sport industry need to recognize that high-resolution photos, live streams, and up-to-date stats are in the sports app, right at sports fans' fingertips. The results imply that sport practitioners may need to develop sports mobile apps that offer greater interactivity functions to attract sport fans.
Keywords: interactivity, two-way communication, active user control, real-time information, sports apps, attitude, intent to revisit
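One way to specify the CFA/SEM step programmatically is sketched below using the semopy package on simulated indicator data; the study does not name its software, so both the tool and all variable names are assumptions made purely for illustration.

```python
# SEM sketch: interactivity -> attitude -> intent to revisit, on simulated data.
import numpy as np
import pandas as pd
from semopy import Model

rng = np.random.default_rng(2)
n = 300
interactivity = rng.normal(size=n)                     # latent driver (simulated)
attitude = 0.7 * interactivity + rng.normal(scale=0.5, size=n)
df = pd.DataFrame({
    "twoway":   interactivity + rng.normal(scale=0.4, size=n),
    "control":  interactivity + rng.normal(scale=0.4, size=n),
    "realtime": interactivity + rng.normal(scale=0.4, size=n),
    "att1":     attitude + rng.normal(scale=0.4, size=n),
    "att2":     attitude + rng.normal(scale=0.4, size=n),
    "revisit":  0.8 * attitude + rng.normal(scale=0.5, size=n),
})

desc = """
Interactivity =~ twoway + control + realtime
Attitude =~ att1 + att2
Attitude ~ Interactivity
revisit ~ Attitude
"""
model = Model(desc)
model.fit(df)
print(model.inspect())  # loadings, path coefficients, variances
```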
480 Quality by Design in the Optimization of a Fast HPLC Method for Quantification of Hydroxychloroquine Sulfate
Authors: Pedro J. Rolim-Neto, Leslie R. M. Ferraz, Fabiana L. A. Santos, Pablo A. Ferreira, Ricardo T. L. Maia-Jr., Magaly A. M. Lyra, Danilo A F. Fonte, Salvana P. M. Costa, Amanda C. Q. M. Vieira, Larissa A. Rolim
Abstract:
Initially developed as an antimalarial agent, hydroxychloroquine (HCQ) sulfate is often used as a slow-acting antirheumatic drug in the treatment of disorders of connective tissue. The United States Pharmacopeia (USP) 37 provides a reversed-phase HPLC method for quantification of HCQ. However, this method was not reproducible, producing asymmetric peaks over a long analysis time. The asymmetry of the peak may cause an incorrect calculation of the concentration of the sample. Furthermore, the analysis time is unacceptable, especially regarding the routine of a pharmaceutical industry. The aim of this study was to develop a fast, easy and efficient method for quantification of HCQ sulfate by High Performance Liquid Chromatography (HPLC) based on the Quality by Design (QbD) methodology. The method was optimized in terms of peak symmetry using the surface area graphic as the Design of Experiments (DoE) and the tailing factor (TF) as an indicator for the Design Space (DS). The reference method used was that described in USP 37 for the quantification of the drug. For the optimized method, a 3³ factorial design was proposed, based on QbD concepts. The DS was created with the TF (in a range between 0.98 and 1.2) in order to establish the ideal analytical conditions. Changes were made to the composition of the USP mobile phase (USP-MP): USP-MP : methanol (90:10 v/v, 80:20 v/v and 70:30 v/v), to the flow rate (0.8, 1.0 and 1.2 mL min⁻¹) and to the oven temperature (30, 35, and 40ºC). The USP method allowed the quantification of the drug only over a long run time (40-50 minutes). In addition, the method uses a high flow rate (1.5 mL min⁻¹), which increases the consumption of expensive HPLC-grade solvents. The main problem observed was the TF value (1.8), which would be acceptable if the drug were not a racemic mixture, since the co-elution of the isomers can make peak integration unreliable. Therefore, optimization was pursued in order to reduce the analysis time, aiming at better peak resolution and TF. For the optimized method, analysis of the surface-response plot confirmed the ideal analytical conditions: 45°C, 0.8 mL min⁻¹ and 80:20 USP-MP : methanol. The optimized HPLC method enabled the quantification of HCQ sulfate with a high-resolution peak, showing a TF value of 1.17. This promotes good co-elution of the isomers of HCQ, ensuring an accurate quantification of the raw material as a racemic mixture. This method also proved to be approximately 18 times faster than the reference method, using a lower flow rate and reducing even further the consumption of solvents and, consequently, the analysis cost. Thus, an analytical method for the quantification of HCQ sulfate was optimized using the QbD methodology. This method proved to be faster and more efficient than the USP method regarding the retention time and, especially, the peak resolution. The higher resolution of the chromatogram peaks supports the implementation of the method for quantification of the drug as a racemic mixture, not requiring the separation of isomers.
Keywords: analytical method, hydroxychloroquine sulfate, quality by design, surface area graphic
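The 3³ factorial screen and the tailing-factor design space (TF between 0.98 and 1.2) can be expressed as below; measure_tf is a toy response surface standing in for actual injections and chromatography-data-system readouts, so the surviving runs are illustrative only.

```python
# Enumerate the 3^3 factorial design and keep runs inside the TF design space.
from itertools import product

mobile_phase = ["90:10", "80:20", "70:30"]  # USP-MP : methanol (v/v)
flow_mL_min = [0.8, 1.0, 1.2]
oven_temp_C = [30, 35, 40]

def measure_tf(mp, flow, temp):
    """Toy tailing-factor response surface; a real run injects the standard."""
    methanol_frac = int(mp.split(":")[1]) / 100
    return 1.8 - 2.0 * methanol_frac - 0.3 * (flow - 0.8) - 0.004 * (temp - 30)

design_space = [
    (mp, flow, temp, round(measure_tf(mp, flow, temp), 2))
    for mp, flow, temp in product(mobile_phase, flow_mL_min, oven_temp_C)
    if 0.98 <= measure_tf(mp, flow, temp) <= 1.2
]
print(f"{len(design_space)} of 27 runs inside the design space")
for run in design_space:
    print(run)
```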
479 Family Health in Families with Children with Autism
Authors: Teresa Isabel Lozano Pérez, Sandra Soca Lozano
Abstract:
In Cuba, childcare is one of the programs prioritized by the Ministry of Public Health, and the birth of a child is a desired and rewarding event for the family, which prepares for the reception of a healthy child. When this does not happen, and in the first months after the child's birth developmental deviations begin to appear that indicate the presence of a disorder, the event becomes a potentially negative life event and generates disruptions in family health. A quantitative, descriptive, and cross-sectional research methodology was used to describe the impact on family health of a diagnosis of autism in a sample of 25 families of children diagnosed with infantile autism at the University Pediatric Hospital Juan Manuel Marquez, Havana, Cuba, in the period between January 2014 and May 2015. The sample was non-probabilistic and intentional, based on the selected inclusion criteria. As instruments, we used a survey to identify the structure of the family, a life events inventory, and an instrument to assess the relative impact, the adaptive resources of the family and the perceived social support (IRFA), characterizing the diagnosis of autism as a life event. The main results indicated that the majority of the families studied were nuclear, small or medium-sized, and in the formation stage. All households surveyed identified the diagnosis of autism in a child as an event of great importance and negative significance for the family, with a high impact in most of the families studied on the four areas of family health and a compounding effect on family health involvement. All the families studied lack sufficient adaptive resources to face this situation, while perceiving that they received social support frequently, mainly informational and emotional. We conclude that the diagnosis of autism in one of the family members is experienced as a highly significant and unfavorable life event, with a compounding impact on family health, especially in the 'health' and 'socio-psychological' areas. Among the social support networks, health institutions, partners and friends stand out. We recommend developing intervention strategies for the families of these children to support them in the process of adapting to the diagnosis.
Keywords: family, family health, infantile autism, life event
478 A Relational Approach to Adverb Use in Interactions
Authors: Guillaume P. Fernandez
Abstract:
Individual language use is a matter of choice in particular interactions. This paper proposes a conceptual and theoretical framework, with methodological considerations, for how language produced in dyadic relations should be considered and situated in the larger social configuration the interaction is embedded within. An integrated and comprehensive view is taken: social interactions are expected to be ruled by a normative context, defined by the chain of interdependences that structures the personal network. In this approach, the determinants of discursive practices are not only constrained by the moment of production and isolated from broader influences. Instead, the position the individual and the dyad occupy in the personal network influences discursive practices in a twofold manner: on the one hand, the network limits the access to the linguistic resources available within it, and, on the other hand, the structure of the network influences the agency of the individual, through the social control inherent to particular network characteristics. Concretely, we investigate how and to what extent ego is consistent from one interaction to another in his or her use of adverbs. To do so, social network analysis (SNA) methods are mobilized. Participants (N = 130) are college students recruited in the French-speaking part of Switzerland. The personal network of significant others of each individual is created using name generators and edge interpreters, with a focus on social support and conflict. For the linguistic part, respondents were asked to record themselves with five of their close relations. From the recordings, we computed an average similarity score based on the adverbs used across interactions. In terms of analyses, two are envisaged. First, OLS regressions including network-level measures, such as density and reciprocity, and individual-level measures, such as centralities, are performed to understand the tenets of linguistic similarity from one interaction to another. The second analysis considers each social tie as nested within ego networks; multilevel models are performed to investigate how the different types of ties may influence the likelihood of using adverbs, controlling for structural properties of the personal network. Preliminary results suggest that the more cohesive the network, the less likely the individual is to change his or her manner of speaking, and that social support increases the use of adverbs in interactions. While promising results emerge, further research should consider a longitudinal approach to enable claims of causality.
Keywords: personal network, adverbs, interactions, social influence
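The average cross-interaction similarity score can be illustrated with a simple set-overlap measure over the adverbs ego uses with each of the five alters; Jaccard similarity here is one plausible choice, not necessarily the paper's exact measure, and the adverb sets are toy data.

```python
# Average pairwise Jaccard similarity of ego's adverb sets across interactions.
from itertools import combinations

def jaccard(a, b):
    return len(a & b) / len(a | b)

adverb_sets = [                  # adverbs used with each of five alters (toy data)
    {"really", "always", "maybe"},
    {"really", "often", "maybe"},
    {"always", "never", "really"},
    {"maybe", "really", "so"},
    {"often", "really", "always"},
]
pairs = list(combinations(adverb_sets, 2))
score = sum(jaccard(a, b) for a, b in pairs) / len(pairs)
print(f"average cross-interaction similarity = {score:.2f}")
```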
477 Determination of Influence Lines for Train Crossings on a Tied Arch Bridge to Optimize the Construction of the Hangers
Authors: Martin Mensinger, Marjolaine Pfaffinger, Matthias Haslbeck
Abstract:
The maintenance and expansion of the railway network represent a central task for transport planning in the future. In addition to the ultimate limit states, the aspects of resource conservation and sustainability are increasingly necessary to include in the basic engineering. Therefore, as part of the AiF research project 'Integrated assessment of steel and composite railway bridges in accordance with sustainability criteria', the entire lifecycle of engineering structures is involved in planning and evaluation, offering a way to optimize the design of steel bridges. In order to reduce the life cycle costs and increase the profitability of steel structures, it is particularly necessary to consider the fatigue demands on the hanger connections. To allow accurate analysis, a number of simulations were conducted as part of the research project on a finite element model of a reference bridge, which gives an indication of the internal forces in the individual structural components of a tied arch bridge, depending on the loading imposed by various types of trains. The calculations were carried out on a detailed FE model, which allows an extraordinarily accurate modeling of the stiffness of all parts of the construction, as it is made up of surface elements. The results point, on the one hand, to a large impact of the detailing on fatigue-related stress changes and, on the other, they capture construction-specific characteristics over the course of the loading history. Comparative calculations with varied axle-load distributions also provide information about the sensitivity of the stress resultants to the imposed loading and axle distribution. The calculated diagrams help to achieve an optimized hanger connection design with improved durability, which helps to reduce the maintenance costs of rail networks, and give practical application notes for the detailing.
Keywords: fatigue, influence line, life cycle, tied arch bridge
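Once an influence line has been extracted from the FE model, the hanger-force history for a crossing is obtained by superposing the axle loads on it. The sketch below uses illustrative ordinates and axle data, not results from the project.

```python
# Superpose axle loads on a hanger-force influence line to get a force history.
import numpy as np

# influence line for one hanger (force per unit axle load), sampled every 0.5 m
x = np.arange(0.0, 40.5, 0.5)
eta = np.interp(x, [0.0, 10.0, 20.0, 30.0, 40.0], [0.0, 0.15, 0.60, 0.10, 0.0])

axle_offsets_m = np.array([0.0, 2.6, 16.4, 19.0])  # axle spacing of a short unit
axle_loads_kN = np.array([225.0, 225.0, 225.0, 225.0])

def hanger_force(first_axle_x):
    """Hanger force (kN) with the leading axle at first_axle_x metres."""
    positions = first_axle_x - axle_offsets_m
    return float(np.sum(axle_loads_kN * np.interp(positions, x, eta,
                                                  left=0.0, right=0.0)))

history = [hanger_force(s) for s in np.arange(-5.0, 65.0, 0.5)]
print(f"peak hanger force during the crossing: {max(history):.0f} kN")
```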
476 Predictive Analysis of the Stock Price Market Trends with Deep Learning
Authors: Suraj Mehrotra
Abstract:
The stock market is a volatile, bustling marketplace that is a cornerstone of economics. It determines whether companies are successful or in decline. A thorough understanding of it is important - many companies have whole divisions dedicated to the analysis of both their own stock and that of rival companies. Linking the world of finance and artificial intelligence (AI), especially in the stock market, has been a relatively recent development. Predicting how stocks will perform, considering all external factors and previous data, has always been a human task. With the help of AI, however, machine learning models can help us make more complete predictions about financial trends. Looking at the stock market specifically, predicting the open, closing, high, and low prices for the next day is very hard to do. Machine learning makes this task a lot easier. A model that builds upon itself and takes in external factors as weights can predict trends far into the future. When used effectively, new doors can be opened in the business and finance world, and companies can make better and more complete decisions. This paper explores the various techniques used in the prediction of stock prices, from traditional statistical methods to deep learning and neural-network-based approaches, among other methods. It provides a detailed analysis of the techniques and also explores the challenges in predictive analysis. Regarding the accuracy on the testing set, four different models - linear regression, a neural network, a decision tree, and naïve Bayes - were evaluated on different stocks: Apple, Google, Tesla, Amazon, United Healthcare, Exxon Mobil, J.P. Morgan & Chase, and Johnson & Johnson. The naïve Bayes and linear regression models worked best. On the testing set, the naïve Bayes model had the highest accuracy along with the linear regression model, followed by the neural network model and then the decision tree model. The training set showed similar results, except that the decision tree model was perfect, with complete accuracy in its predictions, which makes sense: the decision tree model likely overfitted the training set, hurting its performance on the testing set.
Keywords: machine learning, testing set, artificial intelligence, stock analysis
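A toy version of the four-model comparison is sketched below, framed as next-day up/down prediction on a simulated price series, with linear regression thresholded at 0.5 so it yields an accuracy comparable with the classifiers. Real experiments would use actual market data and richer features.

```python
# Compare naive Bayes, decision tree, neural net and (thresholded) linear
# regression on next-day direction, using simulated returns.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
returns = rng.normal(0, 0.01, 1200)          # simulated daily returns
window = 5
X = np.lib.stride_tricks.sliding_window_view(returns[:-1], window)
y = (returns[window:] > 0).astype(int)       # 1 = next day closes up
X_tr, X_te, y_tr, y_te = X[:1000], X[1000:], y[:1000], y[1000:]

classifiers = {
    "naive Bayes": GaussianNB(),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "neural network": MLPClassifier(max_iter=1000, random_state=0),
}
for name, clf in classifiers.items():
    acc = (clf.fit(X_tr, y_tr).predict(X_te) == y_te).mean()
    print(f"{name:>17}: accuracy = {acc:.2f}")

lin = LinearRegression().fit(X_tr, y_tr)
acc = ((lin.predict(X_te) > 0.5).astype(int) == y_te).mean()
print(f"{'linear regression':>17}: accuracy = {acc:.2f}")
```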
Procedia PDF Downloads 95
475 Streamflow Modeling Using the PyTOPKAPI Model with Remotely Sensed Rainfall Data: A Case Study of Gilgel Ghibe Catchment, Ethiopia
Authors: Zeinu Ahmed Rabba, Derek D Stretch
Abstract:
Remote sensing contributes valuable information to streamflow estimation. Usually, streamflow is directly measured at ground-based hydrological monitoring stations. However, in many developing countries like Ethiopia, ground-based hydrological monitoring networks are either sparse or nonexistent, which limits water resources management and hampers early flood-warning systems. In such cases, satellite remote sensing is an alternative means to acquire this information. This paper discusses the application of remotely sensed rainfall data to streamflow modeling in the Gilgel Ghibe basin in Ethiopia. Ten years (2001-2010) of data from two satellite-based precipitation products (SBPPs), TRMM and WaterBase, were used. These products were combined with the PyTOPKAPI hydrological model to generate daily streamflows. The results were compared with streamflow observations at the Gilgel Ghibe Nr. Assendabo gauging station using four statistical measures (Bias, R², NS, and RMSE). The statistical analysis indicates that the bias-adjusted SBPPs agree better with gauged rainfall than the unadjusted ones. The SBPPs without bias adjustment tend to overestimate (high Bias and high RMSE) extreme precipitation events and the corresponding simulated streamflow, particularly during the wet months (June-September), and to underestimate streamflow over a few dry months (January and February). This shows that bias adjustment can be important for improving the performance of SBPPs in streamflow forecasting. We further conclude that the general streamflow patterns were well captured at daily time scales when using bias-adjusted SBPPs. However, the overall results demonstrate that streamflow simulated from gauged rainfall is superior to that obtained from remotely sensed rainfall products, including bias-adjusted ones.
Keywords: Ethiopia, PyTOPKAPI model, remote sensing, streamflow, Tropical Rainfall Measuring Mission (TRMM), WaterBase
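A minimal sketch of the four goodness-of-fit measures named above (Bias, R², Nash-Sutcliffe efficiency, RMSE), with illustrative placeholder flow values rather than the study's data:

```python
# Sketch: standard hydrological goodness-of-fit metrics between
# observed and simulated streamflow series.
import numpy as np

def gof_metrics(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    bias = np.mean(sim - obs)                       # mean error
    rmse = np.sqrt(np.mean((sim - obs) ** 2))
    # Nash-Sutcliffe efficiency: 1 is perfect, below 0 is worse than the mean.
    nse = 1 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)
    r2 = np.corrcoef(obs, sim)[0, 1] ** 2           # squared Pearson correlation
    return {"Bias": bias, "RMSE": rmse, "NS": nse, "R2": r2}

obs = np.array([12.0, 30.5, 55.2, 80.1, 60.3, 25.4])  # observed flow, m3/s (placeholder)
sim = np.array([10.8, 33.0, 60.0, 74.5, 58.0, 28.1])  # simulated flow (placeholder)
print(gof_metrics(obs, sim))
```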
Procedia PDF Downloads 285
474 Transit Facility Planning in Fringe Areas of Kolkata Metropolitan Region
Authors: Soumen Mitra, Aparna Saha
Abstract:
The perceived link between the city and the countryside is evolving rapidly, shifting away from the assumptions of mainstream paradigms to new conceptual networks in which rural-urban links are being redefined. In this conceptual field, the fringe interface is still considered a transitional zone between city and countryside and is defined as a diffuse area rather than a discrete territory. In developing countries, fringe areas are said to have both rural and urban characteristics but are devoid of basic municipal facilities. When the urban core envelops the fringe, the character of the fringe changes, but services do not keep pace, which in turn results in uneven, rapid, and haphazard development. One of the major services present in fringe areas is the inter-linkage provided by transit corridors. Planning for an appropriate and sustainable future for fringe areas requires a clear focus on these corridors and their transit facilities, for better accessibility and mobility. Introducing a transit facility plan enhances the various facilities and also increases their proximity for user groups. The study focuses on the western fringe region of the Kolkata metropolis, a major industrial hub and housing area where agricultural land is being converted to non-agricultural use. The study emphasizes providing transit facilities, both physical (stops, sheds, terminals, etc.) and operational (ticketing systems, route prioritization, integration of transit modes, etc.), to serve the region and to guide its growth systematically. Hence, the scope of this work rests on the prevailing conditions in fringe areas and attempts an effective transit facility plan. The strategies and recommendations cover road widening, service coverage, feeder route prioritization, bus stop facilities, pedestrian facilities, etc., which in turn enhance the region's growth pattern. Thus, this kind of transit facility planning acts as a catalytic agent that avoids unplanned future growth and steers the region toward integrated development.
Keywords: feeder route, fringe, municipal planning, transit facility
Procedia PDF Downloads 177
473 Hardy Type Inequalities of Two-Dimensional on Time Scales via Steklov Operator
Authors: Wedad Albalawi
Abstract:
Mathematical inequalities lie at the core of mathematical analysis and are used in almost all branches of mathematics as well as in various areas of science and engineering. The inequalities of Hardy, Littlewood, and Pólya formed the first significant compilation of the subject; their work presented fundamental ideas, results, and techniques, and it has had much influence on research in various branches of analysis. Since 1934, various inequalities have been produced and studied in the literature. Furthermore, some inequalities have been formulated for specific operators: in 1989, weighted Hardy inequalities were obtained for integration operators, and weighted estimates were then obtained for Steklov operators, which were used in the solution of the Cauchy problem for the wave equation. These results were improved upon in 2011 to include the boundedness of integral operators from the weighted Sobolev space to the weighted Lebesgue space. Some inequalities have been demonstrated and improved using the Hardy-Steklov operator. Recently, many integral inequalities have been improved via differential operators. The Hardy inequality has been one of the tools used to study the integrability of solutions of differential equations. Dynamic inequalities of Hardy and Copson type have since been extended and improved by various integral operators. These inequalities are interesting to apply in different fields of mathematics (function spaces, partial differential equations, mathematical modeling). Some results have appeared involving Copson and Hardy inequalities on time scales, yielding new special versions of them. A time scale is defined as an arbitrary nonempty closed subset of the real numbers. Inequalities in the time-scale setting have received a lot of attention and have become a major field in both pure and applied mathematics. There are many applications of dynamic equations on time scales to quantum mechanics, electrical engineering, neural networks, heat transfer, combinatorics, and population dynamics. This study focuses on double integrals to obtain new time-scale inequalities of Copson type driven by the Steklov operator. They will be applied in the solution of the Cauchy problem for the wave equation. The proofs can be carried out by introducing restrictions on the operator in several cases, and the obtained inequalities rely on concepts from the time-scale calculus such as Fubini's theorem and Hölder's inequality.
Keywords: time scales, inequality of Hardy, inequality of Copson, Steklov operator
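For orientation, the classical one-dimensional integral Hardy inequality from which this line of work descends can be stated as follows; this is the textbook version, not the two-dimensional time-scale variant derived in the paper:

```latex
% Classical Hardy integral inequality, for p > 1 and measurable f >= 0:
\int_0^{\infty} \left( \frac{1}{x} \int_0^{x} f(t)\,dt \right)^{p} dx
\;\le\; \left( \frac{p}{p-1} \right)^{p} \int_0^{\infty} f(x)^{p}\,dx
% The constant (p/(p-1))^p is known to be sharp.
```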
Procedia PDF Downloads 76
472 Analysis of Friction Stir Welding Process for Joining Aluminum Alloy
Authors: A. M. Khourshid, I. Sabry
Abstract:
Friction stir welding (FSW), a solid-state joining technique, is widely used for joining Al alloys for aerospace, marine, automotive, and many other applications of commercial importance. FSW was carried out using a vertical milling machine on Al 5083 alloy pipe. The pipe sections are relatively small in diameter (5 mm) and relatively thin-walled (2 mm). In this study, 5083 aluminum alloy pipes were welded as similar-alloy joints using the FSW process in order to investigate mechanical and microstructural properties, at a rotation speed of 1400 rpm and weld speeds of 10, 40, and 70 mm/min. In order to investigate the effect of welding speed on mechanical properties, metallographic and mechanical tests were carried out on the welded areas, including Vickers hardness profiles and tensile tests of the joints. As a metallurgical feasibility study of friction stir welding for joining Al 6061 aluminum alloy, welding was performed on pipes of different thicknesses (2, 3, and 4 mm); five rotational speeds (485, 710, 910, 1120, and 1400 rpm) and three traverse speeds (4, 8, and 10 mm/min) were applied. This work focuses on two methods, artificial neural networks (using the Pythia software) and response surface methodology (RSM), to predict the tensile strength, percentage elongation, and hardness of friction-stir-welded 6061 aluminum alloy. An artificial neural network (ANN) model was developed for the analysis of the friction stir welding parameters of the 6061 pipe. The tensile strength, percentage elongation, and hardness of the weld joints were predicted as functions of tool rotation speed, material thickness, and travel speed, and a comparison was made between measured and predicted data. A response surface methodology (RSM) model was also developed, and the values it obtained for tensile strength, percentage elongation, and hardness were compared with measured values. The effect of the FSW process parameters on the mechanical properties of the 6061 aluminum alloy has been analyzed in detail.
Keywords: friction stir welding (FSW), Al alloys, mechanical properties, microstructure
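A minimal sketch of such an ANN regression (not the authors' Pythia model), mapping (rotation speed, thickness, traverse speed) to tensile strength; the training rows below are fabricated placeholders for illustration only:

```python
# Sketch: small MLP regression of weld tensile strength on FSW parameters.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Columns: rotation speed (rpm), wall thickness (mm), traverse speed (mm/min).
X = np.array([
    [485, 2, 4], [710, 2, 8], [910, 3, 4], [1120, 3, 10],
    [1400, 4, 8], [485, 4, 10], [910, 2, 10], [1400, 3, 4],
])
# Ultimate tensile strength in MPa -- placeholder values, not measured data.
y = np.array([172.0, 185.0, 190.0, 176.0, 181.0, 160.0, 168.0, 195.0])

model = make_pipeline(
    StandardScaler(),                        # scale inputs for the MLP
    MLPRegressor(hidden_layer_sizes=(8, 8), max_iter=5000, random_state=0),
)
model.fit(X, y)
print(model.predict([[1000, 3, 8]]))         # predicted strength for new settings
```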
Procedia PDF Downloads 462
471 Determining Components of Deflection of the Vertical in Owerri West Local Government, Imo State Nigeria Using Least Square Method
Authors: Chukwu Fidelis Ndubuisi, Madufor Michael Ozims, Asogwa Vivian Ndidiamaka, Egenamba Juliet Ngozi, Okonkwo Stephen C., Kamah Chukwudi David
Abstract:
The deflection of the vertical is a quantity used in reducing geodetic measurements related to geoidal networks to the ellipsoid, and it is essential in geoid modeling. Computing the deflection-of-the-vertical components of a point in a given area is necessary for evaluating the standard errors along the north-south and east-west directions. A combined approach to determining the deflection-of-the-vertical components provides improved results but is labor-intensive without an appropriate method. The least squares method makes use of redundant observations to model a set of problems that obey certain geometric conditions. This research work aims to compute the deflection-of-the-vertical components for the Owerri West local government area of Imo State using the geometric method as the field technique. In this method, a combination of static-mode Global Positioning System (GPS) and precise leveling observations was used: the geodetic coordinates of points established within the study area were determined by GPS observation, and the orthometric heights by precise leveling. By least squares, using a MATLAB program, the estimated deflection-of-the-vertical components for the common station were -0.0286 and -0.0001 arc seconds for the north-south and east-west components, respectively. The associated standard errors of the processed vectors of the network were also computed: 5.5911e-005 and 1.4965e-004 arc seconds for the north-south and east-west components, respectively. Including the derived deflection-of-the-vertical components in the ellipsoidal model will therefore yield higher observational accuracy, since a purely ellipsoidal model is untenable for high-quality work owing to its large observational error. It is important to include the determined deflection-of-the-vertical components for Owerri West Local Government in Imo State, Nigeria.
Keywords: deflection of vertical, ellipsoidal height, least square, orthometric height
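A minimal sketch of the parametric least squares adjustment described above, assuming a simplified observation model v = Ax - l with x = (xi, eta), the north-south and east-west components; the design matrix and observation vector are placeholders, not the study's data:

```python
# Sketch: least squares estimation of two deflection components and
# their standard errors from redundant observation equations.
import numpy as np

A = np.array([[1.0,  0.0],
              [0.0,  1.0],
              [0.7,  0.7],
              [0.5, -0.5]])                       # one row per observation equation (assumed)
l = np.array([-0.029, 0.000, -0.021, -0.014])     # reduced observations, arcsec (assumed)

x, *_ = np.linalg.lstsq(A, l, rcond=None)         # estimated (xi, eta)
v = A @ x - l                                     # residuals
dof = A.shape[0] - A.shape[1]                     # redundancy
s0sq = (v @ v) / dof                              # a posteriori reference variance
Qx = np.linalg.inv(A.T @ A)                       # cofactor matrix of the parameters
std = np.sqrt(s0sq * np.diag(Qx))                 # standard errors of xi, eta
print("components (arcsec):", x, "std errors:", std)
```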
Procedia PDF Downloads 209
470 A Literature Study on IoT Based Monitoring System for Smart Agriculture
Authors: Sonu Rana, Jyoti Verma, A. K. Gautam
Abstract:
In most developing countries like India, the majority of the population relies heavily on agriculture for their livelihood. Agricultural yield depends on uncertain weather conditions such as the monsoon, on soil fertility, on the availability of irrigation facilities and fertilizers, and on support from the government. The yield is low relative to the effort invested, owing to inefficient agricultural facilities and obsolete farming practices on the one hand and a lack of knowledge on the other, and ultimately the agricultural community does not prosper. It is therefore essential for farmers to improve their harvest yield by acquiring relevant data, such as soil condition, temperature, humidity, availability of irrigation facilities, and availability of manure, and by adopting smart farming techniques using modern agricultural equipment. Nowadays, applying IoT technology to agriculture is the best way to improve yield with less effort and lower economic cost. The primary focus of this work is IoT technology in the agricultural field. Using IoT, all these parameters can be monitored by mounting sensors at different places in an agricultural field; the sensors collect real-time data, which can be transmitted by a transmitting device such as an antenna. To improve the system, IoT can interact with other useful systems such as wireless sensor networks. As IoT reaches into every domain, the radio frequency spectrum is getting crowded due to the increasing demand for wireless applications, and the Federal Communications Commission is therefore reallocating spectrum for various wireless applications. An antenna is also an integral part of newly designed IoT devices. The main aim is to propose a new antenna structure for IoT agricultural applications that is compatible with this new unlicensed frequency band. This paper presents work related to these technologies in the agricultural field, along with their challenges and benefits. It can help in understanding the role of data, IoT, and communication advancements in the agriculture sector, and it can help motivate and educate unskilled farmers to grasp the insights provided by big data analytics using smart technology.
Keywords: smart agriculture, IoT, agriculture technology, data analytics, smart technology
Procedia PDF Downloads 116
469 The Data Quality Model for the IoT based Real-time Water Quality Monitoring Sensors
Authors: Rabbia Idrees, Ananda Maiti, Saurabh Garg, Muhammad Bilal Amin
Abstract:
IoT devices are the basic building blocks of an IoT network; they generate enormous volumes of real-time, high-speed data that help organizations and companies make intelligent decisions. Integrating this enormous amount of data from multiple sources and transferring it to the appropriate client is fundamental to IoT development, and handling this huge number of devices along with the huge volume of data is very challenging. IoT devices are battery-powered and resource-constrained; to provide energy-efficient communication, they sleep and wake periodically or aperiodically depending on the traffic load, in order to reduce energy consumption. Sometimes these devices become disconnected due to battery depletion. If a node is not available in the network, the IoT network delivers incomplete, missing, or inaccurate data. Moreover, many IoT applications, such as vehicle tracking and patient tracking, require the IoT devices to be mobile; due to this mobility, if the distance of a device from the sink node becomes greater than allowed, the connection is lost, and other devices join the network to replace the broken-down or departed ones. This makes IoT devices dynamic in nature, which brings uncertainty and unreliability into the IoT network and hence produces poor-quality data. Because of this dynamic nature, the actual cause of abnormal data is often unknown, and if data are of poor quality, decisions are likely to be unsound. It is therefore highly important to process data and estimate their quality before using them in IoT applications. In the past, many researchers tried to estimate data quality and provided several machine learning (ML), stochastic, and statistical methods to analyze stored data in the data-processing layer, without focusing on the challenges and issues arising from the dynamic nature of IoT devices and its impact on data quality. This research comprehensively reviews the impact of the dynamic nature of IoT devices on data quality and presents a data quality model that can deal with this challenge and produce good-quality data. The model is developed for sensors monitoring water quality, using DBSCAN clustering together with weather sensors. An extensive study was conducted on the relationship between the data of weather sensors and of sensors monitoring the water quality of lakes and beaches, and a detailed theoretical analysis is presented describing the correlation between the independent data streams of the two sets of sensors. With the help of this analysis and DBSCAN, a data quality model was prepared. The model encompasses five dimensions of data quality: it detects and removes outliers, assesses completeness, identifies patterns of missing values, and checks the accuracy of the data with the help of cluster positions. Finally, statistical analysis is performed on the clusters formed by DBSCAN, and consistency is evaluated through the coefficient of variation (CoV).
Keywords: clustering, data quality, DBSCAN, Internet of Things (IoT)
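A minimal sketch of the DBSCAN-plus-CoV step described above, assuming paired water-quality and weather readings; the data are synthetic placeholders, not the study's sensor streams:

```python
# Sketch: DBSCAN flags outliers as noise (label -1), then the coefficient
# of variation is computed per cluster as a consistency check.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
turbidity = rng.normal(5.0, 0.4, 200)                       # water-quality stream (synthetic)
air_temp = 20 + 2.5 * turbidity + rng.normal(0, 1, 200)     # correlated weather stream
X = np.column_stack([turbidity, air_temp])
X[::50] += 8                                                # inject a few artificial outliers

labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(
    StandardScaler().fit_transform(X))
print("outliers flagged:", int(np.sum(labels == -1)))

for k in sorted(set(labels) - {-1}):
    cluster = X[labels == k]
    cov = cluster.std(axis=0) / cluster.mean(axis=0)        # coefficient of variation
    print(f"cluster {k}: n={len(cluster)}, CoV={cov.round(3)}")
```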
Procedia PDF Downloads 139
468 Corpus-Based Neural Machine Translation: Empirical Study Multilingual Corpus for Machine Translation of Opaque Idioms - Cloud AutoML Platform
Authors: Khadija Refouh
Abstract:
Culture-bound expressions have been a bottleneck for natural language processing (NLP) and comprehension, especially in the case of machine translation (MT). In the last decade, the field of machine translation has greatly advanced. Neural machine translation (NMT), which applies artificial intelligence (AI) and deep neural networks to language processing, has recently achieved considerable improvements in translation quality, outperforming previous traditional translation systems in many language pairs. Despite this development, serious challenges remain for NMT when translating culture-bound expressions, especially for low-resource language pairs such as Arabic-English and Arabic-French, which is not the case with well-established language pairs such as English-French. Machine translation of opaque idioms from English into French is likely to be more accurate than translating them from English into Arabic. For example, the Google Translate application translated the sentence “What a bad weather! It rains cats and dogs.” into Arabic as “يا له من طقس سيء! تمطر القطط والكلاب”, an inaccurate literal translation. The translation of the same sentence into French was “Quel mauvais temps! Il pleut des cordes.”, where Google Translate used the corresponding accurate French idiom. This paper aims to perform NMT experiments toward better translation of opaque idioms using a high-quality, clean multilingual corpus. This corpus will be collected analytically from human-generated idiom translations. AutoML Translation, a Google neural machine translation platform, is used to build a custom translation model to improve the translation of opaque idioms. The automatic evaluation of the custom model will be compared to Google NMT using the Bilingual Evaluation Understudy score (BLEU), an algorithm for evaluating the quality of text machine-translated from one natural language to another. Human evaluation is integrated to test the reliability of the BLEU score. The researcher will examine syntactic, lexical, and semantic features using Halliday's functional theory.
Keywords: multilingual corpora, natural language processing (NLP), neural machine translation (NMT), opaque idioms
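A minimal sketch of the planned BLEU comparison, assuming the sacrebleu package; the sentences are illustrative placeholders, not the study's corpus, and the "baseline" literal output is fabricated for contrast:

```python
# Sketch: corpus-level BLEU for two candidate systems against human references.
import sacrebleu

refs = [["Quel mauvais temps ! Il pleut des cordes."]]  # one reference stream (placeholder)
baseline_out = ["Quel mauvais temps ! Il court des chats et des chiens."]  # literal (fabricated)
custom_out = ["Quel mauvais temps ! Il pleut des cordes."]                 # idiomatic

for name, hyp in [("baseline NMT", baseline_out), ("custom AutoML model", custom_out)]:
    bleu = sacrebleu.corpus_bleu(hyp, refs)
    print(f"{name}: BLEU = {bleu.score:.1f}")
```

On a real test set each list would hold one line per segment, and the human-evaluation step would then be used to check whether the BLEU ranking agrees with judged adequacy.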
Procedia PDF Downloads 149