Search results for: spectrum estimation
947 Estimation of PM10 Concentration Using Ground Measurements and Landsat 8 OLI Satellite Image
Authors: Salah Abdul Hameed Saleh, Ghada Hasan
Abstract:
The aim of this work is to produce an empirical model for the determination of particulate matter (PM10) concentration in the atmosphere using the visible bands of a Landsat 8 OLI satellite image over Kirkuk city, Iraq. The suggested algorithm is based on the aerosol optical reflectance model. The reflectance model is a function of the optical properties of the atmosphere, which can be related to particulate concentrations. The PM10 concentration measurements were collected using a Particle Mass Profiler and Counter in a Single Handheld Unit (Aerocet 531) meter simultaneously with the Landsat 8 OLI satellite image date. The PM10 measurement locations were recorded with a handheld global positioning system (GPS). The reflectance values obtained for the visible bands (Coastal aerosol, Blue, Green and Red) of the Landsat 8 OLI image were correlated with the in-situ measured PM10. The feasibility of the proposed algorithms was investigated based on the correlation coefficient (R) and root-mean-square error (RMSE) compared with the PM10 ground measurement data. The proposed multispectral model was selected based on the highest correlation coefficient (R) and the lowest root-mean-square error (RMSE) with respect to the PM10 ground data. The outcomes of this research showed that the visible bands of Landsat 8 OLI were capable of estimating PM10 concentration with an acceptable level of accuracy.
Keywords: air pollution, PM10 concentration, Landsat 8 OLI image, reflectance, multispectral algorithms, Kirkuk area
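A minimal sketch of the kind of band-reflectance regression the abstract describes is given below: PM10 is regressed on the visible-band reflectances and the fit is scored with R and RMSE. The station data, coefficients and band values are invented for illustration; this is not the authors' calibrated model.

```python
# Minimal sketch (not the authors' exact model): fit PM10 against Landsat 8 OLI
# visible-band reflectances and score the fit with R and RMSE, as described above.
import numpy as np

def fit_pm10_model(reflectance, pm10):
    """Ordinary least squares: PM10 = a0 + a1*B1 + a2*B2 + a3*B3 + a4*B4."""
    X = np.column_stack([np.ones(len(pm10)), reflectance])   # add intercept column
    coeffs, *_ = np.linalg.lstsq(X, pm10, rcond=None)
    predicted = X @ coeffs
    r = np.corrcoef(predicted, pm10)[0, 1]                    # correlation coefficient R
    rmse = np.sqrt(np.mean((predicted - pm10) ** 2))          # root-mean-square error
    return coeffs, r, rmse

# Hypothetical ground stations: columns = coastal aerosol, blue, green, red reflectance
reflectance = np.array([[0.11, 0.10, 0.09, 0.08],
                        [0.14, 0.13, 0.11, 0.10],
                        [0.16, 0.15, 0.13, 0.11],
                        [0.18, 0.16, 0.14, 0.12],
                        [0.20, 0.18, 0.16, 0.14],
                        [0.21, 0.19, 0.17, 0.15]])
pm10 = np.array([45.0, 62.0, 75.0, 88.0, 101.0, 110.0])      # Aerocet 531 readings (µg/m³)

coeffs, r, rmse = fit_pm10_model(reflectance, pm10)
print(coeffs, r, rmse)
```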
Procedia PDF Downloads 442
946 Transmission of Values among Polish Young Adults and Their Parents: Pseudo Dyad Analysis and Gender Differences
Authors: Karolina Pietras, Joanna Fryt, Aleksandra Gronostaj, Tomasz Smolen
Abstract:
Young women and men differ from their parents in preferred values. These differences enable their adaptability to a new socio-cultural context and help with fulfilling developmental tasks specific to young adulthood. At the same time, core values, with special importance to family members, are transmitted within families. Intergenerational similarities in values may thus be both an effect of value transmission within a family and a consequence of sharing the same socio-cultural context. These processes are difficult to separate. In our study, we assessed similarities and differences in values within four intergenerational family dyads (mothers-daughters, fathers-daughters, mothers-sons, fathers-sons). Sixty Polish young adults (30 women and 30 men aged 19-25), along with their parents (a total of 180 participants), completed the Schwartz Portrait Value Questionnaire (PVQ-21). To determine which values may be transmitted within families, we used a correlation analysis and a pseudo dyad analysis, which allows for the estimation of a baseline likeness between all tested subjects and consequently makes it possible to determine whether similarities between actual family members are greater than chance. We also assessed whether different strategies of measuring similarity between family members render different results, and checked whether resemblances in family dyads are influenced by the child's and parent's gender. Reported similarities were interpreted in light of the evolutionary and the value salience perspectives.
Keywords: intergenerational differences in values, gender differences, pseudo dyad analysis, transmission of values
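The pseudo dyad logic described above can be illustrated with a small sketch: the similarity of real parent-child pairs is compared with a chance baseline obtained by repeatedly re-pairing parents with randomly chosen children. The data, sample sizes and similarity measure below are assumptions made purely for illustration, not the authors' procedure or data.

```python
# Illustrative sketch of the pseudo dyad idea (not the authors' code): real-dyad
# similarity versus the baseline similarity of randomly re-paired "pseudo" dyads.
import numpy as np

rng = np.random.default_rng(0)
n_dyads, n_values = 60, 10                       # hypothetical: 60 dyads, 10 value scores
parents = rng.normal(size=(n_dyads, n_values))
children = 0.5 * parents + rng.normal(scale=0.8, size=(n_dyads, n_values))  # toy data

def mean_dyad_similarity(a, b):
    """Mean Pearson correlation between matched rows of a and b."""
    return np.mean([np.corrcoef(a[i], b[i])[0, 1] for i in range(len(a))])

real_similarity = mean_dyad_similarity(parents, children)

# Pseudo dyads: shuffle the child rows many times to build the chance baseline.
baseline = np.array([mean_dyad_similarity(parents, children[rng.permutation(n_dyads)])
                     for _ in range(1000)])
p_value = np.mean(baseline >= real_similarity)    # is the real similarity above chance?
print(real_similarity, baseline.mean(), p_value)
```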
Procedia PDF Downloads 502
945 Accuracy of Small Field of View CBCT in Determining Endodontic Working Length
Authors: N. L. S. Ahmad, Y. L. Thong, P. Nambiar
Abstract:
An in vitro study was carried out to evaluate the feasibility of small field of view (FOV) cone beam computed tomography (CBCT) in determining endodontic working length. The objectives were to determine the accuracy of CBCT in measuring the estimated preoperative working lengths (EPWL), endodontic working lengths (EWL) and file lengths. Access cavities were prepared in 27 molars. For each root canal, the baseline electronic working length was determined using an EAL (Raypex 5). The teeth were then divided into overextended, non-modified and underextended groups and the lengths were adjusted accordingly. Imaging and measurements were made using the respective software of the RVG (Kodak RVG 6100) and CBCT (Kodak 9000 3D) units. Root apices were then shaved and the apical constrictions viewed under magnification to measure the control working lengths. The paired t-test showed a statistically significant difference between the CBCT EPWL and the control length, but the difference was too small to be clinically significant. From the Bland-Altman analysis, the CBCT method had the widest range of 95% limits of agreement, reflecting its greater potential for error. In measuring file lengths, RVG had wider 95% limits of agreement than CBCT. Conclusions: (1) The clinically insignificant underestimation of the preoperative working length using small FOV CBCT showed that it is acceptable for use in the estimation of preoperative working length. (2) Small FOV CBCT may be used in working length determination, but it is not as accurate as the currently practiced method of using the EAL. (3) It is also more accurate than RVG in measuring file lengths.
Keywords: accuracy, CBCT, endodontics, measurement
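The Bland-Altman 95% limits of agreement mentioned above reduce to a short calculation: the mean difference between the two methods (the bias) plus or minus 1.96 standard deviations of the differences. A small sketch follows; the length values are invented, not the study's measurements.

```python
# Hedged sketch of Bland-Altman limits of agreement for comparing two measurement
# methods (e.g., CBCT working length vs. the control length); the data are invented.
import numpy as np

cbct = np.array([19.8, 20.5, 21.1, 18.9, 22.0, 20.2])      # hypothetical lengths (mm)
control = np.array([20.0, 20.4, 21.4, 19.2, 21.8, 20.5])

diff = cbct - control
bias = diff.mean()                                          # mean difference (bias)
sd = diff.std(ddof=1)
lower, upper = bias - 1.96 * sd, bias + 1.96 * sd           # 95% limits of agreement
print(f"bias = {bias:.2f} mm, limits of agreement = [{lower:.2f}, {upper:.2f}] mm")
```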
Procedia PDF Downloads 308
944 Interactive IoT-Blockchain System for Big Data Processing
Authors: Abdallah Al-ZoubI, Mamoun Dmour
Abstract:
The spectrum of IoT devices is becoming widely diversified, entering almost all possible fields and finding applications in industry, health, finance, logistics and education, to name a few. The number of active IoT endpoint sensors and devices exceeded the 12 billion mark in 2021 and is expected to reach 27 billion in 2025, with over $34 billion in total market value. This sheer rise in the number and use of IoT devices brings with it considerable concerns regarding data storage, analysis, manipulation and protection. IoT blockchain-based systems have recently been proposed as a decentralized solution for large-scale data storage and protection. COVID-19 has actually accelerated the desire to utilize IoT devices, as it impacted both demand and supply and significantly affected several regions due to logistic reasons such as supply chain interruptions, shortage of shipping containers and port congestion. An IoT-blockchain system is proposed to handle big data generated by a distributed network of sensors and controllers in an interactive manner. The system is designed on the Ethereum platform, which utilizes smart contracts, programmed in Solidity, to execute and manage data generated by IoT sensors and devices such as a Raspberry Pi 4 running Raspbian with add-on hardware security modules. The proposed system runs a number of applications hosted by a local machine used to validate transactions. It then sends data to the rest of the network through the InterPlanetary File System (IPFS) and Ethereum Swarm, forming a closed IoT ecosystem run by blockchain in which a number of distributed IoT devices can communicate and interact, thus forming a closed, controlled environment. A prototype has been deployed with three IoT handling units distributed over a wide geographical space in order to examine its feasibility, performance and costs. Initial results indicated that big IoT data retrieval and storage is feasible and interactivity is possible, provided that certain conditions of cost, speed and throughput are met.
Keywords: IoT devices, blockchain, Ethereum, big data
Procedia PDF Downloads 150
943 International and Intercultural Communication Design: Case Study of Manipulative Advertising
Authors: Faiqa Jalal
Abstract:
The purpose of the following research paper is to discuss the differing meanings of culture and how popular culture has maintained a great impact on intercultural and international behavior. The discussion leads to the notion of communicating the cultural impact on behavior through advertising and sub-cultural theory in advertising. Towards the end of the research, the complexities that develop through this discussion lead to the conclusion that 'advertising gives meaning to otherwise meaningless and identical objects by linking them to our basic needs'. In today's fast-paced digital world, it is difficult to define culture literally, since its meaning tends to shift through a series of different perceptions, such as 'how' and 'why' it should be used. This notion can be taken towards another notion, that of popular culture. It is dependent on 'attitudes, ideas, images, perspectives and other phenomena within the mainstream of a given culture'. Since popular culture is influenced by mass media, it has a way of influencing an individual's attitude towards certain topics. For example, tattoos are a form of human decoration that has historic significance and a huge spectrum of meanings. Advertising is one aspect of marketing that has evolved from the time when it was 'production oriented' up to the time it started using different mediums to make its impact more effective. However, this impact has confused us between our needs and desires. The focus in this paper is that 'we consume to acquire a sense of social identity and status, not just for the sake of consumption'. Every culture owns different expressions, which are then used by advertisers to create an impact on the behavior of people sub-culturally and globally, as culture grows through social interaction. Advertisers furthermore play a smart role in highlighting quality of life, ranging from 'survival to well-being'. Hence, this research paper concludes by highlighting that culture is considered a 'basic root' of any community that also provides solutions to certain problems; however, advertisers play their part in manipulating society's literacy and beliefs by rationalizing how relevant certain products/brands are to those beliefs.
Keywords: mass media, popular culture, production oriented, sub-culture
Procedia PDF Downloads 227
942 Estimation of Geotechnical Parameters by Comparing Monitoring Data with Numerical Results: Case Study of Arash–Esfandiar-Niayesh Under-Passing Tunnel, Africa Tunnel, Tehran, Iran
Authors: Aliakbar Golshani, Seyyed Mehdi Poorhashemi, Mahsa Gharizadeh
Abstract:
Under-passing tunnels are strongly influenced by the surrounding soils. There are some complexities in specifying real soil behavior, owing to the many uncertainties that exist in soil properties and, additionally, to inappropriate soil constitutive models. Such factors may cause settlements in numerical analysis that are incompatible with the values obtained during actual construction. This paper reports a case study of a specific tunnel constructed by NATM. The tunnel has a depth of 11.4 m, a height of 12.2 m, and a width of 14.4 m with 2.5 lanes. The numerical modeling was based on a 2D finite element program. The soil behavior was modeled with the hardening soil model. According to the field observations, the numerically estimated settlement at the ground surface was approximately four times larger than the measured one after the entire installation of the initial lining, indicating that some unknown factors affect the values. Consequently, the geotechnical parameters were revised by a numerical back-analysis using laboratory and field test data and based on the obtained monitoring data. The obtained results confirm that the soil parameters are typically estimated conservatively low. Additionally, the constitutive models cannot be applied properly to all soil conditions.
Keywords: NATM tunnel, initial lining, laboratory test data, numerical back-analysis
Procedia PDF Downloads 361
941 Influence of UV Aging on the Mechanical Properties of Polycarbonate
Authors: S. Redjala, N. Ait Hocine, M. Gratton, N. Poirot, R. Ferhoum, S. Azem
Abstract:
Polycarbonate (PC) is a promising polymer with high transparency in the range of the visible spectrum and is used in various fields, for example, medical, electronics and automotive applications. Its low weight, chemical inertness, high impact resistance and relatively low cost are of major importance. In recent decades, some materials such as metals and ceramics have been replaced by polymers because of their superior advantages. However, some characteristics of polymers are highly modified under the effect of ultraviolet (UV) radiation and temperature. The changes induced in the material by such aging depend on the exposure time, the wavelength of the UV radiation and the temperature level. The UV energy is sufficient to break chemical bonds, leading to cleavage of the molecular chains. This causes changes in the mechanical, thermal, optical and morphological properties of the material. The present work is focused on the study of the effects of aging under ultraviolet (UV) radiation and under different temperature values on the physico-chemical and mechanical properties of a PC. Thus, various investigations, such as FTIR and XRD analyses, SEM and optical microscopy observations, micro-hardness measurements and monotonic and cyclic tensile tests, were carried out on the PC in the initial state and after aging. The results have shown the impact of aging on the properties of the PC studied. In fact, SEM highlighted changes in the superficial morphology of the material through the presence of cracks and material de-bonding in the form of debris. The FTIR spectra reveal an attenuation of peaks such as the hydroxyl (OH) group band located at 3520 cm-1. The XRD lines shift towards larger angles, reaching a maximum of 3°. In addition, Vickers micro-hardness measurements show that aging affects the surface and the core of the material, which results in different mechanical behaviours under monotonic and cyclic tensile tests. This study pointed out the effects of aging on the macroscopic properties of the PC studied, in relationship with its microstructural changes.
Keywords: mechanical properties, physical-chemical properties, polycarbonate, UV aging, temperature aging
Procedia PDF Downloads 142
940 Evaluation of Ficus racemosa (Moraceae) as a Potential Source for Drug Formulation Against Coccidiosis
Authors: Naveeda Akhtar Qureshi, Wajiha
Abstract:
Coccidiosis is a protozoan parasitic disease caused by the genus Eimeria. It is an avian infection causing a great economic loss of 3 billion USD per year globally. A number of anticoccidial drugs are in use; however, many of them have side effects and are costly. With the increase in poultry demand throughout the world, there is a need for more drugs and vaccines against coccidiosis. The present study is based upon the use of F. racemosa, a medicinal plant, as a potential source of anticoccidial agents. The methanolic leaf extract was fractionated by column and thin layer chromatography, yielding nineteen fractions. Different concentrations of each fraction were evaluated for anticoccidial properties in an in vitro experiment against E. tenella, E. necatrix and E. mitis. The anticoccidially active fractions were further characterized by spectroscopy (UV-Vis, FTIR) and GC-MS analysis, and in silico molecular docking of the compounds identified in the active fractions was carried out. Among all fractions, the significantly highest sporulation inhibition efficacy was shown by F-19 (67.11±2.18), followed by F-15 (65.21±1.34), at a concentration of 30 mg/ml against E. tenella. The significantly highest sporozoite viability inhibition was shown by F-19 (69.23±2.11), followed by F-15 (67.14±1.52), against E. necatrix at a concentration of 30 mg/ml. The anticoccidially active fractions 15 and 19 showed absorption peaks at 207 and 202 nm, respectively, in the UV analysis. Their FTIR analysis confirmed the presence of carboxylic acids, amines, phenols, etc. Anticoccidially active compounds such as cyclododecane methanol, oleic acid and octadecanoic acid were identified by GC-MS analysis. The in silico molecular docking study showed that cyclododecane methanol from F-19 and oleic acid from F-15 had the highest binding affinities with the target S-adenosylmethionine synthase. Hence, for further authentication, in vivo anticoccidial studies are recommended.
Keywords: ficus racemosa, cluster fig, column chromatography, anticoccidial fractions, GC-MS, molecular docking, s-adenosylmethionine synthase
Procedia PDF Downloads 85
939 Unsupervised Classification of DNA Barcodes Species Using Multi-Library Wavelet Networks
Authors: Abdesselem Dakhli, Wajdi Bellil, Chokri Ben Amar
Abstract:
A DNA barcode is a short mitochondrial DNA fragment composed of nucleotides, each made up of three subunits: a phosphate group, a sugar and one of the nucleic bases (A, T, C and G). Barcodes provide a good source of the information needed to classify living species. Such intuition has been confirmed by many experimental results. Species classification with DNA barcode sequences has been studied by several researchers. The classification problem assigns unknown species to known ones by analyzing their barcode. This task has to be supported with reliable methods and algorithms. To analyze species regions or entire genomes, it becomes necessary to use sequence similarity methods. A large set of sequences can be simultaneously compared using Multiple Sequence Alignment, which is known to be NP-complete. To make this type of analysis feasible, heuristics, like progressive alignment, have been developed. Another tool for similarity search against a database of sequences is BLAST, which outputs shorter regions of high similarity between a query sequence and matched sequences in the database. However, all these methods are still computationally very expensive and require significant computational infrastructure. Our goal is to build predictive models that are highly accurate and interpretable. This method permits us to avoid the complex problem of form and structure in different classes of organisms. It is evaluated on empirical data, and its classification performance is compared with other methods. Our system consists of three phases. The first is called transformation, which is composed of three steps: Electron-Ion Interaction Pseudopotential (EIIP) codification of DNA barcodes, Fourier transform and power spectrum signal processing. The second is called approximation, which is empowered by the use of Multi-Library Wavelet Neural Networks (MLWNN). The third is the classification of DNA barcodes, which is realized by applying a hierarchical classification algorithm.
Keywords: DNA barcode, electron-ion interaction pseudopotential, Multi Library Wavelet Neural Networks (MLWNN)
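The transformation phase described above (EIIP codification, Fourier transform, power spectrum) can be sketched in a few lines. The EIIP values below follow the commonly cited table, and the sequence is a made-up example rather than a real barcode; the wavelet-network approximation and hierarchical classification stages are not shown.

```python
# Sketch of the "transformation" phase: EIIP codification of a DNA barcode,
# Fourier transform and power spectrum; the sequence is a hypothetical example.
import numpy as np

EIIP = {'A': 0.1260, 'C': 0.1340, 'G': 0.0806, 'T': 0.1335}  # electron-ion interaction pseudopotentials

def barcode_power_spectrum(sequence):
    """Map a DNA barcode to its EIIP numeric signal and return the power spectrum."""
    signal = np.array([EIIP[base] for base in sequence.upper()])
    signal = signal - signal.mean()                 # remove the DC component
    return np.abs(np.fft.rfft(signal)) ** 2         # power spectrum of the signal

features = barcode_power_spectrum("ACGTTGCAACGTAGCTAGGTACCT")
print(features[:5])   # low-frequency coefficients, usable as classifier inputs
```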
Procedia PDF Downloads 317
938 Factors Affecting Online Health Seeking Behaviors in Middle-Income Class Filipino Adults
Authors: Reinzo Vittorio B. Cardenas, Heather Venice L. Abogado, Andrea Therese V. Afable, Rhea D. Avillanoza, Marie Abegail P. Ayagan, Catherine D. Bantayan
Abstract:
As the Internet provides fast and reliable health-related information, the tendency to self-diagnose increases, both to further understand the medical jargon in a physician's diagnosis and to avoid costly consultation fees. The study aimed to explore and understand the factors affecting online health-seeking behaviors in middle-income class adults in Metro Manila. The study was conducted from March to April of 2021 with a sample size of 200 individuals aged 20 to 49 years old. It was delivered via an online survey that used a questionnaire adapted from the research of Lee et al. (2015). Specifically, the survey consisted of three sections: assessing web-based health-seeking behaviors, consultation with health professionals, and participants' hesitancy to consult with physicians, which used a mix of a 5-point Likert-type scale with multiple responses and multiple-choice options. The results showed that the age and educational attainment of the respondents had a negative effect on health-seeking behavior, while socio-economic status had a positive effect. Lastly, there was a significant effect of participants' hesitancy toward professional consultation on their health-seeking behavior. The results gleaned from the study indicated that various individual and socio-economic factors may significantly affect one's health-seeking behaviors. Although hesitancy had a significant effect on the spectrum of health-seeking behaviors, this does not imply that certain factors are specifically related to an individual's tendency to seek health information. This information instead becomes essential in understanding the patient-physician relationship and giving patients a more holistic treatment.
Keywords: health-seeking behavior, health information, Internet, physician consultation
Procedia PDF Downloads 216
937 Biomimetic Strategies to Design Non-Toxic Antimicrobial Textiles
Authors: Isabel Gouveia
Abstract:
Antimicrobial textile materials may significantly reduce the risk of infections, and because they are able to absorb substances from the skin and release therapeutic compounds to the skin, they can also find applications as complementary therapy for skin diseases as part of standard management. Although functional textiles may be a promising area in skin disease/injury management, few offer complementary treatment, even though they are well known to reduce scratching, aid emollient absorption, reduce infection, and alleviate pruritus. The reason for this may lie in the low quality of supporting evidence and the negative effect that antimicrobial agents may exert on the skin microbiome, for example, additional irritation of the vulnerable skin and the selection of resistant bacteria. Several antimicrobial agents have been tested in textiles: quaternary ammonium compounds, silver, polyhexamethylene biguanides and triclosan have been used with success. They have powerful bactericidal activity, but the majority have a reduced spectrum of microbial inhibition and may cause skin irritation, ecotoxicity and bacterial resistance. Furthermore, the rising flow of strains resistant to last-resort antibiotics rekindles interest in alternative strategies. In this regard, new functional textiles incorporating antimicrobial agents that are highly specific towards pathogenic bacteria are required. Recent research has been conducted on naturally occurring antimicrobials as novel alternatives to antibiotics. Conscious of this need, our team first reported new approaches using L-cysteine and antimicrobial peptides (AMP). Briefly, we were able to develop different immobilization processes achieving a 6-log reduction against bacteria such as S. aureus and K. pneumoniae. Therefore, here we present several innovative antimicrobial textiles incorporating AMP and L-cysteine, which may open new avenues for the medical textiles market and biomaterials in general. The team's references will be discussed as an overview and for comparison purposes in terms of potential therapeutic applications.
Keywords: antimicrobials, antimicrobial textiles, biomedical textiles, biomimetic surface functionalization
Procedia PDF Downloads 118
936 Development of Broad Spectrum Nitrilase Biocatalysts and Bioprocesses for Nitrile Biotransformation
Authors: Avinash Vellore Sunder, Shikha Shah, Pramod P. Wangikar
Abstract:
The enzymatic conversion of nitriles to carboxylic acids by nitrilases has gained significance in the green synthesis of several pharmaceutical precursors and fine chemicals. While nitrilases have been characterized from different sources, industrial application requires the identification of nitrilases that possess higher substrate tolerance, wider specificity and better thermostability, along with the development of an efficient bioprocess for producing large amounts of nitrilase. To produce large amounts of nitrilase, we developed a fed-batch fermentation process on defined media for the high-cell-density cultivation of E. coli cells expressing the well-studied nitrilase from Alcaligenes faecalis. A DO-stat feeding approach was employed, combined with an optimized post-induction strategy, to achieve a nitrilase titer of 2.5×10⁵ U/l and 78 g/l dry cell weight. We also identified 16 novel nitrilase sequences from genome mining and analysis of substrate-binding residues. The nitrilases were expressed in E. coli, and their biocatalytic potential was evaluated on a panel of 22 industrially relevant nitrile substrates using high-throughput screening and HPLC analysis. Nine nitrilases were identified that exhibit high activity on structurally diverse nitriles, including aliphatic and aromatic dinitriles, heterocyclic, hydroxy and keto nitriles. With fed-batch biotransformation, whole-cell Zobellia galactanivorans nitrilase achieved yields of 2.4 M nicotinic acid and 1.8 M isonicotinic acid from 3-cyanopyridine and 4-cyanopyridine, respectively, within 5 h, while Cupriavidus necator nitrilase enantioselectively converted 740 mM mandelonitrile to (R)-mandelic acid. The nitrilase from Achromobacter insolitus could hydrolyze 542 mM iminodiacetonitrile in 1 h. The availability of highly active nitrilases, along with bioprocesses for enzyme production, expands the toolbox for industrial biocatalysis.
Keywords: biocatalysis, isonicotinic acid, iminodiacetic acid, mandelic acid, nitrilase
Procedia PDF Downloads 234
935 Profit Efficiency and Technology Adoption of Boro Rice Production in Bangladesh
Authors: Fazlul Hoque, Tahmina Akter Joya, Asma Akter, Supawat Rungsuriyawiboon
Abstract:
Rice is the staple food in Bangladesh, and therefore, self-sufficiency in rice production remains a major concern. However, Bangladesh is experiencing insufficiency in rice production due to high production costs and a low national average productivity of 2.848 ton/ha in comparison to other rice-growing countries in the world. This study aims to find the profit efficiency and the determinants of profit efficiency in Boro rice cultivation in the Manikganj and Dhaka districts of Bangladesh. It also focuses on technology adoption and the effect of technology adoption on the profit efficiency of Boro rice cultivation in Bangladesh. The data were collected from 300 households growing Boro rice through face-to-face interviews using a structured questionnaire; Frontier Version 4.1 and STATA 15 software were employed to analyze the data according to the purpose of the study. Maximum likelihood estimates of the specified profit model showed that the profit efficiency of the farmers varied between 23% and 97% with a mean of 76%, implying that 24% of the profit is lost due to a combination of technical and allocative inefficiencies in Boro rice cultivation in the study area. The inefficiency model revealed that the education level of the farmer, farm size, variety of seed, and training and extension service significantly influence profit inefficiency. The study also showed that the level of the technology adoption index affects profit efficiency. Technology adoption in Boro rice cultivation is influenced by the education level of the farmer, farm size and farm capital.
Keywords: farmer, maximum likelihood estimation, profit efficiency, rice
Procedia PDF Downloads 135
934 Comparative Fragility Analysis of Shallow Tunnels Subjected to Seismic and Blast Loads
Authors: Siti Khadijah Che Osmi, Mohammed Ahmad Syed
Abstract:
Underground structures are crucial components which require detailed analysis and design. Tunnels, for instance, are massively constructed as transportation infrastructure and utility networks, especially in urban environments. Considering their prime importance to the economy and public safety, which cannot be compromised, any instability of these tunnels will be highly detrimental to their performance. Recent experience suggests that tunnels become vulnerable during earthquake and blast scenarios. However, a very limited number of studies has been carried out to understand the dynamic response and performance of underground tunnels under such unpredictable extreme hazards. In view of the importance of enhancing the resilience of these structures, the overall aim of the study is to evaluate the probabilistic future performance of shallow tunnels subjected to seismic and blast loads by developing a detailed fragility analysis. Critical non-linear time history numerical analyses using the sophisticated finite element software Midas GTS NX are presented alongside the current methods of analysis, taking into consideration structural typology, ground motion and explosive characteristics, the effect of soil conditions and other associated uncertainties on the tunnel integrity, which may ultimately lead to catastrophic failure of the structures. The proposed fragility curves for both extreme loadings are discussed and compared, providing significant information on the performance of the tunnel under extreme hazards which may be beneficial for future risk assessment and loss estimation.
Keywords: fragility analysis, seismic loads, shallow tunnels, blast loads
Procedia PDF Downloads 343
933 Validation of SWAT Model for Prediction of Water Yield and Water Balance: Case Study of Upstream Catchment of Jebba Dam in Nigeria
Authors: Adeniyi G. Adeogun, Bolaji F. Sule, Adebayo W. Salami, Michael O. Daramola
Abstract:
Estimation of water yield and water balance in a river catchment is critical to the sustainable management of water resources at the watershed level in any country. Therefore, in the present study, the Soil and Water Assessment Tool (SWAT) interfaced with a Geographical Information System (GIS) was applied as a tool to predict the water balance and water yield of a catchment area in Nigeria. The catchment area, which is 12,992 km2, is located upstream of the Jebba hydropower dam in the north-central part of Nigeria. In this study, data on the observed flow were collected and compared with the simulated flow from SWAT. The correlation between the two data sets was evaluated using statistical measures such as Nash-Sutcliffe Efficiency (NSE) and the coefficient of determination (R2). The model output shows a good agreement between the observed flow and the simulated flow, as indicated by NSE and R2, which were greater than 0.7 for both the calibration and validation periods. A total of 42,733 mm of water was predicted by the calibrated model as the water yield potential of the basin for the simulation period 1985 to 2010. This performance obtained with the SWAT model suggests that it could be a promising tool to predict water balance and water yield in the sustainable management of water resources. In addition, SWAT could be applied to other basins in Nigeria as a decision support tool for sustainable water management.
Keywords: GIS, modeling, sensitivity analysis, SWAT, water yield, watershed level
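The two goodness-of-fit statistics used for the calibration and validation above are straightforward to compute from paired observed and simulated flows, as in the minimal sketch below; the flow values are invented for illustration and are not the Jebba catchment data.

```python
# Minimal sketch of the goodness-of-fit statistics used to compare observed and
# SWAT-simulated flows; the flow series below is hypothetical.
import numpy as np

observed = np.array([120.0, 95.0, 180.0, 210.0, 160.0, 140.0])    # flows (m³/s)
simulated = np.array([110.0, 100.0, 170.0, 220.0, 150.0, 145.0])

nse = 1 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)
r2 = np.corrcoef(observed, simulated)[0, 1] ** 2

print(f"NSE = {nse:.2f}, R2 = {r2:.2f}")   # values above 0.7 indicate good agreement
```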
Procedia PDF Downloads 439
932 Estimation of Asphalt Pavement Surfaces Using Image Analysis Technique
Authors: Mohammad A. Khasawneh
Abstract:
Asphalt concrete pavements gradually lose their skid resistance, causing safety problems, especially under wet conditions and at high driving speeds. In order to reproduce the actual field polishing and wearing process of asphalt pavement surfaces in a laboratory setting, several laboratory-scale accelerated polishing devices were developed by different agencies. To mimic the actual process, friction and texture measuring devices are needed to quantify surface deterioration at different polishing intervals that reflect different stages of the pavement life. The test could still be considered lengthy and, to some extent, labor-intensive. Therefore, there is a need to come up with another method that can assist in investigating bituminous pavement surface characteristics in a practical and time-efficient test procedure. The purpose of this paper is to utilize a well-developed image analysis technique to characterize asphalt pavement surfaces without the need for conventional friction and texture measuring devices, in an attempt to shorten and simplify the polishing procedure in the lab. Promising findings showed the possibility of using image analysis in lieu of the labor-intensive and inherently variable friction and texture measurements. It was found that the exposed aggregate surface area of asphalt specimens made from limestone and gravel aggregates produced solid evidence of the validity of this method in describing asphalt pavement surfaces. Image analysis results correlated well with the British Pendulum Number (BPN), Polish Value (PV) and Mean Texture Depth (MTD) values.
Keywords: friction, image analysis, polishing, statistical analysis, texture
Procedia PDF Downloads 306
931 A National Systematic Review on Determining Prevalence of Mobbing Exposure in Turkish Nurses
Authors: Betül Sönmez, Aytolan Yıldırım
Abstract:
Objective: This systematic review aims to methodically analyze studies regarding the prevalence of mobbing behavior, the individuals performing this behavior and the effects of mobbing on Turkish nurses. Background: Worldwide reports of mobbing cases have increased in the past years, a trend also observable in Turkey. It has been demonstrated that, among healthcare workers, mobbing is particularly widespread in nurses. The number of studies carried out in this regard has also increased. Method: The main criterion for choosing articles in this systematic review was that they concern nurses located in Turkey, regardless of any specific date. In November 2014, a search using the keywords 'mobbing, bullying, psychological terror/violence, emotional violence, nurses, healthcare workers, Turkey' in PubMed, Science Direct, Ebscohost, the National Thesis Centre database and the Google search engine led to 71 studies in this field. 33 studies did not meet the inclusion criteria specified for this study. Results: The findings were obtained using the results of 38 studies carried out in the past 13 years in Turkey, a large sample consisting of 8,877 nurses. Analysis of the incidence of mobbing behavior revealed a broad spectrum, ranging from no or slight experience to 100% experience. The most frequently observed mobbing behaviors include attacking personality, blocking communication and attacking professional and social reputation. Victims mostly experienced mobbing from their managers, the most common consequence of these actions being psychological effects. Conclusions: The results of studies with various scales indicate exposure of nurses to similar mobbing behavior. The high frequency of exposure of nurses to mobbing behavior in such a large sample highlights the importance of considering this issue in terms of the individual and institutional consequences that adversely affect the performance of nurses.
Keywords: mobbing, bullying, workplace violence, nurses, Turkey
Procedia PDF Downloads 277
930 Factors Affecting Autistic Children's Development during the Early Years in Elementary School: A Longitudinal Study in Taiwan
Authors: Huang Ying
Abstract:
The present study investigated factors affecting children's improvement through the first two years of elementary school in a population-based sample of children with autism in Taiwan. All the children were diagnosed with autism spectrum disorder (ASD) by clinical psychologists according to DSM-IV. Children's development was assessed with the Vineland Adaptive Behavior Scales-Chinese version (VABS-C) in the first and the third grade. Children's improvement was measured by the difference between the standardized total scores of the third and the first year. In Taiwan, school-age children with special-education needs are placed into different classes, including normal classes (NC), resource classes (RC), and special classes (SC), by the government. Therefore, type of class was one of the independent variables. Moreover, as early intervention is considered to be crucial, the earliest age at which intervention began was collected from parents. Attention was also included in the analysis. Teachers were asked to evaluate children's attention with a 3-item Likert scale; the frequency of paying attention to the class or the task was recorded and the scores were summed. Additionally, standardized scores of the VABS-C in the first grade were used as pretest scores representing children's developmental level at the beginning of elementary school. Multiple regression was conducted with improvement as the dependent variable. Results showed that children in special classes had smaller improvements compared to those in normal or resource classes. Attention positively predicted improvement, yet the effect of the earliest intervention age was not significant. Furthermore, scores in the first grade negatively predicted improvement, which indicated that children with higher developmental levels would make less progress in the following years. Results were to some degree consistent with previous meta-analytic findings that the effectiveness of conventional intervention methods lacks sufficient supporting evidence.
Keywords: attention, early intervention, elementary school, special education in Taiwan
Procedia PDF Downloads 291
929 Optimization of Economic Order Quantity of Multi-Item Inventory Control Problem through Nonlinear Programming Technique
Authors: Prabha Rohatgi
Abstract:
To obtain efficient control over the huge inventory of drugs in the pharmacy department of any hospital, the medicines are generally first categorized on the basis of their cost using 'ABC' (Always Better Control) analysis and then on the basis of their criticality using 'VED' (Vital, Essential, Desirable) analysis for prioritization. About one-third of the annual expenditure of a hospital is spent on medicines. To minimize the inventory investment, the hospital management may like to keep the medicine inventory low, as medicines are perishable items. The main aim of each and every hospital is to provide better services to the patients under certain limited resources. To achieve a satisfactory level of health care services for outdoor patients, a hospital has to keep an eye on the wastage of medicines, because the expiry of medicines causes a great loss of money from a budget that is limited and allocated for a particular period of time. The objective of this study is to identify the categories of medicines requiring intensive managerial control. In this paper, to minimize the total inventory cost and the cost associated with the wastage of money due to the expiry of medicines, an inventory control model is used as an estimation tool, and then a nonlinear programming technique is applied under a limited budget and a fixed number of orders to be placed in a limited time period. Numerical computations are given and show that, by using scientific methods in hospital services, inventory can be managed more effectively under limited resources and better health care services can be provided. The secondary data have been collected from a hospital to give empirical evidence.
Keywords: ABC-VED inventory classification, multi item inventory problem, nonlinear programming technique, optimization of EOQ
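The constrained multi-item order-quantity problem described above can be sketched as a small nonlinear program: minimise total ordering plus holding cost subject to a budget limit and a cap on the number of orders. The cost structure, item figures and constraint values below are hypothetical and are not the paper's formulation or data.

```python
# Illustrative sketch (not the paper's exact model): constrained multi-item EOQ
# solved with a nonlinear programming solver; all figures are hypothetical.
import numpy as np
from scipy.optimize import minimize

demand = np.array([1200.0, 800.0, 500.0])     # annual demand per drug item
unit_cost = np.array([50.0, 120.0, 30.0])     # purchase cost per unit
order_cost = 100.0                            # fixed cost per order
holding_rate = 0.2                            # holding cost as a fraction of unit cost
budget = 25000.0                              # limit on average inventory investment
max_orders = 40.0                             # limit on total orders per period

def total_cost(q):
    """Ordering cost + holding cost for order quantities q."""
    return np.sum(demand / q * order_cost + q / 2 * holding_rate * unit_cost)

constraints = [
    {"type": "ineq", "fun": lambda q: budget - np.sum(q / 2 * unit_cost)},   # investment
    {"type": "ineq", "fun": lambda q: max_orders - np.sum(demand / q)},      # orders
]
result = minimize(total_cost, x0=np.full(3, 100.0), bounds=[(1.0, None)] * 3,
                  constraints=constraints, method="SLSQP")
print(result.x, total_cost(result.x))   # constrained economic order quantities
```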
Procedia PDF Downloads 255
928 Big Data in Telecom Industry: Effective Predictive Techniques on Call Detail Records
Authors: Sara ElElimy, Samir Moustafa
Abstract:
Mobile network operators are starting to face many challenges in the digital era, especially with high demands from customers. Mobile network operators are considered a source of big data, and traditional techniques are not effective in the new era of big data, the Internet of Things (IoT) and 5G; as a result, handling different big datasets effectively becomes a vital task for operators with the continuous growth of data and the move from Long Term Evolution (LTE) to 5G. So, there is an urgent need for effective big data analytics to predict future demands, traffic, and network performance to fulfil the requirements of the fifth generation of mobile network technology. In this paper, we introduce data science techniques using machine learning and deep learning algorithms: the autoregressive integrated moving average (ARIMA), Bayesian-based curve fitting, and a recurrent neural network (RNN) are employed for a data-driven application for mobile network operators. The main framework of the models includes identification of the parameters of each model, estimation, prediction, and a final data-driven application of this prediction to business and network performance applications. These models are applied to the Telecom Italia Big Data Challenge call detail records (CDRs) datasets. The performance of these models, evaluated using specific well-known criteria, shows that ARIMA (the machine learning-based model) is more accurate as a predictive model on such a dataset than the RNN (the deep learning model).
Keywords: big data analytics, machine learning, CDRs, 5G
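The ARIMA part of the framework (identification of the orders, estimation, prediction) can be sketched on a CDR-like traffic series as below. The series is synthetic, not the Telecom Italia data, and the (p, d, q) orders and evaluation criterion are illustrative assumptions.

```python
# Sketch of the ARIMA step (identification, estimation, prediction) on a synthetic
# hourly traffic series standing in for CDR-derived activity; not the paper's data.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
t = np.arange(240)
traffic = 100 + 0.2 * t + 10 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 2, t.size)
series = pd.Series(traffic, index=pd.date_range("2024-01-01", periods=t.size, freq="h"))

model = ARIMA(series, order=(2, 1, 2))        # identification: chosen (p, d, q) orders
fitted = model.fit()                          # estimation
forecast = fitted.forecast(steps=24)          # prediction of the next 24 hours
mae = np.mean(np.abs(fitted.resid))           # simple in-sample evaluation criterion
print(forecast.head(), mae)
```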
Procedia PDF Downloads 139
927 Theoretical Study of Substitutional Phosphorus and Nitrogen Pairs in Diamond
Authors: Tahani Amutairi, Paul May, Neil Allan
Abstract:
Many properties of semiconductor materials (mechanical, electronic, magnetic, and optical) can be significantly modified by introducing a point defect. Diamond offers extraordinary properties as a semiconductor, and doping seems to be a viable method of solving the problems associated with the fabrication of diamond-based electronic devices in order to exploit those properties. The dopants are believed to play a significant role in reducing the energy barrier to conduction and in controlling the mobility of the carriers and the resistivity of the film. Although it has been proven that an n-type diamond semiconductor can be obtained with phosphorus doping, the resulting ionisation energy and mobility are still inadequate for practical application. Theoretical studies have revealed that this is partly because the effects of the many phosphorus atoms incorporated in the diamond lattice are compensated by acceptor states. Using spin-polarised hybrid density functional theory and a supercell approach, we explored the effects of bonding one N atom to a P atom in adjacent substitutional sites in diamond. A range of hybrid functionals, including HSE06, B3LYP, PBE0, PBEsol0, and PBE0-13, were used to calculate the formation, binding, and ionisation energies, in order to explore the solubility and stability of the point defect. The equilibrium geometry and the magnetic and electronic structures were analysed and are presented in detail. The defect introduces a unique reconstruction in diamond in which one of the C atoms coordinated with the N atom is involved in the elongated C-N bond and creates a new bond with the P atom. The simulated infrared spectra of phosphorus-nitrogen defects were investigated with different supercell sizes and found to contain two sharp peaks at the edges of the spectrum, one at a high frequency of 1,379 cm⁻¹ and the second appearing at the low-frequency end, at 234 cm⁻¹, as obtained with the largest supercell (216 atoms).
Keywords: DFT, HSE06, B3LYP, PBE0, PBEsol0, PBE0-13
Procedia PDF Downloads 84
926 Prevalence of Workplace Bullying in Hong Kong: A Latent Class Analysis
Authors: Catalina Sau Man Ng
Abstract:
Workplace bullying is generally defined as a form of direct and indirect maltreatment at work, including harassing, offending or socially isolating someone, or negatively affecting someone's work tasks. Workplace bullying is unfortunately commonplace around the world, which makes it a social phenomenon worth researching. However, the measurements and estimation methods of workplace bullying seem to be diverse across studies, leading to dubious results. Hence, this paper attempts to examine the prevalence of workplace bullying in Hong Kong using a latent class analysis approach. It is often argued that the traditional classification of workplace bullying into the dichotomous 'victims' and 'non-victims' may not fully represent the complex phenomenon of bullying. By treating workplace bullying as one latent variable and examining the potential categorical distribution within the latent variable, a more thorough understanding of workplace bullying in real-life situations may hence be provided. As a result, this study adopts a latent class analysis method, which has previously been shown to demonstrate higher construct and predictive validity. In the present study, a representative sample of 2,814 employees (male: 54.7%, female: 45.3%) in Hong Kong was recruited. The participants were asked to fill in a self-reported questionnaire which included measurements such as the Chinese Workplace Bullying Scale (CWBS) and the Chinese version of the Depression Anxiety Stress Scale (DASS). It is estimated that four latent classes will emerge: 'non-victims', 'seldom bullied', 'sometimes bullied', and 'victims'. The results for each latent class and the implications of the study will also be discussed in this working paper.
Keywords: latent class analysis, prevalence, survey, workplace bullying
Procedia PDF Downloads 330
925 Unlocking the Puzzle of Borrowing Adult Data for Designing Hybrid Pediatric Clinical Trials
Authors: Rajesh Kumar G
Abstract:
A challenging aspect of any clinical trial is to carefully plan the study design to meet the study objective in an optimal way and to validate the assumptions made during protocol design. When it is a pediatric study, there is the added challenge of stringent guidelines and difficulty in recruiting the necessary subjects. Unlike adult trials, there is not much historical data available for pediatrics, and such data are required to validate the assumptions made when planning pediatric trials. Typically, pediatric studies are initiated as soon as approval is obtained for a drug to be marketed for adults, so with the historical information from the adult study and with the available pediatric pilot study data or simulated pediatric data, the pediatric study can be well planned. Generalizing a historical adult study to a new pediatric study is a tedious task; however, it is possible by integrating various statistical techniques and utilizing the advantages of a hybrid study design, which helps to achieve the study objective smoothly even in the presence of many constraints. This research paper explains how the hybrid study design can be planned along with an integrated technique (SEV) to plan the pediatric study. In brief, the SEV technique (Simulation, Estimation using borrowed adult data and Bayesian methods, and Validation) simulates the planned study data and obtains the desired estimates to validate the assumptions. This method of validation can be used to improve the accuracy of data analysis, ensuring that results are as valid and reliable as possible, which allows us to make informed decisions well ahead of study initiation. With professional precision, this technique, based on the collected data, allows insight to be gained into best practices when using data from a historical study and simulated data alike.
Keywords: adaptive design, simulation, borrowing data, bayesian model
Procedia PDF Downloads 76
924 Application of Nonparametric Geographically Weighted Regression to Evaluate the Unemployment Rate in East Java
Authors: Sifriyani Sifriyani, I Nyoman Budiantara, Sri Haryatmi, Gunardi Gunardi
Abstract:
East Java Province ranks first among Indonesian provinces in the number of counties and cities and has the largest population. In 2015, the population reached 38,847,561; this figure reflects very high population growth. High population growth is feared to lead to increased levels of unemployment. In this study, the researchers mapped and modeled the unemployment rate with 6 variables that were assumed to influence it. Modeling was done by nonparametric geographically weighted regression with a truncated spline approach. This method was chosen because the spline is a flexible method; such models tend to find their own estimate of the pattern in the data. In this modeling, there are knot points, the points at which the behaviour of the data changes. The selection of the optimum knot points was done by choosing the minimum value of the Generalized Cross Validation (GCV). Based on the research, 6 variables were found to affect the level of unemployment in East Java. They were the percentage of the population educated above high school, the rate of economic growth, the population density, the investment ratio of the total labor force, the regional minimum wage and the ratio of the number of large- and medium-scale industries to the work force. The nonparametric geographically weighted regression model with the truncated spline approach had a coefficient of determination of 98.95% and an MSE equal to 0.0047.
Keywords: East Java, nonparametric geographically weighted regression, spatial, spline approach, unemployed rate
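The knot-selection step described above can be illustrated with a one-covariate truncated linear spline scored by Generalized Cross Validation. The data, candidate knots and single-predictor setting below are assumptions for illustration; the study itself uses a geographically weighted, multi-predictor version of this idea.

```python
# Hedged sketch of knot selection by GCV for a truncated linear spline in one
# covariate; data and candidate knots are invented, not the East Java data.
import numpy as np

def truncated_spline_basis(x, knots):
    """Design matrix [1, x, (x - k)+ for each knot] of a truncated linear spline."""
    cols = [np.ones_like(x), x] + [np.maximum(x - k, 0.0) for k in knots]
    return np.column_stack(cols)

def gcv(x, y, knots):
    X = truncated_spline_basis(x, knots)
    hat = X @ np.linalg.pinv(X)                    # hat matrix H = X (X'X)^-1 X'
    residuals = y - hat @ y
    n = len(y)
    return np.mean(residuals ** 2) / (1 - np.trace(hat) / n) ** 2

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 10, 80))
y = np.where(x < 5, 2 * x, 10 + 0.3 * (x - 5)) + rng.normal(0, 1, x.size)  # change at x = 5

candidates = [[2.5], [5.0], [7.5], [2.5, 7.5]]
scores = {tuple(k): gcv(x, y, k) for k in candidates}
best = min(scores, key=scores.get)                  # optimum knot(s): minimum GCV
print(scores, best)
```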
Procedia PDF Downloads 321
923 The Signaling Power of ESG Accounting in Sub-Sahara Africa: A Dynamic Model Approach
Authors: Haruna Maama
Abstract:
Environmental, social and governance (ESG) reporting is gaining considerable attention despite being voluntary. Meanwhile, providing ESG reporting consumes resources, raising a question about its value relevance. The study examined the impact of ESG reporting on the market value of listed firms in SSA. The annual and integrated reports of 276 listed sub-Saharan Africa (SSA) firms were analysed. The integrated reporting scores of the firms were derived using a content analysis method. A multiple regression estimation technique using a GMM approach was employed for the analysis. The results revealed that ESG has a positive relationship with firms' market value, suggesting that investors are interested in the ESG information disclosures of firms in SSA. This suggests that extensive ESG disclosures are attempts by firms to obtain the approval of powerful social, political and environmental stakeholders, especially institutional investors. Furthermore, the market value analysis evidence is consistent with signalling theory, which postulates that firms provide integrated reports as a signal to influence the behaviour of stakeholders. This finding reflects the value investors place on social, environmental and governance disclosures, which affirms the view that conventional investors care about the social, environmental and governance issues of their potential or existing investee firms. Overall, the evidence is consistent with the prediction of signalling theory. In the context of this theory, integrated reporting is seen as part of firms' overall competitive strategy to influence investors' behaviour. The findings of this study make unique contributions to knowledge and practice in corporate reporting.
Keywords: environmental accounting, ESG accounting, signalling theory, sustainability reporting, sub-saharan Africa
Procedia PDF Downloads 77
922 Uncertainty Assessment in Building Energy Performance
Authors: Fally Titikpina, Abderafi Charki, Antoine Caucheteux, David Bigaud
Abstract:
The building sector is one of the largest energy consumers, accounting for about 40% of final energy consumption in the European Union. Ensuring building energy performance is a scientific, technological and sociological matter. To assess a building's energy performance, the consumption predicted or estimated during the design stage is compared with the measured consumption when the building is operational. When evaluating this performance, many buildings show significant differences between the calculated and measured consumption. In order to assess the performance accurately and ensure the thermal efficiency of the building, it is necessary to evaluate the uncertainties involved not only in measurement but also those induced by the propagation of dynamic and static input data in the model being used. The evaluation of measurement uncertainty is based on knowledge about both the measurement process and the input quantities which influence the result of the measurement. Measurement uncertainty can be evaluated within the framework of conventional statistics presented in the Guide to the Expression of Uncertainty in Measurement (GUM) as well as by Bayesian Statistical Theory (BST). Another choice is the use of numerical methods like Monte Carlo Simulation (MCS). In this paper, we propose to evaluate the uncertainty associated with the use of a simplified model for the estimation of the energy consumption of a given building. A detailed review and discussion of these three approaches (GUM, MCS and BST) is given. An office building has been monitored and multiple sensors have been mounted at candidate locations to obtain the required data. The monitored zone is composed of six offices and has an overall surface of 102 m². Temperature data, electrical and heating consumption, window opening and occupancy rate are the features used in this research work.
Keywords: building energy performance, uncertainty evaluation, GUM, bayesian approach, monte carlo method
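The Monte Carlo approach mentioned above amounts to sampling the uncertain inputs of the consumption model and reading the spread of the output. A minimal sketch follows; the simplified model Q = U·A·(Ti − Te)·hours and all the input distributions are assumptions for illustration, not the paper's model or measured data.

```python
# Minimal Monte Carlo sketch of propagating input uncertainties through a simplified
# heating-demand model; the model form and all distributions are assumed.
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

U = rng.normal(0.8, 0.05, n)          # overall heat-loss coefficient, W/(m².K)
A = rng.normal(250.0, 5.0, n)         # envelope area, m²
Ti = rng.normal(20.0, 0.5, n)         # indoor temperature, °C
Te = rng.normal(5.0, 1.0, n)          # outdoor temperature, °C
hours = 24 * 30                       # one month of operation

energy_kwh = U * A * (Ti - Te) * hours / 1000.0    # simplified monthly heating demand

mean = energy_kwh.mean()
std = energy_kwh.std(ddof=1)                        # standard uncertainty
low, high = np.percentile(energy_kwh, [2.5, 97.5])  # 95% coverage interval
print(f"{mean:.0f} kWh ± {std:.0f} kWh (95% interval: {low:.0f}-{high:.0f} kWh)")
```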
Procedia PDF Downloads 458
921 Identification of Outliers in Flood Frequency Analysis: Comparison of Original and Multiple Grubbs-Beck Test
Authors: Ayesha S. Rahman, Khaled Haddad, Ataur Rahman
Abstract:
At-site flood frequency analysis is used to estimate flood quantiles when the at-site record length is reasonably long. In Australia, the FLIKE software has been introduced for at-site flood frequency analysis. The advantage of FLIKE is that, for a given application, the user can compare a number of the most commonly adopted probability distributions and parameter estimation methods relatively quickly using a Windows interface. The new version of FLIKE incorporates the multiple Grubbs and Beck test, which can identify multiple potentially influential low flows. This paper presents a case study considering six catchments in eastern Australia which compares two outlier identification tests (the original Grubbs and Beck test and the multiple Grubbs and Beck test) and two commonly applied probability distributions (Generalized Extreme Value (GEV) and Log Pearson type 3 (LP3)) using the FLIKE software. It has been found that the multiple Grubbs and Beck test, when used with the LP3 distribution, provides more accurate flood quantile estimates than when the LP3 distribution is used with the original Grubbs and Beck test. Between these two methods, the differences in flood quantile estimates have been found to be up to 61% for the six study catchments. It has also been found that the GEV distribution (with L moments) and the LP3 distribution with the multiple Grubbs and Beck test provide quite similar results in most of the cases; however, a difference of up to 38% has been noted for flood quantiles at an annual exceedance probability (AEP) of 1 in 100 for one catchment. These findings need to be confirmed with a greater number of stations across other Australian states.
Keywords: floods, FLIKE, probability distributions, flood frequency, outlier
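As background to the comparison above, the original (single) Grubbs-Beck screen for potentially influential low flows can be sketched as below. The K_N expression is the commonly cited 10%-significance approximation used in Bulletin 17B-style practice, and the annual maximum series is invented; FLIKE's own implementation may differ in detail.

```python
# Sketch of the original Grubbs-Beck low-outlier screen; K_N follows the common
# 10%-significance approximation and the flow series is hypothetical.
import numpy as np

def grubbs_beck_low_outliers(flows):
    """Return flows flagged as low outliers and the low-outlier threshold."""
    logs = np.log10(np.asarray(flows, dtype=float))
    n = len(logs)
    k_n = -0.9043 + 3.345 * np.sqrt(np.log10(n)) - 0.4046 * np.log10(n)
    threshold = 10 ** (logs.mean() - k_n * logs.std(ddof=1))   # low-outlier threshold
    return [q for q in flows if q < threshold], threshold

annual_maxima = [5.0, 220.0, 340.0, 410.0, 180.0, 95.0, 600.0, 310.0,
                 260.0, 150.0, 480.0, 2.0, 390.0, 275.0, 205.0]   # m³/s, invented
low, threshold = grubbs_beck_low_outliers(annual_maxima)
print(threshold, low)    # flagged flows would be censored before fitting LP3/GEV
```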
Procedia PDF Downloads 450
920 The Effects of Time and Cyclic Loading to the Axial Capacity for Offshore Pile in Shallow Gas
Authors: Christian H. Girsang, M. Razi B. Mansoor, Noorizal N. Huang
Abstract:
An offshore platform was installed in 1977 about 260 km offshore of West Malaysia at a water depth of 73.6 m. Twelve (12) piles were installed, of which four (4) are skirt piles. The piles have a 1.219 m outside diameter and a wall thickness of 31 mm and were driven to 109 m below the seabed. Deterministic analyses of the pile capacity under axial loading were conducted using the current API (American Petroleum Institute) method and four (4) CPT-based methods: the ICP (Imperial College Pile) method, the NGI (Norwegian Geotechnical Institute) method, the UWA (University of Western Australia) method and the Fugro method. A statistical analysis of the model uncertainty associated with each pile capacity method was performed. Two (2) cases were analysed: Pile 1 and the piles other than Pile 1, where Pile 1 is the pile that was most affected by shallow gas problems. Using the mean estimate of soil properties, the five (5) methods used for deterministic estimation of axial pile capacity in compression predict an axial capacity from 28 to 42 MN for Pile 1 and 32 to 49 MN for the piles other than Pile 1. These values refer to the static capacity shortly after pile installation. They do not include the effects on the axial pile capacity of cyclic loading during the design storm or of time after installation. On average, the axial pile capacity is expected to have increased by about 40% because of ageing since the installation of the platform in 1977. On the other hand, the cyclic loading effects during the design storm may reduce the axial capacity of the piles by around 25%. The study concluded that all piles have a sufficient safety factor when the pile ageing and cyclic loading effects are considered, as all safety factors are above 2.0 for maximum operating and storm loads.
Keywords: axial capacity, cyclic loading, pile ageing, shallow gas
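A back-of-the-envelope sketch of how the quoted ageing (+40%) and cyclic loading (-25%) adjustments combine into a revised safety factor is given below; the base capacity and the design load used here are illustrative assumptions, not the platform's actual design values.

```python
# Illustrative arithmetic only: apply the quoted ageing and cyclic-loading factors
# to a lower-bound static capacity and compare against a hypothetical design load.
base_capacity_mn = 28.0            # lower-bound static capacity shortly after installation
ageing_factor = 1.40               # ~40% gain from ageing since 1977
cyclic_factor = 0.75               # ~25% loss from cyclic loading in the design storm
design_load_mn = 12.0              # hypothetical factored axial load

adjusted_capacity = base_capacity_mn * ageing_factor * cyclic_factor
safety_factor = adjusted_capacity / design_load_mn
print(f"adjusted capacity = {adjusted_capacity:.1f} MN, safety factor = {safety_factor:.2f}")
```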
Procedia PDF Downloads 345
919 Lightweight Ceramics from Clay and Ground Corncobs
Authors: N.Quaranta, M. Caligaris, R. Varoli, A. Cristobal, M. Unsen, H. López
Abstract:
Corncobs are agricultural wastes that can be used as fuel or as raw material in different industrial processes such as cement manufacture, contaminant adsorption and chemical compound synthesis. The aim of this work is to characterize this waste and analyze the feasibility of its use as a pore-forming material in the manufacture of lightweight ceramics for the civil construction industry. The characterization of the raw materials is carried out using various techniques: X-ray diffraction analysis, differential and gravimetric thermal analyses, FTIR spectroscopy and ecotoxicity evaluation, among others. The ground corncobs, with a particle size of less than 2 mm, are mixed with clay up to 30% by volume and shaped by a uniaxial pressure of 25 MPa, with 6% humidity, in moulds of 70 mm x 40 mm x 18 mm. The green bodies are then heat treated at 950°C for two hours following the treatment curves used in the ceramic industry. The ceramic specimens are characterized by several techniques: density, porosity and water absorption, permanent volumetric variation, loss on ignition, microscopy analysis, and mechanical properties. DTA-TGA analysis of the corncobs shows a small mass loss in the TGA curve in the range 20-250°C and exothermic peaks at 250-500°C. The FTIR spectrum of the corncob sample shows the characteristic pattern of this kind of organic matter, with stretching vibration bands of adsorbed water, methyl groups, C–O and C–C bonds, and the complex form of the cellulose and hemicellulose glycosidic bonds. The obtained ceramic bodies present good external characteristics without loose edges and properties adequate for market requirements. The porosity values of the sintered pieces are higher than those of the reference sample without waste addition. The results generally indicate that it is possible to use corncobs as a pore former in ceramic bodies without modifying the usual sintering temperatures employed in the industry.
Keywords: ceramic industry, biomass, recycling, hemicellulose glycosidic bonds
Procedia PDF Downloads 405
918 Predictive Analysis of Chest X-rays Using NLP and Large Language Models with the Indiana University Dataset and Random Forest Classifier
Authors: Azita Ramezani, Ghazal Mashhadiagha, Bahareh Sanabakhsh
Abstract:
This study researches the combination of Random Forest classifiers with large language models (LLMs) and natural language processing (NLP) to improve diagnostic accuracy in chest X-ray analysis using the Indiana University dataset. Utilizing advanced NLP techniques, the research preprocesses textual data from radiological reports to extract key features, which are then merged with image-derived data. This enriched dataset is analyzed with Random Forest classifiers to predict specific clinical results, focusing on the identification of health issues and the estimation of case urgency. The findings reveal that the combination of NLP, LLMs, and machine learning increases not only diagnostic precision but also reliability, especially in quickly identifying critical conditions. Achieving an accuracy of 99.35%, the model shows significant advancements over conventional diagnostic techniques. The results emphasize the large potential of machine learning in medical imaging, suggesting that these technologies could greatly enhance clinician judgment and patient outcomes by offering quicker and more precise diagnostic approximations.
Keywords: natural language processing (NLP), large language models (LLMs), random forest classifier, chest x-ray analysis, medical imaging, diagnostic accuracy, indiana university dataset, machine learning in healthcare, predictive modeling, clinical decision support systems
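The text branch of such a pipeline can be sketched with TF-IDF features from report text feeding a Random Forest classifier, as below. The example reports and labels are invented stand-ins for the Indiana University dataset, no image-derived or LLM features are included, and the specific vectorizer and hyperparameters are assumptions rather than the authors' configuration.

```python
# Hedged sketch of the report-text branch only: TF-IDF features into a Random Forest;
# reports and labels are invented, not the Indiana University data.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

reports = [
    "no acute cardiopulmonary abnormality",
    "right lower lobe opacity concerning for pneumonia",
    "clear lungs, no effusion or pneumothorax",
    "large left pleural effusion with compressive atelectasis",
]
labels = ["normal", "abnormal", "normal", "abnormal"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      RandomForestClassifier(n_estimators=200, random_state=0))
model.fit(reports, labels)                      # train on the toy report corpus
print(model.predict(["small right pleural effusion noted"]))
```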
Procedia PDF Downloads 43