Search results for: graphics processing units
1441 Concentrations of Some Metallic Trace Elements in Twelve Sludge Incineration Ashes
Authors: Lotfi Khiari, Antoine Karam, Claude-Alla Joseph, Marc Hébert
Abstract:
The main objective of incinerating sludge generated from municipal or agri-food waste treatment plants is to reduce the volume of sludge to be disposed of as a solid or liquid waste, whilst concentrating or destroying potentially harmful volatile substances. In some cities in Canada and the United States of America (USA), a large amount of sludge is incinerated, which entails a loss of organic matter and water, leading to the accumulation of phosphorus, potassium and some metallic trace elements (MTE) in the ashes. The purpose of this study was to evaluate the concentration of potentially hazardous MTE such as cadmium (Cd), lead (Pb) and mercury (Hg) in twelve sludge incineration ash samples obtained from municipal wastewater and other food processing waste treatments in Canada and the USA. The average, maximum, and minimum values of MTE in the ashes were calculated for each city individually and for all cities together. The trace metal concentration values were compared with values reported in the literature. The concentrations of MTE in the ashes vary widely depending on the sludge origin and treatment options. The concentrations of MTE in the ashes were found to be in the range of 0.1-6.4 mg/kg for Cd, 13-286 mg/kg for Pb and 0.1-0.5 mg/kg for Hg. On average, the following order of metal concentration in the ashes was observed: Pb > Cd > Hg. Results show that the metal contents of most ashes were similar to MTE levels in synthetic inorganic fertilizers and many fertilizing residual materials. Consequently, the environmental effects of the MTE content of these ashes would be low.
Keywords: biosolids, heavy metals, recycling, sewage sludge
Procedia PDF Downloads 380
1440 Integration of Gravity and Seismic Methods in the Geometric Characterization of a Dune Reservoir: Case of the Zouaraa Basin, NW Tunisia
Authors: Marwa Djebbi, Hakim Gabtni
Abstract:
Gravity is a continuously advancing method that has become a mature technology for geological studies. It is increasingly used to complement and constrain traditional seismic data, and even as the only tool for obtaining information about the subsurface. In fact, in some regions the seismic data, if available at all, are of poor quality and hard to interpret. Such is the case for the current study area. The Nefza zone is part of the Tellian fold-and-thrust belt domain in the northwest of Tunisia. It is essentially made of a pile of allochthonous units resulting from a major Neogene tectonic event. Its tectonic and stratigraphic development has always been a subject of controversy. Considering the geological and hydrogeological importance of this area, a detailed interdisciplinary study has been conducted integrating geological, seismic and gravity techniques. The interpretation of the gravity data allowed the delimitation of the dune reservoir and the identification of the regional lineaments contouring the area. It revealed the presence of three gravity lows corresponding to the Zouara and Ouchtata dunes, separated by a positive gravity axis following the Ain Allega-Aroub Er Roumane trend. The Bouguer gravity map illustrated the compartmentalization of the Zouara dune into two depressions separated by a NW-SE anomaly trend. This configuration was confirmed by the vertical derivative map, which showed the individualization of two depressions with slightly different anomaly values. The horizontal gravity gradient magnitude was computed in order to determine the different geological features present in the study area. The latter indicated the presence of NE-SW parallel folds following the major Atlasic direction. NW-SE and E-W trends were also identified. The maxima tracing confirmed this direction through the presence of NE-SW faults, mainly the Ghardimaou-Cap Serrat fault.
The quality of the available seismic sections and the absence of borehole data in the region, except for a few hydraulic wells that have been drilled and that show the heterogeneity of the dune substratum, made gravity modeling necessary for the geometrical characterization of the dune reservoir and for determining the stratigraphic series underneath these deposits. For more detailed and accurate results, the scale of the study will be reduced in future research and a more precise method will be employed: the 4D microgravity survey. This approach is an extension of the gravity method whose fourth dimension is time. It will allow continuous and repeated monitoring of fluid movement in the subsurface at the microgal (μGal) scale. The gravity effect results from the monthly variation of the dynamic groundwater level, which correlates with rainfall during different periods.
Keywords: 3D gravity modeling, dune reservoir, heterogeneous substratum, seismic interpretation
Procedia PDF Downloads 298
1439 Attention Deficit Disorders (ADD) among Stressed Pre-NCE Students in Federal College of Education, Kano-Nigeria
Authors: A. S. Haruna, M. L. Mayanchi
Abstract:
Pre-Nigeria Certificate in Education, otherwise called Pre-NCE, is an intensive two-semester course designed to assist candidates who could not meet the requirements for admission into the NCE programme. The task of coping with the stressors in the course can interfere with the students' ability to regulate attention skills and stay organized. The main objectives of the study were to find out the prevalence of stress, determine the association between stress and ADD, and reveal any gender difference in the prevalence of ADD among stressed pre-NCE students. A cross-sectional correlation design was employed in which 333 students (male = 65%; female = 35%) were proportionately sampled and administered the Stress Assessment Scale (SAS, r = 0.74), and those identified with stress were thereafter rated with the Cognitive Processing Inventory (CPI). The data collected were used to test the three null hypotheses through the one-sample Kolmogorov-Smirnov (K-S) Z-score, the Pearson Product Moment Correlation Coefficient (PPMCC) and the t-test statistic, respectively, at the 0.05 confidence level. Results revealed a significant prevalence of stress [Z-calculated = 2.24; Z-critical = ±1.96] and a positive relationship between stress and ADD among pre-NCE students [r-calculated = 0.450; r-critical = 0.138]. However, there was no gender difference in the prevalence of ADD among stressed pre-NCE students in the college [t-calculated = 1.49; t-critical = 1.645]. The study concludes that while stress and ADD prevail among pre-NCE students, there is no gender difference in the prevalence of ADD. The recommendations offered suggest the use of Learners Assistance Programs (LAP) for stress management, and that a teacher-student ratio of 1:25 be adopted in order to cater for stressed pre-NCE students with ADD.
Keywords: attention deficit disorder, pre-NCE students, stress, Pearson Product Moment Correlation Coefficient (PPMCC)
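As a rough illustration of the statistic at the heart of the second hypothesis above, the Pearson product-moment correlation can be computed from paired stress and ADD scores and compared against a critical value. The score pairs below are hypothetical (the study's raw data are not given); only the critical value r = 0.138 comes from the abstract:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient (PPMCC)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical SAS (stress) and CPI (ADD) scores for six students.
stress = [12, 18, 25, 31, 40, 47]
add    = [ 5,  9,  8, 14, 17, 20]

r = pearson_r(stress, add)
# Reject the null hypothesis of no association when the calculated r
# exceeds the critical value for the chosen alpha and sample size.
r_critical = 0.138  # critical value reported in the abstract (n = 333)
print(round(r, 3), r > r_critical)
```

With the real sample of 333 students the same comparison of r-calculated against r-critical underlies the reported positive stress-ADD relationship.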
Procedia PDF Downloads 242
1438 The Effect of Improvement Programs in the Mean Time to Repair and in the Mean Time between Failures on Overall Lead Time: A Simulation Using the System Dynamics-Factory Physics Model
Authors: Marcel Heimar Ribeiro Utiyama, Fernanda Caveiro Correia, Dario Henrique Alliprandini
Abstract:
The correct allocation of improvement programs has been of growing interest in recent years. Due to their limited resources, companies must ensure that their financial resources are directed to the correct workstations in order to be effective and survive in the face of strong competition. However, to the best of our knowledge, the literature on the allocation of improvement programs does not analyze this problem in depth when the flow shop process has two capacity constrained resources. This is a research gap that is studied in depth in this work. The purpose of this work is to identify the best strategy for allocating improvement programs in a flow shop with two capacity constrained resources. Data were collected from a flow shop process with seven workstations in an industrial control and automation company, which processes 13,690 units per month on average. The data were used to conduct a simulation with the System Dynamics-Factory Physics model. The main variables considered, due to their importance for lead time reduction, were the mean time between failures and the mean time to repair. Lead time reduction was the output measure of the simulations. Ten different strategies were created: (i) focused time to repair improvement, (ii) focused time between failures improvement, (iii) distributed time to repair improvement, (iv) distributed time between failures improvement, (v) focused time to repair and time between failures improvement, (vi) distributed time to repair and time between failures improvement, (vii) hybrid time to repair improvement, (viii) hybrid time between failures improvement, (ix) time to repair improvement directed at the two capacity constrained resources, and (x) time between failures improvement directed at the two capacity constrained resources. The ten strategies tested are variations of the three main strategies for improvement programs, named focused, distributed and hybrid.
Several comparisons of the effects of the ten strategies on lead time reduction were performed. The results indicated that, for the flow shop analyzed, the focused strategies delivered the best results. When it is not possible to make a large investment in the capacity constrained resources, companies should use hybrid approaches. An important contribution to academia is the hybrid approach, which proposes a new way to direct improvement efforts. In addition, the study of a flow shop with two strongly capacity constrained resources (more than 95% utilization) is an important contribution to the literature, as are the problem of allocation with two CCRs and the possibility of having floating capacity constrained resources. The results provided the best improvement strategies considering the different strategies for allocating improvement programs and the different positions of the capacity constrained resources. Finally, both the hybrid time to repair improvement strategy and the hybrid time between failures improvement strategy delivered better results than the respective distributed strategies. The main limitations of this study concern the particular flow shop analyzed. Future work can investigate different flow shop configurations, such as a varying number of workstations, a different number of products, or different positions of the two capacity constrained resources.
Keywords: allocation of improvement programs, capacity constrained resource, hybrid strategy, lead time, mean time to repair, mean time between failures
Procedia PDF Downloads 124
1437 Machine Learning and Deep Learning Approach for People Recognition and Tracking in Crowd for Safety Monitoring
Authors: A. Degale Desta, Cheng Jian
Abstract:
Deep learning applications in computer vision are rapidly advancing, making it possible to monitor public spaces and quickly identify potentially anomalous behaviour in crowd scenes. The purpose of the current work is therefore to improve the safety of people in crowd events involving panic behaviour by introducing the idea of Aggregation of Ensembles (AOE), which makes use of pre-trained ConvNets and a pool of classifiers to find anomalies in video data with packed scenes. While algorithms such as K-means, KNN, CNN, SVD, Faster R-CNN and YOLOv5 learn different levels of semantic representation from crowd videos, the proposed approach leverages an ensemble of fine-tuned convolutional neural networks (CNNs), allowing for the extraction of enriched feature sets. In addition to the above algorithms, a long short-term memory (LSTM) neural network is used to forecast future feature values, together with a handcrafted feature that takes the peculiarities of the crowd into consideration in order to understand human behavior. Experiments are run on well-known datasets of panic situations to assess the effectiveness and precision of the suggested method. The results reveal that, compared to state-of-the-art methodologies, the system produces better and more promising results in terms of accuracy and processing speed.
Keywords: action recognition, computer vision, crowd detecting and tracking, deep learning
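The abstract does not spell out how the AOE scheme combines the ensemble members' outputs; a common and minimal aggregation, shown here purely as an assumption, is to average the per-class softmax scores of the fine-tuned ConvNets and take the argmax. The class names and score values below are invented for illustration:

```python
import numpy as np

# Hypothetical per-model softmax outputs for one video clip over three
# classes: [normal, congested, panic]. Each row is one fine-tuned ConvNet.
model_probs = np.array([
    [0.70, 0.20, 0.10],   # ConvNet 1
    [0.10, 0.30, 0.60],   # ConvNet 2
    [0.15, 0.15, 0.70],   # ConvNet 3
])

# Simple-averaging aggregation: the ensemble score per class is the
# mean of the member scores; the prediction is the highest-scoring class.
ensemble = model_probs.mean(axis=0)
prediction = int(ensemble.argmax())
print(ensemble.round(3), prediction)
```

Averaging dampens a single model's overconfident mistake: here two of three members favour the panic class, so the ensemble does too even though model 1 disagrees.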
Procedia PDF Downloads 161
1436 Data Driven Infrastructure Planning for Offshore Wind Farms
Authors: Isha Saxena, Behzad Kazemtabrizi, Matthias C. M. Troffaes, Christopher Crabtree
Abstract:
The calculations done at the beginning of the life of a wind farm are rarely reliable, which makes it important to study the failure and repair rates of the wind turbines under various conditions. The miscalculation arises because current models make the simplifying assumption that the failure/repair rate remains constant over time, which means that the reliability function is exponential in nature. This research aims to create a more accurate model using sensory data and a data-driven approach. Data cleaning and processing are done by comparing the power curve data of the wind turbines with SCADA data. These are then converted to times-to-repair and times-to-failure time-series data. Several different mathematical functions are fitted to the times-to-failure and times-to-repair data of the wind turbine components using Maximum Likelihood Estimation and the posterior expectation method for Bayesian parameter estimation. Initial results indicate that the two-parameter Weibull function and the exponential function produce almost identical results. Further analysis is being done using complex system analysis, considering the failures of each electrical and mechanical component of the wind turbine. The aim of this project is to perform a more accurate reliability analysis that can help engineers schedule maintenance and repairs to decrease the downtime of the turbine.
Keywords: reliability, Bayesian parameter inference, maximum likelihood estimation, Weibull function, SCADA data
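The fitting step described above can be sketched with SciPy: both the two-parameter Weibull and the exponential model are fitted to times-to-failure data by Maximum Likelihood Estimation. The data here are synthetic stand-ins for the SCADA-derived series, drawn from an assumed Weibull with shape 1.5 and scale 500 h purely for illustration:

```python
import numpy as np
from scipy import stats

# Synthetic times-to-failure (hours); stand-in for SCADA-derived data.
rng = np.random.default_rng(42)
ttf = 500.0 * rng.weibull(1.5, size=200)

# MLE fit of the two-parameter Weibull (location fixed at zero).
shape, loc, scale = stats.weibull_min.fit(ttf, floc=0)

# The exponential model is the special case shape == 1, i.e. a constant
# failure rate; a fitted shape far from 1 indicates that the
# constant-rate assumption criticized above is too simple.
exp_loc, exp_scale = stats.expon.fit(ttf, floc=0)
print(f"Weibull shape={shape:.2f}, scale={scale:.0f} h; "
      f"exponential mean TTF={exp_scale:.0f} h")
```

Comparing the log-likelihoods (or an information criterion) of the two fits is then the natural way to check the paper's observation that the models perform almost identically on a given component.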
Procedia PDF Downloads 86
1435 Defining a Reference Architecture for Predictive Maintenance Systems: A Case Study Using the Microsoft Azure IoT-Cloud Components
Authors: Walter Bernhofer, Peter Haber, Tobias Mayer, Manfred Mayr, Markus Ziegler
Abstract:
Current preventive maintenance measures are cost-intensive and inefficient. With the sensor data available from state-of-the-art Internet of Things devices, new possibilities for automated data processing emerge. Current advances in data science and machine learning enable new, so-called predictive maintenance technologies, which empower data scientists to forecast possible system failures. The goal of this approach is to cut preventive maintenance expenses by automating the detection of possible failures and to improve the efficiency and quality of maintenance measures. Additionally, the sensor data monitoring can be centralized using this approach. This paper describes the approach of three students to define a reference architecture for a predictive maintenance solution in the Internet of Things domain, with a connected smartphone app for service technicians. The reference architecture is validated by a case study, implemented with current Microsoft Azure cloud technologies. The results of the case study show that the reference architecture is valid and can be used to build a system for predictive maintenance with the cloud components of Microsoft Azure. The concepts used are technology-platform agnostic and can be reused on many different cloud platforms. The reference architecture is valid and applicable to many use cases, such as gas station maintenance, elevator maintenance and many more.
Keywords: case study, internet of things, predictive maintenance, reference architecture
Procedia PDF Downloads 251
1434 Road Condition Monitoring Using Built-in Vehicle Technology Data, Drones, and Deep Learning
Authors: Judith Mwakalonge, Geophrey Mbatta, Saidi Siuhi, Gurcan Comert, Cuthbert Ruseruka
Abstract:
Transportation agencies worldwide continuously monitor the condition of their roads to minimize road maintenance costs and maintain public safety and rideability quality. Existing methods for carrying out road condition surveys involve manual observation of roads using standard survey forms completed by qualified road condition surveyors or engineers, either on foot or by vehicle. Automated road condition survey vehicles exist; however, they are very expensive, since they require special vehicles equipped with sensors for data collection together with data processing and computing devices. The manual methods are expensive, time-consuming and infrequent, and can hardly provide real-time information on road conditions. This study contributes to this arena by utilizing built-in vehicle technologies, drones, and deep learning to automate road condition surveys using low-cost technology. A single model is trained to capture flexible pavement distresses (potholes, rutting, cracking, and raveling), thereby providing a more cost-effective and efficient road condition monitoring approach that can also provide real-time road conditions. Additionally, data fusion is employed to enhance the road condition assessment with data from vehicles and drones.
Keywords: road conditions, built-in vehicle technology, deep learning, drones
Procedia PDF Downloads 124
1433 Pineapple Waste Valorization through Biogas Production: Effect of Substrate Concentration and Microwave Pretreatment
Authors: Khamdan Cahyari, Pratikno Hidayat
Abstract:
Indonesia produced more than 1.8 million tons of pineapple fruit in 2013, much of which turned into waste due to industrial processing, deterioration and low quality. It was estimated that this waste accounted for more than 40 percent of the harvested fruit. In addition, pineapple leaves were one form of biomass waste from pineapple farmland, which contributed even higher percentages. Most of the waste was simply dumped into landfill without proper pretreatment, causing severe environmental problems. This research was meant to valorize pineapple waste by producing the renewable energy source biogas through a mesophilic (30 ℃) anaerobic digestion process. In particular, it aimed to investigate the effect of the substrate concentration of pineapple fruit waste, i.e. peel and core, as well as the effect of microwave pretreatment of pineapple leaf waste. The substrate concentration was set at 12, 24 and 36 g VS/liter of culture, whereas 800-Watt microwave pretreatment was conducted for 2 and 5 minutes. It was noticed that optimum biogas production was obtained at a concentration of 24 g VS/l, with a biogas yield of 0.649 liter/g VS (45%v CH4), whereas the 2-minute microwave pretreatment performed better than the 5-minute one due to the shorter exposure to microwave heat. These results suggest that valorization of pineapple waste could be carried out through biogas production under the aforementioned process conditions. Application of this method can both reduce the environmental problem of the waste and produce the renewable biogas needed to fulfill the local energy demand of pineapple farming areas.
Keywords: pineapple waste, substrate concentration, microwave pretreatment, biogas, anaerobic digestion
Procedia PDF Downloads 580
1432 Investigating the Relationship between Job Satisfaction, Role Identity, and Turnover Intention for Nurses in Outpatient Department
Authors: Su Hui Tsai, Weir Sen Lin, Rhay Hung Weng
Abstract:
Hospital outpatient departments are numerous and serve enormous numbers of outpatients. Although the work of outpatient nursing staff does not involve the ward, emergency and critical care units that deal with life-threatening patient conditions, the work is cumbersome and requires facing and dealing with a large number of outpatients in a short period of time. Therefore, nursing staff often do not feel satisfied with their work and cannot identify with their professional role, leading to intentions to leave their jobs. Thus, the main purpose of this study is to explore the correlation of the job satisfaction and role identity of nursing staff with turnover intention. The research was conducted using a questionnaire, and the subjects were outpatient nursing staff in three regional hospitals in Southern Taiwan. A total of 175 questionnaires were distributed, and 166 valid questionnaires were returned. After the data were collected, the reliability and validity of the study variables were confirmed by confirmatory factor analysis. The influence of role identity and job satisfaction on nursing staff's turnover intention was analyzed by descriptive analysis, one-way ANOVA, Pearson correlation analysis and multiple regression analysis. Results showed that 'role identity' differed significantly across marital statuses. Job satisfaction with 'grasp of environment' differed significantly across levels of education. Job satisfaction with 'professional growth' and 'shifts and days off' showed significant differences across marital statuses. 'Role identity' and 'job satisfaction' were each negatively correlated with turnover intention. Job satisfaction with 'salary and benefits' and 'grasp of environment' were significant predictors of role identity: the higher the job satisfaction with 'salary and benefits' and 'grasp of environment', the higher the role identity.
Job satisfaction with 'patient and family interaction' was a significant predictor of turnover intention: the lower the job satisfaction with 'patient and family interaction', the higher the turnover intention. The study found that outpatient nursing staff had the lowest satisfaction with the salary structure. It is recommended that bonuses, promotion opportunities and other incentives be established to increase the role identity of outpatient nursing staff. Since higher job satisfaction with 'salary and benefits' and 'grasp of environment' went with higher role identity, it is recommended that regular evaluations be conducted to reward nursing staff with excellent service, and that nursing staff be invited to share their work experiences and thoughts, so as to enhance their expectation and identification of their occupational role, as well as instilling the concept of organizational service and the organization's expectations of emotional display. Since lower job satisfaction with 'patient and family interaction' went with higher turnover intention, it is recommended that interpersonal communication and workplace violence prevention training courses be organized to enhance the communication and interaction of nursing staff with patients and their families.
Keywords: outpatient, job satisfaction, turnover intention
Procedia PDF Downloads 146
1431 A Comparative Study of Motion Events Encoding in English and Italian
Authors: Alfonsina Buoniconto
Abstract:
The aim of this study is to investigate the degree of cross-linguistic and intra-linguistic variation in the encoding of motion events (MEs) in English and Italian, these being typologically different languages that both show signs of disobedience to their respective types. In fact, the traditional typological classification of ME encoding distributes languages into two macro-types, based on the preferred locus for the expression of Path, the main ME component (the other components being Figure, Ground and Manner), which is characterized by conceptual and structural prominence. According to this model, Satellite-framed (SF) languages typically express Path information in verb-dependent items called satellites (e.g. preverbs and verb particles), with main verbs encoding Manner of motion, whereas Verb-framed (VF) languages tend to include Path information within the verbal locus, leaving Manner to adjuncts. Although this dichotomy is valid on the whole, languages do not always behave according to their typical classification patterns. English, for example, is usually ascribed to the SF type due to its rich inventory of postverbal particles and phrasal verbs used to express spatial relations (e.g. the cat climbed down the tree); nevertheless, it is not uncommon to find constructions such as the fog descended slowly, which are typical of the VF type. Conversely, Italian is usually described as VF (cf. Paolo uscì di corsa 'Paolo went out running'), yet SF constructions like corse via in lacrime 'She ran away in tears' are also frequent. This paper will try to demonstrate that such typological overlapping is due to the fact that the semantic units making up MEs are distributed across several loci of the sentence, not only verbs and satellites, thus determining a number of different constructions stemming from convergent factors.
Indeed, the linguistic expression of motion events depends not only on the typological nature of languages in the traditional sense, but also on a series of morphological, lexical, and syntactic resources, as well as on inferential, discursive, usage-related, and cultural factors that make semantic information more or less accessible, frequent, and easy to process. Hence, rather than describing English and Italian in dichotomic terms, this study focuses on the investigation of cross-linguistic and intra-linguistic variation in the use of all the strategies made available by each linguistic system to express motion. Evidence for these assumptions is provided by parallel corpus analysis. The sample texts are taken from two contemporary Italian novels and their respective English translations. The 400 motion occurrences selected (200 in English and 200 in Italian) were scanned according to the MODEG (Motion Decoding Grid) methodology, which grants data comparability through the indexation and retrieval of combined morphosyntactic and semantic information at different levels of detail.
Keywords: construction typology, motion event encoding, parallel corpora, satellite-framed vs. verb-framed type
Procedia PDF Downloads 260
1430 Information-Controlled Laryngeal Feature Variations in Korean Consonants
Authors: Ponghyung Lee
Abstract:
This study seeks to investigate the variations occurring to Korean consonants, centering on the laryngeal features of the sounds concerned, to the exclusion of others. Our fundamental premise is that the weak contrast associated with the segments concerned may be held accountable for the oscillation of the status of these consonants. What is more, we assume that an array of notions serving as measures of the communicative efficiency of linguistic units is significantly influential in triggering those variations. To this end, we have computed the surprisal, entropic contribution, and relative contrastiveness associated with Korean obstruent consonants. What we found is that the information-theoretic perspective is compelling enough to lend considerable support to our approach. That is, the variant realizations, chronological and stylistic, prove to be profoundly affected by the set of information-theoretic factors enumerated above. For the biblical proper names, we use the Georgetown University CQP Web-Bible corpora. From 8 texts (4 from the Old Testament and 4 from the New Testament) among the total of 64 texts, we extracted 199 samples. We address the issue of laryngeal feature variations associated with Korean obstruent consonants under the presumption that the variations stem from the weak contrast among the three manifestations of laryngeal features. The variants emerge from diverse sources, chronological and stylistic: Christian biblical texts, ordinary casual speech, the shift of loanword adaptation over time, and ideophones. For the purpose of discussing what they are really like from the perspective of Information Theory, it is necessary to look closely at the data. Among these sources, the massive changes occurring to the loanword adaptation of proper nouns during the centennial history of Korean Christianity draw our special attention.
We searched 199 types of initially capitalized words among 45,528 word tokens, which account for around 5% of the total of 901,701 word tokens (12,786 word types) in the Georgetown University CQP Web-Bible corpora. We focus on the shift of the laryngeal features incorporated into word-initial consonants, which is observable through two distinct versions of the Korean Bible: one published in the 1960s for Protestants, and the other in the 1990s for the Catholic Church. Of these proper names, we have closely traced the adaptation of the plain obstruents, e.g. /b, d, g, s, ʤ/, in the sources. The results show that as much as 41% of the extracted proper names show variations: 37% in terms of aspiration and 4% in terms of tensing. This study set out to shed light on the question: to what extent can we attribute the variations occurring to the laryngeal features of Korean obstruent consonants to the communicative aspects of linguistic activity? In this vein, the concerted effects of the triad of surprisal, entropic contribution, and relative contrastiveness can be credited with the ups and downs in the feature specification, despite continuing contention over the role of surprisal.
Keywords: entropic contribution, laryngeal feature variation, relative contrastiveness, surprisal
Procedia PDF Downloads 128
1429 PathoPy2.0: Application of Fractal Geometry for Early Detection and Histopathological Analysis of Lung Cancer
Authors: Rhea Kapoor
Abstract:
Fractal dimension provides a way to characterize non-geometric shapes like those found in nature. The purpose of this research is to estimate the Minkowski fractal dimension of human lung images for early detection of lung cancer. Lung cancer is the leading cause of death among all types of cancer, and early histopathological analysis will help reduce deaths primarily due to late diagnosis. A Python application, PathoPy2.0, was developed for analyzing medical images in pixelated format and estimating the Minkowski fractal dimension using a new box-counting algorithm that allows windowing of images for more accurate calculation in suspected areas of cancerous growth. Benchmark geometric fractals were used to validate the accuracy of the program, and changes in the fractal dimension of lung images were used to indicate the presence of issues in the lung. The accuracy of the program on the benchmark examples was between 93% and 99% of the known values of the fractal dimensions. Fractal dimension values were then calculated for lung images, from the National Cancer Institute, taken over time in order to detect the presence of cancerous growth. For example, an increase in the fractal dimension of a given lung from 1.19 to 1.27 due to cancerous growth represents a significant change in a quantity that lies between 1 and 2 for 2-D images. Based on the results obtained on many lung test cases, it was concluded that the fractal dimension of human lungs can be used to diagnose lung cancer early. The ideas behind PathoPy2.0 can also be applied to study patterns in the electrical activity of the human brain and in DNA matching.
Keywords: fractals, histopathological analysis, image processing, lung cancer, Minkowski dimension
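A minimal box-counting estimate of the Minkowski dimension of a binary image can be sketched as follows (PathoPy2.0's windowing refinement is omitted, and this is a generic sketch rather than the paper's actual code). It includes the kind of benchmark check the abstract describes: a filled square should come out close to 2, a straight line close to 1:

```python
import numpy as np

def minkowski_dimension(img, sizes=(2, 4, 8, 16, 32)):
    """Estimate the Minkowski (box-counting) dimension of a binary image.

    For each box size s, count the s-by-s boxes containing at least one
    foreground pixel; the dimension is the slope of log(count) against
    log(1/s), obtained by a least-squares line fit.
    """
    counts = []
    for s in sizes:
        h, w = img.shape
        # Trim so the image tiles evenly, then tile it into s-by-s boxes.
        trimmed = img[:h - h % s, :w - w % s]
        boxes = trimmed.reshape(trimmed.shape[0] // s, s,
                                trimmed.shape[1] // s, s)
        counts.append((boxes.sum(axis=(1, 3)) > 0).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Benchmark checks on known geometries.
square = np.ones((128, 128), dtype=int)        # filled 2-D region
line = np.zeros((128, 128), dtype=int)
line[64, :] = 1                                # 1-D curve
print(round(minkowski_dimension(square), 2),   # expect ~2
      round(minkowski_dimension(line), 2))     # expect ~1
```

A lung scan thresholded to a binary mask would be passed in the same way, and an increase of the estimate over time (e.g. 1.19 to 1.27) is what the study treats as a warning sign.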
Procedia PDF Downloads 178
1428 An AI-Based Dynamical Resource Allocation Calculation Algorithm for Unmanned Aerial Vehicle
Authors: Zhou Luchen, Wu Yubing, Burra Venkata Durga Kumar
Abstract:
As the scale of networks becomes larger and more complex than before, the density of user devices is also increasing. The development of Unmanned Aerial Vehicle (UAV) networks makes it possible to collect and transfer data efficiently by using software-defined networking (SDN) technology. This paper proposes a three-layer, distributed and dynamic cluster architecture for managing UAVs, using an AI-based resource allocation calculation algorithm to address the network overload problem. By separating the services of each UAV, the UAV hierarchical cluster system performs the main function of reducing the network load and transferring user requests, with three sub-tasks: data collection, communication channel organization, and data relaying. In each cluster, a head node and a vice head node UAV are selected considering the Central Processing Unit (CPU), operational (RAM) and permanent (ROM) memory of the devices, battery charge, and capacity. The vice head node acts as a backup that stores all the data held by the head node. The k-means clustering algorithm is used to detect high-load regions and form the layered UAV clusters. The whole process of detecting high-load areas, forming and selecting UAV clusters, and moving the selected UAV cluster to the area concerned is proposed as the traffic offloading algorithm.
Keywords: k-means, resource allocation, SDN, UAV network, unmanned aerial vehicles
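The high-load detection step can be sketched with a plain Lloyd's k-means over device coordinates. The device positions below are hypothetical, generated as two dense hotspots; the resulting centroids mark the regions where UAV clusters would be formed and deployed:

```python
import numpy as np

def kmeans(points, k, iters=50):
    """Plain Lloyd's k-means; returns (centroids, labels)."""
    # Deterministic init: k indices spread across the data set.
    idx = np.linspace(0, len(points) - 1, k, dtype=int)
    centroids = points[idx].astype(float)
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :],
                               axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = points[labels == j].mean(axis=0)
    return centroids, labels

# Hypothetical ground-device coordinates (km): two dense hotspots.
rng = np.random.default_rng(1)
devices = np.vstack([
    rng.normal([10.0, 10.0], 1.0, size=(100, 2)),   # hotspot A
    rng.normal([40.0, 25.0], 1.0, size=(100, 2)),   # hotspot B
])

centroids, labels = kmeans(devices, k=2)
# Each centroid marks a high-load region; a UAV cluster (head node plus
# vice head node) would be dispatched toward it.
print(np.round(centroids[np.argsort(centroids[:, 0])], 1))
```

In the full scheme, k (the number of clusters) and the per-cluster head/vice-head selection would then be driven by the CPU, RAM/ROM, battery and capacity criteria described above.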
Procedia PDF Downloads 111
1427 Applying Biculturalism in Studying Tourism Host Community Cultural Integrity and Individual Member Stress
Authors: Shawn P. Daly
Abstract:
Communities heavily engaged in the tourism industry discover that their values intersect, meld, and conflict with those of visitors. Maintaining cultural integrity in the face of powerful external pressures causes stress among society members. This effect represents a less studied aspect of sustainable tourism. The present paper brings a perspective unique to the tourism literature: biculturalism. The grounded theories, coherent hypotheses, and validated constructs and indicators of biculturalism represent a sound base from which to consider sociocultural issues in sustainable tourism. Five models describe the psychological state of individuals operating at cultural crossroads: assimilation (joining the new culture), acculturation (grasping the new culture but remaining of the original culture), alternation (varying behavior to cultural context), multicultural (maintaining distinct cultures), and fusion (blending cultures). These five processes divide into two units of analysis (individual and society), permitting research questions at levels important for considering sociocultural sustainability. Acculturation modelling has morphed into the dual processes of acculturation (new culture adaptation) and enculturation (original culture adaptation). This dichotomy divides sustainability research questions into human impacts from assimilation (acquiring the new culture, discarding the original), separation (rejecting the new culture, keeping the original), integration (acquiring the new culture, keeping the original), and marginalization (rejecting the new culture, discarding the original). Biculturalism is often cast in terms of its emotional, behavioral, and cognitive dimensions. Required cultural adjustments and varying levels of cultural competence lead to physical, psychological, and emotional outcomes, including depression, lowered life satisfaction and self-esteem, headaches, and back pain—or enhanced career success, social skills, and lifestyles.
Numerous studies provide empirical scales and research hypotheses for sustainability research into tourism's causality and effect on local well-being. One key issue in applying biculturalism to sustainability scholarship concerns identification and specification of the alternative new culture contacting the local culture. Evidence exists for a tourism industry culture, a universal tourist culture, and location- or event-specific tourist cultures. The biculturalism paradigm holds promise for researchers examining evolving cultural identity and integrity in response to mass tourism. In particular, confirmed constructs and scales simplify the operationalization of tourism sustainability studies in terms of human impact and adjustment.
Keywords: biculturalism, cultural integrity, psychological and sociocultural adjustment, tourist culture
Procedia PDF Downloads 409
1426 Anisotropic Total Fractional Order Variation Model in Seismic Data Denoising
Authors: Jianwei Ma, Diriba Gemechu
Abstract:
In seismic data processing, attenuation of random noise is the basic step for improving data quality for further application of seismic data in exploration and development across gas and oil industries. The signal-to-noise ratio of the data also strongly determines the quality of seismic data; it affects the reliability and accuracy of the seismic signal during interpretation for different purposes in different companies. To use seismic data for further application and interpretation, the signal-to-noise ratio must be improved while attenuating random noise effectively. To improve the signal-to-noise ratio and attenuate seismic random noise while preserving important features and information in the seismic signal, we introduce an anisotropic total fractional order denoising algorithm. The anisotropic total fractional order variation model, defined in the space of fractional order bounded variation, is proposed as a regularization in seismic denoising. The split Bregman algorithm is employed to solve the minimization problem of the anisotropic total fractional order variation model, and the corresponding denoising algorithm for the proposed method is derived. We test the effectiveness of the proposed method on synthetic and real seismic data sets, and the denoised results are compared with F-X deconvolution and the non-local means denoising algorithm.
Keywords: anisotropic total fractional order variation, fractional order bounded variation, seismic random noise attenuation, split Bregman algorithm
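The split Bregman iteration referred to above alternates a linear solve with a cheap closed-form shrinkage step. The sketch below is the classical Goldstein-Osher split Bregman for ordinary (integer-order) 1-D total variation denoising, shown only to illustrate the structure of the iteration; the paper's fractional-order anisotropic model replaces the difference operator, and the signal and parameters here are illustrative assumptions.

```python
import numpy as np

def shrink(x, lam):
    # Soft-thresholding: the closed-form minimiser used in each Bregman step.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def tv_denoise_split_bregman(f, mu=10.0, lam=5.0, iters=50):
    """1-D TV denoising: minimise |Du|_1 + (mu/2) * ||u - f||^2."""
    n = len(f)
    # Forward-difference operator D, shape (n-1, n).
    D = np.zeros((n - 1, n))
    idx = np.arange(n - 1)
    D[idx, idx] = -1.0
    D[idx, idx + 1] = 1.0
    A = mu * np.eye(n) + lam * D.T @ D
    u = f.copy()
    d = np.zeros(n - 1)
    b = np.zeros(n - 1)
    for _ in range(iters):
        u = np.linalg.solve(A, mu * f + lam * D.T @ (d - b))  # linear solve
        Du = D @ u
        d = shrink(Du + b, 1.0 / lam)                          # shrinkage step
        b = b + Du - d                                         # Bregman update
    return u

# Noisy step signal: TV denoising should recover the flat pieces.
rng = np.random.default_rng(0)
clean = np.concatenate([np.zeros(50), np.ones(50)])
noisy = clean + 0.1 * rng.normal(size=100)
denoised = tv_denoise_split_bregman(noisy)
```

The appeal of the scheme, for this model as for the fractional-order one, is that the non-smooth TV term is handled entirely by the elementwise `shrink` step while the coupling stays in a well-conditioned linear system.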
Procedia PDF Downloads 207
1425 Anti-Nutritional Factors, In-Vitro Trypsin, Chymotrypsin and Peptidase Multi Enzyme Protein Digestibility of Some Melon (Egusi) Seeds and Their Protein Isolates
Authors: Joan O. Ogundele, Aladesanmi A. Oshodi, Adekunle I. Amoo
Abstract:
In-vitro multi-enzyme protein digestibility (IVMPD) and some anti-nutritional factors (ANF) of five melon (egusi) seed flours (MSF) and their protein isolates (PI) were determined. The PI have potential comparable to that of soya beans. It is important to know the IVMPD and ANF of these protein sources to ensure their safety when adapted as alternative protein sources to substitute for cow milk, which is relatively expensive in Nigeria. Standard methods were used to produce PI of Citrullus colocynthis, Citrullus vulgaris, African Wine Kettle gourd (Lagenaria siceraria I), Basket Ball gourd (Lagenaria siceraria II) and Bushel Giant gourd (Lagenaria siceraria III) seeds, and to determine the ANF and IVMPD of the MSF and PI, unheated and at 37 °C. The multi-enzymes used were trypsin, chymotrypsin and peptidase. IVMPD of MSF ranged from (70.67 ± 0.70)% (C. vulgaris) to (72.07 ± 1.79)% (L. siceraria I), while that of their PI ranged from 74.33% (C. vulgaris) to 77.55% (L. siceraria III). IVMPD of the PI was higher than that of the MSF. Heating increased the IVMPD of the MSF to an average value of 79.40% and that of the PI to an average of 84.14%. Average ANF in MSF were tannin (0.11 mg/g) and phytate (0.23%). Differences in IVMPD of MSF and their PI at different temperatures may arise from processing conditions that alter the release of amino acids from proteins by enzymatic processes. ANF in MSF were relatively low, and were found to be even lower in the PI, making the PI safer for human consumption as an alternative source of protein.
Keywords: anti-nutrients, enzymatic protein digestibility, melon (egusi), protein isolates
Procedia PDF Downloads 123
1424 Treatment of Leather Industry Wastewater with Advanced Treatment Methods
Authors: Seval Yilmaz, Filiz Bayrakci Karel, Ali Savas Koparal
Abstract:
Textile products produced from leather have long been indispensable for human consumption. Various chemicals are used to enhance the durability of end-products in the processing of leather products. The wastewaters from the leather industry which contain these chemicals exhibit toxic effects on the receiving environment and threaten the natural ecosystem. In this study, leather industry wastewater (LIW), which has high loads of contaminants, was treated using advanced treatment techniques instead of conventional methods. During the experiments, the performance of electrochemical methods was investigated: batch electrooxidation (EO) using boron-doped diamond (BDD) electrodes in monopolar configuration for removal of chemical oxygen demand (COD) from LIW. The influences of electrolysis time, current density (5 mA/cm², 10 mA/cm², 20 mA/cm², 30 mA/cm², 50 mA/cm²) and initial pH (3.80 (the natural pH of LIW), 7, and 9) on removal efficiency were investigated in a batch stirred cell to determine the best treatment conditions. The current density applied to an electrochemical reactor is directly proportional to the consumption of electrical energy, so electrical energy consumption was monitored during the experiments. The best experimental conditions obtained in the electrochemical studies were: electrolysis time = 60 min, current density = 30.0 mA/cm², pH 7. Under these conditions, a COD removal rate of 53.59% was achieved for LIW, and total energy consumption was 13.03 kWh/m³. It is concluded that the electrooxidation process constitutes a plausible and developable method for the treatment of LIW.
Keywords: BDD electrodes, COD removal, electrochemical treatment, leather industry wastewater
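The two reported figures are tied together by standard definitions: COD removal efficiency and specific energy consumption. A small sketch of the arithmetic; the initial/final COD values and the cell voltage, current and treated volume below are hypothetical values chosen for illustration, not data from the study.

```python
def cod_removal_pct(cod_initial, cod_final):
    """COD removal efficiency in percent: 100 * (COD0 - COD) / COD0."""
    return 100.0 * (cod_initial - cod_final) / cod_initial

def specific_energy_kwh_per_m3(voltage_v, current_a, time_h, volume_l):
    """Specific energy consumption: E = U * I * t / V, in kWh per m^3."""
    energy_kwh = voltage_v * current_a * time_h / 1000.0
    return energy_kwh / (volume_l / 1000.0)

# Hypothetical inputs chosen to reproduce the abstract's reported numbers.
print(round(cod_removal_pct(2500.0, 1160.25), 2))                  # → 53.59
print(round(specific_energy_kwh_per_m3(10.0, 0.65, 1.0, 0.5), 2))  # → 13.0
```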
Procedia PDF Downloads 159
1423 Electrodeposition and Selenization of CuIn Alloys for the Synthesis of Photoactive CuIn1-xGaxSe2 (CIGS) Thin Films
Authors: Mohamed Benaicha, Mahdi Allam
Abstract:
A new two-stage electrochemical process is studied as a safe, large-area and low-cost technique for the production of semiconducting CuInSe2 (CIS) thin films. CuIn precursors were first potentiostatically electrodeposited onto molybdenum substrates from an acidic thiocyanate electrolyte. In a second stage, the prepared metallic CuIn layers were used as substrates in the selenium electrochemical deposition system and subjected to a thermal treatment in a vacuum atmosphere to eliminate binary phase formation by reaction of the Cu2-xSe and InxSey selenides, leading to the formation of the CuInSe2 thin film. Electrochemical selenization from an aqueous electrolyte is introduced as an alternative to the toxic and hazardous H2Se or Se vapor phase selenization used in physical techniques. In this study, the influence of film deposition parameters such as bath composition, temperature and potential on film properties was studied. The electrochemical, morphological, structural and compositional properties of the electrodeposited thin films were characterized using various techniques. Results of Cyclic and Stripping-Cyclic Voltammetry (CV, SCV), Scanning Electron Microscopy (SEM) and Energy Dispersive X-Ray microanalysis (EDX) investigations revealed good reproducibility and homogeneity of the film composition. Thereby, optimal technological parameters for the electrochemical production of CuIn and Se as precursors for CuInSe2 thin layers are determined.
Keywords: photovoltaic, CIGS, copper alloys, electrodeposition, thin films
Procedia PDF Downloads 464
1422 Electrohydrodynamic Patterning for Surface Enhanced Raman Scattering for Point-of-Care Diagnostics
Authors: J. J. Rickard, A. Belli, P. Goldberg Oppenheimer
Abstract:
Medical diagnostics, environmental monitoring, homeland security and forensics increasingly demand specific and field-deployable analytical technologies for quick point-of-care diagnostics. Although technological advancements have made optical methods well-suited for miniaturization, a highly-sensitive detection technique for minute sample volumes is required. Raman spectroscopy is a well-known analytical tool, but has very weak signals and hence is unsuitable for trace level analysis. Enhancement via localized optical fields (surface plasmons resonances) on nanoscale metallic materials generates huge signals in surface-enhanced Raman scattering (SERS), enabling single molecule detection. This enhancement can be tuned by manipulation of the surface roughness and architecture at the sub-micron level. Nevertheless, the development and application of SERS has been inhibited by the irreproducibility and complexity of fabrication routes. The ability to generate straightforward, cost-effective, multiplex-able and addressable SERS substrates with high enhancements is of profound interest for SERS-based sensing devices. While most SERS substrates are manufactured by conventional lithographic methods, the development of a cost-effective approach to create nanostructured surfaces is a much sought-after goal in the SERS community. Here, a method is established to create controlled, self-organized, hierarchical nanostructures using electrohydrodynamic (HEHD) instabilities. The created structures are readily fine-tuned, which is an important requirement for optimizing SERS to obtain the highest enhancements. HEHD pattern formation enables the fabrication of multiscale 3D structured arrays as SERS-active platforms. Importantly, each of the HEHD-patterned individual structural units yield a considerable SERS enhancement. This enables each single unit to function as an isolated sensor. 
Each of the formed structures can be effectively tuned and tailored to provide high SERS enhancement while arising from different HEHD morphologies. The HEHD fabrication of sub-micrometer architectures is straightforward and robust, providing an elegant route for high-throughput biological and chemical sensing. The superior detection properties and the ability to fabricate SERS substrates at the miniaturized scale will facilitate the development of advanced and novel opto-fluidic devices, such as portable detection systems, and will offer numerous applications in biomedical diagnostics, forensics, ecological warfare and homeland security.
Keywords: hierarchical electrohydrodynamic patterning, medical diagnostics, point-of-care devices, SERS
Procedia PDF Downloads 345
1421 Using Autoencoder as Feature Extractor for Malware Detection
Authors: Umm-E-Hani, Faiza Babar, Hanif Durad
Abstract:
Malware-detection approaches suffer from many limitations, due to which anti-malware solutions have failed to be reliable enough for detecting zero-day malware. Signature-based solutions depend upon signatures that can be generated only once malware has surfaced at least once in the cyber world. Another approach, which works by detecting the anomalies caused in the environment, can easily be defeated by diligently and intelligently written malware. Solutions trained to observe behavior for detecting malicious files have failed to cater to malware capable of detecting a sandboxed or protected environment. Machine learning and deep learning-based approaches suffer greatly when their models are trained with either an imbalanced dataset or an inadequate number of samples. AI-based anti-malware solutions that have been trained with enough samples have targeted a selected feature vector, thus ignoring the contribution of the leftover features to the maliciousness of malware, just to cope with limited underlying hardware processing power. Our research focuses on producing an anti-malware solution for detecting malicious PE files that circumvents the shortcomings mentioned earlier. Our proposed framework, based on automated feature engineering through autoencoders, trains the model over a fairly large dataset. It focuses on the visual patterns of malware samples to automatically extract the meaningful part of the visual pattern. Our experiment has successfully produced a state-of-the-art accuracy of 99.54% over test data.
Keywords: malware, autoencoders, automated feature engineering, classification
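The core idea — train a network to reconstruct its input, then keep the compressed encoder output as the feature vector — can be sketched in plain numpy. This is a toy single-hidden-layer autoencoder on random vectors, not the paper's architecture or malware data; the dimensions, learning rate and iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image" vectors: 200 samples, 64 features (e.g. flattened 8x8 patches).
X = rng.normal(size=(200, 64))

# One hidden layer of 8 units: the encoder output is the learned feature vector.
W_enc = rng.normal(scale=0.1, size=(64, 8))
W_dec = rng.normal(scale=0.1, size=(8, 64))

def forward(X):
    Z = np.tanh(X @ W_enc)   # encoder: compressed features
    return Z, Z @ W_dec      # decoder: reconstruction

def mse(X, X_hat):
    return ((X - X_hat) ** 2).mean()

lr = 0.01
_, X_hat = forward(X)
loss_before = mse(X, X_hat)
for _ in range(200):
    Z, X_hat = forward(X)
    # Backpropagate the reconstruction error through decoder and encoder.
    dXhat = 2.0 * (X_hat - X) / X.size
    gW_dec = Z.T @ dXhat
    dZ = dXhat @ W_dec.T
    dpre = dZ * (1.0 - Z ** 2)   # tanh derivative
    gW_enc = X.T @ dpre
    W_dec -= lr * gW_dec
    W_enc -= lr * gW_enc
_, X_hat = forward(X)
loss_after = mse(X, X_hat)

# The encoder output, not the reconstruction, is what feeds the classifier.
features, _ = forward(X)
```

Once training has driven the reconstruction loss down, the 8-dimensional `features` rows serve as the automatically engineered inputs for a downstream classifier.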
Procedia PDF Downloads 72
1420 Ferulic Acid-Grafted Chitosan: Thermal Stability and Feasibility as an Antioxidant for Active Biodegradable Packaging Film
Authors: Sarekha Woranuch, Rangrong Yoksan
Abstract:
Active packaging has been developed based on the incorporation of certain additives, in particular antimicrobial and antioxidant agents, into packaging systems to maintain or extend product quality and shelf-life. Ferulic acid is one of the most effective natural phenolic antioxidants and has been used in food, pharmaceutical and active packaging film applications. However, most phenolic compounds are sensitive to oxygen, light and heat; their activities are thus lost during product formulation and processing. Grafting ferulic acid onto a polymer is an alternative that reduces this loss during thermal processes. The objectives of the present research were therefore to study the thermal stability of ferulic acid after grafting onto chitosan, and to investigate the possibility of using ferulic acid-grafted chitosan (FA-g-CTS) as an antioxidant for active biodegradable packaging film. FA-g-CTS was incorporated into biodegradable film via a two-step process, i.e. compounding extrusion at temperatures up to 150 °C followed by blown film extrusion at temperatures up to 175 °C. Although incorporating FA-g-CTS at 0.02–0.16% (w/w) decreased the water vapor barrier property and reduced extensibility, the films showed improved oxygen barrier property and antioxidant activity. The radical scavenging activity and reducing power of the film containing FA-g-CTS at 0.04% (w/w) were higher than those of the base film by about 254% and 94%, respectively. Tensile strength and rigidity of the films were not significantly affected by adding FA-g-CTS at 0.02–0.08% (w/w). The results indicated that FA-g-CTS could potentially be used as an antioxidant for active packaging film.
Keywords: active packaging film, antioxidant activity, chitosan, ferulic acid
Procedia PDF Downloads 503
1419 Innovation Management in E-Health Care: The Implementation of New Technologies for Health Care in Europe and the USA
Authors: Dariusz M. Trzmielak, William Bradley Zehner, Elin Oftedal, Ilona Lipka-Matusiak
Abstract:
The use of new technologies should create new value for all stakeholders in the healthcare system. The article focuses on demonstrating that technologies or products typically enable new functionality, a higher standard of service, or a higher level of knowledge and competence for clinicians. It also highlights the key benefits that can be achieved through the use of artificial intelligence, such as relieving clinicians of many tasks and enabling the expansion and greater specialisation of healthcare services. The comparative analysis allowed the authors to create a classification of new technologies in e-health according to health needs and benefits for patients, doctors, and healthcare systems, i.e., the main stakeholders in the implementation of new technologies and products in healthcare. The added value of the development of new technologies in healthcare is diagnosed. The work is both theoretical and practical in nature. The primary research methods are bibliographic analysis and analysis of research data and market potential of new solutions for healthcare organisations. The bibliographic analysis is complemented by the author's case studies of implemented technologies, mostly based on artificial intelligence or telemedicine. In the past, patients were often passive recipients, the end point of the service delivery system, rather than stakeholders in the system. One of the dangers of powerful new technologies is that patients may become even more marginalised. Healthcare will be provided and delivered in an increasingly administrative, programmed way. The doctor may also become a robot, carrying out programmed activities - using 'non-human services'. An alternative approach is to put the patient at the centre, using technologies, products, and services that allow them to design and control technologies based on their own needs. 
An important contribution to the discussion is to open up the different dimensions of the user (carer and patient) and to make them aware of healthcare units implementing new technologies. The authors of this article outline the importance of three types of patients in the successful implementation of new medical solutions. The impact of implemented technologies is analysed based on: 1) "Informed users", who are able to use the technology based on a better understanding of it; 2) "Engaged users" who play an active role in the broader healthcare system as a result of the technology; 3) "Innovative users" who bring their own ideas to the table based on a deeper understanding of healthcare issues. The authors' research hypothesis is that the distinction between informed, engaged, and innovative users has an impact on the perceived and actual quality of healthcare services. The analysis is based on case studies of new solutions implemented in different medical centres. In addition, based on the observations of the Polish author, who is a manager at the largest medical research institute in Poland, with analytical input from American and Norwegian partners, the added value of the implementations for patients, clinicians, and the healthcare system will be demonstrated.
Keywords: innovation, management, medicine, e-health, artificial intelligence
Procedia PDF Downloads 20
1418 Wolof Voice Response Recognition System: A Deep Learning Model for Wolof Audio Classification
Authors: Krishna Mohan Bathula, Fatou Bintou Loucoubar, FNU Kaleemunnisa, Christelle Scharff, Mark Anthony De Castro
Abstract:
Voice recognition algorithms such as automatic speech recognition and text-to-speech systems for African languages can play an important role in bridging the digital divide of Artificial Intelligence in Africa, contributing to the establishment of a fully inclusive information society. This paper proposes a deep learning model that can classify user responses as inputs for an interactive voice response system. A dataset of the Wolof-language words ‘yes’ and ‘no’ was collected as audio recordings. A two-stage data augmentation approach is adopted to enlarge the dataset to the size required by the deep neural network. Data preprocessing and feature engineering with Mel-Frequency Cepstral Coefficients are implemented. Convolutional Neural Networks (CNNs) have proven to be very powerful in image classification and are promising for audio processing when sounds are transformed into spectra. To perform voice response classification, the recordings are transformed into sound frequency feature spectra, to which image classification methodology is then applied using a deep CNN model. The inference model of this trained and reusable Wolof voice response recognition system can be integrated with many applications on both web and mobile platforms.
Keywords: automatic speech recognition, interactive voice response, voice response recognition, Wolof word classification
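The "sound into an image" step can be sketched with a plain magnitude spectrogram. The paper uses MFCC features; this is the simpler short-time-FFT precursor of that pipeline, and the sample rate, tone and frame parameters are illustrative assumptions.

```python
import numpy as np

def spectrogram(signal, frame_len=256, hop=128):
    """Magnitude spectrogram: slice the signal into overlapping frames,
    window each frame, and take the FFT magnitude of each."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1))  # shape: (time, frequency)

# A 1-second synthetic "recording" at 8 kHz: a pure 440 Hz tone.
sr = 8000
t = np.arange(sr) / sr
audio = np.sin(2 * np.pi * 440 * t)

spec = spectrogram(audio)
# The result is a 2-D array, i.e. an "image" a CNN can classify.
print(spec.shape)  # → (61, 129)
```

The 440 Hz tone shows up as a bright horizontal band near frequency bin 14 (440 / (8000/256) ≈ 14.1), which is exactly the kind of spatial pattern a CNN exploits.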
Procedia PDF Downloads 116
1417 Spatial Information and Urbanizing Futures
Authors: Mohammad Talei, Neda Ranjbar Nosheri, Reza Kazemi Gorzadini
Abstract:
Today, municipalities are searching for new tools to increase public participation at different levels of urban planning. This approach to urban planning involves the community in the planning process through participatory approaches instead of the long-standing traditional top-down planning methods. These tools can be used to obtain the particular problems of urban furniture from the residents' point of view. One tool designed with this goal is public participation GIS (PPGIS), which enables citizens to record and follow up on their feelings and spatial knowledge regarding the main problems of the city, specifically urban furniture, in the form of maps. However, despite the good intentions of PPGIS, its practical implementation in developing countries faces many problems, including the lack of basic supporting infrastructure and services and the unavailability of sophisticated public participatory models. In this research, we develop a PPGIS using Web 2.0 to collect voluntary geodata and to perform spatial analysis based on Spatial OnLine Analytical Processing (SOLAP) and Spatial Data Mining (SDM). These tools provide urban planners with proper information regarding the type, spatial distribution and clusters of reported problems. The system is implemented in a case study area in Tehran, Iran, and the challenges to making it applicable and its potential for real urban planning have been evaluated. It helps decision makers better understand, plan and allocate scarce resources for providing the most requested urban furniture.
Keywords: PPGIS, spatial information, urbanizing futures, urban planning
Procedia PDF Downloads 726
1416 A BERT-Based Model for Financial Social Media Sentiment Analysis
Authors: Josiel Delgadillo, Johnson Kinyua, Charles Mutigwe
Abstract:
The purpose of sentiment analysis is to determine the sentiment strength (e.g., positive, negative, neutral) of a textual source for good decision-making. Natural language processing in domains such as financial markets requires knowledge of domain ontology, and pre-trained language models, such as BERT, have made significant breakthroughs in various NLP tasks by training on large-scale unlabeled generic corpora such as Wikipedia. However, sentiment analysis is a strongly domain-dependent task. The rapid growth of social media has given users a platform to share their experiences and views about products, services, and processes, including financial markets. StockTwits and Twitter are social networks that allow the public to express their sentiments in real time. Hence, leveraging the success of unsupervised pre-training and the large amount of financial text available on social media platforms could potentially benefit a wide range of financial applications. This work is focused on sentiment analysis using social media text from platforms such as StockTwits and Twitter. To meet this need, SkyBERT, a domain-specific language model pre-trained and fine-tuned on financial corpora, has been developed. The results show that SkyBERT outperforms current state-of-the-art models in financial sentiment analysis. Extensive experimental results demonstrate the effectiveness and robustness of SkyBERT.
Keywords: BERT, financial markets, Twitter, sentiment analysis
Procedia PDF Downloads 152
1415 Smart Disassembly of Waste Printed Circuit Boards: The Role of IoT and Edge Computing
Authors: Muhammad Mohsin, Fawad Ahmad, Fatima Batool, Muhammad Kaab Zarrar
Abstract:
The integration of the Internet of Things (IoT) and edge computing devices offers a transformative approach to electronic waste management, particularly in the dismantling of printed circuit boards (PCBs). This paper explores how these technologies optimize operational efficiency and improve environmental sustainability by addressing challenges such as data security, interoperability, scalability, and real-time data processing. Proposed solutions include advanced machine learning algorithms for predictive maintenance, robust encryption protocols, and scalable architectures that incorporate edge computing. Case studies from leading e-waste management facilities illustrate benefits such as improved material recovery efficiency, reduced environmental impact, improved worker safety, and optimized resource utilization. The findings highlight the potential of IoT and edge computing to revolutionize e-waste dismantling and make the case for a collaborative approach between policymakers, waste management professionals, and technology developers. This research provides important insights into the use of IoT and edge computing to make significant progress in the sustainable management of electronic waste.
Keywords: Internet of Things, edge computing, waste PCB disassembly, electronic waste management, data security, interoperability, machine learning, predictive maintenance, sustainable development
Procedia PDF Downloads 30
1414 Deep Reinforcement Learning for Advanced Pressure Management in Water Distribution Networks
Authors: Ahmed Negm, George Aggidis, Xiandong Ma
Abstract:
With the diverse nature of urban cities, customer demand patterns, landscape topologies and even seasonal weather trends, managing our water distribution networks (WDNs) has proved a complex task. These unpredictable circumstances manifest as pipe failures, intermittent supply and burst events, adding to water loss, energy waste and increased carbon emissions. Whilst these events are unavoidable, advanced pressure management has proved an effective tool to control and mitigate them. Nevertheless, water utilities have struggled to develop a real-time control method that is resilient when confronting the challenges of water distribution. In this paper, we use deep reinforcement learning (DRL) algorithms as a novel pressure control strategy to minimise pressure violations and leakage under both burst and background leakage conditions. Agents based on advantage actor-critic (A2C) and recurrent proximal policy optimisation (Recurrent PPO) were trained and compared to benchmark optimisation algorithms (differential evolution and particle swarm optimisation). A2C managed to minimise leakage by 32.48% under burst conditions and 67.17% under background conditions, the highest performance among the DRL algorithms. A2C and Recurrent PPO performed well in comparison to the benchmarks, with higher processing speed and lower computational effort.
Keywords: deep reinforcement learning, pressure management, water distribution networks, leakage management
Procedia PDF Downloads 91
1413 The Need for Sustaining Hope during Communication of Unfavourable News in the Care of Children with Palliative Care Needs: The Experience of Mothers and Health Professionals in Jordan
Authors: Maha Atout, Pippa Hemingway, Jane Seymour
Abstract:
A preliminary systematic review shows that health professionals experience a tension when communicating with the parents and family members of children with life-threatening and life-limiting conditions. On the one hand, they want to promote open and honest communication, while on the other, they are apprehensive about fostering an unrealistic sense of hope. Defining the boundaries between information that might offer reasonable hope versus that which results in false reassurance is challenging. Some healthcare providers worry that instilling a false sense of hope could motivate parents to seek continued aggressive treatment for their child, which in turn might cause the patient further unnecessary suffering. To date, there has been a lack of research in the Middle East regarding how healthcare providers do or should communicate bad news; in particular, the issue of hope in the field of paediatric palliative care has not been researched thoroughly. This study aims to explore, from the perspective of patients’ mothers, physicians, and nurses, the experience of communicating and receiving bad news in the care of children with palliative care needs. Data were collected using a collective qualitative case study approach across three paediatric units in a Jordanian hospital. Two data collection methods were employed: participant observation and semi-structured interviews. The overall number of cases was 15, with a total of 56 interviews with mothers (n=24), physicians (n=12), and nurses (n=20) completed, as well as 197 observational hours logged. The findings demonstrate that mothers wanted their doctors to provide them with hopeful information about the future progression of their child’s illness. Although some mothers asked their doctors to provide them with honest information regarding the condition of their child, they still considered a sense of hope to be essential for coping with caring for their child. 
According to mothers, hope was critical to treatment as it helped them to stay committed to the treatment and protected them to some extent from the extreme emotional suffering that would occur if they lost hope. The health professionals agreed with the mothers on the importance of hope, so long as it was congruent with the stage and severity of each patient's disease. This study concludes that while parents typically insist on knowing all relevant information when their child is diagnosed with a severe illness, they consider hope to be an essential part of life, and they find it very difficult to handle suffering without any glimmer of it. The study also finds that using negative terms has extremely adverse effects on parents' emotions. Hence, although the mothers asked the doctors to be as honest as they could, they still wanted the physicians to deliver a positive message by communicating this information in a sensitive manner that preserves hope.
Keywords: health professionals, children, communication, hope, information, mothers, palliative care
Procedia PDF Downloads 217
1412 The Intersection/Union Region Computation for Drosophila Brain Images Using Encoding Schemes Based on Multi-Core CPUs
Authors: Ming-Yang Guo, Cheng-Xian Wu, Wei-Xiang Chen, Chun-Yuan Lin, Yen-Jen Lin, Ann-Shyn Chiang
Abstract:
With more and more Drosophila driver and neuron images available, finding the similarity relationships among them is important work for functional inference. A general problem is how to find a Drosophila driver image that can cover a set of Drosophila driver/neuron images. To solve this problem, the intersection/union region for a set of images must be computed first; a comparison step then calculates the similarities between that region and other images. In this paper, three encoding schemes, namely Integer, Boolean, and Decimal, are proposed to encode each image as a one-dimensional structure. The intersection/union region of these images can then be computed using compare operations, Boolean operators and a lookup table method. Finally, the comparison step is done as the union region computation, and the similarity score can be calculated by the definition of the Tanimoto coefficient. The above methods for the region computation are also implemented in a multi-core CPU environment with OpenMP. The experimental results show that, in the encoding phase, the Boolean scheme performs best; in the region computation phase, the Decimal scheme performs best when the number of images is large. The speedup ratio reaches 12 on 16 CPUs. This work was supported by the Ministry of Science and Technology under the grant MOST 106-2221-E-182-070.
Keywords: Drosophila driver image, Drosophila neuron images, intersection/union computation, parallel processing, OpenMP
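The Boolean scheme and the Tanimoto scoring described above can be sketched concisely. This is a serial toy version on tiny synthetic images, standing in for the paper's Integer/Decimal encodings, lookup tables and OpenMP parallelisation; the example images and query are illustrative assumptions.

```python
import numpy as np

def tanimoto(a, b):
    """Tanimoto coefficient of two boolean vectors: |A ∩ B| / |A ∪ B|."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 1.0

# Three tiny "images", each encoded as a one-dimensional boolean structure.
imgs = np.array([
    [1, 1, 0, 0, 1, 0],
    [1, 1, 1, 0, 1, 0],
    [1, 0, 0, 0, 1, 1],
], dtype=bool)

# Intersection/union region across the whole set, via boolean reductions.
inter_region = np.logical_and.reduce(imgs)
union_region = np.logical_or.reduce(imgs)

# Score a query image against the union region.
query = np.array([1, 1, 0, 0, 1, 0], dtype=bool)
print(round(tanimoto(query, union_region), 2))  # → 0.6
```

In the paper the per-element boolean work is what gets distributed across CPU cores with OpenMP; the reduction structure stays the same.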
Procedia PDF Downloads 239