Search results for: Data quality
27032 Corpus-Based Neural Machine Translation: Empirical Study Multilingual Corpus for Machine Translation of Opaque Idioms - Cloud AutoML Platform
Authors: Khadija Refouh
Abstract:
Culture-bound expressions have been a bottleneck for Natural Language Processing (NLP) and comprehension, especially in the case of machine translation (MT). In the last decade, the field of machine translation has greatly advanced. Neural machine translation (NMT) has recently achieved considerable improvement in translation quality and outperformed previous traditional translation systems in many language pairs. NMT applies artificial intelligence (AI) and deep neural networks to language processing. Despite this development, there remain some serious challenges facing NMT when translating culture-bound expressions, especially for low-resource language pairs such as Arabic-English and Arabic-French, which is not the case with well-established language pairs such as English-French. Machine translation of opaque idioms from English into French is likely to be more accurate than translating them from English into Arabic. For example, the Google Translate application translated the sentence “What a bad weather! It rains cats and dogs.” into the target language Arabic as “يا له من طقس سيء! تمطر القطط والكلاب”, which is an inaccurate literal translation. The translation of the same sentence into the target language French was “Quel mauvais temps! Il pleut des cordes.”, where the Google Translate application used the accurate corresponding French idiom. This paper aims to perform NMT experiments towards better translation of opaque idioms using a high-quality, clean multilingual corpus. This corpus will be collected analytically from human-generated idiom translations. AutoML Translation, a Google neural machine translation platform, is used as a custom translation model to improve the translation of opaque idioms. The automatic evaluation of the custom model will be compared to Google NMT using the Bilingual Evaluation Understudy (BLEU) score. BLEU is an algorithm for evaluating the quality of text which has been machine-translated from one natural language to another. Human evaluation is integrated to test the reliability of the BLEU score. The researcher will examine syntactical, lexical, and semantic features using Halliday's functional theory.
Keywords: multilingual corpora, natural language processing (NLP), neural machine translation (NMT), opaque idioms
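As a minimal illustration of the BLEU comparison this abstract proposes, the sketch below scores two hypothetical system outputs against human reference translations with the sacrebleu library; the example sentences and file-free setup are placeholders, not data from the study.

```python
# Minimal sketch: comparing two MT systems on the same idiom test set with BLEU.
# The sentence lists are hypothetical placeholders, not data from the study.
import sacrebleu

references = [[
    "Quel mauvais temps ! Il pleut des cordes.",
    "Il est au septieme ciel.",
]]  # one stream of human idiom translations

google_nmt_output = [
    "Quel mauvais temps ! Il pleut des chats et des chiens.",
    "Il est dans le septieme ciel.",
]
custom_automl_output = [
    "Quel mauvais temps ! Il pleut des cordes.",
    "Il est au septieme ciel.",
]

baseline = sacrebleu.corpus_bleu(google_nmt_output, references)
custom = sacrebleu.corpus_bleu(custom_automl_output, references)
print(f"Google NMT BLEU:   {baseline.score:.2f}")
print(f"Custom model BLEU: {custom.score:.2f}")
```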
Procedia PDF Downloads 149
27031 Molecular Approach for the Detection of Lactic Acid Bacteria in the Kenyan Spontaneously Fermented Milk, Mursik
Authors: John Masani Nduko, Joseph Wafula Matofari
Abstract:
Many spontaneously fermented milk products are produced in Kenya, where they are integral to the human diet and play a central role in enhancing food security and income generation via small-scale enterprises. Fermentation enhances product properties such as taste, aroma, shelf-life, safety, texture, and nutritional value. Some of these products have demonstrated therapeutic and probiotic effects, although recent reports have linked some to death, biotoxin infections, and esophageal cancer. These products are mostly processed from poor-quality raw materials under unhygienic conditions, resulting in inconsistent product quality and limited shelf-lives. Though very popular, research on their processing technologies is scarce, and none of the products has been produced under controlled conditions using starter cultures. To modernize the processing technologies for these products, our study aims at describing the microbiology and biochemistry of a representative Kenyan spontaneously fermented milk product, Mursik, using modern biotechnology (DNA sequencing) and determining its chemical composition. Moreover, co-creation processes reflecting stakeholders' experiences of traditional fermented milk production technologies and utilization, ideals, and senses of value, which will allow the generation of products based on common ground for rapid progress, will be discussed. Knowledge of the value of clean starting raw material will be emphasized, the need for the definition of fermentation parameters highlighted, and the employment of standard equipment to attain controlled fermentation discussed. This presentation will review the available information regarding traditional fermented milk (Mursik) and highlight our current research work on the application of molecular approaches (metagenomics) for the valorization of the Mursik production process through starter culture/probiotic strain isolation and identification, and on quality and safety aspects of the product. The importance of the research and future research areas on the same subject will also be highlighted.
Keywords: lactic acid bacteria, high throughput biotechnology, spontaneous fermentation, Mursik
Procedia PDF Downloads 293
27030 Deep Learning to Improve the 5G NR Uplink Control Channel
Authors: Ahmed Krobba, Meriem Touzene, Mohamed Debeyche
Abstract:
The fifth-generation wireless communication system (5G) will provide more diverse applications and higher-quality services for users compared to long-term evolution 4G (LTE). 5G uses a higher carrier frequency, which suffers from information loss within 5G coverage. Most 5G users often cannot obtain high-quality communications due to transmission channel noise and channel complexity. The Physical Uplink Control Channel (PUCCH-NR: Physical Uplink Control Channel New Radio) plays a crucial role in 5G NR telecommunication technology; it is mainly used to transmit uplink control information (UCI: Uplink Control Information). This study evaluates the performance of the physical uplink control channel PUCCH-NR under low signal-to-noise ratios with various numbers of receive antennas. We propose an artificial intelligence approach based on deep neural networks (deep learning) to estimate the PUCCH-NR channel and compare this approach with conventional methods such as least-squares (LS) and minimum-mean-square-error (MMSE) estimation. To evaluate channel performance, we use the block error rate (BLER) as the evaluation criterion of the communication system. The results show that the deep neural network method gives the best performance compared with MMSE and LS.
Keywords: 5G network, uplink (Uplink), PUCCH channel, NR-PUCCH channel, deep learning
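For orientation, the sketch below shows the two conventional baselines named in the abstract (LS and MMSE) on a deliberately simplified single-tap Rayleigh channel with unit-power pilots; the scalar channel model and the SNR value are illustrative assumptions, not the 5G PUCCH-NR model used in the paper.

```python
# Pilot-based channel estimation on a single-tap Rayleigh channel: LS vs. scalar MMSE.
import numpy as np

rng = np.random.default_rng(0)
n_pilots, snr_db = 10_000, 0
sigma_h2 = 1.0                                   # channel power
sigma_n2 = sigma_h2 / (10 ** (snr_db / 10))      # noise power at the chosen SNR

x = np.ones(n_pilots, dtype=complex)             # known unit-power pilot symbols
h = np.sqrt(sigma_h2 / 2) * (rng.standard_normal(n_pilots) + 1j * rng.standard_normal(n_pilots))
n = np.sqrt(sigma_n2 / 2) * (rng.standard_normal(n_pilots) + 1j * rng.standard_normal(n_pilots))
y = h * x + n

h_ls = y / x                                         # least-squares estimate
h_mmse = (sigma_h2 / (sigma_h2 + sigma_n2)) * h_ls   # scalar MMSE shrinks LS toward zero

print("LS   MSE:", np.mean(np.abs(h - h_ls) ** 2))
print("MMSE MSE:", np.mean(np.abs(h - h_mmse) ** 2))
```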
Procedia PDF Downloads 83
27029 Studies of Rule Induction by STRIM from the Decision Table with Contaminated Attribute Values from Missing Data and Noise — in the Case of Critical Dataset Size —
Authors: Tetsuro Saeki, Yuichi Kato, Shoutarou Mizuno
Abstract:
STRIM (Statistical Test Rule Induction Method) has been proposed as a method to effectively induce if-then rules from the decision table, which is considered a sample set obtained from the population of interest. Its usefulness has been confirmed by simulation experiments specifying rules in advance, and by comparison with conventional methods. However, scope for future development remains before STRIM can be applied to the analysis of real-world data sets. The first requirement is to determine the size of the dataset needed for inducing true rules, since finding statistically significant rules is the core of the method. The second is to examine the capacity of rule induction from datasets with contaminated attribute values created by missing data and noise, since real-world datasets usually contain such contaminated data. This paper examines the first problem theoretically, in connection with the rule length. The second problem is then examined in a simulation experiment, utilizing the critical dataset size derived from the first step. The experimental results show that STRIM is highly robust in the analysis of datasets with contaminated attribute values, and hence is applicable to real-world data.
Keywords: rule induction, decision table, missing data, noise
Procedia PDF Downloads 396
27028 Household Water Practices in a Rapidly Urbanizing City and Its Implications for the Future of Potable Water: A Case Study of Abuja Nigeria
Authors: Emmanuel Maiyanga
Abstract:
Access to sufficiently good-quality freshwater has been a global challenge, but more notably in low-income countries, particularly the Sub-Saharan countries, of which Nigeria is one. The urban population is soaring, especially in many low-income countries; the existing centralised water supply infrastructures are ageing and inadequate; moreover, in households, people's lifestyles have become more water-demanding. So, people mostly devise coping strategies where municipal supply is perceived to have failed. This development threatens the future of groundwater and calls for a review of management strategy and research approach. The various issues associated with water demand management in low-income countries, and Nigeria in particular, are well documented in the literature. However, the way people use water daily in households, the reasons they do so, and how the situation is constructing demand among the middle-class population in Abuja, Nigeria, are poorly understood. This is what this research aims to unpack. This is achieved by using the social practices research approach (which is based on the Theory of Practices) to understand how this situation impacts the shared groundwater resource. A qualitative method was used for data gathering. This involved audio-recorded interviews of householders and water professionals in the private and public sectors. It also involved observation, note-taking, and document study. The data were analysed thematically using NVIVO software. The research reveals the major household practices that draw on water at the domestic scale, and they include water sourcing, body hygiene and sanitation, laundry, kitchen, and outdoor practices (car washing, domestic livestock farming, and gardening). Among all the practices, water sourcing, body hygiene, kitchen, and laundry practices are identified to impact most on groundwater, with impact scale varying with household peculiarities. Water sourcing practices involve people sourcing mostly from personal boreholes because the municipal water supply is perceived as inadequate and unreliable in terms of service delivery and water quality, and people prefer easier and unlimited access and control using boreholes. Body hygiene practices reveal that every respondent prefers bucket bathing at least once daily, and the majority bathe twice or more every day. Frequency is determined by the feeling of hotness and dirt on the skin. Thus, people bathe to cool down, stay clean, and satisfy perceived social, religious, and hygiene demands. Kitchen practice consumes water significantly as people run the tap for vegetable washing in daily food preparation and for dishwashing after each meal. Laundry practice reveals that most people wash clothes most frequently (twice a week) during hot and dusty weather, and washing with hands in basins and buckets is the most prevalent and most water-wasting method due to soap overdose. The research also reveals poor water governance as a major cause of the current inadequate municipal water delivery. The implication of poor governance and the widespread use of boreholes is an uncontrolled abstraction of groundwater to satisfy desired household practices, thereby putting the future of the shared aquifer at great risk of total depletion, with attendant multiplying effects on the people and the environment as the population continues to soar.
Keywords: boreholes, groundwater, household water practices, self-supply
Procedia PDF Downloads 123
27027 A Strategy to Oil Production Placement Zones Based on Maximum Closeness
Authors: Waldir Roque, Gustavo Oliveira, Moises Santos, Tatiana Simoes
Abstract:
Increasing the oil recovery factor of an oil reservoir has been a concern of the oil industry. Usually, the production placement zones are defined after some analysis of geological and petrophysical parameters, with rock porosity, permeability, and oil saturation being of fundamental importance. In this context, the determination of hydraulic flow units (HFUs) is an important step in the process of reservoir characterization, since it may provide specific regions in the reservoir with similar petrophysical and fluid flow properties and, in particular, techniques supporting the placement of production zones that favour the tracing of directional wells. A HFU is defined as a representative volume of a total reservoir rock in which petrophysical and fluid flow properties are internally consistent and predictably distinct from other reservoir rocks. Technically, a HFU is characterized as a rock region that exhibits flow zone indicator (FZI) points lying on a straight line of unit slope. The goal of this paper is to provide a trustworthy indication of oil production placement zones for the best-fit HFUs. The FZI cloud of points can be obtained from the reservoir quality index (RQI), a function of effective porosity and permeability. Considering log and core data, the HFUs are identified and, using the discrete rock type (DRT) classification, a set of connected cell clusters can be found; by means of a graph centrality metric, the maximum closeness (MaxC) cell is obtained for each cluster. Considering the MaxC cells as production zones, an extensive analysis based on several oil recovery factor and cumulative oil production simulations was done for the SPE Model 2 and the UNISIM-I-D synthetic fields, where the latter was built from public data available from the actual Namorado Field, Campos Basin, in Brazil. The results have shown that the MaxC approach is technically feasible and very reliable for identifying high-performance production placement zones.
Keywords: hydraulic flow unit, maximum closeness centrality, oil production simulation, production placement zone
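The two building blocks of this strategy can be sketched compactly: the flow zone indicator computed from core porosity and permeability (RQI = 0.0314·sqrt(k/φ), FZI = RQI/φz, the standard HFU relations), and the maximum-closeness cell of a cluster of connected same-DRT cells. The sample values and the small cluster graph below are hypothetical.

```python
# (i) FZI from core data, (ii) MaxC cell of one DRT cluster via closeness centrality.
import numpy as np
import networkx as nx

phi = np.array([0.18, 0.22, 0.15])      # effective porosity (fraction)
k = np.array([120.0, 400.0, 60.0])      # permeability (mD)

rqi = 0.0314 * np.sqrt(k / phi)         # reservoir quality index (microns)
phi_z = phi / (1.0 - phi)               # normalized porosity index
fzi = rqi / phi_z                        # flow zone indicator; log(RQI) vs log(phi_z) has unit slope

# cells of one DRT cluster and their connectivity (edges = adjacent cells)
cluster = nx.Graph([(0, 1), (1, 2), (1, 3), (3, 4)])
closeness = nx.closeness_centrality(cluster)
max_c_cell = max(closeness, key=closeness.get)   # candidate production placement cell
print("FZI:", fzi.round(2), "| MaxC cell:", max_c_cell)
```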
Procedia PDF Downloads 329
27026 Regression Approach for Optimal Purchase of Hosts Cluster in Fixed Fund for Hadoop Big Data Platform
Authors: Haitao Yang, Jianming Lv, Fei Xu, Xintong Wang, Yilin Huang, Lanting Xia, Xuewu Zhu
Abstract:
Given a fixed fund, purchasing fewer hosts of higher capability or, inversely, more hosts of lower capability is a trade-off that must be made in practice when building a Hadoop big data platform. An exploratory study is presented for a Housing Big Data Platform project (HBDP), where typical big data computing consists of SQL queries with aggregate, join, and space-time condition selections executed upon massive data from more than 10 million housing units. In HBDP, an empirical formula was introduced to predict the performance of host clusters intended for the typical big data computing, and it was shaped via a regression approach. With this empirical formula, it is easy to suggest an optimal cluster configuration. The investigation was based on a typical Hadoop computing ecosystem, HDFS+Hive+Spark. A proper metric was proposed to measure the performance of Hadoop clusters in HBDP, which was tested and compared with its predicted counterpart on executing three kinds of typical SQL query tasks. Tests were conducted with respect to the factors of CPU benchmark, memory size, virtual host division, and the number of physical hosts in the cluster. The research has been applied to practical cluster procurement for housing big data computing.
Keywords: Hadoop platform planning, optimal cluster scheme at fixed-fund, performance predicting formula, typical SQL query tasks
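A minimal sketch of the regression idea follows: fit an empirical formula that predicts measured query performance from the cluster factors listed above, then pick the affordable configuration with the best prediction. The feature columns, sample measurements, and candidate configurations are hypothetical, not the HBDP data.

```python
# Fit a performance-prediction formula and rank candidate cluster configurations.
import numpy as np
from sklearn.linear_model import LinearRegression

# columns: CPU benchmark score, memory (GB), virtual hosts per physical host, physical hosts
X = np.array([
    [1200, 64, 2, 4],
    [1500, 128, 2, 4],
    [1200, 64, 4, 6],
    [1800, 128, 4, 8],
])
y = np.array([310.0, 240.0, 260.0, 150.0])   # measured completion time (s) of the benchmark SQL tasks

model = LinearRegression().fit(X, y)

candidates = np.array([[1500, 128, 4, 6], [1800, 64, 2, 8]])   # configurations affordable within the fund
predicted = model.predict(candidates)
best = candidates[np.argmin(predicted)]
print("Predicted times:", predicted.round(1), "-> buy configuration:", best)
```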
Procedia PDF Downloads 232
27025 Simulation Study of Multiple-Thick Gas Electron Multiplier-Based Microdosimeters for Fast Neutron Measurements
Authors: Amir Moslehi, Gholamreza Raisali
Abstract:
Microdosimetric detectors based on multiple-thick gas electron multiplier (multiple-THGEM) configurations are being used in various fields of radiation protection and dosimetry. In the present work, the microdosimetric response of these detectors to fast neutrons has been investigated by the Monte Carlo method. Three similar microdosimeters made of A-150 and rexolite as the wall materials are designed: the first based on a single THGEM, the second based on a double THGEM, and the third based on a triple THGEM. The sensitive volume of the three microdosimeters is a right cylinder of 5 mm height and diameter which is filled with propane-based tissue-equivalent (TE) gas. The TE gas at 0.11 atm pressure and room temperature simulates 1 µm of tissue. Lineal energy distributions for several neutron energies from 10 keV to 14 MeV, including 241Am-Be neutrons, are calculated with the Geant4 simulation toolkit. Also, the mean quality factor and dose-equivalent value for each neutron energy are determined from these distributions. The data obtained from the three microdosimeters are in agreement. Therefore, we conclude that the multiple-THGEM structures present similar microdosimetric responses to fast neutrons.
Keywords: fast neutrons, Geant4, multiple-thick gas electron multiplier, microdosimeter
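For reference, the standard microdosimetric relations (ICRU-style definitions) that connect the quantities reported above are summarised below; they are shown for orientation only and are not reproduced from the paper.

```latex
% Lineal energy for energy imparted \varepsilon in a site of mean chord length \bar{l};
% for a right cylinder with height equal to its diameter d, \bar{l} = 4V/S = 2d/3:
y = \frac{\varepsilon}{\bar{l}}, \qquad \bar{l} = \frac{4V}{S} = \frac{2d}{3}
% Dose-weighted mean lineal energy, mean quality factor and dose equivalent:
\bar{y}_D = \int y\, d(y)\, \mathrm{d}y, \qquad
\bar{Q} = \int Q(y)\, d(y)\, \mathrm{d}y, \qquad
H = \bar{Q}\, D
```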
Procedia PDF Downloads 350
27024 Point Estimation for the Type II Generalized Logistic Distribution Based on Progressively Censored Data
Authors: Rana Rimawi, Ayman Baklizi
Abstract:
Skewed distributions are important models that are frequently used in applications. Generalized distributions form a class of skewed distributions and gain widespread use in applications because of their flexibility in data analysis. More specifically, the generalized logistic distribution with its different types has received considerable attention recently. In this study, based on progressively type-II censored data, we consider point estimation for the type II generalized logistic distribution (Type II GLD). We develop several estimators for its unknown parameters, including maximum likelihood estimators (MLE), Bayes estimators, and best linear unbiased estimators (BLUE). The estimators are compared using simulation based on the criteria of bias and mean square error (MSE). An illustrative example with a real data set is given.
Keywords: point estimation, type II generalized logistic distribution, progressive censoring, maximum likelihood estimation
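A simulation framework of the kind described (repeated sampling, MLE, then bias and MSE) can be sketched as below. The sketch assumes a complete, uncensored sample and one common one-shape-parameter form of the Type II generalized logistic density, f(x; a) = a·e^{-ax}/(1+e^{-x})^{a+1}; the parameterization, sample size, and true value are illustrative assumptions, whereas the paper itself works with progressively type-II censored samples and additional estimators.

```python
# Monte Carlo bias/MSE evaluation of the MLE for a one-parameter Type II GLD (assumed form).
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)

def neg_loglik(a, x):
    if a <= 0:
        return np.inf
    return -np.sum(np.log(a) - a * x - (a + 1) * np.log1p(np.exp(-x)))

def sample_type2_gld(a, n):
    # Type II is the distribution of -X where X is Type I with CDF (1 + exp(-x))**(-a),
    # which allows simple inverse-CDF sampling.
    u = rng.uniform(size=n)
    x_type1 = -np.log(u ** (-1.0 / a) - 1.0)
    return -x_type1

true_a, n, reps = 2.0, 50, 500
estimates = []
for _ in range(reps):
    x = sample_type2_gld(true_a, n)
    res = minimize_scalar(neg_loglik, bounds=(1e-3, 50), args=(x,), method="bounded")
    estimates.append(res.x)

estimates = np.asarray(estimates)
print("bias:", estimates.mean() - true_a, " MSE:", np.mean((estimates - true_a) ** 2))
```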
Procedia PDF Downloads 198
27023 Multi Agent Based Pre-Hospital Emergency Management Architecture
Authors: Jaleh Shoshtarian Malak, Niloofar Mohamadzadeh
Abstract:
Managing pre-hospital emergency patients requires real-time practices and efficient resource utilization. Since we are facing a distributed network of healthcare providers, services, and applications, choosing the right resources and treatment protocol considering the patient's situation is a critical task. Delivering care to emergency patients at the right time and with the suitable treatment settings can save lives and prevent further complications. In recent years, multi-agent systems (MAS) have introduced great solutions to deal with real-time, distributed, and complicated problems. In this paper, we propose a multi-agent based pre-hospital emergency management architecture in order to manage coordination, collaboration, treatment protocol, and healthcare provider selection between different parties in pre-hospital emergency in a self-organizing manner. We used the AnyLogic agent-based modeling (ABM) tool in order to simulate our proposed architecture. We have analyzed and described the functionality of the EMS center, ambulance, consultation center, EHR repository, and quality of care monitoring as the main collaborating agents. Future work includes implementation of the proposed architecture and evaluation of its impact on patient quality of care improvement.
Keywords: multi agent systems, pre-hospital emergency, simulation, software architecture
Procedia PDF Downloads 426
27022 Spatial Disparity in Education and Medical Facilities: A Case Study of Barddhaman District, West Bengal, India
Authors: Amit Bhattacharyya
Abstract:
The economic scenario of any region does not show the real picture for the measurement of overall development. Therefore, economic development must be accompanied by social development to be able to make an assessment of the level of development. The spatial variation with respect to social development has been discussed taking into account the quality of functioning of a social system in a specific area. In this paper, an attempt has been made to study the spatial distribution of social infrastructural facilities and analyze the magnitude of regional disparities at the inter-block level in Barddhaman district. It starts with a detailed account of the selection process of social infrastructure indicators and describes the methodology employed in the empirical analysis. Analyzing the block-level data, this paper tries to identify the disparity among the blocks in the levels of social development. The results have been subsequently explained using both statistical analysis and geospatial techniques. The paper reveals that social development is not proceeding at the same rate in every part of the district. Health facilities and educational facilities are concentrated at some selected points, so overall development activities come to be concentrated in a few centres, and disparity is seen across the blocks.
Keywords: disparity, inter-block, social development, spatial variation
Procedia PDF Downloads 168
27021 Improvement of the Q-System Using the Rock Engineering System: A Case Study of Water Conveyor Tunnel of Azad Dam
Authors: Sahand Golmohammadi, Sana Hosseini Shirazi
Abstract:
Because the status and mechanical parameters of discontinuities in the rock mass are included in the calculations, various methods of rock engineering classification are often used as a starting point for the design of different types of structures. The Q-system is one of the most frequently used methods for stability analysis and determination of support systems of underground structures in rock, including tunnels. In this method, six main parameters of the rock mass are required, namely the rock quality designation (RQD), joint set number (Jn), joint roughness number (Jr), joint alteration number (Ja), joint water parameter (Jw), and stress reduction factor (SRF). In this regard, in order to achieve a reasonable and optimal design, identifying the parameters that are effective for the stability of the mentioned structures is one of the most important goals and most necessary actions in rock engineering. Therefore, it is necessary to study the relationships between the parameters of a system, how they interact with each other and, ultimately, the whole system. In this research, an attempt has been made to determine the most effective parameters (key parameters) among the six rock mass parameters of the Q-system using the rock engineering system (RES) method, in order to improve the relationships between the parameters in the calculation of the Q value. The RES is, in fact, a method by which one can determine the degree of cause and effect of a system's parameters by constructing an interaction matrix. In this research, the geomechanical data collected from the water conveyor tunnel of Azad Dam were used to build the interaction matrix of the Q-system. For this purpose, instead of using the conventional coding methods, which are always accompanied by defects such as uncertainty, the Q-system interaction matrix is coded using a technique that is, in effect, a statistical analysis of the data: determining the correlation coefficients between the parameters. In this way, the effect of each parameter on the system is evaluated with greater certainty. The results of this study show that the formed interaction matrix provides a reasonable estimate of the effective parameters in the Q-system. Among the six parameters of the Q-system, the SRF and Jr parameters have the maximum and minimum impact on the system, respectively, while the RQD and Jw parameters are the most and least affected by the system, respectively. Therefore, by developing this method, we can obtain a more accurate relation for rock mass classification by weighting the required parameters in the Q-system.
Keywords: Q-system, rock engineering system, statistical analysis, rock mass, tunnel
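The two computations described above can be sketched directly: Barton's Q value from the six parameters, and a correlation-coded RES interaction matrix whose row sums (cause) and column sums (effect) rank parameter influence. The small data array below is hypothetical, not the Azad Dam tunnel dataset.

```python
# Q value plus a correlation-coded RES interaction matrix with cause/effect sums.
import numpy as np

def q_value(rqd, jn, jr, ja, jw, srf):
    return (rqd / jn) * (jr / ja) * (jw / srf)

print("Q =", round(q_value(rqd=75, jn=9, jr=1.5, ja=2, jw=1, srf=2.5), 2))

# rows = stations along the tunnel, columns = [RQD, Jn, Jr, Ja, Jw, SRF]
params = ["RQD", "Jn", "Jr", "Ja", "Jw", "SRF"]
data = np.array([
    [75, 9, 1.5, 2.0, 1.0, 2.5],
    [60, 12, 1.0, 3.0, 0.66, 5.0],
    [85, 6, 3.0, 1.0, 1.0, 1.0],
    [50, 15, 1.0, 4.0, 0.5, 7.5],
])

# interaction matrix coded with absolute pairwise correlation coefficients
corr = np.abs(np.corrcoef(data, rowvar=False))
np.fill_diagonal(corr, 0.0)
cause = corr.sum(axis=1)    # how strongly a parameter acts on the rest of the system
effect = corr.sum(axis=0)   # how strongly it is acted upon
for p, c, e in zip(params, cause, effect):
    print(f"{p:>4}: cause={c:.2f}  effect={e:.2f}")
```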
Procedia PDF Downloads 73
27020 MhAGCN: Multi-Head Attention Graph Convolutional Network for Web Services Classification
Authors: Bing Li, Zhi Li, Yilong Yang
Abstract:
Web service classification can promote the quality of service discovery and management in the service repository. It is widely used to locate the services developers desire. Although traditional classification methods based on supervised learning models can achieve classification tasks, developers need to manually tag web services, and the quality of these tags may not be good enough to establish an accurate classifier for service classification. With the doubling of the number of web services, the manual tagging method has become unrealistic. In recent years, the attention mechanism has made remarkable progress in the field of deep learning, and its huge potential has been fully demonstrated in various fields. This paper designs a multi-head attention graph convolutional network (MHAGCN) service classification method, which can assign different weights to the neighborhood nodes without complicated matrix operations or relying on understanding the entire graph structure. The framework combines the advantages of the attention mechanism and the graph convolutional neural network. It can classify web services through automatic feature extraction. The comprehensive experimental results on a real dataset not only show the superior performance of the proposed model over the existing models but also demonstrate its potentially good interpretability for graph analysis.
Keywords: attention mechanism, graph convolutional network, interpretability, service classification, service discovery
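The core idea (neighbour weights learned by multi-head attention rather than fixed adjacency normalisation) can be illustrated with a single GAT-style layer in PyTorch. This is an illustrative re-implementation under assumed hyper-parameters, not the authors' exact MHAGCN.

```python
# One multi-head attention graph convolution layer over a dense adjacency mask.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiHeadAttentionGraphConv(nn.Module):
    def __init__(self, in_dim, out_dim, heads=4):
        super().__init__()
        self.heads, self.out_dim = heads, out_dim
        self.proj = nn.Linear(in_dim, heads * out_dim, bias=False)
        self.att_src = nn.Parameter(torch.randn(heads, out_dim))
        self.att_dst = nn.Parameter(torch.randn(heads, out_dim))

    def forward(self, x, adj):                      # x: [N, in_dim], adj: [N, N] 0/1 mask
        n = x.size(0)
        h = self.proj(x).view(n, self.heads, self.out_dim)          # [N, H, F]
        scores = (h * self.att_src).sum(-1).unsqueeze(1) \
               + (h * self.att_dst).sum(-1).unsqueeze(0)            # [N, N, H]
        scores = F.leaky_relu(scores, 0.2)
        scores = scores.masked_fill(adj.unsqueeze(-1) == 0, float("-inf"))
        alpha = torch.softmax(scores, dim=1)                        # attention over neighbours
        out = torch.einsum("ijh,jhf->ihf", alpha, h)                # weighted aggregation
        return out.reshape(n, self.heads * self.out_dim)

x = torch.randn(5, 16)                      # 5 web services, 16-dim features
adj = (torch.rand(5, 5) > 0.5).float()
adj.fill_diagonal_(1.0)                     # keep self-loops so every node has a neighbour
print(MultiHeadAttentionGraphConv(16, 8)(x, adj).shape)    # torch.Size([5, 32])
```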
Procedia PDF Downloads 136
27019 Omni: Data Science Platform for Evaluate Performance of a LoRaWAN Network
Authors: Emanuele A. Solagna, Ricardo S. Tozetto, Roberto dos S. Rabello
Abstract:
Nowadays, physical processes are becoming digitized through the evolution of communication, sensing, and storage technologies, which promotes the development of smart cities. The evolution of this technology has generated multiple challenges related to the generation of big data and the active participation of electronic devices in society. Thus, devices can send information that is captured and processed over large areas, but there is no guarantee that all the obtained data will be effectively stored and correctly persisted, because, depending on the technology that is used, there are parameters that have a huge influence on the full delivery of information. This article aims to characterize the project, currently under development, of a platform that, based on data science, will perform a performance and effectiveness evaluation of an industrial network that implements LoRaWAN technology, considering its main configuration parameters and relating these parameters to information loss.
Keywords: Internet of Things, LoRa, LoRaWAN, smart cities
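One effectiveness metric such a platform would typically report is the packet delivery ratio, obtained by joining the log of transmitted frames with the frames actually persisted by the network server. The column and file names below are hypothetical placeholders.

```python
# Packet delivery ratio per device and loss ratio per spreading factor from two logs.
import pandas as pd

sent = pd.read_csv("device_tx_log.csv")        # columns: device_id, frame_counter, spreading_factor
received = pd.read_csv("server_rx_log.csv")    # columns: device_id, frame_counter

merged = sent.merge(received, on=["device_id", "frame_counter"], how="left", indicator=True)
merged["delivered"] = merged["_merge"] == "both"

pdr = merged.groupby("device_id")["delivered"].mean().rename("packet_delivery_ratio")
loss_by_sf = 1.0 - merged.groupby("spreading_factor")["delivered"].mean()
print(pdr)
print(loss_by_sf.rename("loss_ratio"))
```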
Procedia PDF Downloads 148
27018 Feeding Effects of Increasing Levels of Yerba Mate on Lamb Meat Quality
Authors: Yuli Andrea P. Bermudez, Richard R. Lobo, Tamyres R. D. Amorim, Danny Alexander R. Moreno, Angelica Simone C. Pereira, Ives Claudio D. Bueno
Abstract:
The use of natural antioxidants in animal feed can positively modify the profile of fatty acids (FAs) in meat, due to the presence of secondary metabolites, mainly phenolic and flavonoid compounds, which promote an increase in the polyunsaturated fatty acids (PUFA) associated with beneficial factors for human health. The goal of this study was to evaluate the effect of the dietary inclusion percentage of yerba mate extract (Ilex paraguariensis St. Hilaire) as a natural antioxidant on lamb meat quality. The animals were confined for 53 days and fed corn silage and concentrate in the proportion of 60:40, respectively. They were divided into four homogeneous groups (n = 9 lambs/group), one for each treatment: a control group without yerba mate extract (YME, 0%) and three treatments with 1, 2, and 4% inclusion of YME on a DM basis. Samples of the Longissimus thoracis (LT) muscle were collected from the deboning of the 36 lambs, and pH values, color parameters (brightness: L*, red value: a*, and yellow: b*), fatty acid profile, total lipids, and sensory attributes were analyzed. The inclusion of YME modified the value of b* (P = 0.0041), indicating a higher yellow color value in the meat for the group supplemented with 4% YME. All data were statistically evaluated using the MIXED procedure of the statistical package SAS 9.4. However, no differences were found in the final live weight of the groups evaluated, nor in the pH values (P = 0.1923) or the total lipid concentration (P = 0.0752). The FAs (P ≥ 0.1360) and health indexes were not altered by the inclusion of YME; only branched-chain fatty acids (BCFA) exhibited a diet effect (P = 0.0092), in the group that received 4% of the extract. The sensory analysis test with a hedonic scale did not show differences between the treatments (P ≥ 0.1251). Nevertheless, in the just-about-right test, rated from (note 1) to 'very strong, soft, or moist' (note 5), softness differed between the evaluated treatments (P = 0.0088), where the group with 2% YME had better acceptance by the tasters (4.15 ± 0.08) compared to the control (3.89 ± 0.08). In conclusion, although the addition of YME showed positive results in sensory acceptance and in increasing the concentration of BCFA, fatty acids beneficial to human health, without changing the physical-chemical parameters of the lamb meat, the absolute changes are considered to have been quite small, which was probably related to the high efficiency of PUFA biohydrogenation in the rumen.
Keywords: composition, health, antioxidant, meat analysis
Procedia PDF Downloads 112
27017 Time and Wavelength Division Multiplexing Passive Optical Network Comparative Analysis: Modulation Formats and Channel Spacings
Authors: A. Fayad, Q. Alqhazaly, T. Cinkler
Abstract:
In light of the substantial increase in end-user requirements and the incessant need of network operators to upgrade the capabilities of access networks, in this paper, the performance of different modulation formats on an eight-channel Time and Wavelength Division Multiplexing Passive Optical Network (TWDM-PON) transmission system has been examined and compared. Limitations and features of the modulation formats have been determined to outline the most suitable design to enhance the data rate and transmission reach and obtain the best performance of the network. The considered modulation formats are On-Off Keying Non-Return-to-Zero (NRZ-OOK), Carrier Suppressed Return to Zero (CSRZ), Duobinary (DB), Modified Duobinary (MODB), Quadrature Phase Shift Keying (QPSK), and Differential Quadrature Phase Shift Keying (DQPSK). The performance has been analyzed by varying transmission distances and bit rates under different channel spacings. Furthermore, the system is evaluated in terms of minimum Bit Error Rate (BER) and Quality factor (Qf) without applying any dispersion compensation technique or any optical amplifier. OptiSystem software was used for simulation purposes.
Keywords: BER, DuoBinary, NRZ-OOK, TWDM-PON
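The two figures of merit used here are directly related for binary intensity detection with Gaussian noise: BER = 0.5·erfc(Q/√2). The small helper below simply converts between them; the sample Q values are illustrative.

```python
# Convert Q factor to BER for binary detection with Gaussian noise statistics.
import numpy as np
from scipy.special import erfc

def q_to_ber(q):
    return 0.5 * erfc(q / np.sqrt(2.0))

for q in (6.0, 7.0, 8.0):
    print(f"Q = {q:.1f}  ->  BER = {q_to_ber(q):.2e}")
# Q = 6 corresponds to a BER of about 1e-9, a common acceptance threshold for optical links.
```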
Procedia PDF Downloads 149
27016 Developing a SOA-Based E-Healthcare Systems
Authors: Hend Albassam, Nouf Alrumaih
Abstract:
Nowadays, we are in the age of technologies and communication, and there is no doubt that technologies such as the Internet can offer many advantages to many business fields, and the health field is no exception. In fact, using the Internet provides us with a new path to improve the quality of health care throughout the world. E-healthcare offers many advantages, such as: efficiency, by reducing costs and avoiding duplicate diagnostics; empowerment of patients, by enabling them to access their medical records; enhancing the quality of healthcare; and enabling information exchange and communication between healthcare organizations. There are many problems that result from using paper as a means of communication, for example, paper-based prescriptions. Usually, the doctor writes a prescription and gives it to the patient, who in turn carries it to the pharmacy. After that, the pharmacist takes the prescription, fills it, and gives it to the patient. Sometimes the pharmacist might find difficulty in reading the doctor's handwriting; the patient could alter and counterfeit the prescription. These existing problems and many others heighten the need to improve the quality of healthcare. This project sets out to develop a distributed e-healthcare system that offers some features of e-health and addresses some of the above-mentioned problems. The developed system provides an electronic health record (EHR) and enables communication between separate health care organizations such as the clinic, pharmacy, and laboratory. To develop this system, the service-oriented architecture (SOA) is adopted as a design approach, which helps to design several independent modules that communicate by using web services. The layering design pattern is used in designing each module, as it provides the reusability that allows the business logic layer to be reused by different higher layers such as the web service or the website in our system. The experimental analysis has shown that the project has successfully achieved its aims of solving the problems related to paper-based healthcare systems, and it enables different health organizations to communicate effectively. It implements four independent modules, including healthcare provider, pharmacy, laboratory, and medication information provider. Each module provides different functionalities and is used by a different type of user. These modules interoperate with each other using a set of web services.
Keywords: e-health, services oriented architecture (SOA), web services, interoperability
Procedia PDF Downloads 305
27015 Principles and Practice of Therapeutic Architecture
Authors: Umedov Mekhroz, Griaznova Svetlana
Abstract:
The quality of life and well-being of patients, staff, and visitors are central to the delivery of health care. Architecture and design are becoming an integral part of the healing and recovery approach. The most significant point that can be implemented in hospital buildings is the therapeutic value of the artificial environment, including the design and integration of plants to bring the natural world into the healthcare environment. The hospital environment should feel like the comfort of home. The techniques that therapeutic architecture uses are very cheap but provide real benefit to patients, staff, and visitors, demonstrating that the difference lies not in cost but in design quality. The best environment is not necessarily more expensive: it is about the special use of light and color, the rational use of materials, and the flexibility of premises. All this forms innovative concepts in modern hospital architecture, in new construction, renovation, or expansion projects. The aim of the study is to identify the methods and principles of therapeutic architecture. The research methodology consists in studying and summarizing international experience from scientific research, literature, standards, methodological manuals, and project materials on the research topic. The result of the research is the development of graphic-analytical tables based on a systematic analysis of the processed information, and 3D visualization of hospital interiors based on the processed information.
Keywords: therapeutic architecture, healthcare interiors, sustainable design, materials, color scheme, lighting, environment
Procedia PDF Downloads 124
27014 Good Governance Complementary to Corruption Abatement: A Cross-Country Analysis
Authors: Kamal Ray, Tapati Bhattacharya
Abstract:
Private use of public office for private gain could be a tentative definition of corruption, and the most distasteful aspect of corruption is not that it exists, nor that it is pervasive, but that it is socially acknowledged in the global economy, especially in developing nations. We attempted to assess the interrelationship between the Corruption Perception Index (CPI) and the principal components of the World Bank governance indicators, namely Control of Corruption (CC), Rule of Law (RL), Regulatory Quality (RQ), and Government Effectiveness (GE). Our empirical investigation concentrates upon the degree to which the governance indicators are reflected in the CPI, in order to single out the most powerful corruption-generating indicator in the selected countries. We collected time series data on the above governance indicators (CC, RL, RQ, and GE) for eleven selected countries from 1996 to 2012 from the World Bank data set. The countries are the USA, UK, France, Germany, Greece, China, India, Japan, Thailand, Brazil, and South Africa. The Corruption Perception Index (CPI) of the countries mentioned above for the period 1996 to 2012 was also collected. The graphical method of simple line diagrams against the time series data on CPI is applied for a quick view of the relative positions of the trend lines of the different nations. The correlation coefficient is sufficient for a preliminary assessment of the degree and direction of association between the variables, since we have numerical data on the governance indicators of the selected countries. The Granger causality test (1969) is employed to investigate causal relationships between the variables, cause and effect so to speak. We do not need a stationarity test, as the length of the time series is short. Linear regression is taken as a tool for quantifying the change in the explained variable due to a change in the explanatory variable with respect to governance vis-a-vis corruption. A bilateral positive causal link between CPI and CC is noticed in the UK: the index value of CC increases by 1.59 units as CPI increases by one unit, and CPI rises by 0.39 units as CC rises by one unit; hence it has a multiplier effect so far as the reduction of corruption in the UK is concerned. GE contributes strongly to the reduction of corruption in the UK. In France, RQ is observed to be the most powerful indicator in reducing corruption, whereas it is the second most powerful indicator, after GE, in reducing corruption in Japan. A governance indicator like GE plays an important role in pushing down corruption in Japan. In China and India, GE is a proactive as well as an influential indicator in curbing corruption. The inverse relationship between RL and CPI in Thailand indicates that the ongoing machinery related to RL is not complementary to the reduction of corruption. The state machinery of CC in South Africa is highly relevant for reducing the volume of corruption. In Greece, variations in CPI positively influence variations in CC, and an indicator like GE is effective in controlling corruption as reflected by the CPI. All the governance indicators selected so far have failed to arrest state-level corruption in the USA, Germany, and Brazil.
Keywords: corruption perception index, governance indicators, granger causality test, regression
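The two tools named above, a Granger causality test and a simple linear regression of CPI on a governance indicator, can be sketched with statsmodels as below. The short synthetic series are hypothetical placeholders, not the World Bank data for any country.

```python
# Granger causality (does CC help predict CPI?) and an OLS slope of CPI on CC.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.stattools import grangercausalitytests

years = np.arange(1996, 2013)
cc = pd.Series(np.linspace(1.2, 1.8, len(years)) + np.random.default_rng(0).normal(0, 0.05, len(years)))
cpi = 0.39 * cc.shift(1) + 7.0 + np.random.default_rng(1).normal(0, 0.05, len(years))
data = pd.DataFrame({"CPI": cpi, "CC": cc}).dropna()

# test whether the second column (CC) Granger-causes the first (CPI) at lag 1
results = grangercausalitytests(data[["CPI", "CC"]], maxlag=1)
f_stat, p_value = results[1][0]["ssr_ftest"][:2]
print(f"Granger F = {f_stat:.2f}, p = {p_value:.3f}")

# simple regression quantifying the change in CPI per unit change in CC
ols = sm.OLS(data["CPI"], sm.add_constant(data["CC"])).fit()
print(ols.params)
```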
Procedia PDF Downloads 304
27013 Microwave-Assisted Torrefaction of Teakwood Biomass Residues: The Effect of Power Level and Fluid Flows
Authors: Lukas Kano Mangalla, Raden Rinova Sisworo, Luther Pagiling
Abstract:
Torrefaction is an emerging thermo-chemical treatment process that aims to improve the quality of biomass fuels. This study focused on upgrading waste teakwood through microwave torrefaction processes and investigating the key operating parameters that improve the energy density and the quality of the biochar produced. The experiments were carried out in a 250 mL reactor placed in a microwave cavity under two different media, inert and non-inert. The microwave was operated at a frequency of 2.45 GHz with power level variations of 540 W, 720 W, and 900 W, respectively. During the torrefaction processes, nitrogen gas flowed into the reactor at a rate of 0.125 mL/min, while air flowed naturally. The temperature inside the reactor was recorded every 0.5 minutes for 20 minutes using a K-type thermocouple. Changes in the mass and the properties of the torrefied products were analyzed to establish the correlation between calorific value, mass yield, and microwave power level. The results showed that with increasing operating power of the microwave torrefaction, the calorific value and energy density of the product increased significantly, while the mass and energy yields tended to decrease. Air can be a potential medium for substituting expensive nitrogen in performing the microwave torrefaction of teakwood biomass.
Keywords: torrefaction, microwave heating, energy enhancement, mass and energy yield
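The quality indicators discussed above follow from three standard torrefaction relations: mass yield, energy-densification ratio, and energy yield (their product). The sample masses and calorific values below are illustrative, not measurements from this study.

```python
# Standard torrefaction indices from raw/torrefied mass and higher heating values (HHV).
def torrefaction_indices(m_raw_g, m_torr_g, hhv_raw, hhv_torr):
    mass_yield = m_torr_g / m_raw_g                    # fraction of solid retained
    densification = hhv_torr / hhv_raw                 # rise in calorific value (energy density)
    energy_yield = mass_yield * densification          # fraction of the feed energy retained
    return mass_yield, densification, energy_yield

my, dens, ey = torrefaction_indices(m_raw_g=50.0, m_torr_g=36.5,
                                    hhv_raw=18.2, hhv_torr=21.9)   # HHV in MJ/kg
print(f"mass yield = {my:.1%}, energy density ratio = {dens:.2f}, energy yield = {ey:.1%}")
```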
Procedia PDF Downloads 92
27012 Cybervetting and Online Privacy in Job Recruitment – Perspectives on the Current and Future Legislative Framework Within the EU
Authors: Nicole Christiansen, Hanne Marie Motzfeldt
Abstract:
In recent years, more and more HR professionals have been using cybervetting in job recruitment in an effort to find the perfect match for the company. These practices are growing rapidly, accessing a vast amount of data from social networks, some of which is privileged and protected information. Thus, there is a risk that the right to privacy is becoming a duty to manage one's private data. This paper investigates to what degree a job applicant's fundamental rights are protected adequately in current and future legislation in the EU. This paper argues that current data protection regulations and forthcoming regulations on the use of AI ensure sufficient protection. However, even though the regulation on paper protects employees within the EU, the recruitment sector may not pay sufficient attention to the regulation, as it is not specifically targeting this area. Therefore, the lack of specific labor and employment regulation is a concern that the social partners should attend to.
Keywords: AI, cyber vetting, data protection, job recruitment, online privacy
Procedia PDF Downloads 86
27011 Sequential Pattern Mining from Data of Medical Record with Sequential Pattern Discovery Using Equivalent Classes (SPADE) Algorithm (A Case Study : Bolo Primary Health Care, Bima)
Authors: Rezky Rifaini, Raden Bagus Fajriya Hakim
Abstract:
This research was conducted at the Bolo Primary Health Care in Bima Regency. The purpose of the research is to find out the association patterns formed in the medical record database of Bolo Primary Health Care's patients. The data used are secondary data from the PHC medical records database. Sequential pattern mining is the technique used for the analysis. Transaction data were generated from Patient_ID, Check_Date, and diagnosis. Sequential Pattern Discovery using Equivalent Classes (SPADE) is one of the algorithms in sequential pattern mining; this algorithm finds frequent sequences of transaction data using a vertical database and a sequence join process. The result of the SPADE algorithm is a set of frequent sequences that are then used to form rules. The technique is used to find the association patterns between item combinations. Based on the sequential association rule analysis with the SPADE algorithm, for a minimum support of 0.03 and a minimum confidence of 0.75, three sequential association patterns were obtained from the sequences of Patient_ID, Check_Date, and diagnosis data in the Bolo PHC.
Keywords: diagnosis, primary health care, medical record, data mining, sequential pattern mining, SPADE algorithm
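The core SPADE idea, a vertical database joined temporally, can be sketched in a few lines: map each diagnosis to its list of (patient, visit-order) occurrences, keep the frequent single items, then join id-lists to count 2-step sequences such as "diagnosis A followed by B". The toy transactions and thresholds below are hypothetical, not the Bolo PHC records.

```python
# Vertical id-lists and a temporal join for frequent 2-sequences (SPADE-style sketch).
from collections import defaultdict
from itertools import permutations

# (patient_id, visit_order, diagnosis)
transactions = [
    (1, 1, "ISPA"), (1, 2, "Gastritis"),
    (2, 1, "ISPA"), (2, 2, "Gastritis"), (2, 3, "Hypertension"),
    (3, 1, "Gastritis"), (3, 2, "Hypertension"),
    (4, 1, "ISPA"), (4, 2, "Hypertension"),
]
min_support = 0.5                    # fraction of patients
n_patients = len({sid for sid, _, _ in transactions})

idlist = defaultdict(list)           # vertical format: item -> [(sid, eid), ...]
for sid, eid, item in transactions:
    idlist[item].append((sid, eid))

support = lambda occ: len({sid for sid, _ in occ}) / n_patients
frequent_1 = {item for item, occ in idlist.items() if support(occ) >= min_support}

# temporal join of id-lists: keep patients where `a` occurs strictly before `b`
for a, b in permutations(sorted(frequent_1), 2):
    joined = [(sa, eb) for sa, ea in idlist[a] for sb, eb in idlist[b] if sa == sb and ea < eb]
    if support(joined) >= min_support:
        print(f"{a} -> {b}  (support = {support(joined):.2f})")
```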
Procedia PDF Downloads 401
27010 Estimation of Reservoirs Fracture Network Properties Using an Artificial Intelligence Technique
Authors: Reda Abdel Azim, Tariq Shehab
Abstract:
The main objective of this study is to develop a subsurface fracture map of naturally fractured reservoirs by overcoming the limitations associated with different data sources in characterising fracture properties. Some of these limitations are overcome by employing a nested neuro-stochastic technique to establish the inter-relationships between different data, such as conventional well logs, borehole images (FMI), core descriptions, seismic attributes, etc., and then characterising the fracture properties in terms of fracture density and fractal dimension for each data source. Fracture density is an important property of a fracture network system, as it is a measure of the cumulative area of all the fractures in a unit volume of the fracture network, and fractal dimension is used to characterize self-similar objects such as fractures. At the wellbore locations, fracture density and fractal dimension can only be estimated for the limited sections where FMI data are available. Therefore, an artificial intelligence technique is applied to approximate these quantities at locations along the wellbore where the hard data are not available. It should be noted that artificial intelligence techniques have proven their effectiveness in this domain of applications.
Keywords: naturally fractured reservoirs, artificial intelligence, fracture intensity, fractal dimension
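The interpolation step can be sketched as follows: train a small neural network on intervals where FMI-derived fracture density and fractal dimension are available, then predict both quantities for logged intervals without FMI coverage. The log curves used as inputs and all numbers below are hypothetical, and this is not the authors' nested neuro-stochastic workflow itself.

```python
# Multi-output neural-network regression of fracture properties from well-log attributes.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(7)
# inputs: [gamma ray, sonic, density, resistivity] at depths with FMI interpretation
X_train = rng.normal(size=(200, 4))
y_train = np.column_stack([
    2.0 + 0.8 * X_train[:, 0] - 0.3 * X_train[:, 2] + rng.normal(0, 0.1, 200),  # fracture density (1/m)
    1.3 + 0.1 * X_train[:, 1] + rng.normal(0, 0.02, 200),                       # fractal dimension
])

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0))
model.fit(X_train, y_train)

X_no_fmi = rng.normal(size=(3, 4))            # logged intervals without borehole images
print(model.predict(X_no_fmi))                # columns: fracture density, fractal dimension
```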
Procedia PDF Downloads 255
27009 Governance, Risk Management, and Compliance Factors Influencing the Adoption of Cloud Computing in Australia
Authors: Tim Nedyalkov
Abstract:
A business decision to move to the cloud brings fundamental changes in how an organization develops and delivers its Information Technology solutions. The accelerated pace of digital transformation across businesses and government agencies increases the reliance on cloud-based services. Collecting, managing, and retaining large amounts of data in cloud environments makes information security and data privacy protection essential. It becomes even more important to understand what key factors drive successful cloud adoption following the commencement of the Privacy Amendment (Notifiable Data Breaches) Act 2017 in Australia, as the regulatory changes impact many organizations and industries. This quantitative correlational research investigated the governance, risk management, and compliance factors contributing to cloud security success. These factors influence the adoption of cloud computing within an organizational context after the commencement of the NDB scheme. The results and findings demonstrated that corporate information security policies, data storage location, management understanding of data governance responsibilities, and regular compliance assessments are the factors influencing cloud computing adoption. The research has implications for organizations, future researchers, practitioners, policymakers, and cloud computing providers seeking to meet the rapidly changing regulatory and compliance requirements.
Keywords: cloud compliance, cloud security, data governance, privacy protection
Procedia PDF Downloads 116
27008 The Appearance of Identity in the Urban Landscape by Enjoying the Natural Factors
Authors: Mehrdad Karimi, Farshad Negintaji
Abstract:
This study examines the appearance of identity in the urban landscape and its effects on the natural factors. For this purpose, the components of place identity, emotional attachment, place dependence, and social bond, which together constitute place attachment, are measured in three domains: cognitive (place identity), affective (emotional attachment), and behavioral (place dependence and social bond). In order to measure the natural factors, three components have been measured: the absolute elements, living entities, and natural elements. The study is descriptive, and the statistical population is Yasouj, a city in Iran. SPSS software has been used to analyze the data. The results have been investigated at the two levels of descriptive and inferential statistics. In the inferential statistics, the Pearson correlation coefficient test has been used to evaluate the research hypotheses. In this study, the variable of identity is at a high level, and the natural factors are also at a high level. These results indicate a positive relationship between place identity and natural factors. Development of the environment and reaching the quality level of personality or identity will develop the individual and society.
Keywords: identity, place identity, landscape, urban landscape, landscaping
Procedia PDF Downloads 516
27007 Performance Management; Hotel Managers and Owners Dilemma
Authors: Olokode Enitan Aishat
Abstract:
People can perform to the best of their abilities and produce the highest-quality work most effectively and efficiently with the aid of performance management tools. The performance, goal-setting, activation, monitoring, measurement, and evaluation aspects of hospitality operations are key. The hospitality industry, the investors, and management would become irrelevant without performance, since the industry would no longer be viable. The goal of this study is to elucidate the quandary for both management and investors, which derives from an intrinsic perspective in which both parties seek to reach and exceed goals while maximizing returns on investment. The desire for achievement and a return on investment is a major conundrum for all parties concerned. It is envisaged that there would be returns on the investments and expenses made in maintaining hospitality facilities with human resources. Secondary research was used to develop the theoretical framework. A random sample of respondents drawn from hotel employees and investors within the city of Abuja was used to collect data, which were then analyzed using SPSS. This study confirms the prevalence of simple and straightforward common misunderstandings and provides tried and tested strategies for understanding and working together as a team among managers and owners in a business, as this would guarantee a return for business owners and management.
Keywords: performance management, hospitality industry, conflict, alignment of key performance indicator
Procedia PDF Downloads 56
27006 Magnetic Navigation in Underwater Networks
Authors: Kumar Divyendra
Abstract:
Underwater Sensor Networks (UWSNs) have wide applications in areas such as water quality monitoring, marine wildlife management, etc. A typical UWSN system consists of a set of sensors deployed randomly underwater which communicate with each other using acoustic links. RF communication doesn't work underwater, and GPS too isn't available underwater. Additionally, Automated Underwater Vehicles (AUVs) are deployed to collect data from some special nodes called Cluster Heads (CHs). These CHs aggregate data from their neighboring nodes and forward them to the AUVs using optical links when an AUV is in range. This helps reduce the number of hops covered by data packets and helps conserve energy. We consider the three-dimensional model of the UWSN. Nodes are initially deployed randomly underwater. They attach themselves to the surface using a rod and can only move upwards or downwards using a pump-and-bladder mechanism. We use graph theory concepts to maximize the coverage volume while every node maintains connectivity with at least one surface node. We treat the surface nodes as landmarks, and each node finds out its hop distance from every surface node. We treat these hop distances as coordinates and use them for AUV navigation. An AUV intending to move closer to a node with given coordinates moves hop by hop through nodes that are closest to it in terms of these coordinates. In the absence of GPS, multiple different approaches like Inertial Navigation Systems (INS), Doppler Velocity Logs (DVL), computer vision-based navigation, etc., have been proposed. These systems have their own drawbacks: INS accumulates error with time, and vision techniques require prior information about the environment. We propose a method that makes use of the earth's magnetic field values for navigation and combines it with other methods that simultaneously increase the coverage volume of the UWSN. The AUVs are fitted with magnetometers that measure the magnetic intensity (I), horizontal inclination (H), and declination (D). The International Geomagnetic Reference Field (IGRF) is a mathematical model of the earth's magnetic field which provides the field values for geographical coordinates on earth. Researchers have developed an inverse deep learning model that takes the magnetic field values and predicts the location coordinates. We make use of this model within our work. We combine this with the hop-by-hop movement described earlier so that the AUVs move in such a sequence that the deep learning predictor gets trained as quickly and precisely as possible. We run simulations in MATLAB to prove the effectiveness of our model with respect to other methods described in the literature.
Keywords: clustering, deep learning, network backbone, parallel computing
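The landmark-based addressing described above can be sketched compactly: every node's "coordinates" are its hop distances to the surface nodes, and the AUV repeatedly moves to whichever neighbour of its current node is closest, in hop-coordinate space, to the target. The small random graph below is a stand-in for a real UWSN topology, and the sketch (in Python rather than the paper's MATLAB) is illustrative only.

```python
# Hop-distance coordinates from surface landmarks plus greedy hop-by-hop movement.
import networkx as nx
import numpy as np

g = nx.connected_watts_strogatz_graph(30, 4, 0.3, seed=3)   # placeholder network topology
surface_nodes = [0, 1, 2]                                    # landmark nodes at the surface

hop = {s: nx.single_source_shortest_path_length(g, s) for s in surface_nodes}
coord = {v: np.array([hop[s][v] for s in surface_nodes]) for v in g.nodes}

def greedy_step(current, target):
    """Move to the neighbour whose hop-coordinates are closest to the target's."""
    return min(g.neighbors(current),
               key=lambda n: np.linalg.norm(coord[n] - coord[target]))

auv, target = 5, 27
path = [auv]
while auv != target and len(path) < 50:      # step cap guards against greedy loops
    auv = greedy_step(auv, target)
    path.append(auv)
print("hop-by-hop route:", path)
```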
Procedia PDF Downloads 98
27005 Efficient Pre-Processing of Single-Cell Assay for Transposase Accessible Chromatin with High-Throughput Sequencing Data
Authors: Fan Gao, Lior Pachter
Abstract:
The primary tool currently used to pre-process 10X Chromium single-cell ATAC-seq data is Cell Ranger, which can take a very long time to run on standard datasets. To facilitate rapid pre-processing that enables reproducible workflows, we present a suite of tools called scATAK for pre-processing single-cell ATAC-seq data that is 15 to 18 times faster than Cell Ranger on mouse and human samples. Our tool can also calculate chromatin interaction potential matrices and generate open chromatin signal and interaction traces for cell groups. We use the scATAK tool to explore the chromatin regulatory landscape of a healthy adult human brain and unveil cell-type-specific features, and show that it provides a convenient and computationally efficient approach for pre-processing single-cell ATAC-seq data.
Keywords: single-cell, ATAC-seq, bioinformatics, open chromatin landscape, chromatin interactome
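As an illustration of one of the outputs mentioned above (not the scATAK implementation or its interface), an open-chromatin signal trace for a cell group can be built by binning fragments from a standard 10x-style fragments file. The file names, bin size, and barcode-to-group mapping below are hypothetical.

```python
# Per-group open-chromatin signal trace: fragment counts in genomic bins on one chromosome.
import pandas as pd

frags = pd.read_csv("fragments.tsv.gz", sep="\t", comment="#",
                    names=["chrom", "start", "end", "barcode", "count"])
group_barcodes = set(pd.read_csv("excitatory_neurons_barcodes.txt", header=None)[0])

bin_size = 5_000
sub = frags[(frags["chrom"] == "chr1") & frags["barcode"].isin(group_barcodes)].copy()
sub["bin"] = sub["start"] // bin_size
signal = sub.groupby("bin")["count"].sum()          # fragments per 5 kb bin on chr1
signal.index = signal.index * bin_size              # left coordinate of each bin
print(signal.head())
```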
Procedia PDF Downloads 155
27004 Transitivity System in Research Journal Articles
Authors: Noni Agustina, Nuryansyah Adijaya
Abstract:
Writing a research report plays an important role in the process of conducting research, especially a research report written in English. A researcher should consider many language elements in a research report: grammar, word appropriateness, punctuation, etc. However, many researchers, especially non-native writers, face problems in research report writing. This study aims to find out the characteristics of internationally published research journal articles from a functional grammar viewpoint, especially the transitivity system. Six published research journal articles from the English Language Teaching, linguistics, and medical fields were taken as the data. Each field comprises research journal articles by native and non-native English-speaking authors. Qualitative content analysis was employed as the method of the study. The results show that all six published research journal articles, both native and non-native, use material and relational processes. The participants are dominated by goal, phenomenon, attribute, value, verbiage, and existent. They reflect the objectivity of the research journal articles. Moreover, circumstances of place and quality occur more frequently. The transitivity system, which consists of process types, participants, and circumstances, plays a role in describing the characteristics of research journal articles.
Keywords: transitivity system, SFL, ideational meaning, research journal article
Procedia PDF Downloads 282
27003 Effect of Rapeseed Press Cake on Extrusion System Parameters and Physical Pellet Quality of Fish Feed
Authors: Anna Martin, Raffael Osen
Abstract:
The demand for fish from aquaculture is constantly growing. Concurrently, due to a shortage of fishmeal caused by extensive overfishing, fishmeal substitution by plant proteins is becoming increasingly important for the production of sustainable aquafeed. Several research studies have evaluated the impact of plant protein meals, concentrates, or isolates on fish health and fish feed quality. However, these protein raw materials often require elaborate and expensive manufacturing, and their availability is limited. Rapeseed press cake (RPC), a side product of de-oiling processes, exhibits high potential as a plant-based fishmeal alternative in fish feed for carnivorous species due to its availability, low cost, and protein content. In order to produce aquafeed with RPC, it is important to systematically assess i) inclusion levels of RPC that give similar pellet qualities compared to fishmeal-containing formulations and ii) how extrusion parameters can be adjusted to achieve targeted pellet qualities. However, the effect of RPC on extrusion system parameters and pellet quality has only scarcely been investigated. Therefore, the aim of this study was to evaluate the impact of feed formulation, extruder barrel temperature (90, 100, 110 °C), and screw speed (200, 300, 400 rpm) on extrusion system parameters and the physical properties of fish feed pellets. A co-rotating pilot-scale twin-screw extruder was used to produce five iso-nitrogenous feed formulations: a fishmeal-based reference formulation including 16 g/100 g fishmeal and four formulations in which fishmeal was substituted by RPC at 25, 50, 75, or 100 %. Extrusion system parameters, namely product temperature, pressure at the die, specific mechanical energy (SME), and torque, were monitored while samples were taken. After drying, pellets were analyzed with regard to optical appearance, sectional and longitudinal expansion, sinking velocity, bulk density, water stability, durability, and specific hardness. In our study, the addition of minor amounts of RPC already had a high impact on pellet quality parameters, especially on expansion, but only marginally affected extrusion system parameters. Increasing amounts of RPC reduced sectional expansion, sinking velocity, bulk density, and specific hardness and increased longitudinal expansion compared to the reference formulation without RPC. Water stability and durability were almost unaffected by RPC addition. Moreover, pellets with rapeseed components showed a coarser structure than pellets containing only fishmeal. When the adjustment of barrel temperature and screw speed was investigated, it could be seen that an increase of extruder barrel temperature led to a slight decrease of SME and die pressure and an increased sectional expansion of the reference pellets, but had almost no effect on rapeseed-containing fish feed pellets. Changes in screw speed also had little effect on the physical properties of the pellets; however, with raised screw speed, the SME and the product temperature increased. In summary, a one-to-one substitution of fishmeal with RPC without the adjustment of extrusion process parameters does not result in fish feed of a designated quality. Therefore, a deeper knowledge of raw materials and their behavior under the thermal and mechanical stresses applied during extrusion is required.
Keywords: extrusion, fish feed, press cake, rapeseed
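Two of the quantities monitored above can be written down directly: specific mechanical energy (shaft power divided by throughput) and the sectional expansion index of the pellets. The numbers plugged in below are illustrative, not measurements from this study.

```python
# Specific mechanical energy (kJ/kg) and sectional expansion index from typical extrusion readings.
import math

def specific_mechanical_energy(screw_speed_rpm, net_torque_nm, throughput_kg_h):
    power_w = 2 * math.pi * (screw_speed_rpm / 60) * net_torque_nm   # shaft power, W
    return power_w / (throughput_kg_h / 3600) / 1000                 # kJ per kg of feed

def sectional_expansion_index(pellet_diameter_mm, die_diameter_mm):
    return (pellet_diameter_mm / die_diameter_mm) ** 2

print(f"SME = {specific_mechanical_energy(300, 40, 25):.0f} kJ/kg")
print(f"SEI = {sectional_expansion_index(3.4, 3.0):.2f}")
```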
Procedia PDF Downloads 148