Implications of Measuring the Progress towards Financial Risk Protection Using Varied Survey Instruments: A Case Study of Ghana
Authors: Jemima C. A. Sumboh
Abstract:
Given the urgency and consensus for countries to move towards Universal Health Coverage (UHC), health financing systems need to be accurately and consistently monitored to provide valuable data to inform policy and practice. Most of the indicators for monitoring UHC, particularly catastrophe and impoverishment, are established based on the impact of out-of-pocket health payments (OOPHP) on households’ living standards, collected through varied household surveys. These surveys, however, vary substantially in survey methods, such as the length of the recall period, the number of items included in the survey questionnaire, or the framing of questions, potentially influencing the reported level of OOPHP. Using different survey instruments can therefore produce inaccurate, inconsistent, and misleading estimates of UHC, subsequently leading to wrong policy decisions. Using data from a household budget survey conducted by the Navrongo Health Research Center in Ghana from May 2017 to December 2018, this study intends to explore the potential implications of using surveys with varied levels of disaggregation of OOPHP data on estimates of financial risk protection. The household budget survey, structured around food and non-food expenditure, compared three OOPHP measuring instruments: Version I (existing questions used to measure OOPHP in household budget surveys), Version II (new questions developed through benchmarking the existing Classification of Individual Consumption According to Purpose (COICOP) OOPHP questions in household surveys), and Version III (existing questions used to measure OOPHP in health surveys, integrated into household budget surveys; for this, the Demographic and Health Survey (DHS) was used). Versions I, II, and III contained 11, 44, and 56 health items, respectively. The choice of recall periods, however, was held constant across versions. The sample sizes for Versions I, II, and III were 930, 1032, and 1068 households, respectively.
Financial risk protection will be measured based on the catastrophic and impoverishment methodologies using Stata 15 and ADePT software for each version. It is expected that findings from this study will present valuable contributions to the repository of knowledge on standardizing survey instruments to obtain estimates of financial risk protection that are valid and consistent.

Keywords: Ghana, household budget surveys, measuring financial risk protection, out-of-pocket health payments, survey instruments, universal health coverage
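As a rough illustration of the catastrophic-payment indicator described above, the sketch below flags households whose OOPHP share of total expenditure exceeds a threshold. The 10% threshold and the toy household records are illustrative assumptions, not figures from the study.

```python
def catastrophic_incidence(households, threshold=0.10):
    """Share of households whose out-of-pocket health payments (OOPHP)
    exceed `threshold` of total household expenditure."""
    flagged = [h for h in households if h["oophp"] / h["total_exp"] > threshold]
    return len(flagged) / len(households)

# Toy household records -- illustrative values, not survey data.
households = [
    {"total_exp": 1200.0, "oophp": 40.0},   # 3.3% of spending on health
    {"total_exp": 800.0,  "oophp": 120.0},  # 15.0% -> catastrophic at 10%
    {"total_exp": 500.0,  "oophp": 60.0},   # 12.0% -> catastrophic at 10%
    {"total_exp": 2000.0, "oophp": 100.0},  # 5.0%
]
print(catastrophic_incidence(households))  # -> 0.5
```

Because the indicator depends on the OOPHP numerator, instruments that itemize health spending differently (11 vs. 44 vs. 56 items) can shift this share for the same household.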
Procedia PDF Downloads 140

Urban Heat Island Intensity Assessment through Comparative Study on Land Surface Temperature and Normalized Difference Vegetation Index: A Case Study of Chittagong, Bangladesh
Authors: Tausif A. Ishtiaque, Zarrin T. Tasin, Kazi S. Akter
Abstract:
The current trend of urban expansion, especially in developing countries, has caused significant changes in land cover, which is generating great concern due to widespread environmental degradation. Cities' energy consumption is also increasing as the heat island effect intensifies. The distribution of land surface temperature (LST) is one of the most significant climatic parameters affected by urban land cover change. The recent increasing trend of LST is elevating the temperature profile of built-up areas with less vegetative cover. Gradual change in land cover, especially the decrease in vegetative cover, is enhancing the Urban Heat Island (UHI) effect in developing cities around the world. Increasing the amount of urban vegetation cover can be a useful solution for reducing UHI intensity. LST and the Normalized Difference Vegetation Index (NDVI) have been widely accepted as reliable indicators of UHI and vegetation abundance, respectively. Chittagong, the second largest city of Bangladesh, has been a growth center due to rapid urbanization over the last several decades. This study assesses the intensity of UHI in Chittagong city by analyzing the relationship between LST and NDVI based on the type of land use/land cover (LULC) in the study area, applying an integrated approach of Geographic Information System (GIS), remote sensing (RS), and regression analysis. A land cover map is prepared through an interactive supervised classification using remotely sensed data from a Landsat ETM+ image, along with NDVI differencing, in ArcGIS. LST and NDVI values are extracted from the same image. The regression analysis between LST and NDVI indicates that, within the study area, UHI intensity is positively correlated with LST and negatively correlated with NDVI. That is, surface temperature falls as vegetation cover increases, and UHI intensity falls with it.
Moreover, there are noticeable differences in the relationship between LST and NDVI depending on the type of LULC. In other words, depending on the type of land use, an increase in vegetation cover has a varying impact on UHI intensity. This analysis will contribute to the formulation of sustainable urban land use planning decisions, as well as suggest suitable actions for mitigating UHI intensity within the study area.

Keywords: land cover change, land surface temperature, normalized difference vegetation index, urban heat island
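The LST–NDVI regression step can be sketched with an ordinary least-squares fit. The per-pixel values below are made up for illustration; they simply encode the negative relationship the abstract reports.

```python
def linear_fit(x, y):
    """Ordinary least-squares fit: returns (slope, intercept) of y = slope*x + intercept."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x

# Hypothetical per-pixel samples: higher NDVI (more vegetation) -> lower LST.
ndvi = [0.1, 0.2, 0.3, 0.4, 0.5]
lst_celsius = [40.0, 38.0, 36.0, 34.0, 32.0]
slope, intercept = linear_fit(ndvi, lst_celsius)
print(slope, intercept)  # negative slope: LST falls as NDVI rises
```

In the study itself, such a fit would be run separately per LULC class to expose the class-dependent slopes the abstract mentions.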
Procedia PDF Downloads 274

Augmenting Navigational Aids: The Development of an Assistive Maritime Navigation Application
Abstract:
On the bridge of a ship, officers look for visual aids to guide navigation in order to reconcile the outside world with the position communicated by the digital navigation system. Aids to navigation include lighthouses, lightships, sector lights, beacons, buoys, and others. They are designed to help navigators calculate their position, establish their course, or avoid dangers. In poor visibility and dense traffic areas, it can be very difficult to identify these critical aids to navigation. This paper presents the usage of Augmented Reality (AR) as a means to present digital information about these aids to support navigation. To date, nautical navigation-related mobile AR applications have been limited to the leisure industry. If proved viable, this prototype can facilitate the creation of other similar applications that could help commercial officers with navigation. Adopting a user-centered design approach, the team developed the prototype based on insights from initial research carried out on board several ships. The prototype, built on a Nexus 9 tablet using Wikitude, features a head-up display of the navigational aids (lights) in the area, presented in AR, and a bird's-eye view mode presented on a simplified map. The application employs the aids-to-navigation data managed by Hydrographic Offices and the tablet's sensors: GPS, gyroscope, accelerometer, compass, and camera. Sea trials on board a Navy ship and a commercial ship revealed the end users' interest in using the application and the possibility of presenting further data in AR. The application calculates the GPS position of the ship and the bearing and distance to the navigational aids, all with a high level of accuracy. However, testing highlighted several issues which need to be resolved as the prototype is developed further. The prototype stretched the capabilities of Wikitude, loading over 500 objects during tests in a major port.
This overloaded the display and required over 45 seconds to load the data. Therefore, extra filters for the navigational aids are being considered in order to declutter the screen. At night, the camera is not powerful enough to distinguish all the lights in the area. Also, magnetic interference from the bridge of the ship generated a continuous compass error in the AR display that varied between 5 and 12 degrees. The compass deviation was consistent throughout the testing sessions, so the team is now looking at the possibility of allowing users to manually calibrate the compass. For the use of AR in professional maritime contexts, further development of existing AR tools and hardware is needed. Designers will also need to apply a user-centered design approach in order to create better interfaces and display technologies for enhanced solutions to aid navigation.

Keywords: compass error, GPS, maritime navigation, mobile augmented reality
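The bearing-and-distance computation the prototype performs from GPS fixes can be sketched with standard great-circle formulas. The Earth radius and the coordinates below are illustrative; the app's actual implementation is not published in the abstract.

```python
import math

def bearing_and_distance(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing (degrees clockwise from true north) and
    haversine distance (metres) from point 1 to point 2."""
    R = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    bearing = (math.degrees(math.atan2(y, x)) + 360.0) % 360.0
    a = math.sin((p2 - p1) / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    distance = 2.0 * R * math.asin(math.sqrt(a))
    return bearing, distance

# Ship at the origin, buoy one degree due north: bearing 0, about 111 km away.
brg, dist = bearing_and_distance(0.0, 0.0, 1.0, 0.0)
print(round(brg, 1), round(dist))
```

The true bearing computed this way is what the magnetically disturbed compass reading must be reconciled against, which is why a manual calibration offset is a plausible fix for the 5-12 degree deviation.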
Procedia PDF Downloads 334

Human Identification Using Local Roughness Patterns in Heartbeat Signal
Authors: Md. Khayrul Bashar, Md. Saiful Islam, Kimiko Yamashita, Yano Midori
Abstract:
Despite some progress in human authentication, conventional biometrics (e.g., facial features, fingerprints, retinal scans, gait, voice patterns) are not robust against falsification because they are neither confidential nor secret to an individual. As a non-invasive tool, the electrocardiogram (ECG) has recently shown great potential in human recognition due to its unique rhythms, which characterize the variability of human heart structures (chest geometry, sizes, and positions). Moreover, ECG has a real-time vitality characteristic that signifies live signs, which ensures that a legitimate individual is being identified. However, the detection accuracy of current ECG-based methods is not sufficient due to the high variability of an individual's heartbeats at different instances of time. These variations may occur due to muscle flexure, changes in mental or emotional state, and changes in sensor position or long-term baseline shift during the recording of the ECG signal. In this study, a new method is proposed for human identification, based on extracting the local roughness of ECG heartbeat signals. First, the ECG signal is preprocessed using a second-order band-pass Butterworth filter with cut-off frequencies of 0.00025 and 0.04. A number of local binary patterns are then extracted by applying a moving neighborhood window along the ECG signal. At each instant of the ECG signal, the pattern is formed by comparing the ECG intensities at neighboring time points with the central intensity in the moving window. Then, binary weights are multiplied with the pattern to produce the local roughness description of the signal. Finally, histograms are constructed that describe the heartbeat signals of individual subjects in the database. One advantage of the proposed feature is that, unlike conventional methods, it does not depend on the accuracy of detecting the QRS complex.
Supervised recognition methods are then designed using minimum-distance-to-mean and Bayesian classifiers to identify authentic human subjects. An experiment with sixty (60) ECG signals from sixty adult subjects in the PTB database of the National Metrology Institute of Germany showed that the proposed method is promising compared to a conventional interval- and amplitude-feature-based method.

Keywords: human identification, ECG biometrics, local roughness patterns, supervised classification
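The pattern-extraction step described above (compare neighbours with the window centre, apply binary weights, histogram the codes) can be sketched on a 1-D signal as follows. The window radius and the flat test signal are illustrative choices, not parameters from the paper.

```python
def local_roughness_histogram(signal, radius=2):
    """Compare each sample's 2*radius neighbours with the centre value,
    weight the resulting bits to form a pattern code, and histogram the codes."""
    n_patterns = 2 ** (2 * radius)
    hist = [0] * n_patterns
    for i in range(radius, len(signal) - radius):
        centre = signal[i]
        neighbours = signal[i - radius:i] + signal[i + 1:i + radius + 1]
        code = 0
        for bit, value in enumerate(neighbours):
            if value >= centre:      # binary comparison against the centre
                code |= 1 << bit     # apply the binary weight for this position
        hist[code] += 1
    return hist

# A flat (maximally smooth) segment maps every window to the all-ones pattern.
flat = [1.0] * 10
hist = local_roughness_histogram(flat)
print(hist[15])  # -> 6 valid windows, all with code 0b1111
```

Because the histogram is built over the whole signal, no QRS detection is needed, which matches the advantage claimed in the abstract.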
Procedia PDF Downloads 405

Spatial Variability of Soil Metal Contamination to Detect Cancer Risk Zones in Coimbatore Region of India
Authors: Aarthi Mariappan, Janani Selvaraj, P. B. Harathi, M. Prashanthi Devi
Abstract:
Anthropogenic modification of the urban environment has increased greatly in recent years in order to sustain the growing human population. Intense industrial activity, permanent and heavy road traffic, a developed subterranean infrastructure network, and changing land use patterns are just some of its specific characteristics. Every day, the urban environment is polluted by more or less toxic emissions and by organic or metal wastes discharged from industrial, commercial, and municipal activities. When these eventually deposit into the soil, the physical and chemical properties of the surrounding soil are changed, transforming it into an indicator of human exposure. Metals are non-degradable and accumulate in soil through regular deposits resulting from continuous human activity. Because of this, metals are a soil contaminant when persistent over a long period of time and a possible danger to inhabitants' health on prolonged exposure. Metals accumulated in contaminated soil may be transferred to humans directly, by inhaling dust raised from topsoil, by ingestion, or by dermal contact, and indirectly, through plants and animals grown on contaminated soil and used for food. Some metals, like Cu, Mn, and Zn, are beneficial to human health and represent a danger only if their concentration rises above permissible levels, but other metals, like Pb, As, Cd, and Hg, are toxic even at trace levels, causing gastrointestinal and lung cancers. In urban areas, metals can be emitted from a wide variety of industrial, residential, and commercial sources. Our study examines the spatial distribution of heavy metals in soil in relation to their permissible levels and their association with health risk to the urban population of Coimbatore, India. The Coimbatore region is a high cancer risk zone, and case records of gastrointestinal and respiratory cancer patients were collected from hospitals and geocoded in ArcGIS 10.1.
The data of patients within the urban limits were retained and their disease history checked based on diagnosis and treatment. A disease map of cancer was prepared to show the disease distribution. It was observed that in our study area Cr, Pb, As, Fe, and Mg exceeded their permissible levels in the soil. Using spatial overlay analysis, a relationship between environmental exposure to these potentially toxic elements in soil and cancer distribution in Coimbatore district was established to show areas of cancer risk. Through this, our study sheds light on the impact of prolonged exposure to soil contamination in urban zones, thereby exploring the possibility of detecting cancer risk zones and creating awareness of cancer risk among the exposed groups.

Keywords: soil contamination, cancer risk, spatial analysis, India
Procedia PDF Downloads 403

Investigation of the EEG Signal Parameters during Epileptic Seizure Phases in Consequence to the Application of External Healing Therapy on Subjects
Authors: Karan Sharma, Ajay Kumar
Abstract:
An epileptic seizure is a condition in which electrical charge in the brain flows abruptly, resulting in abnormal activity by the subject. About one percent of the world population experiences epileptic seizures. Due to the abrupt flow of charge, EEG (electroencephalogram) waveforms change, and numerous spikes and sharp waves appear in the EEG signals. Detecting epileptic seizures using conventional methods is time-consuming, and many methods have evolved to detect them automatically. The initial part of this paper reviews the techniques used to detect epileptic seizures automatically. Automatic detection is based on feature extraction and classification patterns; for better accuracy, decomposition of the signal is required before feature extraction. A number of parameters are calculated by researchers using different techniques, e.g., approximate entropy, sample entropy, fuzzy approximate entropy, intrinsic mode functions, and cross-correlation, to discriminate between a normal signal and an epileptic seizure signal. The main objective of this review is to present the variations in the EEG signals at both stages, (i) interictal (recorded between epileptic seizure attacks) and (ii) ictal (recorded during an epileptic seizure), using the most appropriate methods of analysis to provide better healthcare diagnosis. The paper then investigates the effects of a noninvasive healing therapy on subjects by studying their EEG signals using the latest signal processing techniques. The study has been conducted with Reiki as the healing technique, beneficial for restoring balance in cases of body-mind alterations associated with an epileptic seizure. Reiki is practiced around the world and is recommended in different health services as a treatment approach. Reiki is an energy medicine, specifically a biofield therapy developed in Japan in the early 20th century.
It is a system involving the laying on of hands to stimulate the body's natural energetic system. Earlier studies have shown an apparent connection between Reiki and the autonomic nervous system. The Reiki sessions are applied by an experienced therapist. EEG signals are measured at baseline, during the session, and post-intervention to bring about effective epileptic seizure control or its elimination altogether.

Keywords: EEG signal, Reiki, time consuming, epileptic seizure
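Among the discriminative features listed in the review (approximate entropy, sample entropy, etc.), sample entropy is straightforward to sketch. This is a textbook-style implementation, not code from the paper; in practice the tolerance r is usually scaled by the signal's standard deviation, which is omitted here for brevity.

```python
import math

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r): -log of the conditional probability that two subsequences
    matching for m points (Chebyshev distance < r) still match at m + 1 points."""
    def match_count(length):
        templates = [x[i:i + length] for i in range(len(x) - length + 1)]
        count = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):  # exclude self-matches
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) < r:
                    count += 1
        return count

    b = match_count(m)       # matches of length m
    a = match_count(m + 1)   # matches of length m + 1
    return -math.log(a / b)

# A smooth periodic signal is highly predictable, so its SampEn stays low;
# seizure EEG with abrupt spikes would score higher.
smooth = [math.sin(0.5 * t) for t in range(80)]
print(sample_entropy(smooth))
```

Low values indicate regular, predictable dynamics, which is why entropy measures help separate interictal from ictal recordings.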
Procedia PDF Downloads 407

Nondestructive Prediction and Classification of Gel Strength in Ethanol-Treated Kudzu Starch Gels Using Near-Infrared Spectroscopy
Authors: John-Nelson Ekumah, Selorm Yao-Say Solomon Adade, Mingming Zhong, Yufan Sun, Qiufang Liang, Muhammad Safiullah Virk, Xorlali Nunekpeku, Nana Adwoa Nkuma Johnson, Bridget Ama Kwadzokpui, Xiaofeng Ren
Abstract:
Enhancing starch gel strength and stability is crucial for food and biomaterial applications. However, traditional gel property assessment methods are destructive, time-consuming, and resource-intensive. Thus, understanding the effects of ethanol treatment on kudzu starch gel strength and developing a rapid, nondestructive gel strength assessment method are essential for optimizing the treatment process and ensuring product quality consistency. This study investigated the effects of different ethanol concentrations on the microstructure of kudzu starch gels using a comprehensive microstructural analysis. We also developed a nondestructive method for predicting gel strength and classifying treatment levels using near-infrared (NIR) spectroscopy and advanced data analytics. Scanning electron microscopy revealed progressive network densification and pore collapse with increasing ethanol concentration, correlating with enhanced mechanical properties. NIR spectroscopy, combined with various variable selection methods (CARS, GA, and UVE) and modeling algorithms (PLS, SVM, and ELM), was employed to develop predictive models for gel strength. The UVE-SVM model demonstrated exceptional performance, with the highest R² values (Rc = 0.9786, Rp = 0.9688) and lowest error rates (RMSEC = 6.1340, RMSEP = 6.0283). Pattern recognition algorithms (PCA, LDA, and KNN) successfully classified gels based on ethanol treatment levels, achieving near-perfect accuracy. This integrated approach provided a multiscale perspective on ethanol-induced starch gel modification, from molecular interactions to macroscopic properties. Our findings demonstrate the potential of NIR spectroscopy, coupled with advanced data analysis, as a powerful tool for rapid, nondestructive quality assessment in starch gel production.
This study contributes significantly to the understanding of starch modification processes and opens new avenues for research and industrial applications in food science, pharmaceuticals, and biomaterials.

Keywords: kudzu starch gel, near-infrared spectroscopy, gel strength prediction, support vector machine, pattern recognition algorithms, ethanol treatment
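The reported model metrics (R² and RMSE for calibration and prediction sets, i.e. Rc/Rp and RMSEC/RMSEP) follow from the usual formulas, sketched below. The measured and predicted values are toy numbers, not the study's spectra or gel-strength data.

```python
import math

def rmse(y_true, y_pred):
    """Root-mean-square error between measured and predicted values."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_residual / SS_total."""
    mean_t = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# Toy gel-strength measurements and model predictions -- illustrative only.
measured = [2.0, 4.0, 6.0, 8.0]
predicted = [2.5, 3.5, 6.5, 7.5]
print(r_squared(measured, predicted), rmse(measured, predicted))
```

Computing these on the calibration set gives Rc/RMSEC and on a held-out prediction set gives Rp/RMSEP, which is how the UVE-SVM model's figures would be evaluated.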
Procedia PDF Downloads 41

Towards End-To-End Disease Prediction from Raw Metagenomic Data
Authors: Maxence Queyrel, Edi Prifti, Alexandre Templier, Jean-Daniel Zucker
Abstract:
Analysis of the human microbiome using metagenomic sequencing data has demonstrated a high ability to discriminate various human diseases. Raw metagenomic sequencing data require multiple complex and computationally heavy bioinformatics steps prior to data analysis. Such data contain millions of short sequence reads from fragmented DNA sequences, stored as fastq files. Conventional processing pipelines consist of multiple steps, including quality control, filtering, and alignment of sequences against genomic catalogs (genes, species, taxonomic levels, functional pathways, etc.). These pipelines are complex to use, time-consuming, and rely on a large number of parameters that introduce variability and affect the estimation of the microbiome elements. Training deep neural networks directly on raw sequencing data is a promising approach to bypass some of the challenges associated with mainstream bioinformatics pipelines. Most of these methods use the concept of word and sentence embeddings to create a meaningful numerical representation of DNA sequences while extracting features and reducing the dimensionality of the data. In this paper we present an end-to-end approach, metagenome2vec, that classifies patients into disease groups directly from raw metagenomic reads. This approach is composed of four steps: (i) generating a vocabulary of k-mers and learning their numerical embeddings; (ii) learning DNA sequence (read) embeddings; (iii) identifying the genome from which each sequence is most likely to come; and (iv) training a multiple instance learning classifier which predicts the phenotype based on the vector representation of the raw data. An attention mechanism is applied in the network so that the model can be interpreted, assigning a weight to the influence of each genome on the prediction.
Using two public real-life datasets as well as a simulated one, we demonstrated that this original approach reaches high performance, comparable with state-of-the-art methods applied to data processed through mainstream bioinformatics workflows. These results are encouraging for this proof-of-concept work. We believe that with further dedication, DNN models have the potential to surpass mainstream bioinformatics workflows in disease classification tasks.

Keywords: deep learning, disease prediction, end-to-end machine learning, metagenomics, multiple instance learning, precision medicine
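Step (i) of the pipeline, building a k-mer vocabulary from raw reads, can be sketched as follows. The choice of k=4 and the two short reads are arbitrary examples; real fastq reads are typically 100-300 bases long and the embeddings learned on top of this vocabulary are not shown.

```python
from collections import Counter

def kmer_tokens(read, k=4):
    """Slide a window of size k along the read to produce overlapping k-mers,
    the 'words' whose embeddings are later learned."""
    return [read[i:i + k] for i in range(len(read) - k + 1)]

# Two toy reads standing in for millions of fastq entries.
reads = ["ATCGATCG", "GGGATCCA"]
vocabulary = Counter(token for read in reads for token in kmer_tokens(read))
print(vocabulary.most_common(2))
```

Each read then becomes a sequence of k-mer tokens, which is what the read-embedding step (ii) consumes, exactly as words feed a sentence-embedding model.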
Procedia PDF Downloads 126

Comparison of Two Strategies in Thoracoscopic Ablation of Atrial Fibrillation
Authors: Alexander Zotov, Ilkin Osmanov, Emil Sakharov, Oleg Shelest, Aleksander Troitskiy, Robert Khabazov
Abstract:
Objective: Thoracoscopic surgical ablation of atrial fibrillation (AF) can be performed with two technologies. The first strategy uses the AtriCure device (bipolar, non-irrigated, non-clamping); the second uses the Medtronic device (bipolar, irrigated, clamping). The study presents a comparative analysis of the clinical outcomes of the two strategies in thoracoscopic ablation of AF using the AtriCure vs. Medtronic devices. Methods: In a two-center study, 123 patients underwent thoracoscopic ablation of AF in the period from 2016 to 2020. Patients were divided into two groups: the first group comprises patients treated with the AtriCure device (N=63), the second those treated with the Medtronic device (N=60). Patients were comparable in age, gender, and initial severity of their condition. Group 1 was 65% male with a median age of 57 years, while group 2 was 75% male with a median age of 60 years. Group 1 included patients with the paroxysmal form (14.3%), persistent form (68.3%), and long-standing persistent form (17.5%); in group 2 the proportions were 13.3%, 13.3%, and 73.3%, respectively. Median ejection fraction and indexed left atrial volume were 63% and 40.6 ml/m2 in group 1, and 56% and 40.5 ml/m2 in group 2. In addition, group 1 consisted of 39.7% of patients with chronic heart failure (NYHA Class II) and 4.8% with chronic heart failure (NYHA Class III), versus 45% and 6.7% in group 2. Follow-up consisted of laboratory tests, chest X-ray, ECG, 24-hour Holter monitoring, and a cardiopulmonary exercise test. Duration of freedom from AF, the distant mortality rate, and the prevalence of cerebrovascular events were compared between the two groups. Results: Exit block was achieved in all patients. According to the Clavien-Dindo classification of surgical complications, the fraction of adverse events was 14.3% and 16.7% in the 1st and 2nd groups, respectively.
The mean follow-up period was 50.4 (31.8; 64.8) months in the 1st group and 30.5 (14.1; 37.5) months in the 2nd group (P=0.0001). In group 1, total freedom from AF was achieved in 73.3% of patients, among whom 25% had additional antiarrhythmic drug (AAD) therapy or catheter ablation (CA); in group 2 the figures were 90% and 18.3%, respectively (for total freedom from AF, P<0.02). At follow-up, the distant mortality rate in the 1st group was 4.8%, while in the 2nd there were no fatal events. The prevalence of cerebrovascular events was higher in the 1st group than in the 2nd (6.7% vs. 1.7%, respectively). Conclusions: Despite the relatively shorter follow-up of the 2nd group, the strategy using the Medtronic device showed quite encouraging results. Further research is needed to evaluate the effectiveness of this strategy in the long-term period.

Keywords: atrial fibrillation, clamping, ablation, thoracoscopic surgery
Procedia PDF Downloads 110

Correlation Between the Toxicity Grade of the Adverse Effects in the Course of the Immunotherapy of Lung Cancer and Efficiency of the Treatment in Anti-PD-L1 and Anti-PD-1 Drugs - Own Clinical Experience
Authors: Anna Rudzińska, Katarzyna Szklener, Pola Juchaniuk, Anna Rodzajweska, Katarzyna Machulska-Ciuraj, Monika Rychlik-Grabowska, Michał Łoziński, Agnieszka Kolak-Bruks, Sławomir Mańdziuk
Abstract:
Introduction: Immune checkpoint inhibition (ICI) belongs to the modern forms of anti-cancer treatment. Due to the constant development and continuous research in the field of ICI, many aspects of the treatment are yet to be discovered. One of the less researched aspects of ICI treatment is the influence of adverse effects on the treatment success rate. It is suspected that adverse events in the course of ICI treatment indicate a better response rate and correlate with longer progression-free survival. Methodology: The research was conducted using the records of the Department of Clinical Oncology and Chemotherapy. Data of patients with a lung cancer diagnosis who were treated between 2019 and 2022 and received ICI treatment were analyzed. Results: Of the 133 patients whose data were analyzed, the vast majority were diagnosed with non-small cell lung cancer. The majority of the patients did not experience adverse effects. Most adverse effects reported were classified as grade 1 or grade 2 according to the CTCAE classification, and most involved skin, thyroid, and liver toxicity. Statistical significance was found for adverse effect incidence versus overall survival (OS) and progression-free survival (PFS) (p=0.0263) and for the time of toxicity onset versus OS and PFS (p<0.001). The number of toxicity sites was statistically significant for prolonged PFS (p=0.0315). The highest OS was noted in the group presenting grade 1 and grade 2 adverse effects. Conclusions: The obtained results confirm prolonged OS and PFS in patients with adverse effects, mostly in the group presenting mild to intermediate (grade 1 and grade 2) adverse effects and late toxicity onset. At the same time, our results suggest a correlation between the treatment response rate and both the toxicity grade of the adverse effects and the time of toxicity onset.
Similar results were obtained in several comparable studies, which showed a tendency toward better survival with mild and moderate toxicity; meanwhile, other studies in the area suggested an advantage in patients with any toxicity, regardless of grade. The contradictory results strongly suggest the need for further research on this topic, with a focus on additional factors influencing the course of the treatment.

Keywords: adverse effects, immunotherapy, lung cancer, PD-1/PD-L1 inhibitors
Procedia PDF Downloads 92

Segmented Pupil Phasing with Deep Learning
Authors: Dumont Maxime, Correia Carlos, Sauvage Jean-François, Schwartz Noah, Gray Morgan
Abstract:
Context: The concept of the segmented telescope is unavoidable for building extremely large telescopes (ELTs) in the quest for spatial resolution, but it also allows one to fit a large telescope within a reduced volume (JWST) or an even smaller one (a standard CubeSat). CubeSats have tight constraints on the available computational budget and on the payload volume. At the same time, they undergo thermal gradients leading to large and evolving optical aberrations. Pupil segmentation nevertheless comes with an obvious difficulty: co-phasing the different segments. The CubeSat constraints prevent the use of a dedicated wavefront sensor (WFS), making the focal-plane images acquired by the science detector the most practical alternative. Yet, one of the challenges of wavefront sensing is the non-linearity between the image intensity and the phase aberrations. Moreover, for Earth observation, the object is unknown and unrepeatable. Recently, several studies have suggested neural networks (NNs) for wavefront sensing, especially convolutional NNs, which are well known for being non-linear and image-friendly problem solvers. Aims: We study in this paper the prospect of using NNs to measure the phasing aberrations of a segmented pupil directly from the focal-plane image, without dedicated wavefront sensing. Methods: In our application, we take the case of a deployable telescope fitting in a CubeSat for Earth observation, which triples the aperture size (compared to the 10 cm CubeSat standard) and therefore triples the angular resolution capacity. In order to reach the diffraction-limited regime at visible wavelengths, a wavefront error below lambda/50 is typically required. The telescope focal-plane detector, used for imaging, will be used as a wavefront sensor. In this work, we study a point source, i.e.
the Point Spread Function (PSF) of the optical system, as an input to a VGG-net neural network, an architecture designed for image regression/classification. Results: This approach shows promising results (about 2 nm RMS of residual WFE, below lambda/50, for 40-100 nm RMS of input WFE) with a relatively fast computation time of less than 30 ms, which translates to a small computational burden. These results motivate further study with larger aberrations and noise.

Keywords: wavefront sensing, deep learning, deployable telescope, space telescope
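The sensitivity of the focal-plane image to segment piston errors, which is what the network learns to invert, can be illustrated with the on-axis PSF intensity (a Strehl-like ratio) of a segmented pupil. This toy model, not taken from the paper, treats each segment as a single phasor of the given piston wavefront error in nm of OPD, and only shows why co-phasing matters: a half-wave piston between two segments nulls the on-axis intensity.

```python
import cmath
import math

def onaxis_strehl(piston_errors_nm, wavelength_nm=550.0):
    """On-axis focal-plane intensity of a segmented pupil, relative to a
    perfectly phased pupil: |mean segment phasor|^2, one phasor per segment."""
    phasors = [cmath.exp(2j * math.pi * p / wavelength_nm) for p in piston_errors_nm]
    return abs(sum(phasors) / len(phasors)) ** 2

print(onaxis_strehl([0.0, 0.0, 0.0]))   # perfectly phased segments -> 1.0
print(onaxis_strehl([0.0, 275.0]))      # half-wave piston at 550 nm -> ~0.0
```

The full forward model would form the 2-D PSF from the segmented aperture and feed it to the VGG-style regressor; this scalar version just captures the non-linear intensity-vs-phase dependence the abstract highlights.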
Procedia PDF Downloads 106

The Extension of the Kano Model by the Concept of Over-Service
Authors: Lou-Hon Sun, Yu-Ming Chiu, Chen-Wei Tao, Chia-Yun Tsai
Abstract:
It is common practice for many companies to ask employees to provide heart-touching service for customers and to emphasize the attitude of 'customer first'. However, services may not necessarily gain praise, and may actually be considered excessive, if customers do not appreciate such behaviors. In reality, many restaurant businesses try to provide as much service as possible without considering whether over-provision may lead to negative customer reception. A survey of 894 people in Britain revealed that 49 percent of respondents consider over-attentive waiters the most annoying aspect of dining out. It can be seen that merely aiming to exceed customers' expectations without actually addressing their needs only further distances the standard of service from the goal of customer satisfaction itself. Over-service is defined as 'service provided that exceeds customer expectations, or simply that customers deem redundant, resulting in negative perception'. It was found that customers' reactions and complaints concerning over-service are not as intense as those against service failures caused by the inability to meet expectations; consequently, it is more difficult for managers to become aware of the existence of over-service. Thus the ability to manage over-service behaviors is a significant topic for consideration. The Kano model classifies customer preferences into five categories: attractive, one-dimensional, must-be, indifferent, and reverse quality attributes. The model is still very popular among researchers exploring quality aspects and customer satisfaction. Nevertheless, several studies have indicated that Kano's model does not fully capture the nature of service quality. The concept of over-service can be used to restructure the model and provide a better understanding of the service quality construct.
In this research, the structure of Kano's two-dimensional questionnaire will be used to classify the factors into different dimensions. The same questions will be used in a second questionnaire to identify the over-service experiences of the respondents. The findings of the two questionnaires will be used to analyze the relationship between service quality classification and over-service behaviors. The subjects of this research are customers of fine dining chain restaurants. Three hundred questionnaires will be issued using stratified random sampling. Items for measurement will be derived from the DINESERV scale; the tangible dimension of the questionnaire will be eliminated because this research focuses on employee behaviors. Quality attributes of the Kano model are often regarded as an instrument for improving customer satisfaction, and the concept of over-service can be used to restructure the model. The extension of the Kano model will not only develop a better understanding of customer needs and expectations but also enhance the management of service quality.
Keywords: consumer satisfaction, DINESERV, Kano model, over-service
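The two-dimensional questionnaire pairs a functional form of each question ('How would you feel if this service were provided?') with a dysfunctional form ('...if it were not provided?') and classifies each answer pair through the standard Kano evaluation table. A minimal sketch of that scoring step, using the conventional table and answer wording rather than this study's instrument, could look like:

```python
# Standard Kano evaluation-table lookup. Categories: A = attractive,
# O = one-dimensional, M = must-be, I = indifferent, R = reverse,
# Q = questionable (contradictory answers).

# Rows: functional answer; columns: dysfunctional answer.
KANO_TABLE = {
    "like":      {"like": "Q", "must-be": "A", "neutral": "A", "live-with": "A", "dislike": "O"},
    "must-be":   {"like": "R", "must-be": "I", "neutral": "I", "live-with": "I", "dislike": "M"},
    "neutral":   {"like": "R", "must-be": "I", "neutral": "I", "live-with": "I", "dislike": "M"},
    "live-with": {"like": "R", "must-be": "I", "neutral": "I", "live-with": "I", "dislike": "M"},
    "dislike":   {"like": "R", "must-be": "R", "neutral": "R", "live-with": "R", "dislike": "Q"},
}

def classify(functional: str, dysfunctional: str) -> str:
    """Return the Kano category for one respondent's answer pair."""
    return KANO_TABLE[functional][dysfunctional]

def dominant_category(answer_pairs):
    """Aggregate answers for one service attribute by majority vote."""
    counts = {}
    for f, d in answer_pairs:
        cat = classify(f, d)
        counts[cat] = counts.get(cat, 0) + 1
    return max(counts, key=counts.get)

# Hypothetical attribute: most respondents like it and dislike its absence.
example = [("like", "dislike"), ("like", "dislike"), ("like", "neutral")]
result = dominant_category(example)  # "O" outvotes "A" two to one
```

In the extended model proposed here, the over-service questionnaire would then be cross-tabulated against these categories rather than replacing them.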
Procedia PDF Downloads 164
272 Landing Performance Improvement Using Genetic Algorithm for Electric Vertical Take Off and Landing Aircrafts
Authors: Willian C. De Brito, Hernan D. C. Munoz, Erlan V. C. Carvalho, Helder L. C. De Oliveira
Abstract:
In order to improve commute times for short trips and relieve traffic in large cities, a new transport category has become the subject of research and new designs worldwide. The air taxi market promises to change the way people live and commute by using vehicles that take off and land vertically, providing passenger transport equivalent to a car, with mobility within and between large cities. Today's civil air transport remains costly and accounts for 2% of man-made CO₂ emissions. Taking advantage of this scenario, many companies have developed their own Vertical Take Off and Landing (VTOL) designs, seeking to meet comfort, safety, low-cost and flight-time requirements in a sustainable way. Thus, green power supplies, especially batteries, and fully electric power plants are the most common choice for these emerging aircraft. However, it remains a challenge to rely on batteries rather than conventional petroleum-based fuels: batteries are heavy, and their energy density is still below that of gasoline, diesel or kerosene. Therefore, despite all the clear advantages, all-electric aircraft (AEA) still have low flight autonomy and high operational cost, since the batteries must be recharged or replaced. In this sense, this paper addresses a way to optimize the energy consumption in a typical mission of an air taxi. The approach and landing procedure was chosen as the subject of a genetic-algorithm optimization, and the final program can be adapted for take-off and flight-level changes as well. Data from a real tilt-rotor aircraft with a fully electric power plant were used to fit the derived dynamic equations of motion. Although a tilt-rotor design is used as a proof of concept, the optimization can be adapted to other design concepts, even those with independent motors for the hover and cruise flight phases.
For a given trajectory, the best set of control variables is calculated to provide the time-history response of the aircraft's attitude, rotor RPM and thrust direction (or vertical and horizontal thrust, for independent-motor designs) that, if followed, results in the minimum electric power consumption along that landing path. Safety, comfort and design constraints are imposed to give representativeness to the solution, and results are highly dependent on these constraints. For the tested cases, performance improvement ranged from 5 to 10% when varying initial airspeed, altitude, flight path angle, and attitude.
Keywords: air taxi travel, all electric aircraft, batteries, energy consumption, genetic algorithm, landing performance, optimization, performance improvement, tilt rotor, VTOL design
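As an illustration of the optimization step, the sketch below runs a small elitist genetic algorithm over a piecewise descent-rate profile against a deliberately toy energy model; the quadratic power penalty, segment structure and GA settings are assumptions for demonstration, not the authors' tilt-rotor dynamics or constraints.

```python
import random

random.seed(42)

# Toy energy model (assumption, not the paper's fitted equations of motion):
# power grows quadratically with deviation from an "efficient" descent
# rate V_OPT (m/s), integrated over fixed-duration segments.
V_OPT, SEG_TIME, N_SEG = 3.0, 10.0, 5  # m/s, s per segment, segments

def energy(profile):
    """Electric energy (arbitrary units) consumed by a descent-rate profile."""
    return sum(SEG_TIME * (1.0 + 0.2 * (v - V_OPT) ** 2) for v in profile)

def ga_minimize(pop_size=30, generations=60, v_min=0.5, v_max=8.0):
    pop = [[random.uniform(v_min, v_max) for _ in range(N_SEG)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=energy)                  # elitism: keep the best half
        parents = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_SEG)  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.3:         # gaussian mutation of one gene
                i = random.randrange(N_SEG)
                child[i] = min(v_max, max(v_min, child[i] + random.gauss(0, 0.5)))
            children.append(child)
        pop = parents + children
    return min(pop, key=energy)

best = ga_minimize()
best_energy = energy(best)  # approaches the optimum of 50.0 (all v = V_OPT)
```

In the paper's setting, `profile` would be replaced by the full set of control variables (attitude, rotor RPM, thrust direction) and `energy` by the power integral subject to safety and comfort constraints.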
Procedia PDF Downloads 116
271 Design and Evaluation of a Prototype for Non-Invasive Screening of Diabetes – Skin Impedance Technique
Authors: Pavana Basavakumar, Devadas Bhat
Abstract:
Diabetes is a disease that often goes undiagnosed until its secondary effects are noticed. Early detection is necessary to avoid serious consequences that could lead to the death of the patient. Conventional invasive tests for screening of diabetes are mostly painful, time-consuming and expensive, and there is also a risk of infection involved; it is therefore essential to develop non-invasive methods to screen for and estimate the level of blood glucose. Extensive research is ongoing from this perspective, involving various techniques that explore optical, electrical, chemical and thermal properties of the human body that directly or indirectly depend on the blood glucose concentration. Thus, non-invasive blood glucose monitoring has grown into a vast field of research. In this project, an attempt was made to devise a prototype for screening of diabetes by measuring the electrical impedance of the skin and building a model to predict a patient's condition from the measured impedance. The prototype passes a negligible constant current (0.5 mA) across a subject's index finger through tetrapolar silver electrodes and measures the output voltage across a wide range of frequencies (10 kHz to 4 MHz). The measured voltage is proportional to the impedance of the skin, which was acquired in real time for further analysis. The study was conducted on over 75 subjects with permission from the institutional ethics committee; along with impedance, each subject's blood glucose value was also recorded using the conventional method. Nonlinear regression analysis was performed on features extracted from the impedance data to obtain a model that predicts blood glucose values for a given set of features. When the predicted data were plotted on Clarke's error grid, only 58% of the predicted values were clinically acceptable.
Since the objective of the project was to screen for diabetes rather than to estimate blood glucose itself, the data were classified into three classes, 'NORMAL FASTING', 'NORMAL POSTPRANDIAL' and 'HIGH', using a linear Support Vector Machine (SVM). The classification accuracy obtained was 91.4%. The developed prototype was economical, fast and pain-free and can thus be used for mass screening of diabetes.
Keywords: Clarke's error grid, electrical impedance of skin, linear SVM, nonlinear regression, non-invasive blood glucose monitoring, screening device for diabetes
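To illustrate the classification step, the sketch below trains a linear SVM by Pegasos-style sub-gradient descent on the hinge loss, using synthetic two-feature "impedance" data and two classes; the study itself used features from 10 kHz to 4 MHz sweeps and three classes, which this binary learner would handle wrapped in a one-vs-rest scheme.

```python
import random

random.seed(0)

def train_svm(X, y, lam=0.01, epochs=200):
    """Pegasos-style sub-gradient descent on the regularized hinge loss."""
    w = [0.0] * len(X[0])
    t = 0
    for _ in range(epochs):
        for xi, yi in zip(X, y):              # labels yi in {-1, +1}
            t += 1
            eta = 1.0 / (lam * t)             # decreasing step size
            margin = yi * sum(wj * xj for wj, xj in zip(w, xi))
            w = [(1 - eta * lam) * wj for wj in w]
            if margin < 1:                    # inside margin: hinge sub-gradient
                w = [wj + eta * yi * xj for wj, xj in zip(w, xi)]
    return w

def predict(w, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) >= 0 else -1

# Synthetic, separable "impedance feature" data; the trailing constant 1.0
# in each sample acts as a bias term.
X = [[random.gauss(2.0, 0.3), random.gauss(2.0, 0.3), 1.0] for _ in range(40)] + \
    [[random.gauss(4.0, 0.3), random.gauss(4.0, 0.3), 1.0] for _ in range(40)]
y = [-1] * 40 + [1] * 40                      # -1 = "NORMAL", +1 = "HIGH"
w = train_svm(X, y)
accuracy = sum(predict(w, x) == t for x, t in zip(X, y)) / len(X)
```

The 91.4% figure reported above would correspond to `accuracy` evaluated on the study's real feature set, under a proper train/test split.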
Procedia PDF Downloads 327
270 A Comparative Analysis on Survival in Patients with Node Positive Cutaneous Head and Neck Squamous Cell Carcinoma as per TNM 7th and TNM 8th Editions
Authors: Petr Daniel Edward Kovarik, Malcolm Jackson, Charles Kelly, Rahul Patil, Shahid Iqbal
Abstract:
Introduction: Recognition of the presence of extracapsular spread (ECS) has been a major change in the TNM 8th edition, published by the American Joint Committee on Cancer in 2018. Irrespective of the size or number of lymph nodes, the presence of ECS makes the disease N3b and, hence, stage IV. The objective of this retrospective observational study was to compare survival outcomes in patients with lymph-node-positive cutaneous head and neck squamous cell carcinoma (CHNSCC) staged under the TNM 7th and TNM 8th editions. Materials and Methods: From January 2010 to December 2020, 71 patients with CHNSCC treated with radical surgery and adjuvant radiotherapy were identified from our centre's database. All histopathological reports were reviewed, and comprehensive nodal mapping was performed. The data were collected retrospectively, and survival outcomes were compared using the TNM 7th and 8th editions. Results: The median age of the whole group of 71 patients was 78 years (range 54–94 years); 63 were male and 8 were female. In total, 2246 lymph nodes were analysed, of which 195 were positive for cancer. ECS was present in 130 lymph nodes, which led to a change in TNM staging. The N-stage details as per the TNM 7th edition were as follows: pN1 = 23, pN2a = 14, pN2b = 32, pN2c = 0, pN3 = 2. After incorporating the TNM 8th edition criterion (presence of ECS), the N-stage details were as follows: pN1 = 6, pN2a = 5, pN2b = 3, pN2c = 0, pN3a = 0, pN3b = 57. This showed an increase in overall stage: according to the TNM 7th edition, 23 patients had stage III disease and the remaining 48 had stage IV, whereas as per the TNM 8th edition, only 6 patients had stage III disease compared with 65 with stage IV. For all patients, the 2-year disease-specific survival (DSS) and overall survival (OS) rates were 70% and 46%, and the 5-year DSS and OS rates were 66% and 20%, respectively.
Comparing survival between stage III and stage IV in the two cohorts using the TNM 7th and 8th editions, there is a clearly greater survival difference between the stages when TNM 8th staging is used. However, meaningful statistics were not possible, as the majority of patients (n = 65) had stage IV disease and only 6 patients had stage III in the TNM 8th cohort. Conclusion: Our study provides a comprehensive analysis of lymph node data mapping in this specific patient population. It shows better differentiation between stage III and stage IV in the TNM 8th edition compared with the TNM 7th; however, meaningful statistics were not possible due to the imbalance of patients in the sub-cohorts of the groups.
Keywords: cutaneous head and neck squamous cell carcinoma, extracapsular spread, neck lymphadenopathy, TNM 7th and 8th editions
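The restaging rule driving the stage migration can be sketched as a pair of lookup functions. The ECS rule and the pN1 = stage III / pN2–pN3 = stage IV mapping below are consistent with the counts reported above, but this is a simplified illustration, not a full TNM staging implementation.

```python
def restage_n(tnm7_n: str, ecs_present: bool) -> str:
    """TNM 8th edition rule stated above: ECS in any positive node makes
    the disease pN3b, irrespective of node size or number."""
    return "pN3b" if ecs_present else tnm7_n

def overall_stage(n_stage: str) -> str:
    """Simplified nodal-driven overall stage, consistent with the reported
    counts: pN1 -> stage III; pN2a-c and pN3a/b -> stage IV."""
    return "III" if n_stage == "pN1" else "IV"

# A pN1 patient without ECS stays stage III; with ECS, migrates to stage IV.
stays = overall_stage(restage_n("pN1", ecs_present=False))    # "III"
migrates = overall_stage(restage_n("pN1", ecs_present=True))  # "IV"
```

Applied to the cohort, this rule moves 17 of the 23 TNM 7th pN1 patients into pN3b, reproducing the 23-to-6 drop in stage III reported above.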
Procedia PDF Downloads 107
269 The Effectiveness of an Occupational Therapy Metacognitive-Functional Intervention for the Improvement of Human Risk Factors of Bus Drivers
Authors: Navah Z. Ratzon, Rachel Shichrur
Abstract:
Background: Many studies have assessed and identified the risk factors of safe driving, but there is relatively little research-based evidence concerning the ability to improve the driving skills of drivers in general, and of bus drivers in particular, who are defined as a population at risk. Accidents involving bus drivers can endanger dozens of passengers and cause high direct and indirect damages. Objective: To examine the effectiveness of a metacognitive-functional intervention program for the reduction of risk factors among professional drivers relative to a control group. Methods: The study examined 77 bus drivers, aged 27–69, working for a large public company in the center of the country. Twenty-one drivers continued to the intervention stage; four of them dropped out before the end of the intervention. The intervention program we developed was based on previous driving models and the guiding occupational therapy practice framework model in Israel, adjusted to professional driving in public transportation and its particular risk factors. Treatment focused on raising awareness of the safe-driving risk factors identified at prescreening (ergonomic, perceptual-cognitive and on-road driving data), with reference to the difficulties each driver raised, and on providing coping strategies. The intervention was customized for each driver and included three sessions of two hours each. The effectiveness of the intervention was tested using objective measures, namely In-Vehicle Data Recorders (IVDR) for monitoring natural driving data and traffic accident data before and after the intervention, and a subjective measure (an occupational performance questionnaire for bus drivers). Results: Statistical analysis found a significant difference in the rate of IVDR perilous events before and after the intervention (t(17)=2.14, p=0.046).
There was a significant difference in the number of accidents per year before and after the intervention in the intervention group (t(17)=2.11, p=0.05), but no significant change in the control group. Subjective ratings of the level of performance and of satisfaction with performance improved in all areas tested following the intervention. The change in the 'human factors/person' field was significant (performance: t=-2.30, p=0.04; satisfaction with performance: t=-3.18, p=0.009). The change in the 'driving occupation/tasks' field was not significant but showed a tendency toward significance (t=-1.94, p=0.07). No significant differences were found in variables related to the driving environment. Conclusions: The metacognitive-functional intervention significantly improved the objective and subjective measures of the safety of bus drivers' driving. These novel results highlight the potential contribution of occupational therapists, using metacognitive-functional treatment, to preventing car accidents among healthy drivers and improving their well-being. This study also demonstrates the use of advanced IVDR technology and enriches the knowledge of occupational therapists regarding a wide variety of driving assessment tools and best-practice decision making.
Keywords: bus drivers, IVDR, human risk factors, metacognitive-functional intervention
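The before-and-after comparisons above are paired t-tests; a minimal sketch of the statistic, computed on made-up per-driver event rates rather than the study's data, could look like:

```python
import math
from statistics import mean, stdev

def paired_t(before, after):
    """Paired-samples t statistic (before vs. after); df = n - 1."""
    diffs = [b - a for b, a in zip(before, after)]
    n = len(diffs)
    # t = mean difference / standard error of the mean difference
    return mean(diffs) / (stdev(diffs) / math.sqrt(n)), n - 1

# Hypothetical IVDR perilous-event rates per driver (not the study's data).
before = [5.1, 4.8, 6.0, 5.5, 4.9, 6.2, 5.0, 5.7]
after  = [4.2, 4.5, 5.1, 4.9, 4.1, 5.6, 4.6, 5.0]
t_stat, df = paired_t(before, after)  # large positive t: rates dropped
```

The reported values such as t(17)=2.14 would then be compared against the t distribution with the stated degrees of freedom to obtain the p-value.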
Procedia PDF Downloads 348
268 The Study of Intangible Assets at Various Firm States
Authors: Gulnara Galeeva, Yulia Kasperskaya
Abstract:
The study deals with a relevant problem: the formation of an efficient investment portfolio for an enterprise. The structure of the investment portfolio is connected to the degree of influence of intangible assets on the enterprise's income, which makes research on the content of intangible assets important. However, studies of intangible assets do not take into consideration how the state of the enterprise can affect the content and importance of intangible assets for the enterprise's income, and this affects the accuracy of the calculations. To study this problem, the research was divided into several stages. In the first stage, intangible assets were classified, based on their synergies, as underlying intangibles and additional intangibles. In the second stage, this classification was applied. It showed that the lifecycle model and the theory of abrupt development of the enterprise, which are taken into account when designing investment projects, constitute limiting cases of a more general theory of bifurcations. The research identified that the qualitative content of intangible assets significantly depends on how close the enterprise is to crisis. In the third stage, the authors developed and applied the Wide Pairwise Comparison Matrix method. This made it possible to establish that the ratio of the standard deviation to the mean value of the elements of the priority vector of intangible assets can be used to estimate the probability of a full-blown crisis of the enterprise. The authors identified a criterion that allows fundamental decisions to be made on investment feasibility. The study also developed a rapid supplementary method of assessing the enterprise's overall status, based on a questionnaire survey of its director consisting of only two questions.
The research focused specifically on the fundamental role of stochastic resonance in the emergence of bifurcation (crisis) in the economic development of the enterprise. The synergetic approach made it possible to describe the mechanism of crisis onset in detail and to identify a range of universal ways of overcoming the crisis. It was found that the structure of intangible assets transforms into a more organized state, with strengthened synchronization of all processes, as a result of the impact of sporadic (white) noise. The results offer managers and business owners a simple and affordable method of investment portfolio optimization that takes into account how close the enterprise is to a full-blown crisis.
Keywords: analytic hierarchy process, bifurcation, investment portfolio, intangible assets, wide matrix
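A sketch of the pairwise-comparison step: derive a priority vector as the principal eigenvector of a reciprocal comparison matrix (analytic hierarchy process style) and compute the ratio of the standard deviation to the mean of its elements, the dispersion measure the study uses as a crisis indicator. The matrix values are illustrative, and this uses a plain eigenvector rather than the authors' Wide Pairwise Comparison Matrix method.

```python
def priority_vector(M, iters=100):
    """Principal eigenvector of a positive matrix via power iteration,
    normalized so the priorities sum to 1."""
    n = len(M)
    v = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        v = [x / s for x in w]
    return v

def coefficient_of_variation(v):
    """Std/mean of the priority vector: the study's crisis indicator."""
    m = sum(v) / len(v)
    var = sum((x - m) ** 2 for x in v) / len(v)
    return var ** 0.5 / m

# Illustrative pairwise comparisons among three intangibles (reciprocal
# matrix: M[i][j] = how much asset i dominates asset j).
M = [[1.0, 3.0, 5.0],
     [1 / 3, 1.0, 2.0],
     [1 / 5, 1 / 2, 1.0]]
v = priority_vector(M)
cv = coefficient_of_variation(v)  # larger dispersion = more lopsided portfolio
```

Under the study's criterion, `cv` would be tracked over time and compared against a threshold to estimate how close the enterprise is to a full-blown crisis.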
Procedia PDF Downloads 209
267 Petrology and Petrochemistry of Basement Rocks in Ila Orangun Area, Southwestern Nigeria
Authors: Jayeola A. O., Ayodele O. S., Olususi J. I.
Abstract:
From field studies, six (6) lithological units were identified in the study area: quartzites, granites, granite gneiss, porphyritic granites, amphibolites and pegmatites. Petrographic analysis was carried out to establish the major mineral assemblages and accessory minerals present in selected rock samples representing the major rock types in the area. For this study, twenty (20) pulverized rock samples were taken to the laboratory for geochemical analysis, the results of which were used for classification and to characterize the geochemical attributes of the rocks. Petrographic examination under both plane- and cross-polarized light revealed the major minerals in thin section to be quartz, feldspar, biotite, hornblende, plagioclase and muscovite, with opaque and other accessory minerals including actinolite, spinel and myrmekite. The geochemical results, interpreted using various discrimination plots, classified the rocks of the area as both peralkaline-metaluminous and peraluminous types. The major-oxide ratios Na₂O/K₂O, Al₂O₃/(Na₂O + CaO + K₂O) and (Na₂O + CaO + K₂O)/Al₂O₃ show an excess of alumina (Al₂O₃) over the alkalis (Na₂O + CaO + K₂O), suggesting peraluminous rocks, while an excess of the alkalis over alumina suggests the peralkaline-metaluminous type. The correlation coefficients show strong positive correlations for some element pairs, indicating a common geogenic source, while negative coefficients indicate weak negative correlations, suggesting heterogeneous geogenic sources. From factor analysis, five component groups were identified: Group I consists of an Ag-Cr-Ni elemental association suggesting Ag, Cr and Ni mineralization, predicting the possibility of sulphide mineralization in the study area.
Groups II and III consist of an As-Ni-Hg-Fe-Sn-Co-Pb element association, which comprises pathfinder elements for gold mineralization. Groups IV and V consist of Cd-Cu-Ag-Co-Zn, whose concentrations indicate significant elemental associations and mineralization. In conclusion, the potassium radiometric anomaly map produced shows the eastern section (northeastern and southeastern) to be the hot spot and mineralization zone of the study area.
Keywords: petrography, Ila Orangun, petrochemistry, pegmatites, peraluminous
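The peraluminous/peralkaline assignment from major-oxide ratios can be sketched with the standard molar alumina-saturation (Shand) indices; the oxide values below are an illustrative granite analysis, not a sample from this study.

```python
# Molar masses (g/mol) of the oxides entering the Shand indices.
MOLAR_MASS = {"Al2O3": 101.96, "CaO": 56.08, "Na2O": 61.98, "K2O": 94.20}

def shand_indices(wt):
    """Return (A/CNK, A/NK) from oxide wt% values, on a molar basis."""
    mol = {ox: wt[ox] / MOLAR_MASS[ox] for ox in MOLAR_MASS}
    acnk = mol["Al2O3"] / (mol["CaO"] + mol["Na2O"] + mol["K2O"])
    ank = mol["Al2O3"] / (mol["Na2O"] + mol["K2O"])
    return acnk, ank

def classify(acnk, ank):
    """Shand classification: alkalis in excess of alumina -> peralkaline;
    alumina in excess of all alkalis -> peraluminous."""
    if ank < 1.0:
        return "peralkaline"
    return "peraluminous" if acnk > 1.0 else "metaluminous"

# Illustrative granite analysis (wt%).
acnk, ank = shand_indices({"Al2O3": 14.5, "CaO": 1.2, "Na2O": 3.4, "K2O": 4.6})
rock_type = classify(acnk, ank)  # alumina exceeds the alkalis: peraluminous
```

This is the same alumina-versus-alkalis balance the abstract describes qualitatively, expressed as the conventional molar ratios.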
Procedia PDF Downloads 64
266 Vulnerability of the Rural Self-Constructed Housing with Social Programs and Its Economic Impact in the South-East of Mexico
Authors: Castillo-Acevedo J, Mena-Rivero R, Silva-Poot H
Abstract:
In Mexico, as in most developing countries, rural housing is an object of study, since the diversity of constructive idiosyncrasies from one locality to another involves various factors that make it vulnerable; an important aspect of study is the progressive deterioration seen in rural housing. Various social programs contribute financial resources for housing to support families living in rural areas; however, they provide no coordination with the self-construction by which housing is usually built in these areas. The present study describes the physical situation and presents an economic assessment of self-constructed rural housing in three rural communities in the south of the state of Quintana Roo, Mexico, built with funding from federal social programs. Data collection was carried out over a period of seven months using intentional sampling of typical cases; the object of study was housing constructed with support from the 'Rural Housing' program between 2009 and 2014. The instruments used were interviews, observation forms, technical verification forms and various laboratory measuring equipment for the classification of pathologies; for the determination of some constructive pathologies, the Mexican standards NMX-C-192-ONNCCE, NMX-C-111-ONNCCE and NMX-C-404-ONNCCE were applied. Finally, Opus CMS® software was used for the economic valuation, with tables of the National Consumer Price Index (CPI) used to update costs and wages as applied in Mexico.
The results show 11 different constructive pathologies, the most prevalent being segregation of the concrete, present in 22.50% of cases. The economic assessment shows that 80% of the self-constructed dwellings exceed the construction cost they would have had if built by a construction company; it also shows that 46.10% of the study universe represents economic losses in materials from houses left unbuilt. The system of self-construction used by the social programs thus undermines, to some extent, the objectives of programs applied in underserved areas, as implicit and additional costs strain the economic capacity of beneficiaries, who invest time and effort in an activity in which they are not specialists. This research provides foundations for sustainable alternatives to, or possibly the elimination of, the practice of self-construction in the social programs implemented in marginalized rural communities in the south of the state of Quintana Roo, Mexico.
Keywords: economic valuation, constructive pathologies, rural housing, social programs
Procedia PDF Downloads 533
265 An EEG-Based Scale for Comatose Patients' Vigilance State
Authors: Bechir Hbibi, Lamine Mili
Abstract:
Understanding the condition of comatose patients can be difficult, but it is crucial to their optimal treatment. Consequently, numerous scoring systems have been developed around the world to categorize patient states based on physiological assessments. Although validated and widely adopted by medical communities, these scores still present numerous limitations and obstacles. Even with additional tests and extensions, these scoring systems have not been able to overcome certain limitations, and it appears unlikely that they will in the future. On the other hand, physiological tests are not the only way to assess comatose patients. EEG signal analysis has contributed extensively to understanding the human brain and human consciousness and has been used by researchers to classify different levels of disease. The use of EEG in the ICU has become urgent in several cases and has been recommended by medical organizations. In this field, the EEG is used to investigate epilepsy, dementia, brain injuries and many other neurological disorders. It has recently also been used to detect pain activity in some regions of the brain, to detect stress levels, and to evaluate sleep quality. In our recent work, the aim was to use multifractal analysis, a very successful method for handling multifractal signals and extracting features, to establish a state-of-awareness scale for comatose patients based on their electrical brain activity. The results show that such a score can be computed instantaneously and can overcome many of the limitations that burden the physiological scales. Multifractal analysis stands out as a highly effective tool for characterizing non-stationary and self-similar signals, and it performs strongly in extracting the properties of fractal and multifractal data, including signals and images.
As such, we leverage this method, along with other features derived from EEG recordings of comatose patients, to develop a scale that aims to accurately depict the vigilance state of patients in intensive care units and to address many of the limitations inherent in physiological scales such as the Glasgow Coma Scale (GCS) and the FOUR score. Applying version V0 of this approach to 30 patients with known GCS showed that the EEG-based score describes the states of vigilance similarly, but also distinguishes between the states of 8 sedated patients to whom the GCS could not be applied. Our approach could therefore show promising results with patients with disabilities, patients given painkillers, and other categories to whom physiological scores cannot be applied.
Keywords: coma, vigilance state, EEG, multifractal analysis, feature extraction
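As a sketch of the signal-analysis machinery, the code below implements detrended fluctuation analysis (DFA), the q = 2 special case of the multifractal DFA family commonly used for EEG feature extraction; the full multifractal version repeats the fluctuation computation over a range of moment orders q to obtain the h(q) spectrum. The scales and the test signal are illustrative, not patient data.

```python
import math
import random

random.seed(1)

def linear_detrend_rms(seg):
    """RMS of residuals after removing a least-squares line from seg."""
    n = len(seg)
    xs = range(n)
    mx, my = (n - 1) / 2.0, sum(seg) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, seg))
    slope = sxy / sxx
    res = [y - (my + slope * (x - mx)) for x, y in zip(xs, seg)]
    return math.sqrt(sum(r * r for r in res) / n)

def dfa_alpha(signal, scales=(16, 32, 64, 128)):
    """DFA scaling exponent alpha: slope of log F(s) vs log s."""
    mean = sum(signal) / len(signal)
    profile, total = [], 0.0
    for x in signal:                 # integrate the mean-centered signal
        total += x - mean
        profile.append(total)
    log_s, log_f = [], []
    for s in scales:
        rms = [linear_detrend_rms(profile[i:i + s])
               for i in range(0, len(profile) - s + 1, s)]
        f = math.sqrt(sum(r * r for r in rms) / len(rms))
        log_s.append(math.log(s))
        log_f.append(math.log(f))
    ms, mf = sum(log_s) / len(log_s), sum(log_f) / len(log_f)
    num = sum((a - ms) * (b - mf) for a, b in zip(log_s, log_f))
    den = sum((a - ms) ** 2 for a in log_s)
    return num / den

white = [random.gauss(0, 1) for _ in range(4096)]
alpha = dfa_alpha(white)  # ~0.5 for uncorrelated noise
```

Features such as alpha, or the width of the h(q) spectrum in the multifractal extension, are the kind of quantities that feed the proposed vigilance scale.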
Procedia PDF Downloads 77
264 Visual Design of Walkable City as Sidewalk Integration with Dukuh Atas MRT Station in Jakarta
Authors: Nadia E. Christiana, Azzahra A. N. Ginting, Ardhito Nurcahya, Havisa P. Novira
Abstract:
One of the quickest ways to make a short trip in urban areas is to walk, whether individually, in pairs or in groups. Walkability has become one of the parameters for measuring the quality of an urban neighborhood. As a central business district and public transport transit hub, the Dukuh Atas area sees one of the highest numbers of commuters passing through and interchanging between transportation modes daily. Thus, as a public transport hub, considerable investment should be focused on speeding up development that supports urban transit activity between transportation modes, one element of which is revitalizing pedestrian walkways. The purpose of this research is to formulate a visual design concept for a 'walkable city' based on the results of observation and a series of rankings. To achieve this objective, several research stages were necessary: (1) identifying the pedestrian path system in the Dukuh Atas area using a descriptive qualitative method; (2) analyzing the sidewalk walkability rate according to perception, and the walkability satisfaction rate according to the characteristics of pedestrians and non-pedestrians in the Dukuh Atas area, using Global Walkability Index analysis and Multicriteria Satisfaction Analysis; and (3) analyzing the factors that determine the integration of pedestrian walkways in the Dukuh Atas area using a descriptive qualitative method. The results show that the walkability level of the Dukuh Atas corridor is 44.45, which falls within the 25–49 band, meaning that only some facilities can be reached on foot. Furthermore, based on the questionnaire, the satisfaction rate with the pedestrian walkway in the Dukuh Atas area is 64%, so commuters are not yet fully satisfied with the condition of the sidewalk.
Besides, the factors that influence integration in the Dukuh Atas area are reasonable, supported by land use and by transport modes such as the KRL, Busway and MRT. From the results of all the analyses conducted, the visual design and the application of the walkable city concept along the pedestrian corridor of the Dukuh Atas area are formulated. Achievement of the results of this study amounted to 80%, so a further review of the analysis results is needed. This research is expected to serve as a recommendation or input for the government in developing pedestrian paths to maximize the use of public transportation modes.
Keywords: design, global walkability index, mass rapid transit, walkable city
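A Global Walkability Index style score is essentially a weighted average of segment-level ratings rescaled to 0-100, with bands such as 25-49 read as "few facilities reachable on foot". The parameter names, weights and band labels below are illustrative stand-ins, not the exact instrument used in the study:

```python
# Illustrative GWI-style parameters with weights summing to 1.0.
PARAMETERS = {
    "walking_path_conflict": 0.15,
    "path_availability":     0.15,
    "crossing_safety":       0.10,
    "motorist_behavior":     0.10,
    "amenities":             0.10,
    "disability_facilities": 0.10,
    "obstructions":          0.15,
    "security":              0.15,
}

def walkability_score(ratings):
    """Weighted average of 1-5 ratings, rescaled to a 0-100 index."""
    weighted = sum(PARAMETERS[k] * ratings[k] for k in PARAMETERS)
    total_w = sum(PARAMETERS.values())
    return (weighted / total_w) / 5.0 * 100.0

def band(score):
    """Illustrative reading of the index bands (25-49 as stated above)."""
    if score < 25:
        return "almost no facility walkable"
    if score < 50:
        return "few facilities reachable on foot"
    if score < 70:
        return "fairly walkable"
    return "highly walkable"

ratings = {k: 2 for k in PARAMETERS}  # a mediocre corridor
ratings["security"] = 3
score = walkability_score(ratings)    # 43.0, same band as the reported 44.45
```

The corridor's reported 44.45 would come from averaging such ratings over the surveyed segments with the instrument's actual weights.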
Procedia PDF Downloads 193
263 Diversity and Distribution Ecology of Coprophilous Mushrooms of Family Psathyrellaceae from Punjab, India
Authors: Amandeep Kaur, Ns Atri, Munruchi Kaur
Abstract:
Mushrooms have shaped our environment in ways that we are only beginning to understand. The weather patterns, topography, flora and fauna of Punjab state in India create favorable growing conditions for thousands of species of mushrooms, but the region was unexplored with respect to coprophilous mushrooms growing on herbivore dung. Coprophilous mushrooms are among the most ecologically specialized fungi, germinating and growing directly on different types of animal dung or on manured soil. The present work explores the diversity of coprophilous mushrooms of the family Psathyrellaceae (order Agaricales), sketches their relationship to the human world, and highlights their significance to life on this planet. During the investigation, dung localities in 16 districts of Punjab state were explored for the collection of material. The macroscopic features of the collected mushrooms were documented on a field key. Hand-cut sections of the various parts of the carpophore, such as the pileus, gills and stipe, together with the basidiospore details, were studied microscopically under different magnifications. Authentic publications were consulted for the identification of the investigated taxa; the classification, authentic names and synonyms follow the latest edition of the Dictionary of the Fungi and MycoBank. The present work deals with the taxonomy of 81 collections belonging to 39 species spread over 5 coprophilous genera, namely Psathyrella, Panaeolus, Parasola, Coprinopsis and Coprinellus, of the family Psathyrellaceae. In the text, the investigated taxa are arranged as they appear in the key to the genera and species investigated. All collections were thoroughly examined for their macroscopic, microscopic, ecological and chemical-reaction details, with an indication of their ecology and the dung types on which they can be found.
Each taxon is accompanied by a detailed listing of its prominent features and illustrated with habitat photographs and line drawings of morphological and anatomical features. Taxa are organized according to their position in the keys, which allows easy recognition, and all taxa are compared with similar taxa. The study has shown that dung is an important substrate that serves as a favorable niche for the growth of a variety of mushrooms. This paper offers insight into what short-lived coprophilous mushrooms can teach us about sustaining life on earth.
Keywords: abundance, basidiomycota, biodiversity, seasonal availability, systematics
Procedia PDF Downloads 69
262 Knowledge Management Barriers: A Statistical Study of Hardware Development Engineering Teams within Restricted Environments
Authors: Nicholas S. Norbert Jr., John E. Bischoff, Christopher J. Willy
Abstract:
Knowledge Management (KM) is globally recognized as a crucial element in securing competitive advantage through building and maintaining organizational memory, codifying and protecting intellectual capital and business intelligence, and providing mechanisms for collaboration and innovation. KM frameworks and approaches have been developed that identify critical success factors for conducting KM in numerous industries, from science to business, and for organizations ranging in scale from small groups to large enterprises. However, engineering and technical teams operating within restricted environments are subject to unique barriers and KM challenges that cannot be treated directly with the approaches and tools prescribed for other industries. This research identifies barriers to conducting KM within Hardware Development Engineering (HDE) teams and statistically compares the significance of those barriers across the four KM pillars of organization, technology, leadership, and learning. HDE teams suffer from restrictions on knowledge sharing (KS) due to classification of information (national security risks), customer proprietary restrictions (non-disclosure agreements covering designs), the types of knowledge involved, the complexity of the knowledge to be shared, and knowledge-seeker expertise. As KM has evolved, leveraging information technology (IT) and web-based tools and approaches from Web 1.0 to Enterprise 2.0, it may also leverage emergent tools and analytics, including expert locators and hybrid recommender systems, to enable KS across the barriers of these technical teams. The research will statistically test the hypothesis that KM barriers for HDE teams affect the general set of expected benefits of a KM system identified through previous research. If correlations are identified, generalizations of success factors and approaches may also be derived for HDE teams.
Expert elicitation will be conducted using an internet-hosted questionnaire delivered to a panel of experts including engineering managers, principal and lead engineers, senior systems engineers, and knowledge management experts. Questionnaire responses will be processed using analysis of variance (ANOVA) to identify and rank the statistically significant barriers facing HDE teams within the four KM pillars. Subsequently, KM approaches will be recommended for upholding the KM pillars within the restricted environments of HDE teams.
Keywords: engineering management, knowledge barriers, knowledge management, knowledge sharing
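The ANOVA step described above can be sketched in a few lines. This is a minimal pure-Python illustration, not the study's analysis: the three expert groups and their Likert-scale (1-5) significance ratings of one hypothetical KM barrier are assumed values for demonstration only.

```python
def one_way_anova(groups):
    """Return the F statistic for a one-way ANOVA over a list of samples."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    # Between-group sum of squares (k - 1 degrees of freedom)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares (n_total - k degrees of freedom)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n_total - k))

# Hypothetical ratings of one barrier by three of the panel's expert groups
managers = [4, 5, 3, 4, 4, 5]
lead_engineers = [3, 3, 4, 2, 3, 3]
km_experts = [5, 4, 5, 5, 4, 4]

f_stat = one_way_anova([managers, lead_engineers, km_experts])
print(f"F = {f_stat:.2f}")  # compare against the F critical value for (2, 15) df
```

A large F relative to the critical value for the given degrees of freedom indicates that mean ratings differ between expert groups, which is how barriers would be flagged as statistically significant.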
Procedia PDF Downloads 281
261 An Infinite Mixture Model for Modelling Stutter Ratio in Forensic Data Analysis
Authors: M. A. C. S. Sampath Fernando, James M. Curran, Renate Meyer
Abstract:
Forensic DNA analysis has received much attention over the last three decades due to its incredible usefulness in human identification. The statistical interpretation of DNA evidence is recognised as one of the most mature fields in forensic science. Peak heights in an electropherogram (EPG) are approximately proportional to the amount of template DNA in the original sample being tested. A stutter is a minor peak in an EPG that may be mistaken for an allele of a potential contributor; it is considered an artefact presumed to arise from miscopying or slippage during PCR. Stutter peaks are mostly analysed in terms of the stutter ratio, calculated relative to the corresponding parent allele height. Analysis of mixture profiles has always been problematic in evidence interpretation, especially in the presence of PCR artefacts like stutters. Unlike binary and semi-continuous models, continuous models assign a probability (as a continuous weight) to each possible genotype combination and make significantly greater use of continuous peak-height information, resulting in more efficient and reliable interpretations. Therefore, a sound methodology for distinguishing between stutters and real alleles is essential for the accuracy of the interpretation, and any such method has to be able to model stutter peaks. Bayesian nonparametric methods provide increased flexibility in applied statistical modelling. Mixture models are frequently employed as fundamental data analysis tools in clustering and classification and assume that the data arise from a number of unidentified heterogeneous sources. In model-based clustering, each unknown source is reflected by a cluster, and the clusters are modelled using parametric models. Specifying the number of components in finite mixture models, however, is practically difficult, even though the calculations are relatively simple.
Infinite mixture models, in contrast, do not require the user to specify the number of components. Instead, a Dirichlet process, an infinite-dimensional generalization of the Dirichlet distribution, is used to deal with the problem of choosing the number of components. The Chinese restaurant process (CRP), the stick-breaking process and the Pólya urn scheme are frequently used representations of Dirichlet priors in Bayesian mixture models. In this study, we illustrate an infinite mixture of simple linear regression models for modelling stutter ratio and introduce some modifications to overcome weaknesses associated with the CRP.
Keywords: Chinese restaurant process, Dirichlet prior, infinite mixture model, PCR stutter
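The CRP prior mentioned above is easy to simulate: customer i joins an existing table with probability proportional to the table's occupancy, or opens a new table with probability proportional to the concentration parameter alpha. A minimal sketch (the function name and parameters are illustrative, not from the paper):

```python
import random

def crp_partition(n, alpha, seed=0):
    """Draw one partition of n items from a Chinese restaurant process.

    Item i joins an existing cluster with probability proportional to its
    current size, or opens a new cluster with probability proportional
    to alpha; cluster labels come out contiguous, 0..K-1.
    """
    random.seed(seed)
    counts = []   # counts[k] = number of items already in cluster k
    labels = []
    for i in range(n):
        weights = counts + [alpha]        # existing clusters, then a new one
        r = random.uniform(0, i + alpha)  # total weight so far is i + alpha
        cum = 0.0
        for k, w in enumerate(weights):
            cum += w
            if r <= cum:
                break
        if k == len(counts):
            counts.append(1)              # open a new cluster
        else:
            counts[k] += 1
        labels.append(k)
    return labels

print(crp_partition(10, alpha=1.0))
```

Larger alpha yields more clusters on average; in an infinite mixture of regressions, each cluster drawn this way would carry its own regression parameters for the stutter-ratio model.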
Procedia PDF Downloads 331
260 Demographic Determinants of Spatial Patterns of Urban Crime
Authors: Natalia Sypion-Dutkowska
Abstract:
The main research objective of the paper is to discover the relationship between the age groups of residents and crime in particular districts of a large city. The basic analytical tool is specific crime rates, calculated not in relation to the total population but for age groups in different social situations (property, housing, work) and representing different generations with different behavior patterns. These are the communities from which both offenders and victims of crime come. The analysis of the literature and of national police reports gives rise to hypotheses about the ability of a given age group to generate crime, as a source of offenders and as a group of victims. These specific indicators are spatially differentiated, which makes it possible to detect the socio-demographic determinants of spatial patterns of urban crime. A multi-feature classification of districts was also carried out, with the specific crime rates as the diagnostic features. In this way, areas with a similar structure of socio-demographic determinants of spatial patterns of urban crime were designated. The case study is the city of Szczecin in Poland. It has about 400,000 inhabitants and an area of about 300 sq km. Szczecin is located in the immediate vicinity of Germany and is the economic, academic and cultural capital of the region. It also has a seaport and an airport. Moreover, according to ESPON 2007, Szczecin is a Transnational and National Functional Urban Area. Szczecin is divided into 37 districts, the auxiliary administrative units of the municipal government. The population of each of them in 2015-17 was divided into 8 age groups: babies (0-2 yrs.), children (3-11 yrs.), teens (12-17 yrs.), younger adults (18-30 yrs.), middle-age adults (31-45 yrs.), older adults (46-65 yrs.), early older (66-80 yrs.) and late older (from 81 yrs.).
The crimes reported in 2015-17 in each of the districts were divided into 10 groups: fights and beatings, other theft, car theft, robbery offenses, burglary into an apartment, break-in into a commercial facility, car break-in, break-in into other facilities, drug offenses, and property damage. In total, 80 specific crime rates were calculated for each district. The analysis was carried out at an intra-city scale; this is a novel approach, as this type of analysis is usually carried out at the national or regional level. Another innovative research approach is the use of specific crime rates in relation to age groups instead of standard crime rates. Acknowledgments: This research was funded by the National Science Centre, Poland, registration number 2019/35/D/HS4/02942.
Keywords: age groups, determinants of crime, spatial crime pattern, urban crime
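The difference between a standard rate and the specific rates used here can be shown with a toy computation. All counts below are invented for illustration; they are not Szczecin data.

```python
# Hypothetical counts for one district: offences attributed to each age
# group and that group's resident population, per the paper's design.
population = {"teens": 2100, "younger_adults": 8400, "older_adults": 11500}
offences = {"teens": 42, "younger_adults": 210, "older_adults": 69}

# Specific crime rate: offences per 1,000 members of the age group,
# rather than per 1,000 of the total district population.
rates = {g: 1000 * offences[g] / population[g] for g in population}

for group, rate in sorted(rates.items(), key=lambda kv: -kv[1]):
    print(f"{group}: {rate:.1f} per 1,000")
```

A district-wide rate would dilute the teens' contribution under the much larger adult population; the specific rate keeps each generation's propensity visible, which is what makes the 80 indicators usable as diagnostic features in the multi-feature classification.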
Procedia PDF Downloads 172
259 Analyzing Growth Trends of the Built Area in the Precincts of Various Types of Tourist Attractions in India: 2D and 3D Analysis
Authors: Yarra Sulina, Nunna Tagore Sai Priya, Ankhi Banerjee
Abstract:
With the rapid growth in tourist arrivals, there has been huge demand for infrastructure growth in destinations. With the increasing preference of tourists to stay near attractions, there has been considerable change in land use around tourist sites. However, given the regulations and guidelines imposed by authorities based on the nature of tourism activity and geographical constraints, the pattern of growth of built form differs between tourist sites. Therefore, this study explores the patterns of growth of the built-up area over a decade, from 2009 to 2019, through two-dimensional and three-dimensional analysis. Land use maps are created through supervised classification of satellite images obtained from LANDSAT 4-5 and LANDSAT 8 for 2009 and 2019, respectively. The overall expansion of the built-up area in the region is analyzed in relation to the distance from the city's geographical center, and tourism-related growth regions, influenced by the proximity of tourist attractions, are identified. The primary tourist sites of various destinations with different geographical characteristics and tourism activities, which have undergone a significant increase in built-up area and are occupied with tourism-related infrastructure, are selected for further study. Proximity analysis of the tourism-related growth sites is carried out to delineate the influence zone of the tourist site in a destination. Further, a temporal analysis of the volumetric growth of built form is carried out to understand the morphology of the tourist precincts over time. The Digital Surface Model (DSM) and Digital Terrain Model (DTM) are used to extract building footprints along with building heights. Factors such as building height and building density are evaluated to understand the patterns of three-dimensional growth of the built area in the region.
The study also explores the underlying reasons for such changes in built form around various tourist sites and predicts the impact of such growth patterns on the region. Building height and building density around a tourist site strongly affect the appeal of the destination. Surroundings that are incompatible with the theme of the tourist site reduce the attractiveness of the destination and lead to negative feedback from tourists, which is not a sustainable form of development. Therefore, proper spatial measures, in terms of both area and volume of the built environment, are necessary for a healthy and sustainable environment around the tourist sites in a destination.
Keywords: sustainable tourism, growth patterns, land-use changes, 3-dimensional analysis of built-up area
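The DSM/DTM step above reduces to a simple raster subtraction: the normalized surface model (DSM minus DTM) leaves above-ground heights, from which building height and density can be summarised. A toy sketch with invented 2x4 rasters and an assumed 2 m above-ground cut-off (the abstract does not state its threshold):

```python
# Invented surface and terrain elevations (metres) for an 8-cell tile.
dsm = [
    [12.0, 12.5, 30.0, 30.5],
    [12.1, 12.4, 30.2, 30.4],
]
dtm = [
    [12.0, 12.1, 12.2, 12.3],
    [12.1, 12.2, 12.3, 12.4],
]

# Normalized DSM: above-ground height per cell.
ndsm = [[s - t for s, t in zip(srow, trow)] for srow, trow in zip(dsm, dtm)]

# Treat cells more than 2 m above ground as built (assumed cut-off).
built = [h for row in ndsm for h in row if h > 2.0]
density = len(built) / (len(ndsm) * len(ndsm[0]))  # built fraction of the tile
mean_height = sum(built) / len(built)
print(f"built fraction: {density:.2f}, mean building height: {mean_height:.2f} m")
```

Tracking these two summaries per influence zone and per year is one way the volumetric growth of the tourist precincts could be compared over time.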
Procedia PDF Downloads 79
258 Evaluation of Bone and Body Mineral Profile in Association with Protein Content, Fat, Fat-Free, Skeletal Muscle Tissues According to Obesity Classification among Adult Men
Authors: Orkide Donma, Mustafa M. Donma
Abstract:
Obesity is associated with increased fat mass as well as increased fat percentage. Minerals are elements of vital importance. In this study, the relationships between body and bone mineral profiles and the percentage and mass values of fat, fat-free portion, protein and skeletal muscle were evaluated in adult men with normal body mass index (N-BMI) and in those classified into different stages of obesity. A total of 103 adult men, classified into five groups, participated in this study. Ages were within the 19-79 year range. The groups were N-BMI (Group 1), overweight (OW) (Group 2), first level of obesity (FLO) (Group 3), second level of obesity (SLO) (Group 4) and third level of obesity (TLO) (Group 5). Anthropometric measurements were performed and BMI values were calculated. Obesity degree, total body fat mass, fat percentage, basal metabolic rate (BMR), visceral adiposity, body mineral mass, body mineral percentage, bone mineral mass, bone mineral percentage, fat-free mass, fat-free percentage, protein mass, protein percentage, skeletal muscle mass and skeletal muscle percentage were determined by a TANITA body composition monitor using bioelectrical impedance analysis technology. The statistical package SPSS for Windows Version 16.0 was used for statistical evaluations. p values below 0.05 were accepted as statistically significant. All the groups were matched for age (p > 0.05). BMI values were calculated as 22.6 ± 1.7 kg/m2, 27.1 ± 1.4 kg/m2, 32.0 ± 1.2 kg/m2, 37.2 ± 1.8 kg/m2, and 47.1 ± 6.1 kg/m2 for groups 1, 2, 3, 4, and 5, respectively. Visceral adiposity and BMR values also followed an increasing trend. Percentage values of mineral, protein, fat-free and skeletal muscle masses decreased from N-BMI to TLO. Upon evaluation of the percentages of protein, fat-free portion and skeletal muscle, statistically significant differences were noted between N-BMI and OW as well as between OW and FLO (p < 0.05).
However, such differences were not observed for body and bone mineral percentages. The correlation between visceral adiposity and BMI was stronger than that between visceral adiposity and obesity degree. The correlation between visceral adiposity and BMR was significant at the 0.05 level. Visceral adiposity was not correlated with body mineral mass but was correlated with bone mineral mass, whereas significant negative correlations were observed with the percentages of these parameters (p < 0.001). BMR was not correlated with body mineral percentage, whereas a negative correlation was found between BMR and bone mineral percentage (p < 0.01). It is interesting to note that the mineral percentages of both body and bone are strongly affected by visceral adiposity. Bone mineral percentage was also associated with BMR. From these findings, it is plausible to state that minerals are strongly associated with the critical stages of obesity as prominent parameters.
Keywords: bone, men, minerals, obesity
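The five study groups' mean BMI values fall neatly into the conventional WHO bands, so the grouping can be sketched as below. The cut-offs are the standard WHO ones, assumed here rather than stated in the abstract, and the example weights/heights are invented.

```python
def bmi_class(weight_kg, height_m):
    """Return BMI (kg/m2) and a class label matching the study's five groups."""
    bmi = weight_kg / height_m ** 2
    if bmi < 18.5:
        label = "underweight"  # not among the study's groups
    elif bmi < 25:
        label = "N-BMI"   # Group 1
    elif bmi < 30:
        label = "OW"      # Group 2
    elif bmi < 35:
        label = "FLO"     # Group 3
    elif bmi < 40:
        label = "SLO"     # Group 4
    else:
        label = "TLO"     # Group 5
    return bmi, label

# Invented subjects: 70 kg / 1.76 m and 140 kg / 1.72 m
print(bmi_class(70, 1.76)[1])   # a Group 1 (N-BMI) subject
print(bmi_class(140, 1.72)[1])  # a Group 5 (TLO) subject
```

The reported group means (22.6, 27.1, 32.0, 37.2 and 47.1 kg/m2) land in the N-BMI, OW, FLO, SLO and TLO bands respectively, consistent with this mapping.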
Procedia PDF Downloads 118
257 Steel Concrete Composite Bridge: Modelling Approach and Analysis
Authors: Kaviyarasan D., Satish Kumar S. R.
Abstract:
India, vast in area and population and with great scope for international business, expects major growth in its roadway and railway networks. Numerous rail-cum-road bridges have been constructed across many major rivers in India, and a few are getting very old, so there is a strong likelihood of repairing existing bridges or constructing new ones. Analysis and design of such bridges are practiced through conventional procedures and end up with heavy, uneconomical sections. Such heavy-class steel bridges, when subjected to strong seismic shaking, are more likely to fail through instability because the members are rigid and stocky rather than flexible enough to dissipate energy. This work is a collective study of the research done on truss bridges and steel-concrete composite truss bridges, presenting methods of analysis and tools for numerical and analytical modeling that evaluate their seismic behaviour and collapse mechanisms. To ascertain the inelastic and nonlinear behaviour of the structure, static pushover analysis is generally adopted at the research level. Although static pushover analysis is now used extensively for framed steel and concrete buildings to study their lateral behaviour, findings from pushover analyses of buildings cannot be applied directly to bridges, because bridges have completely different performance requirements, behaviour and typology compared to buildings. Long-span steel bridges are mostly truss bridges. Because truss bridges are formed from many members and connections, the system does not fail suddenly through a single event or the failure of one member. Failure usually initiates in one member and progresses gradually to the next, and so on, under further loading.
This kind of progressive collapse of a truss bridge depends on many factors, of which the live-load distribution and the span-to-length ratio are the most significant. Ultimate collapse, in any case, occurs through buckling of the compression members. For regular bridges, single-step pushover analysis gives results close to those of nonlinear dynamic analysis. But for a complicated bridge, such as a heavy-class steel bridge, a skewed bridge or a bridge with complicated dynamic behaviour, nonlinear analysis capturing the progressive yielding and collapse pattern is mandatory. With knowledge of the post-elastic behaviour of bridges and advances in computational facilities, the analysis and design of bridges have moved toward ascertaining performance levels based on the damage caused by seismic shaking. Building performance levels deal mostly with life safety and collapse prevention, whereas bridge performance levels deal mostly with the extent of damage and how quickly the bridge can be repaired, with or without disturbing traffic, after a strong earthquake event. The paper compiles the wide spectrum of modeling and analysis approaches for steel-concrete composite truss bridges.
Keywords: bridge engineering, performance based design of steel truss bridge, seismic design of composite bridge, steel-concrete composite bridge
Procedia PDF Downloads 186
256 Application of Unstructured Mesh Modeling in Evolving SGE of an Airport at the Confluence of Multiple Rivers in a Macro Tidal Region
Authors: A. A. Purohit, M. M. Vaidya, M. D. Kudale
Abstract:
Like other developing countries such as China, Malaysia and Korea, India is developing its infrastructure (roads, railways, airports and waterborne facilities) at an exponential rate. Mumbai, the financial epicenter of India, is overcrowded, and to relieve the pressure of congestion the Navi Mumbai suburb is being developed on the east bank of Thane Creek near Mumbai. Because space at the existing Mumbai airports (domestic and international) is too limited to cater for future airborne traffic, the government proposes to build a new international airport near Panvel in Navi Mumbai. Considering the precedent of the extreme rainfall of 26th July 2005, and given that the nearby townships around the proposed airport lie in a low-lying area, it is essential to study this complex confluence area hydrodynamically under both tidal and extreme events (predicted discharge hydrographs), to avoid inundation of the surroundings due to the proposed airport reclamation (1160 hectares) and to determine the safe grade elevation (SGE). Model studies were conducted using an unstructured mesh to simulate the Panvel estuarine area (93 km2), with the model calibrated and validated against hydraulic field measurements, to determine the maximum water levels around the airport for various extreme hydrodynamic events, namely the simultaneous occurrence of the highest tide from the Arabian Sea and peak flood discharges (Probable Maximum Precipitation and 26th July 2005) from the five rivers, the Gadhi, Kalundri, Taloja, Kasadi and Ulwe, meeting at the proposed airport area. The studies revealed that: (a) the Ulwe River flowing beneath the proposed airport needs to be diverted; the proposed 120 m wide Ulwe diversion channel, with a wider base width of 200 m at the SH-54 Bridge on the Ulwe River, along with removal of the existing bund in Moha Creek, is essential to keep the SGE of the airport to a minimum.
(b) A clear waterway of 80 m at the SH-54 Bridge (Ulwe River) and 120 m at the Amra Marg Bridge near Moha Creek is also essential for the Ulwe diversion; and (c) river-bank protection works on the right bank of the Gadhi River between the NH-4B and SH-54 bridges, as well as upstream of the Ulwe diversion channel, are essential to avoid inundation of low-lying areas. The predicted maximum water levels around the airport keep the SGE to a minimum of 11 m with respect to the chart datum of Ulwe Bundar, so the development is not only technologically and economically feasible but also sustainable. Unstructured mesh modeling is a promising tool for simulating complex extreme hydrodynamic events and provides a reliable means of evolving the optimal SGE of an airport.
Keywords: airport, hydrodynamics, safe grade elevation, tides
Procedia PDF Downloads 262
255 Evaluating Multiple Diagnostic Tests: An Application to Cervical Intraepithelial Neoplasia
Authors: Areti Angeliki Veroniki, Sofia Tsokani, Evangelos Paraskevaidis, Dimitris Mavridis
Abstract:
The plethora of diagnostic test accuracy (DTA) studies has led to the increased use of systematic reviews and meta-analyses of DTA studies. Clinicians and healthcare professionals often consult DTA meta-analyses to make informed decisions about the optimum test to choose and use in a given setting. For example, human papillomavirus (HPV) DNA, HPV mRNA, and cytology tests can be used for the diagnosis of cervical intraepithelial neoplasia grade 2+ (CIN2+). But which test is the most accurate? Studies directly comparing test accuracy are not always available, and comparisons between multiple tests create a network of DTA studies that can be synthesized through a network meta-analysis of diagnostic tests (DTA-NMA). The aim of this study is to summarize the DTA-NMA methods for at least three index tests presented in the methodological literature. We illustrate the application of the methods using a real data set on the comparative accuracy of HPV DNA, HPV mRNA, and cytology tests for cervical cancer. A search was conducted in PubMed, Web of Science, and Scopus from inception until the end of July 2019 to identify full-text research articles describing a DTA-NMA method for three or more index tests. Because the joint classification of the results of one index test against another, among those with and among those without the target condition, is rarely reported in DTA studies, only methods requiring the 2x2 table of each index test against the reference standard were included. Studies of any design published in English were eligible for inclusion, and relevant unpublished material was also included. Ten relevant studies were finally included for methodological evaluation. The DTA-NMA methods presented in the literature are described, together with their advantages and disadvantages.
In addition, using 37 studies on cervical cancer obtained from a published Cochrane review as a case study, the identified DTA-NMA methods are applied to determine the most promising test (in terms of sensitivity and specificity) for use as the best screening test to detect CIN2+. In conclusion, different approaches to the comparative DTA meta-analysis of multiple tests may lead to different results and hence may influence decision-making differently. Acknowledgment: This research is co-financed by Greece and the European Union (European Social Fund, ESF) through the Operational Programme «Human Resources Development, Education and Lifelong Learning 2014-2020» in the context of the project “Extension of Network Meta-Analysis for the Comparison of Diagnostic Tests” (MIS 5047640).
Keywords: colposcopy, diagnostic test, HPV, network meta-analysis
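Every method the review includes starts from the 2x2 table of an index test against the reference standard, from which per-study sensitivity and specificity follow directly. A minimal sketch (the cell counts below are hypothetical, not from the Cochrane data set):

```python
def sens_spec(tp, fp, fn, tn):
    """Sensitivity and specificity from an index-test-vs-reference 2x2 table.

    tp/fn: test-positive/-negative among those with CIN2+ (reference positive)
    fp/tn: test-positive/-negative among those without CIN2+
    """
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical table for one index test in one study
se, sp = sens_spec(tp=90, fp=30, fn=10, tn=170)
print(f"sensitivity = {se:.2f}, specificity = {sp:.2f}")
```

A DTA-NMA then pools such pairs across the network of studies and tests, which is exactly why methods requiring only these tables, rather than the rarely reported joint classifications of two index tests, were the ones included.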
Procedia PDF Downloads 141