Search results for: email classification
237 Reasons for Lack of an Ideal Disinfectant after Dental Treatments
Authors: Ilma Robo, Saimir Heta, Rialda Xhizdari, Kers Kapaj
Abstract:
Background: The ideal disinfectant for surfaces, instruments, air and skin, both in dentistry and in other fields of medicine, does not exist. This is for the sole reason that all the characteristics of an ideal disinfectant cannot be combined in one product: if one of them is emphasized, it will conflict with another. A disinfectant must be stable and unaffected by changes in the environmental conditions where it is stored, meaning that it should not be affected by an increase in temperature or in the humidity of the environment. Both of these elements contradict another element of the ideal disinfectant, as they disrupt the solubility ratio of the disinfectant's base substance to its diluent. Material and methods: The study aims to extract the constant of each disinfectant/antiseptic used in dental disinfection protocols, together with the side effects on the surface of the skin or mucosa where it is applied as an antiseptic. Finally, conclusions were drawn about the best possible combination of disinfectants after a dental procedure, based on data extracted from the basic literature required during the pharmacology module of dental training, set against data published in the literature. Results: The sensitivity of a disinfectant to changes in the atmospheric conditions of the environment where it is kept is a known fact. Care against this element is always accompanied by advice on the application of the specific disinfectant, in order to obtain the desired clinical result. The constants of disinfectants according to the classification based on the data collected and presented are: alcohols 70-120, glycols 0.2, aldehydes 30-200, phenols 15-60, acids 100, povidone-iodine halogens 5-75, hypochlorous acid halogens 150, sodium hypochlorite halogens 30-35, oxidants 18-60, metals 0.2-10. The halogens should be singled out, where specific results were obtained for the representatives of this class, since it is these representatives that find scope for clinical application in dentistry. Conclusions: The search for the "ideal", under conditions where its defining criteria are established, not only for disinfectants but for any medication or pharmaceutical product, is an ongoing search without definitive results. In this mine of data in the published literature, if there is something fixed and calculable, such as the specific constant for disinfectants, the search for the ideal becomes more concrete. During disinfection protocols, different disinfectants are applied since the fields of action differ, including water, air, aspiration devices and tools, with disinfectants used in full accordance with the manufacturer's indications.
Keywords: disinfectant, constant, ideal, side effects
236 Approaches to Valuing Ecosystem Services in Agroecosystems From the Perspectives of Ecological Economics and Agroecology
Authors: Sandra Cecilia Bautista-Rodríguez, Vladimir Melgarejo
Abstract:
Climate change, loss of ecosystems, increasing poverty, increasing marginalization of rural communities and declining food security are global issues that require urgent attention. In this regard, a great deal of research has focused on how agroecosystems respond to these challenges, as they provide ecosystem services (ES) that lead to higher levels of resilience, adaptation, productivity and self-sufficiency. Hence, the valuation of ecosystem services plays an important role in decision-making for the design and management of agroecosystems. This paper aims to define the link between ecosystem service valuation methods and ES value dimensions in agroecosystems from the perspectives of ecological economics and agroecology. The method used to identify valuation methodologies was a literature review in the fields of agroecology and ecological economics, based on a strategy of information search and classification. The conceptual framework of the work is based on the multidimensionality of value, considering the social, ecological, political, technological and economic dimensions. Likewise, the valuation process requires consideration of the ecosystem functions associated with ES, such as regulation, habitat, production and information functions. In this way, valuation methods for ES in agroecosystems can integrate more than one value dimension and at least one ecosystem function. The results make it possible to correlate the ecosystem functions with the ecosystem services valued, the specific tools or models used, the dimensions and the valuation methods. The main methodologies identified are: multi-criteria valuation (1), deliberative-consultative valuation (2), valuation based on system dynamics modeling (3), valuation through energy or biophysical balances (4), valuation through fuzzy logic modeling (5), and valuation based on agent-based modeling (6). Among the main conclusions, it is highlighted that the system dynamics modeling approach has a high potential for development in valuation processes, due to its ability to integrate other methods, especially multi-criteria valuation and energy and biophysical balances, and to describe through causal cycles the interrelationships between ecosystem services and the dimensions of value in agroecosystems, thus showing the relationships between the value of ecosystem services and the welfare of communities. As for methodological challenges, it is relevant to achieve the integration of the tools and models provided by different methods so as to incorporate the characteristics of a complex system such as the agroecosystem, which would reduce the limitations in ES valuation processes.
Keywords: ecological economics, agroecosystems, ecosystem services, valuation of ecosystem services
235 Recommendations for Teaching Word Formation for Students of Linguistics Using Computer Terminology as an Example
Authors: Svetlana Kostrubina, Anastasia Prokopeva
Abstract:
This research presents a comprehensive study of word-formation processes in computer terminology in English and Russian and provides students with a system of exercises for training these skills. Its originality lies in a comparative approach that shows both general patterns and specific features of English and Russian computer-term word formation. The key point is the development of a system of exercises for training computer terminology based on Bloom's taxonomy. The data contain 486 units (228 English terms from the Glossary of Computer Terms and 258 Russian terms from the Terminological Dictionary-Reference Book). The objective is to identify the main affixation models in English and Russian computer-term formation and to develop exercises. To achieve this goal, the authors employed Bloom's taxonomy as a methodological framework to create a systematic exercise program aimed at enhancing students' cognitive skills in analyzing, applying, and evaluating computer terms. The exercises are appropriate for various levels of learning, from basic recall of definitions to higher-order thinking skills such as synthesizing new terms and critically assessing their usage in different contexts. The methodology also includes: a method of scientific and theoretical analysis for systematizing linguistic concepts and clarifying the conceptual and terminological apparatus; a method of nominative and derivative analysis for identifying word-formation types; a method of word-formation analysis for organizing linguistic units; a classification method for determining structural types of abbreviations applicable to the field of computer communication; a quantitative analysis technique for determining the productivity of methods for forming abbreviations of computer vocabulary based on the English and Russian computer terms; a technique of tabular data processing for a visual presentation of the results obtained; and a technique of interlingual comparison for identifying common and distinct features of abbreviations of computer terms in Russian and English. The research shows that affixation retains its productivity in English and Russian computer-term formation. Bloom's taxonomy allows us to plan a training program and predict the effectiveness of the compiled program based on an assessment of the teaching methods used.
Keywords: word formation, affixation, computer terms, Bloom's taxonomy
234 Obese and Overweight Women and Public Health Issues in Hillah City, Iraq
Authors: Amean A. Yasir, Zainab Kh. A. Al-Mahdi Al-Amean
Abstract:
In both developed and developing countries, obesity among women is increasing, but in different patterns and at very different speeds. It may have a negative effect on health, leading to reduced life expectancy and/or increased health problems. This research studied the age distribution among obese women, the types of overweight and obesity, the extent of the overweight/obesity problem, and the etiological factors of obesity among women in Hillah city in central Iraq. A total of 322 overweight and obese women, selected at random, were included in the study. The Body Mass Index (BMI) was used as the indicator of overweight/obesity. The incidence of overweight/obesity among age groups was estimated; the etiological factors considered included genetic, environmental, combined genetic/environmental and endocrine disease. The overweight and obese women were screened for the incidence of infection and/or disease. The study found that the prevalence of overweight and obesity among the 322 women in Hillah city in central Iraq was 19.25% and 80.78%, respectively. The obesity types, recorded on the basis of BMI and the WHO classification, were class-1 obesity (29.81%), class-2 obesity (24.22%) and class-3 obesity (26.70%); this discrepancy was non-significant (P value < 0.05). The incidence of overweight was highest among women aged 20-29 years (90.32%), against 6.45% for those aged 30-39 years and 3.22% among those ≥ 60 years old, while the incidence of obesity was 20.38% in the age group 20-29 years, 17.30% at 30-39 years, 23.84% at 40-49 years, 16.92% at 50-59 years and 21.53% in the ≥ 60 years age group. These results confirm that age can be considered a significant factor for obesity type (P value < 0.0001). The results also showed that genetic and environmental factors together were responsible for incidents of overweight or obesity (84.78%, P value < 0.0001). Cases were also recorded of repeated infections (skin infection, recurrent UTI and influenza), cancer, gallstones, high blood pressure, type 2 diabetes, and infertility. Weight stigma and bias generally refer to negative attitudes; obesity can affect quality of life, and this study recorded depression among overweight and obese women. This can lead to sexual problems, shame and guilt, social isolation and reduced work performance. Overweight and obesity are real problems among women of all age groups, are associated with the risk of disease and infection, and negatively affect quality of life. These results warrant further studies into the prevalence of obesity among women in Hillah city in central Iraq and the immune response of obese women.
Keywords: obesity, overweight, Iraq, body mass index
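For reference, the WHO BMI cut-offs behind the obesity classes cited above can be written as a short routine. This is an illustrative Python sketch using the standard WHO thresholds, not the study's own instrument:

```python
def bmi_class(weight_kg: float, height_m: float) -> str:
    """Classify weight status from BMI using the standard WHO cut-offs."""
    bmi = weight_kg / height_m ** 2  # BMI = weight (kg) / height^2 (m^2)
    if bmi < 18.5:
        return "underweight"
    if bmi < 25.0:
        return "normal"
    if bmi < 30.0:
        return "overweight"
    if bmi < 35.0:
        return "class-1 obesity"
    if bmi < 40.0:
        return "class-2 obesity"
    return "class-3 obesity"

print(bmi_class(95.0, 1.62))  # BMI ~36.2 -> class-2 obesity
```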
233 On Consolidated Predictive Model of the Natural History of Breast Cancer Considering Primary Tumor and Primary Distant Metastases Growth
Authors: Ella Tyuryumina, Alexey Neznanov
Abstract:
Finding algorithms to predict the growth of tumors has piqued the interest of researchers ever since the early days of cancer research. A number of studies have been carried out in an attempt to obtain reliable data on the natural history of breast cancer growth. Mathematical modeling can play a very important role in the prognosis of the tumor process in breast cancer. However, existing mathematical models describe primary tumor growth and metastases growth separately. Consequently, we propose a mathematical growth model for the primary tumor and primary metastases which may help to improve the predictive accuracy of breast cancer progression, using an original mathematical model referred to as CoM-IV and corresponding software. We are interested in: 1) modelling the whole natural history of the primary tumor and primary metastases; 2) developing an adequate and precise CoM-IV which reflects the relations between PT and MTS; 3) analyzing the scope of application of CoM-IV; 4) implementing the model as a software tool. The CoM-IV is based on an exponential tumor growth model, consists of a system of determinate nonlinear and linear equations, and corresponds to the TNM classification. It allows the calculation of different growth periods of the primary tumor and primary metastases: 1) the 'non-visible period' for the primary tumor; 2) the 'non-visible period' for primary metastases; 3) the 'visible period' for primary metastases. The new predictive tool: 1) is a solid foundation for future studies of breast cancer models; 2) does not require any expensive diagnostic tests; 3) is the first predictor which makes its forecast using only current patient data, while the others rely on additional statistical data. Thus, the CoM-IV model and predictive software: a) detect different growth periods of the primary tumor and primary metastases; b) forecast the period of appearance of primary metastases; c) have higher average prediction accuracy than the other tools; d) can improve forecasts of breast cancer survival and facilitate optimization of diagnostic tests. The following are calculated by CoM-IV: the number of doublings for the 'non-visible' and 'visible' growth periods of primary metastases, and the tumor volume doubling time (days) for the 'non-visible' and 'visible' growth periods of primary metastases. The CoM-IV enables, for the first time, prediction of the whole natural history of primary tumor and primary metastases growth at each stage (pT1, pT2, pT3, pT4) relying only on primary tumor sizes. Summarizing: a) CoM-IV correctly describes primary tumor and primary distant metastases growth of stage IV (T1-4N0-3M1) disease with (N1-3) or without (N0) regional metastases in lymph nodes; b) it facilitates understanding of the period of appearance and manifestation of primary metastases.
Keywords: breast cancer, exponential growth model, mathematical modelling, primary metastases, primary tumor, survival
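The exponential growth relation underlying CoM-IV connects tumor volume, the number of doublings and the doubling time. A minimal Python sketch follows; the starting volume, detection size and TVDT values are illustrative assumptions, not the published model parameters:

```python
import math

def doublings(v0_mm3: float, v_mm3: float) -> float:
    """Number of volume doublings to grow from v0 to v (exponential model)."""
    return math.log2(v_mm3 / v0_mm3)

def elapsed_days(v0_mm3: float, v_mm3: float, tvdt_days: float) -> float:
    """Elapsed time given a constant tumor volume doubling time (TVDT)."""
    return doublings(v0_mm3, v_mm3) * tvdt_days

# e.g. from one malignant cell (~1e-6 mm^3) to a 10 mm sphere (~524 mm^3)
v_detect = 4 / 3 * math.pi * 5 ** 3
print(doublings(1e-6, v_detect))           # ~29 doublings in the 'non-visible' period
print(elapsed_days(1e-6, v_detect, 100.0))  # days elapsed at an assumed TVDT of 100 days
```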
232 New Advanced Medical Software Technology Challenges and Evolution of the Regulatory Framework in Expert Software, Artificial Intelligence, and Machine Learning
Authors: Umamaheswari Shanmugam, Silvia Ronchi, Radu Vornicu
Abstract:
Software, artificial intelligence, and machine learning can improve healthcare through innovative and advanced technologies that are able to use the large amount and variety of data generated during healthcare services every day. As we read the news, over 500 machine learning or other artificial intelligence medical devices have now received FDA clearance or approval, the first ones even preceding the year 2000. One of the big advantages of these new technologies is the ability to gain experience and knowledge from real-world use and to continuously improve their performance. Healthcare systems and institutions can benefit greatly because the use of advanced technologies improves both the efficiency and the efficacy of healthcare. Software defined as a medical device is stand-alone software intended to be used for one or more specific medical purposes: diagnosis, prevention, monitoring, prediction, prognosis, treatment or alleviation of a disease or other health conditions; replacing or modifying any part of a physiological or pathological process; or managing information received from in vitro specimens derived from the human body—without achieving its principal intended action by pharmacological, immunological or metabolic means. Software qualified as a medical device must comply with the general safety and performance requirements applicable to medical devices. These requirements are necessary to ensure high performance and quality and also to protect patients' safety. The evolution and continuous improvement of software used in healthcare must take into consideration the increase in regulatory requirements, which are becoming more complex in each market. The gap between these advanced technologies and the new regulations is the biggest challenge for medical device manufacturers. Regulatory requirements can be considered a market barrier, as they can delay or obstruct device approval, but they are necessary to ensure performance, quality, and safety; at the same time, they can be a business opportunity if the manufacturer is able to define the appropriate regulatory strategy in advance. The abstract provides an overview of the current regulatory framework, the evolution of international requirements, and the standards applicable to medical device software in potential markets all over the world.
Keywords: artificial intelligence, machine learning, SaMD, regulatory, clinical evaluation, classification, international requirements, MDR, 510k, PMA, IMDRF, cyber security, health care systems
231 The Textual Criticism on the Age of 'Wan Li' Shipwreck Porcelain and Its Comparison with 'Witte Leeuw' and Hatcher Shipwreck Porcelain
Authors: Yang Liu, Dongliang Lyu
Abstract:
After the Wanli shipwreck was discovered 60 miles off the east coast of Tanjong Jara in Malaysia, numerous marvelous ceramic shards were salvaged from the seabed. Remarkable pieces of Jingdezhen blue-and-white porcelain recovered from the site represent the essential part of this fascinating research. The porcelain cargo of the Wanli shipwreck is significant to studies of exported porcelains and of the Jingdezhen porcelain manufacturing industry of the late Ming dynasty. Using ceramic shard categorization and the study of Chinese and Western historical documents as a research strategy, the paper aims to shed new light on the classification of the Wanli shipwreck wares, with Jingdezhen kiln ceramics as its main focus. The article also discusses Jingdezhen blue-and-white porcelains from the perspective of domestic versus export markets, and proceeds to the systematization and analysis of the Wanli shipwreck porcelain, which bears witness to the forms, styles, and types of decoration that were being traded in this period. Porcelain data from two other shipwreck projects—Witte Leeuw and Hatcher—were chosen as comparative case studies, and the Wanli shipwreck Jingdezhen blue-and-white porcelain is reinterpreted in the context of the art history and archaeology of the region. The marine archaeologist Sten Sjostrand named the ship 'Wanli shipwreck' because its porcelain cargoes are typical of those made during the reign of the Wanli Emperor of the Ming dynasty. Though some scholars question the appropriateness of the name, the final verdict of history is still to be made. Building on previous historical argumentation, the article uses a comparative approach to review the Wanli shipwreck blue-and-white porcelains against porcelains unearthed from tombs or abandoned in towns and carrying time-specific reign marks. All these materials provide very strong evidence suggesting that the porcelain recovered from the Wanli ship can be dated to as early as the second year of the Tianqi era (1622) and the early Chongzhen reign. Lastly, some blue-and-white porcelain intended for the domestic market and some blue-and-white bowls from the Jingdezhen kilns recovered from the Wanli shipwreck all carry at the bottom a specific residue from the firing process. The author provides a corresponding analysis of these two interesting phenomena.
Keywords: blue-and-white porcelain, Ming dynasty, Jingdezhen kiln, Wanli shipwreck
230 Advanced Magnetic Field Mapping Utilizing Vertically Integrated Deployment Platforms
Authors: John E. Foley, Martin Miele, Raul Fonda, Jon Jacobson
Abstract:
This paper presents the development and implementation of new and innovative data collection and analysis methodologies based on the deployment of total-field magnetometer arrays. Our research has focused on the development of a vertically integrated suite of platforms, all utilizing common data acquisition, data processing and analysis tools. These survey platforms include low-altitude helicopters and ground-based vehicles, including robots, for terrestrial mapping applications. For marine settings, the sensor arrays are deployed from either a hydrodynamic bottom-following wing towed from a surface vessel or a towed floating platform for shallow-water settings. Additionally, sensor arrays are deployed from tethered remotely operated vehicles (ROVs) for underwater settings where high maneuverability is required. While the primary application of these systems is the detection and mapping of unexploded ordnance (UXO), these systems are also used for various infrastructure mapping and geologic investigations. For each application, success is driven by the integration of magnetometer arrays, accurate geo-positioning, system noise mitigation, and stable deployment of the system in appropriate proximity to expected targets or features. Each of the systems collects geo-registered data compatible with a web-enabled data management system, providing immediate access to data and metadata for remote processing, analysis and delivery of results. This approach allows highly sophisticated magnetic processing methods, including classification based on dipole modeling and remanent magnetization, to be efficiently applied to many projects. This paper also briefly describes the initial development of magnetometer-based detection systems deployed from low-altitude helicopter platforms and the subsequent successful transition of this technology to the marine environment. Additionally, we present examples from a range of terrestrial and marine settings as well as ongoing research efforts related to sensor miniaturization for unmanned aerial vehicle (UAV) magnetic field mapping applications.
Keywords: dipole modeling, magnetometer mapping systems, sub-surface infrastructure mapping, unexploded ordnance detection
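The dipole modeling mentioned above rests on the point-dipole field equation, B = (μ₀/4π)·(3(m·r̂)r̂ − m)/r³. A minimal Python sketch follows; the target moment and observation offset are invented for illustration:

```python
import numpy as np

MU0_OVER_4PI = 1e-7  # T*m/A

def dipole_field(m: np.ndarray, r: np.ndarray) -> np.ndarray:
    """Magnetic field (T) of a point dipole m (A*m^2) at offset r (m)."""
    rn = np.linalg.norm(r)
    r_hat = r / rn
    return MU0_OVER_4PI * (3 * np.dot(m, r_hat) * r_hat - m) / rn ** 3

# Invented target: 1 A*m^2 vertical dipole observed 2 m away along x
print(dipole_field(np.array([0.0, 0.0, 1.0]), np.array([2.0, 0.0, 0.0])))
```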
229 Equivalences and Contrasts in the Morphological Formation of Echo Words in Two Indo-Aryan Languages: Bengali and Odia
Authors: Subhanan Mandal, Bidisha Hore
Abstract:
The linguistic process whereby all or part of a base word is repeated, with or without internal change, before or after the base itself, is regarded as reduplication. The reduplicated morphological construction carries a new grammatical category and meaning. Reduplication is a very frequent and abundant phenomenon in the eastern Indian languages of the states of West Bengal and Odisha, i.e., Bengali and Odia respectively. Bengali, an Indo-Aryan language and a part of the Indo-European language family, is one of the most widely spoken languages in India and is the national language of Bangladesh. Despite this classification, Bengali shows certain influences in vocabulary and grammar due to its geographical proximity to Tibeto-Burman and Austro-Asiatic language-speaking communities. Bengali and Odia once belonged to a single linguistic branch, but with time and gradual linguistic changes due to various factors, Odia was the first to break away and develop as a separate, distinct language. However, fewer contrasts and more similarities still exist between these languages along linguistic lines, the script apart. This paper deals with the procedure of echo-word formation in Bengali and Odia. Morphological research on the two languages in the field of reduplication reveals several linguistic processes. The findings are based on information elicited from native speakers and on the analysis of echo words found in discourse and conversational patterns. For the analysis of partial reduplication, prefixed-class and suffixed-class word formations are considered, which show specific rule-based changes. For example, in the suffixed-class categorization, both consonant and vowel alterations are found, following the rules: i) CVx → tVx, ii) CVCV → CVCi (see the sketch below). Further classifications were also found in sentential studies of both languages, which revealed complexities of complete reduplication in forming echo words where the head word loses its original meaning. Complexities based on onomatopoetic/phonetic imitation of natural phenomena, not following any rule-based pattern, were also found. Taking into consideration these aspects, which are very prevalent in both languages, the study draws inferences that reveal many similarities between the two languages in this area in spite of their having branched away from each other long ago.
Keywords: consonant alteration, onomatopoetic, partial reduplication and complete reduplication, reduplication, vowel alteration
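The suffixed-class rules i) and ii) amount to simple string rewrites. The Python sketch below applies them to hypothetical transliterated bases; the example words, the shape test and the vowel inventory are assumptions for illustration only:

```python
VOWELS = set("aeiou")

def echo_suffixed(base: str) -> str:
    """Form a Bengali/Odia-style echo word: base + altered reduplicant.

    Rule i)  CVx  -> tVx : replace the initial consonant with 't'.
    Rule ii) CVCV -> CVCi: replace the final vowel with 'i'.
    """
    if len(base) == 4 and base[3] in VOWELS:  # rough CVCV shape test
        reduplicant = base[:3] + "i"          # CVCV -> CVCi
    else:                                     # treat as CVx shape
        reduplicant = "t" + base[1:]          # CVx -> tVx
    return f"{base}-{reduplicant}"

print(echo_suffixed("jala"))  # hypothetical CVCV base -> 'jala-jali'
print(echo_suffixed("bhat"))  # hypothetical CVx base  -> 'bhat-that'
```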
228 Implications of Measuring the Progress towards Financial Risk Protection Using Varied Survey Instruments: A Case Study of Ghana
Authors: Jemima C. A. Sumboh
Abstract:
Given the urgency of, and consensus for, countries to move towards Universal Health Coverage (UHC), health financing systems need to be accurately and consistently monitored to provide valuable data to inform policy and practice. Most of the indicators for monitoring UHC, particularly catastrophe and impoverishment, are established based on the impact of out-of-pocket health payments (OOPHP) on households' living standards, collected through varied household surveys. These surveys, however, vary substantially in survey methods, such as the length of the recall period, the number of items included in the survey questionnaire, or the framing of questions, potentially influencing the level of OOPHP. Using different survey instruments can produce inaccurate, inconsistent, erroneous and misleading estimates of UHC, subsequently leading to wrong policy decisions. Using data from a household budget survey conducted by the Navrongo Health Research Center in Ghana from May 2017 to December 2018, this study intends to explore the potential implications of using surveys with varied levels of disaggregation of OOPHP data on estimates of financial risk protection. The household budget survey, structured around food and non-food expenditure, compared three OOPHP measuring instruments: Version I (existing questions used to measure OOPHP in household budget surveys), Version II (new questions developed by benchmarking the existing Classification of Individual Consumption by Purpose (COICOP) OOPHP questions in household surveys) and Version III (existing questions used to measure OOPHP in health surveys, integrated into household budget surveys—for this, the Demographic and Health Survey (DHS) was used). Versions I, II and III contained 11, 44, and 56 health items, respectively. However, the choice of recall periods was held constant across versions. The sample sizes for Versions I, II and III were 930, 1032 and 1068 households, respectively. Financial risk protection will be measured based on the catastrophic and impoverishment methodologies using STATA 15 and ADePT software for each version. It is expected that findings from this study will contribute valuable knowledge on standardizing survey instruments to obtain estimates of financial risk protection that are valid and consistent.
Keywords: Ghana, household budget surveys, measuring financial risk protection, out-of-pocket health payments, survey instruments, universal health coverage
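The catastrophe indicator behind such estimates is conventionally the share of households whose OOPHP exceeds a threshold fraction of total (or capacity-to-pay) expenditure. A minimal Python sketch follows, with invented household records and an assumed 10% budget-share threshold (the study's own thresholds are not stated here):

```python
def catastrophic_headcount(households: list[dict], threshold: float = 0.10) -> float:
    """Share of households whose OOP health payments exceed a budget-share threshold."""
    flagged = [
        h for h in households
        if h["oop_health"] / h["total_expenditure"] > threshold
    ]
    return len(flagged) / len(households)

# Invented records; in the study, measured OOPHP levels differ by survey version
sample = [
    {"total_expenditure": 1200.0, "oop_health": 40.0},
    {"total_expenditure": 800.0, "oop_health": 150.0},
    {"total_expenditure": 950.0, "oop_health": 60.0},
]
print(catastrophic_headcount(sample))  # 1 of 3 households above 10% -> 0.333...
```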
227 Urban Heat Island Intensity Assessment through Comparative Study on Land Surface Temperature and Normalized Difference Vegetation Index: A Case Study of Chittagong, Bangladesh
Authors: Tausif A. Ishtiaque, Zarrin T. Tasin, Kazi S. Akter
Abstract:
The current trend of urban expansion, especially in developing countries, has caused significant changes in land cover, which is generating great concern due to widespread environmental degradation. Energy consumption of cities is also increasing with the aggravated heat island effect. The distribution of land surface temperature (LST) is one of the most significant climatic parameters affected by urban land cover change. The recent increasing trend of LST is elevating the temperature profile of built-up areas with less vegetative cover. Gradual change in land cover, especially the decrease in vegetative cover, is enhancing the Urban Heat Island (UHI) effect in developing cities around the world. An increase in the amount of urban vegetation cover can be a useful solution for reducing UHI intensity. LST and the Normalized Difference Vegetation Index (NDVI) have been widely accepted as reliable indicators of UHI and vegetation abundance, respectively. Chittagong, the second largest city of Bangladesh, has been a growth center due to rapid urbanization over the last several decades. This study assesses the intensity of UHI in Chittagong city by analyzing the relationship between LST and NDVI based on the type of land use/land cover (LULC) in the study area, applying an integrated approach of Geographic Information Systems (GIS), remote sensing (RS), and regression analysis. A land cover map was prepared through interactive supervised classification using remotely sensed data from a Landsat ETM+ image, along with NDVI differencing using ArcGIS. LST and NDVI values were extracted from the same image. The regression analysis between LST and NDVI indicates that, within the study area, UHI is directly correlated with LST and negatively correlated with NDVI. This means that surface temperature, and with it UHI intensity, decreases as vegetation cover increases. Moreover, there are noticeable differences in the LST-NDVI relationship depending on the type of LULC; in other words, depending on the type of land use, an increase in vegetation cover has a varying impact on UHI intensity. This analysis will contribute to the formulation of sustainable urban land use planning decisions as well as suggesting suitable actions for mitigating UHI intensity within the study area.
Keywords: land cover change, land surface temperature, normalized difference vegetation index, urban heat island
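The two indicators combine straightforwardly: NDVI = (NIR − Red)/(NIR + Red) per pixel, followed by a linear regression of LST on NDVI. A minimal Python sketch with invented pixel samples (the values are not the study's data):

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """NDVI = (NIR - Red) / (NIR + Red), the vegetation-abundance proxy."""
    return (nir - red) / (nir + red)

# Invented per-pixel samples of NDVI and LST (deg C) over a study area
ndvi_px = np.array([0.05, 0.15, 0.30, 0.45, 0.60, 0.72])
lst_px = np.array([34.1, 32.8, 31.0, 29.6, 27.9, 26.5])

slope, intercept = np.polyfit(ndvi_px, lst_px, 1)
r = np.corrcoef(ndvi_px, lst_px)[0, 1]
print(f"LST = {slope:.2f} * NDVI + {intercept:.2f}, r = {r:.3f}")
# A negative slope reproduces the reported inverse LST-NDVI relationship.
```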
226 Human Identification Using Local Roughness Patterns in Heartbeat Signal
Authors: Md. Khayrul Bashar, Md. Saiful Islam, Kimiko Yamashita, Yano Midori
Abstract:
Despite some progress in human authentication, conventional biometrics (e.g., facial features, fingerprints, retinal scans, gait, voice patterns) are not robust against falsification because they are neither confidential nor secret to an individual. As a non-invasive tool, the electrocardiogram (ECG) has recently shown great potential in human recognition due to its unique rhythms, which characterize the variability of human heart structures (chest geometry, sizes, and positions). Moreover, ECG has a real-time vitality characteristic that signifies live signs, ensuring that a legitimate individual is being identified. However, the detection accuracy of current ECG-based methods is not sufficient due to the high variability of an individual's heartbeats at different instances of time. These variations may occur due to muscle flexure, changes in mental or emotional state, and changes in sensor position or long-term baseline shift during the recording of the ECG signal. In this study, a new method is proposed for human identification which is based on the extraction of the local roughness of ECG heartbeat signals. First, the ECG signal is preprocessed using a second-order band-pass Butterworth filter with cut-off frequencies of 0.00025 and 0.04. A number of local binary patterns are then extracted by applying a moving neighborhood window along the ECG signal. At each instant of the ECG signal, the pattern is formed by comparing the ECG intensities at neighboring time points with the central intensity in the moving window. Then, binary weights are multiplied with the pattern to arrive at the local roughness description of the signal. Finally, histograms are constructed that describe the heartbeat signals of individual subjects in the database. One advantage of the proposed feature is that it does not depend on the accuracy of detecting the QRS complex, unlike conventional methods. Supervised recognition methods were then designed, using minimum-distance-to-mean and Bayesian classifiers, to identify authentic human subjects. An experiment with sixty (60) ECG signals from sixty adult subjects from the PTB database of the National Metrology Institute of Germany showed that the proposed new method is promising compared to a conventional interval- and amplitude-feature-based method.
Keywords: human identification, ECG biometrics, local roughness patterns, supervised classification
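The descriptor described above is a one-dimensional local-binary-pattern variant. A minimal Python sketch following that description; the window radius and the synthetic test signal are assumptions, not the paper's settings:

```python
import numpy as np

def local_roughness_histogram(ecg: np.ndarray, radius: int = 4) -> np.ndarray:
    """1-D local binary patterns along an ECG signal, summarized as a histogram.

    At each sample, the neighbors in a +/-radius window are compared with the
    central intensity; the resulting bits are combined with binary weights.
    """
    n_bits = 2 * radius
    codes = []
    for t in range(radius, len(ecg) - radius):
        neighbors = np.concatenate([ecg[t - radius:t], ecg[t + 1:t + radius + 1]])
        bits = (neighbors >= ecg[t]).astype(int)
        codes.append(int((bits * (1 << np.arange(n_bits))).sum()))
    hist, _ = np.histogram(codes, bins=2 ** n_bits, range=(0, 2 ** n_bits))
    return hist / hist.sum()  # normalized descriptor for a subject's heartbeat

rng = np.random.default_rng(0)
print(local_roughness_histogram(rng.normal(size=500)).shape)  # (256,)
```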
225 Investigation of the EEG Signal Parameters during Epileptic Seizure Phases in Consequence to the Application of External Healing Therapy on Subjects
Authors: Karan Sharma, Ajay Kumar
Abstract:
An epileptic seizure is a condition in which electrical charge in the brain flows abruptly, resulting in abnormal activity by the subject. One percent of the total world population suffers epileptic seizure attacks. Due to the abrupt flow of charge, the EEG (electroencephalogram) waveforms change: many spikes and sharp waves appear in the EEG signals. Detection of epileptic seizures by conventional methods is time-consuming, and many methods have been developed to detect them automatically. The initial part of this paper reviews the techniques used to detect epileptic seizures automatically. Automatic detection is based on feature extraction and classification patterns; for better accuracy, decomposition of the signal is required before feature extraction. A number of parameters are calculated by researchers using different techniques, e.g., approximate entropy, sample entropy, fuzzy approximate entropy, intrinsic mode functions, cross-correlation, etc., to discriminate between a normal signal and an epileptic seizure signal (see the sketch below for one such feature). The main objective of this review is to present the variations in the EEG signals at both stages: (i) interictal (recording between epileptic seizure attacks) and (ii) ictal (recording during the epileptic seizure), using the most appropriate methods of analysis to provide better healthcare diagnosis. The paper then investigates the effects of a noninvasive healing therapy on subjects by studying the EEG signals using the latest signal processing techniques. The study was conducted with Reiki as the healing technique, considered beneficial for restoring balance in cases of body-mind alterations associated with an epileptic seizure. Reiki is practiced around the world and is recommended in different health services as a treatment approach. Reiki is an energy medicine, specifically a biofield therapy developed in Japan in the early 20th century. It is a system involving the laying on of hands to stimulate the body's natural energetic system. Earlier studies have shown an apparent connection between Reiki and the autonomous nervous system. The Reiki sessions are applied by an experienced therapist. EEG signals are measured at baseline, during the session and post-intervention to bring about effective epileptic seizure control or its elimination altogether.
Keywords: EEG signal, Reiki, time consuming, epileptic seizure
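As an example of one such feature, sample entropy can be sketched in a few lines of Python. The parameter choices m = 2 and r = 0.2·std are common defaults assumed here for illustration, not values taken from the reviewed studies:

```python
import numpy as np

def sample_entropy(x: np.ndarray, m: int = 2, r_frac: float = 0.2) -> float:
    """Sample entropy -ln(A/B): a regularity feature; lower for rhythmic signals."""
    r = r_frac * x.std()
    def match_count(mm: int) -> float:
        # All length-mm templates and their pairwise Chebyshev distances
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        d = np.max(np.abs(templates[:, None] - templates[None, :]), axis=2)
        return (np.sum(d <= r) - len(templates)) / 2  # exclude self-matches
    return -np.log(match_count(m + 1) / match_count(m))

rng = np.random.default_rng(2)
print(sample_entropy(rng.normal(size=300)))  # higher for irregular signals
```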
224 Nondestructive Prediction and Classification of Gel Strength in Ethanol-Treated Kudzu Starch Gels Using Near-Infrared Spectroscopy
Authors: John-Nelson Ekumah, Selorm Yao-Say Solomon Adade, Mingming Zhong, Yufan Sun, Qiufang Liang, Muhammad Safiullah Virk, Xorlali Nunekpeku, Nana Adwoa Nkuma Johnson, Bridget Ama Kwadzokpui, Xiaofeng Ren
Abstract:
Enhancing starch gel strength and stability is crucial. However, traditional gel property assessment methods are destructive, time-consuming, and resource-intensive. Thus, understanding the effects of ethanol treatment on kudzu starch gel strength and developing a rapid, nondestructive gel strength assessment method is essential for optimizing the treatment process and ensuring product quality consistency. This study investigated the effects of different ethanol concentrations on the microstructure of kudzu starch gels using a comprehensive microstructural analysis. We also developed a nondestructive method for predicting gel strength and classifying treatment levels using near-infrared (NIR) spectroscopy and advanced data analytics. Scanning electron microscopy revealed progressive network densification and pore collapse with increasing ethanol concentration, correlating with enhanced mechanical properties. NIR spectroscopy, combined with various variable selection methods (CARS, GA, and UVE) and modeling algorithms (PLS, SVM, and ELM), was employed to develop predictive models for gel strength. The UVE-SVM model demonstrated exceptional performance, with the highest R² values (Rc = 0.9786, Rp = 0.9688) and lowest error rates (RMSEC = 6.1340, RMSEP = 6.0283). Pattern recognition algorithms (PCA, LDA, and KNN) successfully classified gels based on ethanol treatment levels, achieving near-perfect accuracy. This integrated approach provided a multiscale perspective on ethanol-induced starch gel modification, from molecular interactions to macroscopic properties. Our findings demonstrate the potential of NIR spectroscopy, coupled with advanced data analysis, as a powerful tool for rapid, nondestructive quality assessment in starch gel production. This study contributes significantly to the understanding of starch modification processes and opens new avenues for research and industrial applications in food science, pharmaceuticals, and biomaterials.
Keywords: kudzu starch gel, near-infrared spectroscopy, gel strength prediction, support vector machine, pattern recognition algorithms, ethanol treatment
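The reported regression-and-validation pattern (SVM regression scored with RMSEC/RMSEP-style metrics on calibration and prediction sets) can be sketched on synthetic stand-in spectra. This Python sketch illustrates the workflow only, not the UVE variable selection or the published model and its hyperparameters:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 200))                     # stand-in NIR absorbance spectra
y = 3 * X[:, 50] + rng.normal(scale=0.1, size=60)  # stand-in gel-strength values

# Calibration on the first 40 samples, prediction on the held-out 20
model = SVR(kernel="rbf", C=10.0).fit(X[:40], y[:40])
pred = model.predict(X[40:])
print(f"Rp^2 = {r2_score(y[40:], pred):.3f}, "
      f"RMSEP = {mean_squared_error(y[40:], pred) ** 0.5:.3f}")
```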
223 Towards End-To-End Disease Prediction from Raw Metagenomic Data
Authors: Maxence Queyrel, Edi Prifti, Alexandre Templier, Jean-Daniel Zucker
Abstract:
Analysis of the human microbiome using metagenomic sequencing data has demonstrated high ability in discriminating various human diseases. Raw metagenomic sequencing data require multiple complex and computationally heavy bioinformatics steps prior to data analysis. Such data contain millions of short sequence reads from the fragmented DNA sequences, stored as fastq files. Conventional processing pipelines consist of multiple steps, including quality control, filtering, and alignment of sequences against genomic catalogs (genes, species, taxonomic levels, functional pathways, etc.). These pipelines are complex to use and time-consuming, and they rely on a large number of parameters that often introduce variability and impact the estimation of the microbiome elements. Training deep neural networks directly on raw sequencing data is a promising approach to bypass some of the challenges associated with mainstream bioinformatics pipelines. Most of these methods use the concept of word and sentence embeddings, which create a meaningful numerical representation of DNA sequences while extracting features and reducing the dimensionality of the data. In this paper we present an end-to-end approach that classifies patients into disease groups directly from raw metagenomic reads: metagenome2vec. This approach is composed of four steps: (i) generating a vocabulary of k-mers and learning their numerical embeddings; (ii) learning DNA sequence (read) embeddings; (iii) identifying the genome from which the sequence is most likely to come; and (iv) training a multiple instance learning classifier which predicts the phenotype based on the vector representation of the raw data. An attention mechanism is applied in the network so that the model can be interpreted, assigning a weight to the influence on the prediction of each genome. Using two public real-life datasets as well as a simulated one, we demonstrated that this original approach reaches performance comparable with state-of-the-art methods applied directly to data processed through mainstream bioinformatics workflows. These results are encouraging for this proof-of-concept work. We believe that, with further dedication, DNN models have the potential to surpass mainstream bioinformatics workflows in disease classification tasks.
Keywords: deep learning, disease prediction, end-to-end machine learning, metagenomics, multiple instance learning, precision medicine
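Step (i), building the k-mer vocabulary from raw reads, can be illustrated in a few lines of Python. The toy reads and k = 4 are assumptions; the embeddings themselves would then be learned over this vocabulary (e.g., word2vec-style):

```python
from collections import Counter

def kmer_tokens(read: str, k: int = 4) -> list[str]:
    """Split a DNA read into overlapping k-mers, the 'words' of the model."""
    return [read[i:i + k] for i in range(len(read) - k + 1)]

# Toy fastq-like reads; real inputs are millions of short fragmented sequences
reads = ["ACGTACGGAC", "TTACGTACCA"]
vocab = Counter(tok for read in reads for tok in kmer_tokens(read))
print(vocab.most_common(3))  # k-mer counts feeding the embedding step
```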
222 Comparison of Two Strategies in Thoracoscopic Ablation of Atrial Fibrillation
Authors: Alexander Zotov, Ilkin Osmanov, Emil Sakharov, Oleg Shelest, Aleksander Troitskiy, Robert Khabazov
Abstract:
Objective: Thoracoscopic surgical ablation of atrial fibrillation (AF) can be performed with two technologies. The first strategy uses the AtriCure device (bipolar, non-irrigated, non-clamping); the second uses the Medtronic device (bipolar, irrigated, clamping). The study presents a comparative analysis of the clinical outcomes of the two strategies in thoracoscopic ablation of AF using the AtriCure vs. the Medtronic device. Methods: In this two-center study, 123 patients underwent thoracoscopic ablation of AF in the period from 2016 to 2020. Patients were divided into two groups: the first group comprised patients treated with the AtriCure device (N=63) and the second group patients treated with the Medtronic device (N=60). Patients were comparable in age, gender, and initial severity of their condition. Group 1 was 65% male with a median age of 57 years, while group 2 was 75% male with a median age of 60 years. Group 1 included patients with paroxysmal AF (14.3%), persistent AF (68.3%) and long-standing persistent AF (17.5%); in group 2, the figures were 13.3%, 13.3% and 73.3%, respectively. Median ejection fraction and indexed left atrial volume amounted to 63% and 40.6 ml/m² in group 1, and 56% and 40.5 ml/m² in group 2. In addition, group 1 consisted of 39.7% patients with chronic heart failure (NYHA class II) and 4.8% with chronic heart failure (NYHA class III), versus 45% and 6.7% in group 2, respectively. Follow-up consisted of laboratory tests, chest X-ray, ECG, 24-hour Holter monitoring, and cardiopulmonary exercise testing. Duration of freedom from AF, the distant mortality rate, and the prevalence of cerebrovascular events were compared between the two groups. Results: Exit block was achieved in all patients. According to the Clavien-Dindo classification of surgical complications, the fraction of adverse events was 14.3% in the first group and 16.7% in the second. The mean follow-up period in the first group was 50.4 (31.8; 64.8) months, and in the second group 30.5 (14.1; 37.5) months (P=0.0001). In group 1, total freedom from AF was achieved in 73.3% of patients, of whom 25% had additional antiarrhythmic drug (AAD) therapy or catheter ablation (CA); in group 2 the figures were 90% and 18.3%, respectively (for total freedom from AF, P<0.02). At follow-up, the distant mortality rate in the first group was 4.8%, with no fatal events in the second group. The prevalence of cerebrovascular events was higher in the first group than in the second (6.7% vs. 1.7%, respectively). Conclusions: Despite the relatively shorter follow-up of the second group in the study, the strategy using the Medtronic device showed quite encouraging results. Further research is needed to evaluate the effectiveness of this strategy in the long-term period.
Keywords: atrial fibrillation, clamping, ablation, thoracoscopic surgery
221 Correlation Between the Toxicity Grade of the Adverse Effects in the Course of the Immunotherapy of Lung Cancer and Efficiency of the Treatment in Anti-PD-L1 and Anti-PD-1 Drugs - Own Clinical Experience
Authors: Anna Rudzińska, Katarzyna Szklener, Pola Juchaniuk, Anna Rodzajweska, Katarzyna Machulska-Ciuraj, Monika Rychlik-Grabowska, Michał Łoziński, Agnieszka Kolak-Bruks, Sławomir Mańdziuk
Abstract:
Introduction: Immune checkpoint inhibition (ICI) belongs to the modern forms of anti-cancer treatment. Due to constant development and continuous research in the field of ICI, many aspects of the treatment are yet to be discovered. One of the less researched aspects of ICI treatment is the influence of adverse effects on the treatment success rate. It is suspected that adverse events in the course of ICI treatment indicate a better response rate and correlate with longer progression-free survival. Methodology: The research was conducted using the documentation of the Department of Clinical Oncology and Chemotherapy. Data of patients with a lung cancer diagnosis who were treated between 2019 and 2022 and received ICI treatment were analyzed. Results: Of the over 133 patients whose data were analyzed, the vast majority were diagnosed with non-small cell lung cancer. The majority of the patients did not experience adverse effects. Most adverse effects reported were classified as grade 1 or grade 2 according to the CTCAE classification, and most involved skin, thyroid or liver toxicity. Statistical significance was found between adverse-effect incidence and overall survival (OS) and progression-free survival (PFS) (p=0.0263), and between the time of toxicity onset and OS and PFS (p<0.001). The number of toxicity sites was statistically significant for prolonged PFS (p=0.0315). The highest OS was noted in the group presenting grade 1 and grade 2 adverse effects. Conclusions: The obtained results confirm prolonged OS and PFS in patients with adverse effects, mostly in the group presenting mild to intermediate (grade 1 and grade 2) adverse effects and late toxicity onset. At the same time, our results suggest a correlation between the treatment response rate and both the toxicity grade of the adverse effects and the time of toxicity onset. Similar results were obtained in several comparable studies, with a proven tendency toward better survival in mild and moderate toxicity, while other studies in the area suggested an advantage in patients with any toxicity regardless of grade. These contradictory results strongly suggest the need for further research on this topic, with a focus on additional factors influencing the course of the treatment.
Keywords: adverse effects, immunotherapy, lung cancer, PD-1/PD-L1 inhibitors
220 Segmented Pupil Phasing with Deep Learning
Authors: Dumont Maxime, Correia Carlos, Sauvage Jean-François, Schwartz Noah, Gray Morgan
Abstract:
Context: The concept of the segmented telescope is unavoidable for building extremely large telescopes (ELTs) in the quest for spatial resolution, but it also allows a large telescope to fit within a reduced volume of space (JWST) or into an even smaller volume (standard CubeSat). CubeSats have tight constraints on the available computational burden and on the allowed payload volume. At the same time, they undergo thermal gradients leading to large and evolving optical aberrations. Pupil segmentation nevertheless comes with an obvious difficulty: co-phasing the different segments. The CubeSat constraints prevent the use of a dedicated wavefront sensor (WFS), making the focal-plane images acquired by the science detector the most practical alternative. Yet one of the challenges of wavefront sensing is the non-linearity between image intensity and phase aberrations. Moreover, for Earth observation, the object is unknown and unrepeatable. Recently, several studies have suggested neural networks (NNs) for wavefront sensing, especially convolutional NNs, which are well known for being non-linear and image-friendly problem solvers. Aims: We study in this paper the prospect of using NNs to measure the phasing aberrations of a segmented pupil directly from the focal-plane image, without dedicated wavefront sensing. Methods: As our application, we take the case of a deployable telescope fitting in a CubeSat for Earth observation, which triples the aperture size (compared to the 10 cm CubeSat standard) and therefore triples the angular resolution capacity. In order to reach the diffraction-limited regime at visible wavelengths, a wavefront error below lambda/50 is typically required. The telescope's focal-plane detector, used for imaging, will also serve as the wavefront sensor. In this work, we study a point source, i.e., the point spread function (PSF) of the optical system, as the input of a VGG-net neural network, an architecture designed for image regression/classification. Results: This approach shows promising results (about 2 nm RMS of residual WFE, which is below lambda/50, for 40-100 nm RMS of input WFE) with a relatively fast computation time of less than 30 ms, which translates into a small computational burden. These results motivate further study for higher aberrations and noise.
Keywords: wavefront sensing, deep learning, deployable telescope, space telescope
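A toy stand-in for the approach, a small convolutional regressor mapping a focal-plane PSF image to per-segment phasing errors, can be sketched in Python with PyTorch. The layer sizes, segment count and input resolution are invented; this is not the authors' VGG-net:

```python
import torch
import torch.nn as nn

# Minimal VGG-style regressor: focal-plane PSF image -> per-segment piston errors
n_segments = 6
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 128), nn.ReLU(),
    nn.Linear(128, n_segments),  # regression head: one phasing error per segment
)

psf = torch.randn(8, 1, 64, 64)  # batch of simulated 64x64 PSFs (stand-ins)
print(model(psf).shape)          # torch.Size([8, 6])
```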
219 The Extension of the Kano Model by the Concept of Over-Service
Authors: Lou-Hon Sun, Yu-Ming Chiu, Chen-Wei Tao, Chia-Yun Tsai
Abstract:
It is common practice for many companies to ask employees to provide heart-touching service for customers and to emphasize the attitude of 'customer first'. However, services may not necessarily gain praise, and may actually be considered excessive, if customers do not appreciate such behaviors. In reality, many restaurant businesses try to provide as much service as possible without considering whether over-provision may lead to negative customer reception. A survey of 894 people in Britain revealed that 49 percent of respondents consider over-attentive waiters the most annoying aspect of dining out. It can be seen that merely aiming to exceed customers' expectations, without actually addressing their needs, only further distances the standard of service from the goal of customer satisfaction itself. Over-service is defined as 'service provided that exceeds customer expectations, or simply that customers deem redundant, resulting in negative perception'. It was found that customers' reactions and complaints concerning over-service are not as intense as those against service failures caused by an inability to meet expectations; consequently, it is more difficult for managers to become aware of the existence of over-service. Thus the ability to manage over-service behaviors is a significant topic for consideration. The Kano model classifies customer preferences into five categories: attractive quality attributes, one-dimensional quality attributes, must-be quality attributes, indifferent quality attributes and reverse quality attributes. The model is still very popular among researchers exploring quality aspects and customer satisfaction. Nevertheless, several studies have indicated that Kano's model cannot fully capture the nature of service quality. The concept of over-service can be used to restructure the model and provide a better understanding of the service quality construct. In this research, the structure of Kano's two-dimensional questionnaire will be used to classify factors into the different dimensions (a sketch of this classification step follows below). The same questions will be used in a second questionnaire to identify the over-service experiences of the respondents. The findings of these two questionnaires will be used to analyze the relationship between service quality classification and over-service behaviors. The subjects of this research are customers of fine-dining chain restaurants. Three hundred questionnaires will be issued based on stratified random sampling. Items for measurement will be derived from the DINESERV scale; the tangible dimension of the questionnaire will be eliminated because this research focuses on employee behaviors. Quality attributes of the Kano model are often regarded as an instrument for improving customer satisfaction, and the extension of the model will not only develop a better understanding of customer needs and expectations but also enhance the management of service quality.
Keywords: consumer satisfaction, DINESERV, kano model, over-service
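The classification step of Kano's two-dimensional questionnaire pairs a functional question ("how do you feel if the service is provided?") with a dysfunctional one ("...if it is not?") and maps the answer pair to an attribute category via the conventional evaluation table. A Python sketch with an abbreviated table (the full table covers all 25 answer pairs):

```python
# Answer scale: 1 = like, 2 = must-be, 3 = neutral, 4 = live-with, 5 = dislike
KANO_TABLE = {
    (1, 5): "one-dimensional",
    (1, 2): "attractive", (1, 3): "attractive", (1, 4): "attractive",
    (2, 5): "must-be", (3, 5): "must-be", (4, 5): "must-be",
    (5, 1): "reverse",
    (1, 1): "questionable", (5, 5): "questionable",
}

def kano_class(functional: int, dysfunctional: int) -> str:
    """Map a (functional, dysfunctional) answer pair to a Kano category."""
    return KANO_TABLE.get((functional, dysfunctional), "indifferent")

print(kano_class(1, 5))  # 'one-dimensional'
print(kano_class(3, 3))  # 'indifferent'
```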
218 Design and Evaluation of a Prototype for Non-Invasive Screening of Diabetes – Skin Impedance Technique
Authors: Pavana Basavakumar, Devadas Bhat
Abstract:
Diabetes is a disease which often goes undiagnosed until its secondary effects are noticed. Early detection of the disease is necessary to avoid serious consequences which could lead to the death of the patient. Conventional invasive tests for screening of diabetes are mostly painful, time-consuming and expensive, and there is also a risk of infection involved; it is therefore essential to develop non-invasive methods to screen for diabetes and estimate the level of blood glucose. Extensive research is going on with this perspective, involving various techniques that explore optical, electrical, chemical and thermal properties of the human body that directly or indirectly depend on the blood glucose concentration. Thus, non-invasive blood glucose monitoring has grown into a vast field of research. In this project, an attempt was made to devise a prototype for screening of diabetes by measuring the electrical impedance of the skin and building a model to predict a patient's condition based on the measured impedance. The prototype developed passes a negligible constant current (0.5 mA) across a subject's index finger through tetrapolar silver electrodes and measures the output voltage across a wide range of frequencies (10 kHz - 4 MHz). The measured voltage is proportional to the impedance of the skin. The impedance was acquired in real time for further analysis. The study was conducted on over 75 subjects, with permission from the institutional ethics committee; along with impedance, the subjects' blood glucose values were also noted using the conventional method. Nonlinear regression analysis was performed on the features extracted from the impedance data to obtain a model that predicts blood glucose values for a given set of features. When the predicted data were depicted on Clarke's error grid, only 58% of the predicted values were clinically acceptable. Since the objective of the project was to screen for diabetes and not to estimate actual blood glucose, the data were classified into three classes, 'NORMAL FASTING', 'NORMAL POSTPRANDIAL' and 'HIGH', using a linear Support Vector Machine (SVM); the classification accuracy obtained was 91.4%. The developed prototype is economical, fast and pain-free, and can thus be used for mass screening of diabetes.
Keywords: Clarke's error grid, electrical impedance of skin, linear SVM, nonlinear regression, non-invasive blood glucose monitoring, screening device for diabetes
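The measurement-and-classification chain reduces to Z = V/I per frequency point followed by a linear SVM over the impedance features. A minimal Python sketch with invented voltage readings and labels (not data from the prototype):

```python
import numpy as np
from sklearn.svm import SVC

I_RMS = 0.5e-3  # constant excitation current (A), as in the prototype

# Invented measured voltages (V) at three frequency points for 4 subjects
voltages = np.array([[0.42, 0.39, 0.33], [0.55, 0.50, 0.41],
                     [0.44, 0.40, 0.34], [0.57, 0.52, 0.43]])
impedance = voltages / I_RMS  # Z = V / I (ohms) at each frequency point

labels = ["NORMAL FASTING", "HIGH", "NORMAL FASTING", "HIGH"]
clf = SVC(kernel="linear").fit(impedance, labels)
print(clf.predict(impedance[:1]))  # sanity check on a training sample
```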
217 A Comparative Analysis on Survival in Patients with Node Positive Cutaneous Head and Neck Squamous Cell Carcinoma as per TNM 7th and TNM 8th Editions
Authors: Petr Daniel Edward Kovarik, Malcolm Jackson, Charles Kelly, Rahul Patil, Shahid Iqbal
Abstract:
Introduction: Recognition of the presence of extracapsular spread (ECS) has been a major change in the TNM 8th edition published by the American Joint Committee on Cancer in 2018. Irrespective of the size or number of lymph nodes, the presence of ECS makes the disease N3b and therefore stage IV. The objective of this retrospective observational study was to conduct a comparative analysis of survival outcomes in patients with lymph-node-positive cutaneous head and neck squamous cell carcinoma (CHNSCC) based on their TNM 7th and TNM 8th edition classifications. Materials and Methods: From January 2010 to December 2020, 71 patients with CHNSCC who were treated with radical surgery and adjuvant radiotherapy were identified from our centre's database. All histopathological reports were reviewed, and comprehensive nodal mapping was performed. The data were collected retrospectively, and survival outcomes were compared using the TNM 7th and 8th editions. Results: The median age of the whole group of 71 patients was 78 years (range 54-94 years); 63 were male and 8 female. In total, 2246 lymph nodes were analysed, of which 195 were positive for cancer. ECS was present in 130 lymph nodes, which led to a change in TNM staging. The details of the N-stage as per the TNM 7th edition were as follows: pN1 = 23, pN2a = 14, pN2b = 32, pN2c = 0, pN3 = 2. After incorporating the TNM 8th edition criterion (presence of ECS), the details of the N-stage were as follows: pN1 = 6, pN2a = 5, pN2b = 3, pN2c = 0, pN3a = 0, pN3b = 57. This showed an increase in overall stage: according to the TNM 7th edition, 23 patients were stage III and the remaining 48 stage IV, whereas as per the TNM 8th edition, only 6 patients were stage III compared to 65 with stage IV. For all patients, the 2-year disease-specific survival (DSS) and overall survival (OS) rates were 70% and 46%, and the 5-year DSS and OS rates were 66% and 20%, respectively. Comparing survival between stage III and stage IV in the two cohorts using the TNM 7th and 8th editions, there is an obviously greater survival difference between the stages when TNM 8th staging is used. However, meaningful statistics were not possible, as the majority of patients (n = 65) were stage IV and only 6 patients were stage III in the TNM 8th cohort. Conclusion: Our study provides a comprehensive analysis of lymph node mapping data in this specific patient population. It shows a better differentiation between stage III and stage IV in the TNM 8th edition than in the TNM 7th; however, meaningful statistics were not possible due to the imbalance of patients in the sub-cohorts of the groups.
Keywords: cutaneous head and neck squamous cell carcinoma, extra capsular spread, neck lymphadenopathy, TNM 7th and 8th editions
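The TNM 8th edition reclassification applied here reduces to one rule: ECS in any positive node forces pN3b and hence stage IV, regardless of node size or count. A simplified Python sketch of that restaging step; the record field names and the size cut-offs in the non-ECS branches are simplified assumptions, not the full staging rules (laterality and the >6 cm pN3a branch are omitted):

```python
def n_stage_tnm8(nodes: list[dict]) -> str:
    """Simplified N-staging under TNM 8: ECS in any node forces pN3b."""
    positive = [n for n in nodes if n["positive"]]
    if any(n["ecs"] for n in positive):
        return "pN3b"  # -> overall stage IV, irrespective of size or number
    if len(positive) == 1 and positive[0]["size_cm"] <= 3:
        return "pN1"
    if len(positive) == 1 and positive[0]["size_cm"] <= 6:
        return "pN2a"
    return "pN2b" if len(positive) > 1 else "pN0"

nodes = [{"positive": True, "ecs": True, "size_cm": 1.2}]
print(n_stage_tnm8(nodes))  # 'pN3b' despite a single small node
```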
Procedia PDF Downloads 107216 The Study of Intangible Assets at Various Firm States
Authors: Gulnara Galeeva, Yulia Kasperskaya
Abstract:
The study deals with a relevant problem: the formation of an efficient investment portfolio for an enterprise. The structure of the investment portfolio is connected to the degree of influence of intangible assets on the enterprise's income, which determines the importance of research on the content of intangible assets. However, studies of intangible assets do not take into consideration how the state of the enterprise can affect the content and the importance of intangible assets for the enterprise's income, and this affects the accuracy of the calculations. In order to study this problem, the research was divided into several stages. In the first stage, intangible assets were classified based on their synergies as underlying intangibles and additional intangibles. In the second stage, this classification was applied. It showed that the lifecycle model and the theory of abrupt development of the enterprise, which are taken into account while designing investment projects, constitute limit cases of a more general theory of bifurcations. The research identified that the qualitative content of intangible assets significantly depends on how close the enterprise is to crisis. In the third stage, the author developed and applied the Wide Pairwise Comparison Matrix method. This made it possible to establish that the ratio of the standard deviation to the mean value of the elements of the priority vector of intangible assets can be used to estimate the probability of a full-blown crisis of the enterprise. The author has identified a criterion which allows making fundamental decisions on investment feasibility. The study also developed an additional rapid method of assessing the enterprise's overall status, based on a questionnaire survey with its director consisting of only two questions. The research specifically focused on the fundamental role of stochastic resonance in the emergence of bifurcation (crisis) in the economic development of the enterprise. The synergetic approach made it possible to describe the mechanism of the onset of crisis in detail and to identify a range of universal ways of overcoming the crisis. It was outlined that the structure of intangible assets transforms into a more organized state, with strengthened synchronization of all processes, as a result of the impact of sporadic (white) noise. The obtained results offer managers and business owners a simple and affordable method of investment portfolio optimization which takes into account how close the enterprise is to a state of full-blown crisis. Keywords: analytic hierarchy process, bifurcation, investment portfolio, intangible assets, wide matrix
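A minimal sketch of the crisis indicator described above, assuming the priority vector is obtained AHP-style as the principal eigenvector of a pairwise comparison matrix; the paper's "Wide Pairwise Comparison Matrix" is its own extension, so the matrix and asset groups below are purely illustrative.

```python
# AHP-style priority vector and its std/mean ratio as a crisis signal.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],    # hypothetical pairwise judgements of
              [1/3, 1.0, 2.0],    # three intangible-asset groups
              [1/5, 1/2, 1.0]])

vals, vecs = np.linalg.eig(A)
w = np.real(vecs[:, np.argmax(np.real(vals))])
w = w / w.sum()                   # priority vector, normalized to sum 1

cv = w.std() / w.mean()           # ratio used to gauge crisis probability
print("priorities:", np.round(w, 3), "cv:", round(float(cv), 3))
```

A larger spread of priorities (higher cv) would, under the paper's criterion, indicate an enterprise closer to a full-blown crisis.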
Procedia PDF Downloads 208215 Petrology and Petrochemistry of Basement Rocks in Ila Orangun Area, Southwestern Nigeria
Authors: Jayeola A. O., Ayodele O. S., Olususi J. I.
Abstract:
From field studies, six (6) lithological units were identified as common in the study area: quartzites, granites, granite gneiss, porphyritic granites, amphibolite and pegmatites. Petrographic analysis was carried out to establish the major mineral assemblages and accessory minerals present in selected rock samples representing the major rock types in the area. For the purpose of this study, twenty (20) pulverized rock samples were taken to the laboratory for geochemical analysis, with the results used for classification as well as to suggest the geochemical attributes of the rocks. Petrographic study of the rocks under both plane- and cross-polarized light revealed the major minerals in thin section to include quartz, feldspar, biotite, hornblende, plagioclase and muscovite, with opaque and other accessory minerals including actinolite, spinel and myrmekite. The geochemical results, interpreted using various discrimination plots, all classified the rocks in the area as belonging to both the peralkaline-metaluminous and peraluminous types. The major oxide ratios Na₂O/K₂O, Al₂O₃/(Na₂O + CaO + K₂O) and (Na₂O + CaO + K₂O)/Al₂O₃ show that an excess of alumina (Al₂O₃) over the alkalis (Na₂O + CaO + K₂O) suggests peraluminous rocks, while an excess of the alkalis over the alumina suggests the peralkaline-metaluminous rock type. Strong positive correlation coefficients indicate that the elements concerned are of the same geogenic source, while weak negative correlation coefficients suggest heterogeneous geogenic sources. From factor analysis, five component groups were identified: Group I consists of the Ag-Cr-Ni elemental association, suggesting Ag, Cr and Ni mineralization and predicting the possibility of sulphide mineralization in the study area; Groups II and III consist of the As-Ni-Hg-Fe-Sn-Co-Pb-Hg element association, which are pathfinder elements for gold mineralization; Groups IV and V consist of Cd-Cu-Ag-Co-Zn, whose concentrations point to significant elemental associations and mineralization. In conclusion, from the potassium radiometric anomaly map produced, the eastern section (northeastern and southeastern) is observed to be the hot spot and mineralization zone of the study area. Keywords: petrography, Ila Orangun, petrochemistry, pegmatites, peraluminous
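A minimal sketch of the alumina-saturation classification implied by the oxide ratios above (the standard Shand indices): molar A/CNK > 1 is peraluminous, A/CNK < 1 with A/NK > 1 is metaluminous, and A/NK < 1 is peralkaline. The oxide values in wt% are illustrative, not the study's data.

```python
# Classify a whole-rock analysis by Shand's alumina-saturation indices.
MOLAR_MASS = {"Al2O3": 101.96, "CaO": 56.08, "Na2O": 61.98, "K2O": 94.20}

def moles(oxide_wt: dict) -> dict:
    return {ox: wt / MOLAR_MASS[ox] for ox, wt in oxide_wt.items()}

def shand_class(oxide_wt: dict) -> str:
    m = moles(oxide_wt)
    a_cnk = m["Al2O3"] / (m["CaO"] + m["Na2O"] + m["K2O"])
    a_nk = m["Al2O3"] / (m["Na2O"] + m["K2O"])
    if a_nk < 1:
        return "peralkaline"
    return "peraluminous" if a_cnk > 1 else "metaluminous"

sample = {"Al2O3": 14.5, "CaO": 1.8, "Na2O": 3.4, "K2O": 4.6}  # wt%, invented
print(shand_class(sample))
```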
Procedia PDF Downloads 63214 Vulnerability of the Rural Self-Constructed Housing with Social Programs and His Economic Impact in the South-East of Mexico
Authors: Castillo-Acevedo J, Mena-Rivero R, Silva-Poot H
Abstract:
In Mexico, as in much of the developing world, rural housing is an object of study, since the diversity of construction practices across localities involves various factors that make it vulnerable; an important aspect of study is the progressive deterioration seen in rural housing. Various social programs contribute financial resources in the field of housing to support families living in rural areas; however, they provide no coordination with the self-construction by which housing in these areas is usually built. The present study presents the physical condition and an economic assessment of rural self-constructed housing in three rural communities in the south of the state of Quintana Roo, Mexico, built with funding from federal social programs. Information was compiled over a period of seven months using intentional sampling of typical cases, the objects of study being dwellings constructed with support from the "Rural Housing" program between 2009 and 2014. The instruments used were interviews, observation forms, technical verification forms and various laboratory measuring equipment for the classification of pathologies; Mexican standards such as NMX-C-192-ONNCCE, NMX-C-111-ONNCCE and NMX-C-404-ONNCCE were applied to determine some construction pathologies, and the software Opus CMS®, together with tables of the National Consumer Price Index (CPI), was used to update costs and wages as applied in Mexico for the economic valuation. The results show 11 different construction pathologies, the most prevalent being segregation of the concrete at 22.50%; the economic assessment shows that 80% of the self-constructed dwellings exceeded the construction cost of a similar dwelling built by a construction company. It is also shown that 46.10% of the universe of study represents economic losses in materials for the social programs through houses not built. The system of self-construction used by the social programs thus undermines, to some extent, the objectives of programs applied in underserved areas, since implicit and additional costs strain the economic capacity of beneficiaries, who invest time and effort in an activity in which they are not specialists. This research provides foundations for sustainable alternatives to, or possibly the elimination of, the practice of self-construction in social programs implemented in marginalized rural communities in the south of the state of Quintana Roo, Mexico. Keywords: economic valuation, construction pathologies, rural housing, social programs
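A minimal sketch of the cost-updating step: construction costs observed in different years are brought to a common valuation date with the CPI, i.e. cost_now = cost_then × CPI_now / CPI_then. The index values and the cost figure are placeholders, not the official Mexican CPI series.

```python
# Bring a historical construction cost to a later valuation year via CPI.
CPI = {2009: 95.5, 2014: 112.8, 2015: 116.1}  # hypothetical index values

def update_cost(cost: float, year_built: int, year_valued: int) -> float:
    return cost * CPI[year_valued] / CPI[year_built]

print(round(update_cost(48_000.0, 2009, 2015), 2))  # MXN, illustrative
```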
Procedia PDF Downloads 532213 An EEG-Based Scale for Comatose Patients' Vigilance State
Authors: Bechir Hbibi, Lamine Mili
Abstract:
Understanding the condition of comatose patients can be difficult, but it is crucial to their optimal treatment. Consequently, numerous scoring systems have been developed around the world to categorize patient states based on physiological assessments. Although validated and widely adopted by medical communities, these scores still present numerous limitations and obstacles; even with supplementary tests and extensions, they have not been able to overcome certain limitations, and it appears unlikely that they will be able to do so in the future. On the other hand, physiological tests are not the only way to gain insight into the state of comatose patients. EEG signal analysis has contributed extensively to the understanding of the human brain and human consciousness and has been used by researchers in the classification of different levels of disease. The use of EEG in the ICU has become urgent in several cases and has been recommended by medical organizations. In this field, the EEG is used to investigate epilepsy, dementia, brain injuries and many other neurological disorders. It has recently also been used to detect pain activity in some regions of the brain, to detect stress levels and to evaluate sleep quality. In our recent work, the aim was to use multifractal analysis, a very successful method for handling multifractal signals and extracting their features, to establish a state-of-awareness scale for comatose patients based on their electrical brain activity. The results show that such a score could be computed instantaneously and could overcome many of the limitations from which the physiological scales suffer. Indeed, multifractal analysis stands out as a highly effective tool for characterizing non-stationary and self-similar signals, and it performs strongly in extracting the properties of fractal and multifractal data, including signals and images. As such, we leverage this method, along with other features derived from EEG recordings of comatose patients, to develop a scale that aims to accurately depict the vigilance state of patients in intensive care units and to address many of the limitations inherent in physiological scales such as the Glasgow Coma Scale (GCS) and the FOUR score. Applying version V0 of this approach to 30 patients with known GCS showed that the EEG-based score describes the states of vigilance similarly, but also distinguishes between the states of 8 sedated patients to whom the GCS could not be applied. Our approach could therefore show promising results for patients with disabilities, patients under analgesics, and other categories to whom physiological scores cannot be applied. Keywords: coma, vigilance state, EEG, multifractal analysis, feature extraction
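A compact sketch of multifractal detrended fluctuation analysis (MFDFA), one standard way to obtain the multifractal features the abstract relies on. It returns the generalized Hurst exponents h(q); their spread across q is a common scalar measure of multifractality. The EEG vector here is synthetic noise standing in for a real channel, and the scale/q choices are assumptions.

```python
# MFDFA: profile -> segment-wise detrended variances -> h(q) slopes.
import numpy as np

def mfdfa_hq(x, scales, qs, order=1):
    y = np.cumsum(x - np.mean(x))                  # profile of the signal
    hq = []
    for q in qs:
        logF = []
        for s in scales:
            n_seg = len(y) // s
            segs = y[: n_seg * s].reshape(n_seg, s)
            t = np.arange(s)
            # variance of each segment around a local polynomial trend
            var = np.array([np.mean((seg - np.polyval(
                np.polyfit(t, seg, order), t)) ** 2) for seg in segs])
            if q == 0:                             # limit case q -> 0
                F = np.exp(0.5 * np.mean(np.log(var)))
            else:
                F = np.mean(var ** (q / 2.0)) ** (1.0 / q)
            logF.append(np.log(F))
        hq.append(np.polyfit(np.log(scales), logF, 1)[0])  # slope = h(q)
    return np.array(hq)

eeg = np.random.default_rng(1).standard_normal(4096)  # stand-in channel
qs = np.array([-4, -2, 0, 2, 4])
h = mfdfa_hq(eeg, scales=[16, 32, 64, 128, 256], qs=qs)
print("h(q):", np.round(h, 2), "width:", round(h.max() - h.min(), 2))
```

The width h(q_min) − h(q_max) is the kind of per-recording feature that could feed the proposed vigilance scale.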
Procedia PDF Downloads 67212 Visual Design of Walkable City as Sidewalk Integration with Dukuh Atas MRT Station in Jakarta
Authors: Nadia E. Christiana, Azzahra A. N. Ginting, Ardhito Nurcahya, Havisa P. Novira
Abstract:
One of the quickest ways to make a short trip in urban areas is walking, whether individually, in pairs or in groups. Walkability has become one of the parameters used to measure the quality of an urban neighborhood. As a Central Business District and public transport transit hub, the Dukuh Atas area sees one of the highest numbers of commuters passing through and interchanging between transportation modes daily. Thus, as a public transport hub, investment should be focused on speeding up development that supports urban transit activity between transportation modes, one element of which is revitalizing pedestrian walkways. The purpose of this research is to formulate a visual design concept for a 'walkable city' based on the results of observation and a series of rankings. To achieve this objective, several stages of research are necessary: (1) identifying the system of pedestrian paths in the Dukuh Atas area using a descriptive qualitative method; (2) analyzing the perceived sidewalk walkability rate and the walkability satisfaction rate, using the characteristics of pedestrians and non-pedestrians in the Dukuh Atas area, by means of Global Walkability Index analysis and Multicriteria Satisfaction Analysis; (3) analyzing the factors that determine the integration of pedestrian walkways in the Dukuh Atas area using a descriptive qualitative method. The results show that the walkability level of the Dukuh Atas corridor is 44.45, which falls within the 25–49 classification, meaning that only a few facilities can be reached on foot. Furthermore, based on the questionnaire, the satisfaction rate with the pedestrian walkways in the Dukuh Atas area reached 64%, from which it is concluded that commuters are not yet fully satisfied with the condition of the sidewalks. The factors that influence integration in the Dukuh Atas area are nonetheless reasonable, supported by land use and by modes such as KRL, Busway and MRT. From the results of all the analyses conducted, a visual design applying the concept of a walkable city along the pedestrian corridor of the Dukuh Atas area is formulated. The design achievement of this study amounted to 80%, and further review of the analysis results is needed. This research is expected to serve as a recommendation or input for the government in developing pedestrian paths to maximize the use of public transportation modes. Keywords: design, global walkability index, mass rapid transit, walkable city
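A minimal sketch of a Global Walkability Index style score, assuming, as in common GWI implementations, that each street segment is rated 1–5 on a set of parameters whose weighted average is rescaled to 0–100. The parameters and weights below are illustrative assumptions, not the study's exact instrument.

```python
# Weighted-average walkability score on a 0-100 scale (hypothetical weights).
PARAMS = {
    "path_conflict": 0.15, "path_availability": 0.25,
    "crossing_safety": 0.20, "security": 0.15,
    "amenities": 0.10, "disability_infrastructure": 0.15,
}

def gwi(ratings: dict) -> float:
    score = sum(PARAMS[p] * r for p, r in ratings.items())  # 1..5 scale
    return (score - 1) / 4 * 100                            # rescale to 0-100

segment = {"path_conflict": 3, "path_availability": 2, "crossing_safety": 2,
           "security": 3, "amenities": 2, "disability_infrastructure": 1}
print(round(gwi(segment), 1))  # same 0-100 scale as the 44.45 figure above
```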
Procedia PDF Downloads 192211 Diversity and Distribution Ecology of Coprophilous Mushrooms of Family Psathyrellaceae from Punjab, India
Authors: Amandeep Kaur, Ns Atri, Munruchi Kaur
Abstract:
Mushrooms have shaped our environment in ways that we are only beginning to understand. The weather patterns, topography, flora and fauna of Punjab state in India create favorable growing conditions for thousands of species of mushrooms, but the region was entirely unexplored with respect to coprophilous mushrooms growing on herbivorous dung. Coprophilous mushrooms are ecologically the most specialized fungi, germinating and growing directly on different types of animal dung or on manured soil. The present work explores the diversity of coprophilous mushrooms of the family Psathyrellaceae of the order Agaricales, sketches out their relationship to the human world, and reveals their great significance to life on this planet. During the investigation, dung localities in 16 districts of Punjab state were explored for the collection of material. The macroscopic features of the collected mushrooms were documented on a field key. Hand-cut sections of the various parts of the carpophore, such as the pileus, gills and stipe, together with basidiospore details, were studied microscopically under different magnifications. Various authoritative publications were consulted for the identification of the investigated taxa; their classification, authentic names and synonyms follow the latest edition of the Dictionary of the Fungi and MycoBank. The present work deals with the taxonomy of 81 collections belonging to 39 species spread over five coprophilous genera, namely Psathyrella, Panaeolus, Parasola, Coprinopsis and Coprinellus of the family Psathyrellaceae. In the text, the investigated taxa are arranged as they appear in the key to the genera and species, and all have been thoroughly examined for their macroscopic, microscopic, ecological and chemical-reaction details. The authors also give indications of their ecology and the dung types on which they can be found. Each taxon is accompanied by a detailed listing of its prominent features and is illustrated with habitat photographs and line drawings of morphological and anatomical features. Taxa are organized according to their position in the keys, which allows easy recognition, and all taxa are compared with similar taxa. The study has shown that dung is an important substrate serving as a favorable niche for the growth of a variety of mushrooms. This paper offers an insight into what short-lived coprophilous mushrooms can teach us about sustaining life on earth. Keywords: abundance, basidiomycota, biodiversity, seasonal availability, systematics
Procedia PDF Downloads 65210 Knowledge Management Barriers: A Statistical Study of Hardware Development Engineering Teams within Restricted Environments
Authors: Nicholas S. Norbert Jr., John E. Bischoff, Christopher J. Willy
Abstract:
Knowledge Management (KM) is globally recognized as a crucial element in securing competitive advantage through building and maintaining organizational memory, codifying and protecting intellectual capital and business intelligence, and providing mechanisms for collaboration and innovation. KM frameworks and approaches have been developed and defined, identifying critical success factors for conducting KM in numerous industries ranging from scientific to business, and for organizations ranging in scale from small groups to large enterprises. However, engineering and technical teams operating within restricted environments are subject to unique barriers and KM challenges which cannot be treated directly with the approaches and tools prescribed for other industries. This research identifies barriers to conducting KM within Hardware Development Engineering (HDE) teams and statistically compares the significance of these barriers across the four KM pillars of organization, technology, leadership and learning for HDE teams. HDE teams suffer from restrictions in knowledge sharing (KS) due to classification of information (national security risks), customer proprietary restrictions (non-disclosure agreements covering designs), types of knowledge, the complexity of the knowledge to be shared, and knowledge-seeker expertise. As KM has evolved, leveraging information technology (IT) and web-based tools and approaches from Web 1.0 to Enterprise 2.0, it may also seek to leverage emergent tools and analytics, including expert locators and hybrid recommender systems, to enable KS across the barriers of technical teams. The research will statistically test the hypothesis that the KM barriers of HDE teams affect the general set of expected benefits of a KM system identified through previous research. If correlations are identified, then generalizations about success factors and approaches may also be garnered for HDE teams. Expert elicitation will be conducted using a questionnaire hosted on the internet and delivered to a panel of experts, including engineering managers, principal and lead engineers, senior systems engineers and knowledge management experts. The questionnaire feedback will be processed using analysis of variance (ANOVA) to identify and rank the statistically significant barriers of HDE teams within the four KM pillars. Subsequently, KM approaches will be recommended for upholding the KM pillars within the restricted environments of HDE teams. Keywords: engineering management, knowledge barriers, knowledge management, knowledge sharing
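A minimal sketch of the planned analysis step: a one-way ANOVA comparing questionnaire ratings of a barrier's severity across the four KM pillars. The Likert-style responses below are fabricated placeholders for the expert-panel data.

```python
# One-way ANOVA over severity ratings grouped by KM pillar.
from scipy.stats import f_oneway

organization = [4, 5, 4, 3, 5, 4]   # hypothetical 1-5 ratings per expert
technology   = [2, 3, 2, 3, 2, 3]
leadership   = [4, 4, 5, 4, 3, 4]
learning     = [3, 2, 3, 3, 4, 2]

stat, p = f_oneway(organization, technology, leadership, learning)
print(f"F = {stat:.2f}, p = {p:.4f}")  # small p -> pillar means differ
```

In the actual study, one such test per barrier would support the ranking of statistically significant barriers within the pillars.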
Procedia PDF Downloads 279209 An Infinite Mixture Model for Modelling Stutter Ratio in Forensic Data Analysis
Authors: M. A. C. S. Sampath Fernando, James M. Curran, Renate Meyer
Abstract:
Forensic DNA analysis has received much attention over the last three decades due to its incredible usefulness in human identification. The statistical interpretation of DNA evidence is recognised as one of the most mature fields in forensic science. Peak heights in an electropherogram (EPG) are approximately proportional to the amount of template DNA in the original sample being tested. A stutter is a minor peak in an EPG that is not masked by an allele of a potential contributor and is considered an artefact presumed to arise from miscopying or slippage during the PCR. Stutter peaks are mostly analysed in terms of the stutter ratio, which is calculated relative to the corresponding parent allele height. The analysis of mixture profiles has always been problematic in evidence interpretation, especially in the presence of PCR artefacts such as stutters. Unlike binary and semi-continuous models, continuous models assign a probability (as a continuous weight) to each possible genotype combination and significantly enhance the use of continuous peak height information, resulting in more efficient and reliable interpretations. Therefore, a sound methodology for distinguishing between stutters and real alleles is essential for the accuracy of the interpretation, and any such method has to be able to focus on modelling stutter peaks. Bayesian nonparametric methods provide increased flexibility in applied statistical modelling. Mixture models are frequently employed as fundamental data analysis tools in the clustering and classification of data, and assume unidentified heterogeneous sources for the data. In model-based clustering, each unknown source is reflected by a cluster, and the clusters are modelled using parametric models. Specifying the number of components in a finite mixture model, however, is practically difficult, even though the calculations are relatively simple. Infinite mixture models, in contrast, do not require the user to specify the number of components; instead, a Dirichlet process, which is an infinite-dimensional generalization of the Dirichlet distribution, is used to deal with the problem of the number of components. The Chinese restaurant process (CRP), the stick-breaking process and the Pólya urn scheme are frequently used representations of the Dirichlet process prior in Bayesian mixture models. In this study, we illustrate an infinite mixture of simple linear regression models for modelling the stutter ratio and introduce some modifications to overcome weaknesses associated with the CRP. Keywords: Chinese restaurant process, Dirichlet prior, infinite mixture model, PCR stutter
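A minimal sketch of the Chinese restaurant process prior discussed above: customer n joins an existing cluster with probability proportional to its size, or opens a new one with probability proportional to the concentration parameter alpha. In the full model, each cluster would carry its own linear regression of stutter ratio; here only the partition is sampled.

```python
# Sample a random partition from a CRP(alpha) prior.
import numpy as np

def crp_partition(n: int, alpha: float, seed: int = 2):
    rng = np.random.default_rng(seed)
    assignments = [0]                  # first customer opens the first table
    counts = [1]
    for _ in range(1, n):
        probs = np.array(counts + [alpha], dtype=float)
        k = rng.choice(len(probs), p=probs / probs.sum())
        if k == len(counts):           # new table
            counts.append(1)
        else:                          # join existing table k
            counts[k] += 1
        assignments.append(k)
    return assignments, counts

labels, sizes = crp_partition(n=50, alpha=1.0)
print("clusters:", len(sizes), "sizes:", sizes)
```

Larger alpha favours more clusters; the "rich get richer" behaviour visible in the sizes is one of the CRP weaknesses the study's modifications aim to address.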
Procedia PDF Downloads 330208 Demographic Determinants of Spatial Patterns of Urban Crime
Authors: Natalia Sypion-Dutkowska
Abstract:
The main research objective of the paper is to discover the relationship between the age groups of residents and crime in particular districts of a large city. The basic analytical tool is specific crime rates, calculated not in relation to the total population but for age groups in different social situations with respect to property, housing and work, representing different generations with different behavior patterns; these are the communities from which both offenders and victims of crime come. The analysis of the literature and of national police reports gives rise to hypotheses about the ability of a given age group to generate crime, both as a source of offenders and as a group of victims. These specific indicators are spatially differentiated, which makes it possible to detect socio-demographic determinants of the spatial patterns of urban crime. A multi-feature classification of districts was also carried out, with the specific crime rates as diagnostic features; in this way, areas with a similar structure of socio-demographic determinants of the spatial patterns of urban crime were designated. The case study is the city of Szczecin in Poland, which has about 400,000 inhabitants and an area of about 300 sq km. Szczecin is located in the immediate vicinity of Germany and is the economic, academic and cultural capital of its region; it also has a seaport and an airport, and according to ESPON 2007 it is a Transnational and National Functional Urban Area. Szczecin is divided into 37 districts, the auxiliary administrative units of the municipal government. The population of each district in 2015–17 was divided into 8 age groups: babies (0–2 yrs), children (3–11 yrs), teens (12–17 yrs), younger adults (18–30 yrs), middle-aged adults (31–45 yrs), older adults (46–65 yrs), early older (66–80 yrs) and late older (from 81 yrs). The crimes reported in 2015–17 in each district were divided into 10 groups: fights and beatings, other theft, car theft, robbery offenses, burglary of an apartment, break-in to a commercial facility, car break-in, break-in to other facilities, drug offenses, and property damage. In total, 80 specific crime rates were calculated for each district. The analysis was carried out on an intra-city scale; this is a novel approach, as this type of analysis is usually carried out at the national or regional level. Another innovative research approach is the use of specific crime rates in relation to age groups instead of standard crime rates. Acknowledgments: This research was funded by the National Science Centre, Poland, registration number 2019/35/D/HS4/02942. Keywords: age groups, determinants of crime, spatial crime pattern, urban crime
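A minimal sketch of the core indicator: a specific crime rate relating offences of one type in a district to the population of one age group, expressed per 1,000 members of that group. The district figures are invented for illustration, not Szczecin data.

```python
# Specific crime rate: offences per 1,000 members of an age group.
def specific_rate(offences: int, group_population: int, per: int = 1000):
    return offences / group_population * per

districts = {
    "District A": {"younger_adults": 8200, "burglary": 41},  # hypothetical
    "District B": {"younger_adults": 5100, "burglary": 12},
}
for name, d in districts.items():
    r = specific_rate(d["burglary"], d["younger_adults"])
    print(f"{name}: {r:.1f} burglaries per 1,000 younger adults")
```

Repeating this for 8 age groups and 10 crime groups yields the 80 rates per district used as diagnostic features in the classification.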
Procedia PDF Downloads 171