Search results for: mobility features
615 Experimental Evaluation of Foundation Settlement Mitigations in Liquefiable Soils Using Press-in Sheet Piling Technique: 1-g Shake Table Tests
Authors: Md. Kausar Alam, Ramin Motamed
Abstract:
The damaging effects of liquefaction-induced ground movements have been frequently observed in past earthquakes, such as the 2010-2011 Canterbury Earthquake Sequence (CES) in New Zealand and the 2011 Tohoku earthquake in Japan. To reduce the consequences of soil liquefaction at shallow depths, various ground improvement techniques have been utilized in engineering practice, among which this research focuses on experimentally evaluating the press-in sheet piling technique. The press-in sheet pile technique eliminates the vibration, hammering, and noise pollution associated with dynamic sheet pile installation methods. Unfortunately, there are limited experimental studies on the press-in sheet piling technique for liquefaction mitigation using 1-g shake table tests, in which all the controlling mechanisms of liquefaction-induced foundation settlement, including sand ejecta, can be realistically reproduced. In this study, a series of moderate-scale 1-g shake table experiments was conducted at the University of Nevada, Reno, to evaluate the performance of this technique in liquefiable soil layers. First, a 1/5-scale model was developed based on a recent UC San Diego shake table experiment. The scaled model has a relative density of 50% for the top crust, 40% for the intermediate liquefiable layer, and 85% for the bottom dense layer. Second, a shallow foundation was seated atop an unsaturated sandy soil crust. Third, in a series of tests, a sheet pile with variable embedment depth was inserted into the liquefiable soil surrounding the shallow foundations using the press-in technique. The scaled models were subjected to harmonic input motions with amplitude and dominant frequency properly scaled based on the large-scale shake table test. This study assesses the performance of the press-in sheet piling technique in terms of reductions in foundation movements (settlement and tilt) and generated excess pore water pressures. In addition, this paper discusses the cost-effectiveness and carbon footprint of the studied mitigation measures.
Keywords: excess pore water pressure, foundation settlement, press-in sheet pile, soil liquefaction
614 Sediment Waves and Cyclic Steps as Mechanisms for Sediment Transport in Submarine Canyon Thalwegs
Authors: Taiwo Olusoji Lawrence, Peace Mawo Aaron
Abstract:
Seismic analysis of bedforms has proven to be one of the best ways to study deepwater sedimentary features. Canyons are known to be sediment transportation conduits. Sediment waves are large-scale depositional bedforms found in various parts of the world's oceans, formed predominantly by suspended-load transport. These undulating features usually have wavelengths of tens of meters to a few kilometers and heights of several meters. Cyclic steps are long-wave, upstream-migrating bedforms confined by internal hydraulic jumps. They usually occur in regions with high gradients and slope breaks. Cyclic steps and migrating sediment waves are the most common bedforms on the seafloor. Cyclic steps and related sediment wave bedforms are significant to the morpho-dynamic evolution of deep-water depositional system architectural elements, especially those located along tectonically active margins with high gradients and slope breaks that can promote internal hydraulic jumps in turbidity currents. This report examined sedimentary activities and sediment transportation in submarine canyons and provided distinctive insight into the factors that created a complex seabed canyon system in the Ceara Fortaleza basin, Brazilian Equatorial Margin (BEM). The growing importance of cyclic steps made it imperative to understand the parameters leading to their formation, migration, and architecture, as well as their controls on sediment transport in the canyon thalweg. We extracted the parameters of the observed bedforms and evaluated their aspect ratio and asymmetry. We developed a relationship between the hydraulic jump magnitude, the depth of the hydraulic fall, and the length of the cyclic step. It was understood that an increase in the height of a cyclic step increases the magnitude of the hydraulic jump and thereby increases the rate of deposition on the preceding stoss side. An increase in the length of a cyclic step reduces the magnitude of the hydraulic jump and reduces the rate of deposition on the stoss side. Therefore, flat stoss sides were noticed at most preceding cyclic steps and sediment waves.
Keywords: Ceara Fortaleza, submarine canyons, cyclic steps, sediment wave
613 Barriers to the Development and Implementation of Health Information Systems in Iran
Authors: Abbas Sheikhtaheri, Nasim Hashemi
Abstract:
Health information systems have great benefits for the clinical and managerial processes of health care organizations. However, identifying and removing the constraints and barriers to implementing and using health information systems before any implementation is essential. Physicians are among the main users of health information systems; therefore, identifying the causes of their resistance and their concerns about the barriers to the implementation of these systems is very important. The purpose of this study was thus to determine the barriers to the development and implementation of health information systems from the perspective of Iranian physicians. In this study, conducted in 8 selected hospitals affiliated to Tehran and Iran Universities of Medical Sciences, Tehran, Iran in 2014, physicians (GPs, residents, interns, specialists) in these hospitals were surveyed. In order to collect data, a researcher-made questionnaire was used (Cronbach's α = 0.95). The instrument included 25 items about organizational (9), personal (4), moral and legal (3), and technical (9) barriers. Participants were asked to answer the questions using a 5-point Likert scale (completely disagree = 1 to completely agree = 5). Using a simple random sampling method, 200 physicians (out of 600) were invited to the study, and eventually 163 questionnaires were returned. We used mean scores, t-tests, and ANOVA to analyze the data with SPSS software version 17. 52.1% of respondents were female. The mean age was 30.18 ± 7.29 years. Most respondents had between 1 and 5 years of work experience (80.4 percent). The most important barriers were organizational ones (3.4 ± 0.89), followed by ethical (3.18 ± 0.98), technical (3.06 ± 0.8), and personal (3.04 ± 1.2) barriers. Lack of easy access to a fast Internet connection (3.67 ± 1.91) and the lack of information exchange (3.61 ± 1.2) were the most important technical barriers. Among organizational barriers, the lack of efficient planning for the development and implementation of systems (3.56 ± 1.32) was the most important one. Lack of awareness and knowledge among health care providers about the features of health information systems (3.33 ± 1.28) and the lack of physician participation in the planning phase (3.27 ± 1.2), as well as concerns regarding the security and confidentiality of health information (3.15 ± 1.31), were the most important personal and ethical barriers, respectively. Women (P = 0.02) and those with less experience (P = 0.002) were more concerned about personal barriers. GPs were also more concerned about technical barriers (P = 0.02). According to the study, technical and ethical barriers were considered among the most important barriers; however, lack of awareness in the target population is also one of the main barriers. Ignoring issues such as personal and ethical barriers, even if the necessary infrastructure and technical requirements are provided, may result in failure. Therefore, along with creating the infrastructure and resolving organizational barriers, special attention to the education and awareness of physicians and providing solutions for ethical concerns are necessary.
Keywords: barriers, development of health information systems, implementation, physicians
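As an illustration of the group comparisons reported above, the sketch below reproduces the analysis pattern (independent-samples t-test, one-way ANOVA) on invented Likert-scale data; the study itself used SPSS version 17, and all values, group sizes, and variable names here are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Illustrative 5-point Likert scores for the "personal barriers" subscale,
# split by sex (the study compared such groups with t-tests in SPSS).
scores_female = rng.normal(3.2, 0.9, 85).clip(1, 5)
scores_male = rng.normal(2.9, 0.9, 78).clip(1, 5)

# Independent-samples t-test (two groups)
t, p = stats.ttest_ind(scores_female, scores_male)
print(f"t = {t:.2f}, p = {p:.3f}")

# One-way ANOVA (three or more groups, e.g. GPs / residents / specialists)
gp = rng.normal(3.1, 0.8, 50).clip(1, 5)
resident = rng.normal(3.0, 0.8, 60).clip(1, 5)
specialist = rng.normal(2.8, 0.8, 53).clip(1, 5)
f, p = stats.f_oneway(gp, resident, specialist)
print(f"F = {f:.2f}, p = {p:.3f}")
```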
612 Eco-Friendly Silicone/Graphene-Based Nanocomposites as Superhydrophobic Antifouling Coatings
Authors: Mohamed S. Selim, Nesreen A. Fatthallah, Shimaa A. Higazy, Hekmat R. Madian, Sherif A. El-Safty, Mohamed A. Shenashen
Abstract:
After the 2003 prohibition on employing TBT-based antifouling coatings, polysiloxane antifouling nano-coatings have gained in popularity as environmentally friendly and cost-effective replacements. A series of non-toxic polydimethylsiloxane (PDMS) nanocomposites filled with nanosheets of graphene oxide (GO) decorated with magnetite nanospheres (GO-Fe₃O₄ nanospheres) was developed and cured via a catalytic hydrosilation method. Various GO-Fe₃O₄ hybrid concentrations were mixed with the silicone resin via a solution casting technique to evaluate the structure-property connection. To generate GO nanosheets, a modified Hummers method was applied. A simple co-precipitation method was used to make spherical magnetite particles under inert nitrogen. Hybrid GO-Fe₃O₄ composite fillers were developed by a simple ultrasonication method. A superhydrophobic PDMS/GO-Fe₃O₄ nanocomposite surface with micro/nano-roughness, reduced surface free energy (SFE), and high fouling-release (FR) efficiency was achieved. The physical, mechanical, and anticorrosive features of the virgin and GO-Fe₃O₄-filled nanocomposites were investigated. The synergistic effects of the well-dispersed GO-Fe₃O₄ hybrid on the water repellency and surface topological roughness of the PDMS/GO-Fe₃O₄ nanopaints were extensively studied. The addition of the GO-Fe₃O₄ hybrid fillers up to 1 wt.% could increase the coating's water contact angle (158° ± 2°), minimize its SFE to 12.06 mN/m, develop outstanding micro/nano-roughness, and improve its bulk mechanical and anticorrosion properties. Several microorganisms were employed to examine the fouling resistance of the coated specimens for 1 month. Silicone coatings filled with 1 wt.% GO-Fe₃O₄ nanofiller showed the lowest biodegradability percentage with all the tested microorganisms, whereas coatings with 5 wt.% nanofiller possessed the highest biodegradability percentage with all the microorganisms. We successfully developed a non-toxic and low-cost nanostructured FR composite coating with high antifouling resistance, reproducible superhydrophobic character, and enhanced service time for maritime navigation.
Keywords: silicone antifouling, environmentally friendly, nanocomposites, nanofillers, fouling repellency, hydrophobicity
611 Estimating the Ladder Angle and the Camera Position from a 2D Photograph Based on Applications of Projective Geometry and Matrix Analysis
Authors: Inigo Beckett
Abstract:
In forensic investigations, it is often the case that the most potentially useful recorded evidence derives from coincidental imagery, recorded immediately before or during an incident, and that during the incident (e.g. a 'failure' or fire event) the evidence is changed or destroyed. To an image analysis expert involved in photogrammetric analysis for civil or criminal proceedings, traditional computer vision methods involving calibrated cameras are often not appropriate because image metadata cannot be relied upon. This paper presents an approach for resolving this problem, considering in particular, and by way of a case study, the angle of a simple ladder shown in a photograph. The UK Health and Safety Executive (HSE) guidance document published in 2014 (INDG455) advises that a leaning ladder should be erected at 75 degrees to the horizontal. Personal injury cases can arise in the construction industry because a ladder is too steep or too shallow. Ad-hoc photographs of such ladders in their incident position provide a basis for analysis of their angle. This paper presents a direct approach for ascertaining the position of the camera and the angle of the ladder simultaneously from the photograph(s), by way of a workflow that encompasses a novel application of projective geometry and matrix analysis. Mathematical analysis shows that, for a given pixel ratio of directly measured collinear points (i.e. features that lie on the same line segment) in the 2D digital photograph with respect to a given viewing point, we can constrain the 3D camera position to the surface of a sphere in the scene. Depending on what we know about the ladder, we can enforce another independent constraint on the possible camera positions, which enables us to narrow them down even further. Experiments were conducted using synthetic and real-world data. The synthetic data modeled a vertical plane with a ladder resting on a horizontally flat plane and leaning against a vertical wall. The real-world data were captured using an Apple iPhone 13 Pro and 3D laser scan survey data, whereby a ladder was placed at a known location and angle to the vertical. For each case, we calculated camera positions and ladder angles using this method and cross-compared them against their respective 'true' values.
Keywords: image analysis, projective geometry, homography, photogrammetry, ladders, forensics, mathematical modeling, planar geometry, matrix analysis, collinear points, cameras, photographs
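To illustrate one building block of the workflow described above, the sketch below rectifies the plane containing the ladder with a homography estimated by the direct linear transform and reads off the ladder's angle to the horizontal. This is a simplified stand-in, not the paper's method: the paper constrains the camera position and ladder angle jointly from collinear-point ratios, and every coordinate below is invented.

```python
import numpy as np

def fit_homography(src, dst):
    """Direct Linear Transform: estimate H (3x3) with dst ~ H @ src.
    src, dst: (N, 2) arrays of corresponding points, N >= 4."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    return vt[-1].reshape(3, 3)  # null vector of the system, reshaped to H

def apply_h(H, pts):
    pts_h = np.column_stack([pts, np.ones(len(pts))])  # homogeneous coords
    mapped = (H @ pts_h.T).T
    return mapped[:, :2] / mapped[:, 2:3]  # back to Euclidean coords

# Hypothetical pixel coords of four points with known positions (metres) in
# the vertical plane containing the ladder, e.g. surveyed wall corners.
pixels = np.array([[421., 911.], [1630., 905.], [1598., 148.], [455., 157.]])
plane = np.array([[0., 0.], [3., 0.], [3., 2.5], [0., 2.5]])

H = fit_homography(pixels, plane)

# Ladder foot and top as clicked in the image (also hypothetical).
foot = apply_h(H, np.array([[1110., 895.]]))[0]
top = apply_h(H, np.array([[1320., 200.]]))[0]

dx, dy = top - foot
angle = np.degrees(np.arctan2(dy, abs(dx)))  # angle to the horizontal
print(f"Estimated ladder angle: {angle:.1f} deg (HSE INDG455 advises 75 deg)")
```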
610 Evaluation of Systemic Immune-Inflammation Index in Obese Children
Authors: Mustafa M. Donma, Orkide Donma
Abstract:
A growing list of cancers might be influenced by obesity. Obesity is associated with an increased risk for the occurrence and development of some cancers. Inflammation can lead to cancer; it is one of the characteristic features of cancer and plays a critical role in cancer development. C-reactive protein (CRP) is under evaluation as one of the new and simple prognostic factors in patients with metastatic renal cell cancer. Obesity can predict and promote systemic inflammation in healthy adults, and body mass index (BMI) is correlated with hs-CRP. In this study, systemic immune-inflammation (SII) index and CRP values were evaluated in children with normal BMI and those within the range of different obesity grades, to detect the tendency towards cancer in pediatric obesity. A total of one hundred and ninety-four children participated in the study: thirty-five children with normal BMI, twenty overweight (OW), forty-seven obese (OB), and ninety-two morbidly obese (MO). Age- and sex-matched groups were constituted using BMI-for-age percentiles. Informed consent and Ethical Committee approval were obtained. Weight, height, waist circumference (C), hip C, head C, and neck C of the children were measured. Complete blood count and C-reactive protein analyses were performed. Statistical analyses were performed using SPSS; the threshold for statistical significance was p ≤ 0.05. SII index values increased progressively from normal weight (NW) to MO children. There is a statistically significant difference between NW and OB as well as MO children. No significant difference was observed between NW and OW children; however, a correlation was observed between them. MO was the only group that exhibited a statistically significant correlation between the SII index and CRP. Obesity-related bladder, kidney, cervical, liver, colorectal, and endometrial cancers are still being investigated. Obesity, characterized as chronic low-grade inflammation, is a crucial risk factor for colon cancer. Elevated childhood BMI values may be indicative of processes leading to cancer that are initiated early in life, and prevention of childhood adiposity may decrease the cancer incidence in adults. To the authors' best knowledge, this study is the first to introduce SII index values during obesity of varying degrees of severity. It is suggested that this index seems to be affected at all stages of obesity, with an increasing tendency, and may point out the concomitant status of obesity and cancer starting from very early periods of life.
Keywords: children, C-reactive protein, systemic immune-inflammation index, obesity
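For reference, the sketch below computes the index from complete blood count values, assuming the standard definition SII = platelet count × neutrophil count / lymphocyte count; the abstract does not state the formula, and the sample values are illustrative.

```python
def sii_index(platelets, neutrophils, lymphocytes):
    """Systemic immune-inflammation index from complete blood count values,
    all given in the same units (e.g. 10^3 cells/uL)."""
    return platelets * neutrophils / lymphocytes

# Illustrative CBC values for one child (10^3 cells/uL).
print(sii_index(platelets=310.0, neutrophils=4.2, lymphocytes=2.6))  # ~500.8
```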
609 Investigation of Deep Eutectic Solvents for Microwave Assisted Extraction and Headspace Gas Chromatographic Determination of Hexanal in Fat-Rich Food
Authors: Birute Bugelyte, Ingrida Jurkute, Vida Vickackaite
Abstract:
The most complicated step in the determination of volatile compounds in complex matrices is the separation of analytes from the matrix. Traditional analyte separation methods (liquid extraction, Soxhlet extraction) require a lot of time and labour; moreover, there is a risk of losing the volatile analytes. In recent years, headspace gas chromatography has been used to determine volatile compounds. To date, traditional extraction solvents have been used in headspace gas chromatography. As a rule, such solvents are rather volatile; therefore, a large amount of solvent vapour enters the headspace together with the analyte. Because of that, the determination sensitivity for the analyte is reduced, and a huge solvent peak in the chromatogram can overlap with the peaks of the analytes. The sensitivity is also limited by the fact that the sample cannot be heated to a temperature higher than the boiling point of the solvent. In 2018, it was suggested to replace traditional headspace gas chromatographic solvents with non-volatile, eco-friendly, biodegradable, inexpensive, and easy-to-prepare deep eutectic solvents (DESs). Generally, deep eutectic solvents have low vapour pressure, a relatively wide liquid range, and a much lower melting point than that of any of their individual components. Those features make DESs very attractive as matrix media for application in headspace gas chromatography. Also, DESs are polar compounds, so they can be applied for microwave assisted extraction. The aim of this work was to investigate the possibility of applying deep eutectic solvents for microwave assisted extraction and headspace gas chromatographic determination of hexanal in fat-rich food. Hexanal is considered one of the most suitable indicators of the degree of lipid oxidation, as it is the main secondary oxidation product of linoleic acid, which is one of the principal fatty acids of many edible oils. Eight hydrophilic and hydrophobic deep eutectic solvents were synthesized, and the influence of temperature and microwaves on their headspace gas chromatographic behaviour was investigated. Using the most suitable DES, the microwave assisted extraction conditions and headspace gas chromatographic conditions were optimized for the determination of hexanal in potato chips. Under optimized conditions, the quality parameters of the prepared technique were determined. The suggested technique was applied for the determination of hexanal in potato chips and other fat-rich food.
Keywords: deep eutectic solvents, headspace gas chromatography, hexanal, microwave assisted extraction
608 Self-Assembled ZnFeAl Layered Double Hydroxides as Highly Efficient Fenton-Like Catalysts
Authors: Marius Sebastian Secula, Mihaela Darie, Gabriela Carja
Abstract:
Ibuprofen is a non-steroidal anti-inflammatory drug (NSAID) and is among the most frequently detected pharmaceuticals in environmental samples, as well as one of the most widespread drugs in the world. Its concentration in the environment is reported to be between 10 and 160 ng L⁻¹. In order to improve the abatement efficiency of this compound for water source prevention and reclamation, the development of innovative technologies is mandatory. AOPs (advanced oxidation processes) are known to be highly efficient towards the oxidation of organic pollutants. Among the promising combined treatments, photo-Fenton processes using layered double hydroxides (LDHs) have attracted significant consideration, especially due to their composition flexibility, high surface area, and tailored redox features. This work presents self-supported Fe, Mn, or Ti on ZnFeAl LDHs, obtained by co-precipitation followed by the reconstruction method, as novel efficient photo-catalysts for Fenton-like catalysis. The Fe, Mn, or Ti/ZnFeAl LDH nano-hybrids were tested for the degradation of a model pharmaceutical agent, the anti-inflammatory drug ibuprofen, by photocatalysis and photo-Fenton catalysis, respectively, by means of a lab-scale system consisting of a batch reactor equipped with a UV lamp (17 W). The present study compares the degradation of ibuprofen in aqueous solution under UV light irradiation using four different types of LDHs. The newly prepared Ti/ZnFeAl 4:1 catalyst results in the best degradation performance: after 60 minutes of light irradiation, the ibuprofen removal efficiency reaches 95%. The slowest degradation of the ibuprofen solution occurs with the Fe/ZnFeAl 4:1 LDH (67% removal efficiency after 60 minutes). The evolution of ibuprofen degradation during the photo-Fenton process is also studied using Ti/ZnFeAl 2:1 and 4:1 LDHs in the presence and absence of H₂O₂. It is found that after 60 min the use of Ti/ZnFeAl 4:1 LDH in the presence of 100 mg/L H₂O₂ leads to the fastest degradation of the ibuprofen molecule. After 120 min, both the 4:1 and 2:1 Ti/ZnFeAl catalysts result in the same removal efficiency (98%). In the absence of H₂O₂, ibuprofen degradation reaches only 73% removal efficiency after 120 min. Acknowledgements: This work was supported by a grant of the Romanian National Authority for Scientific Research and Innovation, CNCS - UEFISCDI, project number PN-II-RU-TE-2014-4-0405.
Keywords: layered double hydroxide, advanced oxidation process, micropollutant, heterogeneous Fenton
607 Doing Durable Organisational Identity Work in the Transforming World of Work: Meeting the Challenge of Different Workplace Strategies
Authors: Theo Heyns Veldsman, Dieter Veldsman
Abstract:
Organisational Identity (OI) refers to who and what the organisation is, what it stands for and does, and what it aspires to become. OI explores the perspectives of how we see ourselves, are seen by others, and aspire to be seen. As rationale, it provides the 'why' for the organisation's continued existence. The most widely accepted differentiating features of OI are encapsulated in the organisation's core, distinctive, differentiating, and enduring attributes. OI finds its concrete expression in the organisation's Purpose, Vision, Strategy, Core Ideology, and Legacy. In the emerging new order infused by hyper-turbulence and hyper-fluidity, the VICCAS world, OI provides a secure anchor and steady reference point for the organisation, particularly given the growing widespread focus on Purpose, which is indicative of the organisation's sense of social citizenship. However, the transforming world of work (TWOW) - particularly the potent mix of ongoing disruptive innovation, the 4th Industrial Revolution, and the gig economy, together with the totally unpredicted COVID19 pandemic - has resulted in the consequential adoption of different workplace strategies by organisations in terms of how, where, and when work takes place. Different employment relations (transient to permanent), work locations (on-site to remote), work time arrangements (full-time at work to flexible work schedules), and technology enablement (face-to-face to virtual) now form the basis of the employer/employee relationship. The different workplace strategies, fueled by the demands of TWOW, pose a substantive challenge to organisations in doing durable OI work that is able to fulfill OI's critical attributes of core, distinctive, differentiating, and enduring. OI work is contained in the ongoing, reciprocally interdependent stages of sense-breaking, sense-giving, internalisation, enactment, and affirmation. The objective of our paper is to explore how to do durable OI work relative to different workplace strategies in the TWOW. Using a conceptual-theoretical approach from a practice-based orientation, the paper addresses the following topics: it distinguishes different workplace strategies based upon a time/place continuum; explicates, stage-wise, the differential organisational content and process consequences of these strategies for durable OI work; indicates the critical success factors of durable OI work under these differential conditions; recommends guidelines for OI work relative to TWOW; and points out the ethical implications of all of the above.
Keywords: organisational identity, workplace strategies, new world of work, durable organisational identity work
606 Advanced Magnetic Field Mapping Utilizing Vertically Integrated Deployment Platforms
Authors: John E. Foley, Martin Miele, Raul Fonda, Jon Jacobson
Abstract:
This paper presents the development and implementation of new and innovative data collection and analysis methodologies based on the deployment of total field magnetometer arrays. Our research has focused on the development of a vertically-integrated suite of platforms, all utilizing common data acquisition, data processing, and analysis tools. These survey platforms include low-altitude helicopters and ground-based vehicles, including robots, for terrestrial mapping applications. For marine settings, the sensor arrays are deployed from either a hydrodynamic bottom-following wing towed from a surface vessel or a towed floating platform for shallow-water settings. Additionally, sensor arrays are deployed from tethered remotely operated vehicles (ROVs) for underwater settings where high maneuverability is required. While the primary application of these systems is the detection and mapping of unexploded ordnance (UXO), these systems are also used for various infrastructure mapping and geologic investigations. For each application, success is driven by the integration of magnetometer arrays, accurate geo-positioning, system noise mitigation, and stable deployment of the system in appropriate proximity to expected targets or features. Each of the systems collects geo-registered data compatible with a web-enabled data management system, providing immediate access to data and metadata for remote processing, analysis, and delivery of results. This approach allows highly sophisticated magnetic processing methods, including classification based on dipole modeling and remanent magnetization, to be efficiently applied to many projects. This paper also briefly describes the initial development of magnetometer-based detection systems deployed from low-altitude helicopter platforms and the subsequent successful transition of this technology to the marine environment. Additionally, we present examples from a range of terrestrial and marine settings, as well as ongoing research efforts related to sensor miniaturization for unmanned aerial vehicle (UAV) magnetic field mapping applications.
Keywords: dipole modeling, magnetometer mapping systems, sub-surface infrastructure mapping, unexploded ordnance detection
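As a sketch of the dipole modeling mentioned above, the code below evaluates the textbook point-dipole field along a survey transect; the moment, depth, and geometry are invented, and the authors' actual classification pipeline is not reproduced here.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

def dipole_field(r_vec, m_vec):
    """Magnetic flux density (T) at displacement r_vec from a point dipole
    with moment m_vec (A*m^2): B = mu0/(4*pi) * (3(m.rhat)rhat - m) / r^3."""
    r = np.linalg.norm(r_vec)
    rhat = r_vec / r
    return MU0 / (4 * np.pi) * (3 * np.dot(m_vec, rhat) * rhat - m_vec) / r**3

# Hypothetical buried target: moment 10 A*m^2 pointing down, 2 m below the
# sensor line; anomaly amplitude sampled along a 20 m transect.
m = np.array([0.0, 0.0, -10.0])
for x in np.linspace(-10, 10, 9):
    b = dipole_field(np.array([x, 0.0, 2.0]), m)
    print(f"x = {x:+5.1f} m  |B| = {np.linalg.norm(b) * 1e9:7.2f} nT")
```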
605 “I” on the Web: Social Penetration Theory Revised
Authors: Dionysis Panos, Dept. of Communication and Internet Studies, Cyprus University of Technology
Abstract:
The widespread use of new media, and particularly social media, through fixed or mobile devices has changed in a staggering way our perception of what is “intimate" and "safe" and what is not in interpersonal communication and social relationships. The distribution of self- and identity-related information in communication now evolves under new and different conditions and contexts. Consequently, this new framework forces us to rethink processes and mechanisms, such as what "exposure" means in interpersonal communication contexts, how the distinction between the "private" and the "public" nature of information is negotiated online, and how the "audiences" we interact with are understood and constructed. Drawing from an interdisciplinary perspective that combines sociology, communication psychology, media theory, new media, and social networks research, as well as from the empirical findings of a longitudinal comparative study, this work proposes an integrative model for comprehending the mechanisms of personal information management in interpersonal communication, which can be applied to both online (computer-mediated) and offline (face-to-face) communication. The presentation is based on conclusions drawn from a longitudinal qualitative research study with 458 new media users from 24 countries over almost a decade. Some of the main conclusions include: (1) There is a clear and evidenced shift in users’ perception of the degree of "security" and "familiarity" of the Web between the pre- and post-Web 2.0 eras. The role of social media in this shift was catalytic. (2) Basic Web 2.0 applications changed dramatically the nature of the Internet itself, transforming it from a place reserved for “elite users / technical knowledge keepers" into a place of "open sociability” for anyone. (3) Web 2.0 and social media brought about a significant change in the concept of the “audience” we address in interpersonal communication. The previous "general and unknown audience" of personal home pages converted into an "individual and personal" audience chosen by the user under various criteria. (4) The way we negotiate the private and public nature of personal information has changed in a fundamental way. (5) The different features of the mediated environment of online communication, and the critical changes that have occurred since the advance of Web 2.0, lead to the need to reconsider and update the theoretical models and analysis tools we use in our effort to comprehend the mechanisms of interpersonal communication and personal information management. Therefore, a new model is proposed here for understanding the way interpersonal communication evolves, based on a revision of social penetration theory.
Keywords: new media, interpersonal communication, social penetration theory, communication exposure, private information, public information
604 Evaluation of the CRISP-DM Business Understanding Step: An Approach for Assessing the Predictive Power of Regression versus Classification for the Quality Prediction of Hydraulic Test Results
Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter
Abstract:
Digitalisation in production technology is a driver for the application of machine learning methods. Through the application of predictive quality, the great potential for saving otherwise necessary quality control can be exploited through the data-based prediction of product quality and states. However, the serial use of machine learning applications is often prevented by various problems. Fluctuations occur in real production data sets, which are reflected in trends and systematic shifts over time. To counteract these problems, data preprocessing includes rule-based data cleaning, the application of dimensionality reduction techniques, and the identification of comparable data subsets in order to extract stable features. Successful process control of the target variables aims to centre the measured values around a mean and minimise variance. Competitive leaders claim to have mastered their processes; as a result, much of the real data has a relatively low variance. For the training of prediction models, the highest possible generalisability is required, which is made more difficult by this data availability. The implementation of a machine learning application can itself be interpreted as a production process. The CRoss Industry Standard Process for Data Mining (CRISP-DM) is a process model with six phases that describes the life cycle of data science. As in any process, the cost of eliminating errors increases significantly with each advancing process phase. For the quality prediction of hydraulic test steps of directional control valves, the question arises in the initial phase whether a regression or a classification is more suitable. In the context of this work, the initial phase of CRISP-DM, business understanding, is critically compared for the use case at Bosch Rexroth with regard to regression and classification. The use of cross-process production data along the value chain of hydraulic valves is a promising approach for predicting the quality characteristics of workpieces. Suitable methods for leakage volume flow regression and for classification of the inspection decision are applied. Impressively, classification is clearly superior to regression and achieves promising accuracies.
Keywords: classification, CRISP-DM, machine learning, predictive quality, regression
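The sketch below illustrates the regression-versus-classification question on synthetic data: predict a low-variance leakage volume flow and threshold it, or predict the pass/fail inspection decision directly. The data generator, threshold, and random-forest models are stand-ins, not the Bosch Rexroth data or methods.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.metrics import accuracy_score, mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic process features and a low-variance leakage target, mimicking a
# well-controlled production process.
X = rng.normal(size=(2000, 8))
leakage = 1.0 + 0.05 * X[:, 0] - 0.03 * X[:, 1] + rng.normal(0, 0.02, 2000)
ok = (leakage < 1.05).astype(int)  # inspection decision: 1 = pass, 0 = fail

X_tr, X_te, y_tr, y_te, ok_tr, ok_te = train_test_split(
    X, leakage, ok, test_size=0.3, random_state=0)

# Option A: regress the leakage value, then threshold it.
reg = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
ok_via_reg = (reg.predict(X_te) < 1.05).astype(int)

# Option B: classify the pass/fail decision directly.
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, ok_tr)

print("regression MAE:", mean_absolute_error(y_te, reg.predict(X_te)))
print("accuracy via regression + threshold:", accuracy_score(ok_te, ok_via_reg))
print("accuracy via direct classification:", accuracy_score(ok_te, clf.predict(X_te)))
```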
603 The Diary of Dracula, by Marin Mincu: Inquiries into a Romanian 'Book of Wisdom' as a Fictional Counterpart for Corpus Hermeticum
Authors: Lucian Vasile Bagiu, Paraschiva Bagiu
Abstract:
The novel written in Italian and published in Italy in 1992 by the Romanian scholar Marin Mincu is meant for the foreign reader, aiming apparently at a better knowledge of the historical character of Vlad the Impaler (Vlad Dracul), within the European cultural, political, and historical context of 1463. Throughout the very well written tome, one comes to realize that one of the underlying levels of the fiction is the exposition of various fundamental features of Romanian culture and civilization. The author of the diary, Dracula, mentions Corpus Hermeticum no fewer than fifteen times, suggesting his own diary is some sort of philosophical counterpart. The essay focuses on several ‘truths’ and ‘wisdom’ revealed in the fictional teachings of Dracula. The boycott of History by the Romanians is identified as an echo of the philosophical approach of the famous Romanian scholar and writer Lucian Blaga. The orality of Romanian culture is a landmark opposed to the written culture of Western Europe. The religion of the ancient Dacian god Zalmoxis is seen as the basis for the Romanian existential and/or metaphysical ethnic philosophy (a feature tackled by the famous Romanian historian of religion Mircea Eliade), with a suggestion that Hermes Trismegistus may have written his Corpus Hermeticum under the influence of Zalmoxis. The historical figure of the last Dacian king, Decebalus (d. 106 AD), is a good pretext for a tantalizing Indo-European suggestion that the prehistoric Thraco-Dacian people may have been the ancestors of the first Romans settled in Latium. The lost diary of the Emperor Trajan, De Bello Dacico, may have proved that the unknown language of the Dacians was very much like the Latin language (a secret well hidden by the Vatican). The attitude of the Dacians towards death, as described by Herodotus, may have later inspired Pythagoras, Socrates, the Eleusinian and Orphic Mysteries, etc. All of this sits within the Humanistic and Renaissance European context of the epoch, Dracula having a close relationship with scholars such as Nicolaus Cusanus, Cosimo de' Medici, Marsilio Ficino, Pope Pius II, etc. Thus The Diary of Dracula turns out as exciting and stupefying as Corpus Hermeticum, a book impossible to assimilate entirely, yet a reference not wise to be ignored.
Keywords: Corpus Hermeticum, Dacians, Dracula, Zalmoxis
602 Compression and Air Storage Systems for Small Size CAES Plants: Design and Off-Design Analysis
Authors: Coriolano Salvini, Ambra Giovannelli
Abstract:
The use of renewable energy sources for electric power production leads to reduced CO2 emissions and contributes to improving domestic energy security. On the other hand, the intermittency and unpredictability of their availability pose relevant problems in fulfilling the load demand safely and in a cost-efficient way over time. Significant benefits in terms of “grid system applications”, “end-use applications”, and “renewable applications” can be achieved by introducing energy storage systems. Among the currently available solutions, CAES (Compressed Air Energy Storage) shows favorable features. Small- and medium-size plants equipped with artificial air reservoirs can constitute an interesting option for efficient and cost-effective distributed energy storage systems. The present paper addresses the design and off-design analysis of the compression system of small-size CAES plants suited to absorb electric power in the range of hundreds of kilowatts. The system of interest is constituted by an intercooled (in case aftercooled) multi-stage reciprocating compressor and a man-made reservoir obtained by connecting large-diameter steel pipe sections. A specific methodology for the preliminary sizing and off-design modeling of the system has been developed. Since during the charging phase the electric power absorbed has to change over time according to the peculiar CAES requirements, and the pressure ratio increases continuously during the filling of the reservoir, the compressor has to work at a variable mass flow rate. In order to ensure an appropriately wide range of operations, particular attention has been paid to the selection of the most suitable compressor capacity control device. Given the capacity regulation margin of the compressor and the actual level of charge of the reservoir, the proposed approach allows the instant-by-instant evaluation of the minimum and maximum electric power absorbable from the grid. The developed tool gives useful information to appropriately size the compression system and to manage it in the most effective way. Various cases characterized by different system requirements are analysed; results are given and widely discussed.
Keywords: artificial air storage reservoir, compressed air energy storage (CAES), compressor design, compression system management
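As a rough illustration of the preliminary sizing implied above, the sketch below estimates the shaft power of an intercooled multi-stage reciprocating compressor under textbook assumptions (ideal gas, equal stage pressure ratios, perfect intercooling back to inlet temperature, an assumed isentropic efficiency); the paper's actual methodology is more detailed, and all operating values are invented.

```python
def compressor_power(m_dot, p_in, p_out, t_in, n_stages, eta_s=0.8,
                     gamma=1.4, r_gas=287.0):
    """Shaft power (W) of an n-stage intercooled air compressor: equal stage
    pressure ratios, each stage re-cooled to t_in, isentropic efficiency eta_s."""
    pr_stage = (p_out / p_in) ** (1.0 / n_stages)
    k = (gamma - 1.0) / gamma
    w_stage = gamma / (gamma - 1.0) * r_gas * t_in * (pr_stage ** k - 1.0) / eta_s
    return m_dot * n_stages * w_stage

# Illustrative charging point: 0.5 kg/s of air from 1 bar to 60 bar, 3 stages.
p = compressor_power(m_dot=0.5, p_in=1e5, p_out=60e5, t_in=293.0, n_stages=3)
print(f"absorbed power ~ {p / 1e3:.0f} kW")  # on the order of hundreds of kW
```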
601 Intracranial Hypotension: A Brief Review of the Pathophysiology and Diagnostic Algorithm
Authors: Ana Bermudez de Castro Muela, Xiomara Santos Salas, Silvia Cayon Somacarrera
Abstract:
The aim of this review is to explain what intracranial hypotension is and what its main causes are, and also to approach its diagnostic management in different clinical situations, with an understanding of the radiological findings and the physiopathological substrate. An approach to the diagnostic management is presented: which guidelines to follow, the different tests available, and the typical findings. We reviewed the myelo-CT and myelo-MR studies performed in patients with suspected CSF fistula or hypotension of unknown cause during the last 10 years in three centers. Signs of intracranial hypotension (subdural hygromas/hematomas, pachymeningeal enhancement, venous sinus engorgement, pituitary hyperemia, and lowering of the brain) that are evident on baseline CT and MRI were also sought. Intracranial hypotension is defined as an opening pressure lower than 6 cmH₂O. It is a relatively rare disorder, with an annual incidence of 5 per 100,000 and a female-to-male ratio of 2:1. The clinical feature is an orthostatic headache, which is defined as the development or aggravation of headache when the patient moves from a supine to an upright position, and which disappears or typically relieves after lying down. The etiology is a decrease in the amount of cerebrospinal fluid (CSF), usually through loss of it, either spontaneous or secondary (post-traumatic, post-surgical, systemic disease, post-lumbar puncture, etc.), and rhinorrhea and/or otorrhea may exist. The pathophysiological mechanisms of CSF hypotension and hypertension are interrelated, as a situation of hypertension may lead to hypotension secondary to spontaneous CSF leakage. The diagnostic management of intracranial hypotension in our center includes, in the case of spontaneous hypotension without rhinorrhea and/or otorrhea and according to necessity, a range of available tests, performed from less to more complex: cerebral CT, cerebral and spine MRI without contrast, and CT/MRI with intrathecal contrast. In a situation of intracranial hypotension with the presence of rhinorrhea/otorrhea, a sample can be obtained for the detection of β2-transferrin, which is found physiologically in the CSF, and sinus CT and cerebral MRI including constructive interference in steady state (CISS) sequences can be performed. If necessary, cisternography studies are performed to locate the exact point of leakage. It is important to emphasize the significance of myelo-CT/MRI in establishing the diagnosis and location of a CSF leak, which is indispensable for therapeutic planning (whether surgical or not) in patients with more than one lesion or with doubts in the baseline tests.
Keywords: cerebrospinal fluid, neuroradiology brain, magnetic resonance imaging, fistula
600 Land Cover Mapping Using Sentinel-2, Landsat-8 Satellite Images, and Google Earth Engine: A Study Case of the Beterou Catchment
Authors: Ella Sèdé Maforikan
Abstract:
Accurate land cover mapping is essential for effective environmental monitoring and natural resource management. This study focuses on assessing the classification performance of two satellite datasets and evaluating the impact of different input feature combinations on classification accuracy in the Beterou catchment, situated in the northern part of Benin. Landsat-8 and Sentinel-2 images from June 1, 2020, to March 31, 2021, were utilized. Employing the Random Forest (RF) algorithm on Google Earth Engine (GEE), a supervised classification categorized the land into five classes: forest, savannas, cropland, settlement, and water bodies. GEE was chosen due to its high-performance computing capabilities, mitigating the computational burdens associated with traditional land cover classification methods. By eliminating the need for individual satellite image downloads and providing access to an extensive archive of remote sensing data, GEE facilitated efficient model training on remote sensing data. The study achieved commendable overall accuracy (OA), ranging from 84% to 85%, even without incorporating spectral indices and terrain metrics into the model. Notably, the inclusion of additional input sources, specifically terrain features like slope and elevation, enhanced classification accuracy. The highest accuracy was achieved with Sentinel-2 (OA = 91%, Kappa = 0.88), slightly surpassing Landsat-8 (OA = 90%, Kappa = 0.87). This underscores the significance of combining diverse input sources for optimal accuracy in land cover mapping. The methodology presented herein not only enables the creation of precise, expeditious land cover maps but also demonstrates the power of cloud computing through GEE for large-scale land cover mapping with remarkable accuracy. The study emphasizes the synergy of different input sources in achieving superior accuracy. As a future recommendation, the application of Light Detection and Ranging (LiDAR) technology is proposed to enhance vegetation type differentiation in the Beterou catchment. Additionally, a cross-comparison between Sentinel-2 and Landsat-8 for assessing long-term land cover changes is suggested.
Keywords: land cover mapping, Google Earth Engine, random forest, Beterou catchment
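A sketch of such a GEE workflow is shown below using the Earth Engine Python API (median Sentinel-2 composite, SRTM elevation and slope as extra bands, smileRandomForest training); the asset paths, band selection, cloud filter, and class property are invented stand-ins, not the authors' script.

```python
import ee

ee.Initialize()

# Hypothetical study-area geometry and labeled training points
# (property 'class' in 0..4: forest, savanna, cropland, settlement, water).
aoi = ee.FeatureCollection('users/example/beterou_catchment').geometry()
points = ee.FeatureCollection('users/example/training_points')

# Sentinel-2 surface reflectance, June 2020 - March 2021, median composite.
s2 = (ee.ImageCollection('COPERNICUS/S2_SR_HARMONIZED')
      .filterBounds(aoi)
      .filterDate('2020-06-01', '2021-03-31')
      .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 20))
      .median())

# Terrain features (elevation, slope) that improved accuracy in the study.
dem = ee.Image('USGS/SRTMGL1_003')
stack = (s2.select(['B2', 'B3', 'B4', 'B8', 'B11', 'B12'])
         .addBands(dem.rename('elevation'))
         .addBands(ee.Terrain.slope(dem)))

training = stack.sampleRegions(collection=points, properties=['class'], scale=10)

clf = ee.Classifier.smileRandomForest(numberOfTrees=100).train(
    features=training, classProperty='class', inputProperties=stack.bandNames())

classified = stack.classify(clf).clip(aoi)  # final land cover map
```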
599 An Alternative Credit Scoring System in China’s Consumer Lending Market: A System Based on Digital Footprint Data
Authors: Minjuan Sun
Abstract:
Ever since the late 1990s, China has experienced explosive growth in consumer lending, especially in short-term consumer loans, among which the growth rate of non-bank lending has surpassed that of bank lending due to developments in financial technology. On the other hand, China does not have a universal credit scoring and registration system that can guide lenders during the processes of credit evaluation and risk control; for example, an individual's bank credit records are not available for online lenders to see, and vice versa. Given this context, the purpose of this paper is three-fold. First, we explore if and how alternative digital footprint data can be utilized to assess a borrower's creditworthiness. Then, we perform a comparative analysis of machine learning methods for the canonical problem of credit default prediction. Finally, we analyze, from an institutional point of view, the necessity of establishing a viable and nationally universal credit registration and scoring system utilizing online digital footprints, so that more people in China can have better access to the consumption loan market. Two different types of digital footprint data are utilized and matched with a bank's loan default records. Each separately captures distinct dimensions of a person's characteristics, such as shopping patterns and certain aspects of personality or inferred demographics revealed by social media features like profile image and nickname. We find that both datasets can generate either acceptable or excellent prediction results and that different types of data tend to complement each other to achieve better performance. Typically, the traditional types of data that banks normally use, like income, occupation, and credit history, update over longer cycles and hence cannot reflect more immediate changes, like the financial status changes caused by a business crisis; digital footprints, by contrast, can update daily, weekly, or monthly, and are thus capable of providing a more comprehensive profile of the borrower's credit capabilities and risks. From the empirical and quantitative examination, we believe digital footprints can become an alternative information source for creditworthiness assessment because of their near-universal data coverage and because they can by and large resolve the "thin-file" issue, due to the fact that digital footprints come in much larger volume and at higher frequency.
Keywords: credit score, digital footprint, Fintech, machine learning
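The sketch below illustrates the canonical default-prediction comparison on fabricated footprint-style features; the paper's actual datasets, feature engineering, and model choices are not public here, so everything in the snippet is a stand-in.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 5000

# Invented digital-footprint features: shopping regularity, night-time
# activity share, profile-photo flag, nickname-has-digits flag.
X = np.column_stack([
    rng.gamma(2.0, 1.0, n),   # shopping frequency
    rng.beta(2.0, 5.0, n),    # share of night-time sessions
    rng.integers(0, 2, n),    # has profile photo
    rng.integers(0, 2, n),    # nickname contains digits
])
logit = -2.0 + 1.2 * X[:, 1] - 0.5 * X[:, 2] + 0.4 * X[:, 3]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))  # default flag

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

# Compare two standard model families on the same split, by AUC.
for model in (LogisticRegression(max_iter=1000),
              GradientBoostingClassifier(random_state=1)):
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(type(model).__name__, f"AUC = {auc:.3f}")
```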
598 Influence of Surface Wettability on Imbibition Dynamics of Protein Solution in Microwells
Authors: Himani Sharma, Amit Agrawal
Abstract:
The stability of the Cassie and Wenzel wetting states depends on the intrinsic contact angle and the geometric features of a surface, which has been exploited for capturing biofluids in microwells. However, the mechanism of imbibition of biofluids into microwells is not well understood in terms of the wettability of the substrate. In this work, we experimentally demonstrated the filling dynamics of hydrophilic and hydrophobic microwells by protein solutions. Towards this, we utilized a lotus leaf as a mold to fabricate microwells on a polydimethylsiloxane (PDMS) surface. The lotus leaf, containing micrometer-sized blunt conical pillars with heights of 8-15 µm and diameters of 3-8 µm, was transferred onto the PDMS. Furthermore, the PDMS surface was treated with oxygen plasma to render it hydrophilic. A 10 µL droplet containing fluorescein isothiocyanate (FITC)-labelled bovine serum albumin (BSA) was rested on both hydrophobic (θa = 108°, where θa is the apparent contact angle) and hydrophilic (θa = 60°) PDMS surfaces. Time-dependent fluorescence microscopy was conducted on these modified PDMS surfaces by recording the fluorescence intensity over a 5-minute period. It was observed that initially (at t = 1 min) FITC-BSA accumulated on the periphery of both hydrophilic and hydrophobic microwells due to incomplete penetration of the liquid-gas meniscus. This deposition of FITC-BSA on the periphery of the microwells did not change with time for the hydrophobic surfaces, whereas complete filling occurred in the hydrophilic microwells (at t = 5 min). This is attributed to a gradual movement of the three-phase contact line along the vertical surface of the hydrophilic microwells, as compared to stable pinning in the hydrophobic microwells, as confirmed by Surface Evolver simulations. In addition, if cavities are present on a hydrophobic surface, air bubbles will be trapped inside the cavities once an aqueous solution is placed over the surface, resulting in the Cassie-Baxter wetting state. This condition hinders the trapping of proteins inside the microwells. Thus, it is necessary to impart hydrophilicity to the microwell surfaces so as to induce the Wenzel state, such that the entire solution will be fully in contact with the walls of the microwells. Imbibition of the microwells by protein solutions was analyzed in terms of fluorescence intensity versus time. The present work underlines the importance of the geometry of microwells and the surface wettability of the substrate in wetting and in the effective capture of solid sub-phases in biofluids.
Keywords: BSA, microwells, surface evolver, wettability
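For reference, the sketch below evaluates the textbook Wenzel and Cassie-Baxter relations for the two intrinsic contact angles quoted above; the roughness factor and solid fraction are invented, not measured values of the lotus-leaf replica.

```python
import numpy as np

def wenzel(theta_deg, roughness):
    """Wenzel state: cos(theta*) = r * cos(theta), with roughness r >= 1."""
    c = roughness * np.cos(np.radians(theta_deg))
    return np.degrees(np.arccos(np.clip(c, -1, 1)))

def cassie_baxter(theta_deg, solid_fraction):
    """Cassie-Baxter state: cos(theta*) = f * (cos(theta) + 1) - 1."""
    c = solid_fraction * (np.cos(np.radians(theta_deg)) + 1) - 1
    return np.degrees(np.arccos(np.clip(c, -1, 1)))

# Apparent angles for the study's two intrinsic angles, with assumed r and f.
for theta in (108.0, 60.0):
    print(f"theta = {theta:5.1f}  Wenzel(r=1.5) = {wenzel(theta, 1.5):5.1f}  "
          f"Cassie(f=0.3) = {cassie_baxter(theta, 0.3):5.1f}")
```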
597 Human Identification Using Local Roughness Patterns in Heartbeat Signal
Authors: Md. Khayrul Bashar, Md. Saiful Islam, Kimiko Yamashita, Yano Midori
Abstract:
Despite some progress in human authentication, conventional biometrics (e.g., facial features, fingerprints, retinal scans, gait, voice patterns) are not robust against falsification because they are neither confidential nor secret to an individual. As a non-invasive tool, the electrocardiogram (ECG) has recently shown great potential in human recognition due to its unique rhythms, which characterize the variability of human heart structures (chest geometry, sizes, and positions). Moreover, the ECG has a real-time vitality characteristic that signifies live signs, which ensures that a legitimate individual is being identified. However, the detection accuracy of current ECG-based methods is not sufficient due to the high variability of an individual's heartbeats at different instants of time. These variations may occur due to muscle flexure, changes in mental or emotional states, and changes in sensor positions or long-term baseline shift during the recording of the ECG signal. In this study, a new method is proposed for human identification based on the extraction of the local roughness of ECG heartbeat signals. First, the ECG signal is preprocessed using a second-order band-pass Butterworth filter with cut-off frequencies of 0.00025 and 0.04. A number of local binary patterns are then extracted by applying a moving neighborhood window along the ECG signal. At each instant of the ECG signal, the pattern is formed by comparing the ECG intensities at neighboring time points with the central intensity in the moving window. Binary weights are then multiplied with the pattern to arrive at the local roughness description of the signal. Finally, histograms are constructed that describe the heartbeat signals of the individual subjects in the database. One advantage of the proposed feature is that it does not depend on the accuracy of detecting the QRS complex, unlike conventional methods. Supervised recognition methods were then designed using minimum-distance-to-mean and Bayesian classifiers to identify authentic human subjects. An experiment with sixty (60) ECG signals from sixty adult subjects from the National Metrology Institute of Germany (PTB) database showed that the proposed new method is promising compared to a conventional interval- and amplitude-feature-based method.
Keywords: human identification, ECG biometrics, local roughness patterns, supervised classification
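The sketch below illustrates the descriptor construction as described: band-pass filtering with the quoted Butterworth parameters, a moving window that compares neighbors with the center sample, binary weighting, and a histogram. The window size and the synthetic signal are assumptions; real use would load ECG records instead.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def local_roughness_histogram(signal, half_window=4):
    """1-D local-binary-pattern histogram: at each sample, compare the
    2*half_window neighbors with the center and binary-encode the result."""
    n_bits = 2 * half_window
    weights = 2 ** np.arange(n_bits)  # binary weights
    codes = []
    for i in range(half_window, len(signal) - half_window):
        neighbors = np.concatenate([signal[i - half_window:i],
                                    signal[i + 1:i + half_window + 1]])
        bits = (neighbors >= signal[i]).astype(int)
        codes.append(int(np.dot(bits, weights)))
    hist, _ = np.histogram(codes, bins=2 ** n_bits, range=(0, 2 ** n_bits))
    return hist / hist.sum()  # normalized histogram as the subject descriptor

# Synthetic "heartbeat" trace standing in for a real ECG record.
t = np.linspace(0, 10, 5000)
ecg_like = (np.sin(2 * np.pi * 1.2 * t)
            + 0.1 * np.random.default_rng(0).normal(size=t.size))

# Second-order band-pass Butterworth filter, cut-offs 0.00025 and 0.04
# (normalized frequencies, as quoted in the abstract).
b, a = butter(2, [0.00025, 0.04], btype='band')
filtered = filtfilt(b, a, ecg_like)

descriptor = local_roughness_histogram(filtered)
print(descriptor.shape, descriptor[:8])
```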
596 The Grammar of the Content Plane as a Style Marker in Forensic Authorship Attribution
Authors: Dayane de Almeida
Abstract:
This work presents a study that demonstrates the usability of categories of analysis from Discourse Semiotics – also known as Greimassian Semiotics – in authorship cases in forensic contexts. It is necessary to know whether the categories examined in semiotic analysis (the ‘grammar’ of the content plane) can distinguish authors. Thus, a study with 4 sets of texts from a corpus of ‘not on demand’ written samples (texts that differ in degree of formality, purpose, addressees, themes, etc.) was performed. Each author contributed 20 texts, separated into 2 groups of 10 (Author1A, Author1B, and so on). The hypothesis was that texts from a single author are semiotically more similar to each other than texts from different authors. The assumptions and issues that led to this idea are as follows: -The features analyzed in authorship studies mostly relate to the expression plane: they are manifested on the ‘surface’ of texts. If language is both expression and content, content would also have to be considered for more accurate results. Style is present in both planes. -Semiotics postulates that the content plane is structured in a ‘grammar’ that underlies expression and presents different levels of abstraction. This ‘grammar’ would be a style marker. -Sociolinguistics demonstrates intra-speaker variation: an individual employs different linguistic uses in different situations. Then, how can one determine whether someone is the author of several texts, distinct in nature (as is the case in most forensic sets), when it is known that intra-speaker variation depends on so many factors? -The idea is that the more abstract the level in the content plane, the lower the intra-speaker variation, because there will be a greater chance for the author to choose the same thing. If two authors recurrently chose the same options, differently from one another, it means each one’s options have discriminatory power. -Size is another issue for various attribution methods. Since most texts in real forensic settings are short, methods relying only on the expression plane tend to fail. The analysis of the content plane as proposed by Greimassian semiotics would be less size-dependent. -The semiotic analysis was performed using the software Corpus Tool, generating tags to allow the counting of data. Then, similarities and differences were quantitatively measured through the application of the Jaccard coefficient (a statistical measure that compares the similarities and differences between samples), as sketched below. The results showed that the hypothesis was confirmed; hence, the grammatical categories of the content plane may successfully be used in questioned authorship scenarios.
Keywords: authorship attribution, content plane, forensic linguistics, Greimassian semiotics, intra-speaker variation, style
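A minimal sketch of the Jaccard measurement referenced above (hedged: the semantic-category tags are invented for illustration; the coefficient itself is the standard set formulation, applied here to tag sets per text):

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard coefficient: |A & B| / |A | B|."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

# Invented content-plane category tags for three texts (two by author 1).
author1_text_a = {"euphoria", "conjunction", "pragmatic_program", "somatic_actor"}
author1_text_b = {"euphoria", "conjunction", "cognitive_program", "somatic_actor"}
author2_text_a = {"dysphoria", "disjunction", "pragmatic_program", "collective_actor"}

print("same author:     ", jaccard(author1_text_a, author1_text_b))  # 0.6
print("different author:", jaccard(author1_text_a, author2_text_a))  # ~0.14
```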
595 Biflavonoids from Selaginellaceae as Epidermal Growth Factor Receptor Inhibitors and Their Anticancer Properties
Authors: Adebisi Adunola Demehin, Wanlaya Thamnarak, Jaruwan Chatwichien, Chatchakorn Eurtivong, Kiattawee Choowongkomon, Somsak Ruchirawat, Nopporn Thasana
Abstract:
The epidermal growth factor receptor (EGFR) is a transmembrane glycoprotein involved in cellular signalling processes, and its aberrant activity is crucial in the development of many cancers, such as lung cancer. Selaginellaceae are fern allies that have long been used in Chinese traditional medicine to treat various cancer types, especially lung cancer. Biflavonoids, the major secondary metabolites in Selaginellaceae, have numerous pharmacological activities, including anti-cancer and anti-inflammatory effects. For instance, amentoflavone induces a cytotoxic effect in a human NSCLC cell line via the inhibition of PARP-1. However, to the best of our knowledge, there are no studies on biflavonoids as EGFR inhibitors. Thus, this study aims to investigate the EGFR inhibitory activities of biflavonoids isolated from Selaginella siamensis and Selaginella bryopteris. Amentoflavone, tetrahydroamentoflavone, sciadopitysin, robustaflavone, robustaflavone-4-methylether, delicaflavone, and chrysocauloflavone were isolated from the ethyl acetate extract of the whole plants. The structures were determined using NMR spectroscopy and mass spectrometry. An in vitro study was conducted to evaluate their cytotoxicity against the A549, HEPG2, and T47D human cancer cell lines using the MTT assay. In addition, a target-based assay was performed to investigate their EGFR inhibitory activity using a kinase inhibition assay. Finally, a molecular docking study was conducted to predict the binding modes of the compounds. Robustaflavone-4-methylether and delicaflavone showed the best cytotoxic activity on all the cell lines, with IC50 values on A549 of 18.9 ± 2.1 and 22.7 ± 3.3 µM, respectively. Of these biflavonoids, delicaflavone showed the most potent EGFR inhibitory activity, with an 84% relative inhibition at 0.02 nM, using erlotinib as a positive control; robustaflavone-4-methylether showed a 78% inhibition at 0.15 nM. The docking scores obtained from the molecular docking study correlated with the kinase inhibition assay: robustaflavone-4-methylether and delicaflavone had docking scores of 72.0 and 86.5, respectively. The inhibitory activity of delicaflavone seems to be linked with the C2''=C3'' and 3-O-4''' linkage pattern. Thus, this study suggests that the structural features of these compounds could serve as a basis for developing new EGFR-TK inhibitors.
Keywords: anticancer, biflavonoids, EGFR, molecular docking, Selaginellaceae
594 Assessment of the Impact of Atmospheric Air, Drinking Water and Socio-Economic Indicators on the Primary Incidence of Children in Altai Krai
Authors: A. P. Pashkov
Abstract:
The number of environmental factors that adversely affect children's health grows every year, and their combination differs from territory to territory. The contribution of socio-economic factors to the health status of the younger generation is increasing. It is the child’s body that is most sensitive to changes in environmental conditions, responding to them with a deterioration in health. Over the past years, scientists have established the influence of environmental factors on the incidence of disease in children. Currently, there is a tendency to study the regional features of the interaction between combinations of environmental factors and the child's body. The aim of the work was to identify trends in the primary non-infectious morbidity of children in the Altai Territory, a unique region that combines territories with different levels of environmental quality, and to assess the effect of atmospheric air, drinking water, and socio-economic indicators on childhood incidence in the region. An unfavorable tendency has been revealed in the region in the incidence of such nosological groups as neoplasms, including malignant ones, diseases of the endocrine system, including obesity and thyroid disease, diseases of the circulatory system, digestive diseases, diseases of the genitourinary system, congenital anomalies, and respiratory diseases. Mapping revealed a geographical distribution pattern for some groups of diseases, as well as significant correlations between them. Some nosologies show a relationship with the integrated assessment of socio-economic indicators: diseases of the circulatory system and respiratory diseases (direct correlation), and diseases of the endocrine system, eating disorders, and metabolic disorders (inverse correlation). The analysis of associations between childhood incidence and the average annual concentrations of substances polluting the air and drinking water showed reliable correlations in areas rated as having critical and strained environmental quality. This fact confirms that the population living in contaminated areas is subject to the negative influence of environmental factors, which immediately affects the health status of children. The results obtained indicate the need for a detailed assessment of the influence of environmental factors on childhood incidence at the regional level, the formation of a database, and the development of automated programs that can predict the incidence in each specific territory. This will increase the effectiveness, including the economic effectiveness, of preventive measures. Keywords: incidence of children, regional features, socio-economic factors, environmental factors
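For illustration, an association of the kind reported here can be checked with a rank correlation between territory-level exposure and incidence. The test choice and all numbers below are assumptions for the sketch, not the study's data.

```python
# Spearman rank correlation between average annual pollutant concentration
# and primary incidence across territories (synthetic placeholder values).
from scipy.stats import spearmanr

concentration = [0.8, 1.4, 2.1, 2.9, 3.5, 4.2]  # e.g., mg/m3 per territory, hypothetical
incidence     = [612, 655, 700, 748, 770, 815]  # cases per 1,000 children, hypothetical

rho, p_value = spearmanr(concentration, incidence)
print(f"rho={rho:.2f}, p={p_value:.3f}")  # a significant rho supports an association
```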
Procedia PDF Downloads 115593 Towards End-To-End Disease Prediction from Raw Metagenomic Data
Authors: Maxence Queyrel, Edi Prifti, Alexandre Templier, Jean-Daniel Zucker
Abstract:
Analysis of the human microbiome using metagenomic sequencing data has demonstrated a high ability to discriminate various human diseases. Raw metagenomic sequencing data require multiple complex and computationally heavy bioinformatics steps prior to data analysis. Such data contain millions of short sequences read from the fragmented DNA and stored as fastq files. Conventional processing pipelines consist of multiple steps, including quality control, filtering, and alignment of sequences against genomic catalogs (genes, species, taxonomic levels, functional pathways, etc.). These pipelines are complex to use, time-consuming, and rely on a large number of parameters that often introduce variability and impact the estimation of the microbiome elements. Training deep neural networks directly from raw sequencing data is a promising approach to bypass some of the challenges associated with mainstream bioinformatics pipelines. Most of these methods use the concept of word and sentence embeddings, which creates a meaningful, numerical representation of DNA sequences while extracting features and reducing the dimensionality of the data. In this paper, we present an end-to-end approach that classifies patients into disease groups directly from raw metagenomic reads: metagenome2vec. This approach is composed of four steps: (i) generating a vocabulary of k-mers and learning their numerical embeddings; (ii) learning DNA sequence (read) embeddings; (iii) identifying the genome from which the sequence is most likely to come; and (iv) training a multiple instance learning classifier that predicts the phenotype based on the vector representation of the raw data. An attention mechanism is applied in the network so that the model can be interpreted, assigning a weight to each genome's influence on the prediction. Using two public real-life datasets as well as a simulated one, we demonstrated that this original approach reaches high performance, comparable with state-of-the-art methods applied to data processed through mainstream bioinformatics workflows. These results are encouraging for this proof-of-concept work. We believe that with further development, DNN models have the potential to surpass mainstream bioinformatics workflows in disease classification tasks. Keywords: deep learning, disease prediction, end-to-end machine learning, metagenomics, multiple instance learning, precision medicine
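A simplified sketch of steps (i)–(ii) follows. The actual approach learns the k-mer embeddings; here random vectors stand in for them, and the read embedding is taken as the mean of its k-mer vectors, one common simplifying assumption.

```python
# Steps (i)-(ii), simplified: split each read into overlapping k-mers, then
# pool k-mer vectors into a read embedding. Random vectors are placeholders
# for the learned embeddings of the paper's method.
import numpy as np

K, DIM = 4, 8
rng = np.random.default_rng(0)
vocab: dict[str, np.ndarray] = {}

def kmers(read: str, k: int = K) -> list[str]:
    """Overlapping k-mers of a read, e.g. 'ATGC' -> ['ATGC'] for k=4."""
    return [read[i:i + k] for i in range(len(read) - k + 1)]

def embed_read(read: str) -> np.ndarray:
    vecs = []
    for km in kmers(read):
        if km not in vocab:              # stand-in for a learned embedding table
            vocab[km] = rng.normal(size=DIM)
        vecs.append(vocab[km])
    return np.mean(vecs, axis=0)         # read embedding = mean of its k-mers

print(embed_read("ATGCGTACGTTAGC").shape)  # (8,) -- one vector per read
```

Steps (iii)–(iv) would then assign each read embedding to a genome and feed the per-genome bags into the attention-weighted multiple instance learning classifier.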
Procedia PDF Downloads 125592 A Virtual Set-Up to Evaluate Augmented Reality Effect on Simulated Driving
Authors: Alicia Yanadira Nava Fuentes, Ilse Cervantes Camacho, Amadeo José Argüelles Cruz, Ana María Balboa Verduzco
Abstract:
Augmented reality promises to be part of future driving: its immersive technology can display directions and maps, using graphic elements to point out important places when the driver requires the information. On the other hand, driving is considered a multitasking activity and, for some people, a complex one in which situations commonly arise that demand the driver's immediate attention and decisions that help avoid accidents; therefore, the main aim of the project is to instrument a platform with biometric sensors that allows evaluating driving performance under the influence of augmented reality devices and detecting the drivers' level of attention, since it is important to know the effect such devices produce. In this study, the physiological sensors EPOC X (EEG), ECG06 PRO, and EMG Myoware are combined in a driving test platform with a Logitech G29 steering wheel and the simulation software City Car Driving, in which the level of traffic can be controlled, as well as the number of pedestrians within the simulation, so that the driver interacts in real time; an MSP430 microcontroller performs the data acquisition for storage. The sensors deliver a continuous analog signal that requires conditioning: a signal amplifier is incorporated, since the acquired signals have a sensitivity of 1.25 mm/mV, together with filtering, which eliminates unwanted frequency bands so that the signal is interpretable and free of noise before it is converted from an analog to a digital signal for analysis of the drivers' physiological signals; these values are stored in a database. Based on this compilation, we extract signal features and implement K-NN (k-nearest neighbor) and decision tree (supervised learning) classification methods that enable the study of the data for the identification of patterns and determine, by classification, the different effects of augmented reality on drivers. The expected results of this project include a test platform instrumented with biometric sensors for data acquisition during driving and a database with the required variables to determine the effect caused by augmented reality on people in simulated driving. Keywords: augmented reality, driving, physiological signals, test platform
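A hedged sketch of the classification step follows: K-NN applied to physiological feature vectors, as the platform proposes. The feature columns and labels are synthetic assumptions for illustration.

```python
# k-NN classification of driver attention from physiological features.
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# rows: [EEG band power, heart rate (bpm), EMG amplitude] -- hypothetical features
X = [[0.62, 72, 0.11], [0.55, 80, 0.25], [0.70, 68, 0.09],
     [0.48, 95, 0.40], [0.66, 70, 0.12], [0.50, 90, 0.35]]
y = ["attentive", "distracted", "attentive", "distracted", "attentive", "distracted"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=1)
clf = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
print(clf.score(X_test, y_test))  # accuracy on held-out samples
```

A decision tree (the project's other classifier) would slot in by replacing `KNeighborsClassifier` with `sklearn.tree.DecisionTreeClassifier`.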
Procedia PDF Downloads 141591 A Corpus-Based Contrastive Analysis of Directive Speech Act Verbs in English and Chinese Legal Texts
Authors: Wujian Han
Abstract:
In the process of human interaction and communication, speech act verbs are considered the most active component and the main means of information transmission, and they are also taken as an indication of the structure of linguistic behavior. The theoretical value and practical significance of such everyday built-in metalanguage have long been recognized. This paper, which is part of a bigger study, aims to provide useful insights for a more precise and systematic approach to speech act verb translation between English and Chinese, especially with regard to the degree to which generic integrity is maintained in the practice of translating legal documents. In this study, the corpus – i.e., Chinese legal texts and their English translations, English legal texts, ordinary Chinese texts, and ordinary English texts – serves as a testing ground for contrastively examining the usage of English and Chinese directive speech act verbs in the legal genre. The scope of this paper is relatively wide and essentially covers all directive speech act verbs used in ordinary English and Chinese, such as order, command, request, prohibit, threaten, advise, warn, and permit. The researcher, combining the corpus methodology with a contrastive perspective, explored a range of characteristics of English and Chinese directive speech act verbs, including their semantic, syntactic, and pragmatic features, and then contrasted them in a structured way. It has been found that there are similarities between English and Chinese directive speech act verbs in the legal genre, such as similar semantic components between English speech act verbs and their translation equivalents in Chinese, and formal, accurate usage of English and Chinese directive speech act verbs in legal contexts. But notable differences have been identified between their usage in the original Chinese and English legal texts, such as valency patterns and frequency of occurrence. For example, the subjects of some directive speech act verbs are very frequently omitted in Chinese legal texts, but this is not the case in English legal texts. One practicable method to achieve adequacy and conciseness in speech act verb translation from Chinese into English in the legal genre is to repeat the subjects or the message where there is a discrepancy, and vice versa. In addition, translation effects such as overuse and underuse of certain directive speech act verbs are also found in the translated English texts compared to the original English texts. Legal texts constitute particularly valuable material for the study of speech act verbs. Building up such a contrastive picture of Chinese and English speech act verbs in legal language would yield results of value and interest to legal translators and students of language for legal purposes, and it would have practical application to legal translation between English and Chinese. Keywords: contrastive analysis, corpus-based, directive speech act verbs, legal texts, translation between English and Chinese
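One concrete element of the corpus methodology is comparing frequencies of occurrence across genres. A minimal sketch, assuming tokens have already been lemmatized:

```python
# Count directive speech act verbs in a tokenized text to compare
# their frequencies across corpora (legal vs. ordinary genre).
from collections import Counter

DIRECTIVE_VERBS = {"order", "command", "request", "prohibit",
                   "threaten", "advise", "warn", "permit"}

def directive_counts(tokens: list[str]) -> Counter:
    return Counter(t for t in tokens if t in DIRECTIVE_VERBS)

legal_tokens = "the court may order the party to comply or prohibit entry".split()
print(directive_counts(legal_tokens))  # Counter({'order': 1, 'prohibit': 1})
```

Normalizing such counts per million tokens in each corpus would expose the overuse and underuse effects the paper reports.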
Procedia PDF Downloads 499590 From Abraham to Average Man: Game Theoretic Analysis of Divine Social Relationships
Authors: Elizabeth Latham
Abstract:
Billions of people worldwide profess some feeling of psychological or spiritual connection with the divine. The majority of them attribute this personal connection to the God of the Christian Bible. The objective of this research was to discover what could be known about the exact social nature of these relationships and to see whether they mimic the interactions recounted in the Bible; if a worldwide majority believes that the Christian Bible is a true account of God’s interactions with mankind, it is reasonable to assume that the interactions between God and these people would be similar to the ones in the Bible. This analysis required an unusual method of biblical analysis: game theory. Because the research focused on documented social interaction between God and man in scripture, it was important to go beyond text-analysis methods. We used stories from the New Revised Standard Version of the Bible to set up “games” using economics-style matrices featuring each player’s motivations and possible courses of action, modeled after interactions in the Old and New Testaments between the Judeo-Christian God and some mortal person. We examined all relevant interactions for the objectives held by each party and their strategies for obtaining them. These findings were then compared to similar “games” created from interviews with people subscribing to different levels of Christianity, ranging from barely practicing to clergymen. The range was broad so as to look for a correlation between scriptural knowledge and game-similarity to the Bible. Each interview described a personal experience someone believed they had with God, and matrices were developed to describe each one as a social interaction: a “game” to be analyzed quantitatively. The data showed that in most cases, the social features of God-man interactions in the modern lives of people were like those present in the “games” between God and man in the Bible. This similarity was referred to in the study as “biblical faith”, and it alone was a fascinating finding with many implications. The even more notable finding, however, was that the amount of game-similarity present did not correlate with the amount of scriptural knowledge. Each participant was also surveyed on family background, political stances, general education, and scriptural knowledge; those who had biblical faith were not necessarily the ones who knew the Bible best. Instead, there was a high degree of correlation between biblical faith and family religious observance. It seems that, to have a biblical psychological relationship with God, it is more important to have a religious family than to have studied scripture, a surprising insight with massive implications for the practice and preservation of religion. Keywords: bible, Christianity, game theory, social psychology
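To make the "economics-style matrices" concrete, here is a minimal sketch of one such game. The action labels and payoff values are invented for illustration; the study's actual matrices were built from specific biblical narratives and interviews.

```python
# A 2x2 game as a payoff dictionary, plus a best-response check for one
# player -- the basic quantitative object the study analyzes.
actions_god = ["covenant", "test"]
actions_man = ["obey", "disobey"]

# entries: (God's payoff, mortal's payoff) -- hypothetical values
payoffs = {
    ("covenant", "obey"): (3, 3), ("covenant", "disobey"): (0, 1),
    ("test", "obey"): (2, 2),     ("test", "disobey"): (1, 0),
}

def best_response_man(god_action: str) -> str:
    """The mortal's payoff-maximizing reply to a given divine action."""
    return max(actions_man, key=lambda a: payoffs[(god_action, a)][1])

for g in actions_god:
    print(g, "->", best_response_man(g))  # e.g. covenant -> obey
```

Comparing the strategic structure of such matrices (rather than raw payoffs) is one way the "game-similarity" between biblical and interview-derived games could be assessed.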
Procedia PDF Downloads 155589 A Method to Evaluate and Compare Web Information Extractors
Authors: Patricia Jiménez, Rafael Corchuelo, Hassan A. Sleiman
Abstract:
Web mining is gaining importance at an increasing pace. Currently, there are many complementary research topics under this umbrella. Their common theme is that they all focus on applying knowledge discovery techniques to data gathered from the Web. Sometimes, these data are relatively easy to gather, chiefly when they come from server logs. Unfortunately, there are cases in which the data to be mined are the data displayed on a web document. In such cases, it is necessary to apply a pre-processing step to first extract the information of interest from the web documents. Such pre-processing steps are performed using so-called information extractors, which are software components typically configured by means of rules tailored to extracting the information of interest from a web page and structuring it according to a pre-defined schema. Paramount to getting good mining results is that the technique used to extract the source information be exact, which requires evaluating and comparing the different proposals in the literature from an empirical point of view. According to Google Scholar, about 4,200 papers on information extraction have been published during the last decade. Unfortunately, they were not evaluated within a homogeneous framework, which makes it difficult to compare them empirically. In this paper, we report on an original information extraction evaluation method. Our contribution is three-fold: a) this is the first attempt to provide an evaluation method for proposals that work on semi-structured documents; the little existing work on this topic focuses on proposals that work on free text, which has little to do with extracting information from semi-structured documents; b) it provides a method that relies on statistically sound tests to support the conclusions drawn; the previous work does not provide clear guidelines or recommend statistically sound tests, but rather a survey that collects many features to take into account as well as related work; c) we provide a novel method to compute the performance measures of unsupervised proposals, which would otherwise require the intervention of a user to compute them using the annotations on the evaluation sets and the information extracted. Our contributions will definitely help researchers in this area make sure that they have advanced the state of the art not only conceptually, but also from an empirical point of view; they will also help practitioners make informed decisions on which proposal is the most adequate for a particular problem. This conference is a good forum to discuss our ideas so that we can spread them to help improve the evaluation of information extraction proposals and gather valuable feedback from other researchers. Keywords: web information extractors, information extraction evaluation method, Google scholar, web
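For reference, the core performance measures in extractor evaluation are typically computed by comparing extracted records against gold annotations. A minimal sketch with toy records (the specific measures used by the proposed method may differ):

```python
# Precision, recall and F1 for one extractor on one document, computed
# from gold annotations and the extractor's output (toy records).
gold      = {("Casio", "$49"), ("Seiko", "$120"), ("Timex", "$35")}
extracted = {("Casio", "$49"), ("Seiko", "$99"), ("Timex", "$35")}

tp = len(gold & extracted)                # correctly extracted records
precision = tp / len(extracted)           # fraction of output that is correct
recall = tp / len(gold)                   # fraction of gold that was found
f1 = 2 * precision * recall / (precision + recall)
print(f"P={precision:.2f} R={recall:.2f} F1={f1:.2f}")  # P=0.67 R=0.67 F1=0.67
```

Collecting such per-document scores for each proposal is what makes the statistically sound paired tests the paper advocates possible.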
Procedia PDF Downloads 248588 Humanistic Psychology Workshop to Increase Psychological Well-Being
Authors: Nidia Thalia Alva Rangel, Ferran Padros Blazquez, Ma. Ines Gomez Del Campo Del Paso
Abstract:
Happiness has been a concept of interest around the world since antiquity. Positive Psychology is the science that began to study happiness in a more precise and controlled way, producing a wide body of research that can be applied. One of the central constructs of Positive Psychology is Carol Ryff’s model of psychological well-being as eudaimonic happiness, which comprises six dimensions: autonomy, environmental mastery, personal growth, positive relations with others, purpose in life, and self-acceptance. Humanistic psychology is a clear precedent of Positive Psychology: it has studied topics of human development and features a great variety of intervention techniques, yet it has little evidence from controlled research. Therefore, the present research aimed to evaluate the efficacy of a humanistic intervention program to increase psychological well-being in healthy adults through a mixed methods study. Before and after the intervention, Carol Ryff’s Psychological Well-Being Scale (PWBS) and the Symptom Checklist 90 were applied as pretest and posttest. In addition, a questionnaire of five open questions was applied after each session. The intervention program was designed in an experiential workshop format, based on the foundational attitudes defined by Carl Rogers – congruence, unconditional positive regard, and empathy – and integrating humanistic intervention strategies from gestalt, psychodrama, logotherapy, and psychological body therapy, with the aim of strengthening skills in the six dimensions of the psychological well-being model. The workshop was applied to six volunteer adults in 12 sessions of 2 hours each. Finally, the quantitative data were analyzed with the Wilcoxon test in SPSS, yielding statistically significant differences in pathological symptoms between pretest and posttest, as well as increased levels in the dimensions of psychological well-being; in the qualitative strand, the open questionnaires showed how the participants experienced the techniques and changed over the sessions. Thus, the humanistic psychology program was effective in increasing psychological well-being. Working to promote well-being proves to be an effective way to reduce pathological symptoms as a secondary gain. Experiential workshops are a useful tool for small groups. More evidence of humanistic psychology interventions in different contexts is needed to promote the application of Positive Psychology knowledge. Keywords: happiness, humanistic psychology, positive psychology, psychological well-being, workshop
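The pretest/posttest comparison the study ran in SPSS can be reproduced in a few lines. The Wilcoxon signed-rank test is source-confirmed; the six participants' scores below are synthetic placeholders.

```python
# Wilcoxon signed-rank test on paired pretest/posttest PWBS totals
# for the six workshop participants (synthetic scores).
from scipy.stats import wilcoxon

pwbs_pre  = [182, 195, 170, 201, 188, 176]
pwbs_post = [205, 214, 190, 219, 203, 198]

stat, p = wilcoxon(pwbs_pre, pwbs_post)
print(f"W={stat}, p={p:.3f}")  # p < 0.05 would indicate a significant increase
```

With n = 6 pairs, a nonparametric paired test like this is the appropriate choice, since normality cannot be assumed for so small a sample.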
Procedia PDF Downloads 416587 Linguistic Analysis of Argumentation Structures in Georgian Political Speeches
Authors: Mariam Matiashvili
Abstract:
Argumentation is an integral part of our daily communication, formal or informal. Argumentative reasoning, techniques, and language tools are used both in personal conversations and in the business environment. Verbalizing opinions requires particular syntactic-pragmatic structures – arguments – that add credibility to the statement. The study of argumentative structures allows us to identify the linguistic features that make a text argumentative. Knowing what elements make up an argumentative text in a particular language helps users of that language improve their skills. Natural language processing (NLP) has also become especially relevant recently; in this context, one of the main emphases is on the computational processing of argumentative texts, which will enable the automatic recognition and analysis of large volumes of textual data. This research deals with the linguistic analysis of the argumentative structures of Georgian political speeches – particularly the linguistic structure, characteristics, and functions of the parts of the argumentative text: claims, support, and attack statements. The research aims to describe the linguistic cues that give a sentence a judgmental/controversial character and help identify the reasoning parts of an argumentative text. The empirical data come from the Georgian Political Corpus, particularly TV debates. Consequently, the texts are of a dialogical nature, representing a discussion between two or more people (most often between a journalist and a politician). The research uses the following approaches to identify and analyze the argumentative structures: Lexical Classification and Analysis – identifying lexical items that are relevant in the process of creating argumentative texts and building a lexicon of argumentation (groups of words gathered from a semantic point of view); Grammatical Analysis and Classification – grammatical analysis of the words and phrases identified on the basis of the argumentation lexicon; Argumentation Schemes – describing and identifying the argumentation schemes most likely used in Georgian political speeches. As a final step, we analyzed the relations between the above-mentioned components. For example, if an identified argumentation scheme is “Argument from Analogy”, the identified lexical items semantically express analogy too, and they are most likely adverbs in Georgian. As a result, we created a lexicon of the words that play a significant role in creating Georgian argumentative structures. The linguistic analysis has shown that verbs play a crucial role in creating argumentative structures. Keywords: georgian, argumentation schemas, argumentation structures, argumentation lexicon
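A simplified sketch of the lexicon-based step follows. The cue words are English glosses standing in for the Georgian argumentation lexicon the study builds, and the scheme labels are illustrative.

```python
# Flag candidate argumentation schemes in a sentence by matching cue
# expressions from a (hypothetical, English-glossed) argumentation lexicon.
ARGUMENTATION_LEXICON = {
    "analogy": {"similarly", "likewise", "just as"},   # often adverbs in Georgian
    "causal": {"because", "therefore", "consequently"},
}

def detect_cues(sentence: str) -> list[str]:
    lowered = sentence.lower()
    return [scheme for scheme, cues in ARGUMENTATION_LEXICON.items()
            if any(cue in lowered for cue in cues)]

print(detect_cues("Similarly, this reform will fail because funding is absent."))
# ['analogy', 'causal'] -> candidate scheme labels for the grammatical step
```

In the study's pipeline, flagged items would then pass to the grammatical analysis stage and be matched against the argumentation schemes.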
Procedia PDF Downloads 70586 Experimental Research of High Pressure Jet Interaction with Supersonic Crossflow
Authors: Bartosz Olszanski, Zbigniew Nosal, Jacek Rokicki
Abstract:
An experimental study of a cold-jet (nitrogen) reaction control system has been carried out to investigate the flow control efficiency for low to moderate jet pressure ratios (total jet pressure p0jet over free-stream static pressure in the wind tunnel p∞) and different angles of attack, at a free-stream Mach number of 2. The investigation of the jet influence was conducted on a flat plate geometry placed in the test section of the intermittent supersonic wind tunnel of the Department of Aerodynamics, WUT. Various convergent jet nozzle geometries, yielding different jet momentum ratios, were tested on the same test model geometry. Surface static pressure measurements, Schlieren flow visualizations (using continuous and photoflash light sources), and load cell measurements gave insight into the supersonic crossflow interaction for different jet pressure and jet momentum ratios, and into their influence on the efficiency of side jet control as described by the amplification factor (the ratio of the actual to the theoretical net force generated by the control nozzle). Moreover, quasi-steady numerical simulations of the flow through the same wind tunnel geometry (convergent-divergent nozzle plus test section) were performed using ANSYS Fluent, based on a Reynolds-Averaged Navier-Stokes (RANS) solver with the k-ω Shear Stress Transport (SST) turbulence model, to assess the possible spurious influence of the test section walls on the near field of the jet exit. A strong bow shock, barrel shock, and Mach disk, as well as a lambda separation region in front of the nozzle, were observed in the images taken by a high-speed camera examining the interaction of the jet and the free stream. In addition, the development of large-scale vortex structures (a counter-rotating vortex pair) was detected. The history of the complex static pressure pattern on the plate was recorded and compared to the force measurement data as well as the numerical simulation data. The analysis of the obtained results, especially in the wake of the jet, revealed important features of the interaction mechanisms between the lateral jet and the flow field. Keywords: flow visualization techniques, pressure measurements, reaction control jet, supersonic cross flow
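A back-of-envelope sketch of the two quantities the abstract defines – the jet pressure ratio and the amplification factor – using the definitions given in the text. All numerical values are hypothetical placeholders, not the experiment's data.

```python
# Jet pressure ratio and amplification factor, as defined in the abstract:
# PR = p0jet / p_inf; K = actual net force / theoretical (isolated) thrust.
p0_jet = 6.0e5        # total jet pressure [Pa], hypothetical
p_inf  = 5.4e4        # free-stream static pressure [Pa], hypothetical

jet_pressure_ratio = p0_jet / p_inf
print(f"PR = {jet_pressure_ratio:.1f}")   # "low to moderate" ratios are studied

f_actual      = 41.0  # measured net side force incl. interaction effects [N], hypothetical
f_theoretical = 36.5  # isolated-nozzle thrust [N], hypothetical
amplification = f_actual / f_theoretical
print(f"K = {amplification:.2f}")          # K > 1: the crossflow interaction helps
```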
Procedia PDF Downloads 299