Search results for: hybrid quantum algorithms
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4061

431 Exchanges between Literature and Cinema: Scripted Writing in the Novel "Miguel e os Demônios", by Lourenço Mutarelli

Authors: Marilia Correa Parecis De Oliveira

Abstract:

This research examines the novel Miguel e os demônios (2009) by the contemporary Brazilian author Lourenço Mutarelli, in which the presence of film-language resources is remarkable, creating a kind of scripted writing. We analyze the presence of film language in the work under study, which mixes the characteristics of the novel and screenplay genres, and explore the aesthetic and meaning effects that the appropriation of a visual language for the creation of a literary text produces in the novel. The objective of this research is to identify and analyze the formal and thematic aspects that characterize the hybridity of literature and film in Mutarelli's novel. The method employed comprises the reading and cataloging of theoretical and critical texts on literary and film theory, a historical review of the author, and an analytical and interpretative reading of the novel. Miguel e os demônios deploys a range of formal and thematic elements from popular narrative genres, such as the detective story and the action film, with a predominance of present-tense verb forms and noun phrases, features that tend to make the narrated scenes present, as in the cinema. The novel, in this sense, occupies an intermediate position between the literary text and the pre-film text: although filled with elements proper to the language of film, it cannot be fitted categorically into the screenplay genre, since it does not reduce to a script and aspires to be read as a novel. The difficulty of fitting the work into a single genre is therefore not explained by extra-textual factors, such as its publication as a novel; rather, it shows that binary classifications serve solely to imprison the work under a label, impoverishing not only the reading of the text but also the possibility of recognizing literature as a space of constant dialogue and interaction with other media. We can say, therefore, that framing Miguel e os demônios in either of the two genres (novel or screenplay) proves insufficient, since the text reveals itself as a hybrid narrative consisting in a kind of scripted writing. In this sense, it is a text born in a society saturated by the audiovisual in its daily life, to be consumed by readers who, on an ascending scale, exchange books for visual narratives. However, the novel uses film's resources without giving up its constitution as literature; on the contrary, it is enriched visually and linguistically, dialoguing with the complex contemporary horizon marked by the culture industry.

Keywords: Brazilian literature, cinema, Lourenço Mutarelli, screenplay

Procedia PDF Downloads 298
430 Legal Issues of Collecting and Processing Big Health Data in the Light of European Regulation 679/2016

Authors: Ioannis Iglezakis, Theodoros D. Trokanas, Panagiota Kiortsi

Abstract:

This paper aims to explore major legal issues arising from the collection and processing of Health Big Data in the light of the new European secondary legislation for the protection of personal data of natural persons, placing emphasis on the General Data Protection Regulation 679/2016. Whether Big Health Data can be characterised as ‘personal data’ or not is really the crux of the matter. The legal ambiguity is compounded by the fact that, even though the processing of Big Health Data is premised on the de-identification of the data subject, the possibility of a combination of Big Health Data with other data circulating freely on the web or from other data files cannot be excluded. Another key point is that the application of some provisions of the GDPR to Big Health Data may both absolve the data controller of his legal obligations and deprive the data subject of his rights (e.g., the right to be informed), ultimately undermining the fundamental right to the protection of personal data of natural persons. Moreover, data subjects’ rights (e.g., the right not to be subject to a decision based solely on automated processing) are heavily impacted by the use of AI, algorithms, and technologies that reclaim health data for further use, producing sometimes ambiguous results that have a substantial impact on individuals. On the other hand, as the COVID-19 pandemic has revealed, Big Data analytics can offer crucial sources of information. In this respect, this paper identifies and systematises the legal provisions concerned, offering interpretative solutions that tackle dangers concerning data subjects’ rights while embracing the opportunities that Big Health Data has to offer. In addition, particular attention is attached to the scope of ‘consent’ as a legal basis for the collection and processing of Big Health Data, as the application of data analytics to Big Health Data signals the construction of new data and subject profiles. Finally, the paper addresses the knotty problem of role assignment (i.e., distinguishing between controller and processor/joint controllers and joint processors) in an era of extensive Big Health Data sharing. The findings are the fruit of a current research project conducted by a three-member research team at the Faculty of Law of the Aristotle University of Thessaloniki and funded by the Greek Ministry of Education and Religious Affairs.

Keywords: big health data, data subject rights, GDPR, pandemic

Procedia PDF Downloads 116
429 Career Guidance System Using Machine Learning

Authors: Mane Darbinyan, Lusine Hayrapetyan, Elen Matevosyan

Abstract:

Artificial Intelligence in Education (AIED) was created to help students get ready for the workforce, and over the past 25 years it has grown significantly, offering a variety of technologies to support academic, institutional, and administrative services. However, career guidance is still challenging, especially considering the labor market's rapid change. While choosing a career, people face various obstacles because they do not take their own preferences into consideration, which can lead to many other problems such as shifting jobs, work stress, occupational infirmity, reduced productivity, and manual error. Besides preferences, people should properly evaluate their technical and non-technical skills, as well as their personalities. Professional counseling has become a difficult undertaking for counselors due to the wide range of career choices brought on by changing technological trends. It is necessary to close this gap by utilizing technology that makes sophisticated predictions about a person's career goals based on their personality. Hence, there is a need for an automated model that helps in decision-making based on user inputs. Improving career guidance can be achieved by embedding machine learning into the career consulting ecosystem. Various career guidance systems work on the same logic: classifying applicants, matching applications with appropriate departments or jobs, making predictions, and providing suitable recommendations. Methodologies such as KNN, neural networks, K-means clustering, decision trees, and many other advanced algorithms are applied to user data to help predict the right careers. Besides helping users with their career choice, these systems provide numerous opportunities that are very useful while making this hard decision. They help candidates recognize where they specifically lack sufficient skills so that they can improve them. They can also offer an e-learning platform that takes the user's knowledge gaps into account. Furthermore, users can be provided with details on a particular job, such as the abilities required to excel in that industry.
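
A minimal sketch of the classification logic such systems rely on, here a k-nearest-neighbours recommender over skill scores (the feature names, scores, and career labels below are invented for illustration, not drawn from the paper):

```python
# Hypothetical sketch of a KNN-based career recommender.
# Feature columns and career labels are illustrative only.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Each row: [math, programming, communication, creativity] scores (0-10)
X = np.array([
    [9, 9, 4, 5],
    [8, 7, 5, 4],
    [3, 2, 9, 8],
    [4, 3, 8, 9],
    [6, 4, 7, 6],
])
y = ["software engineer", "software engineer",
     "counselor", "designer", "project manager"]

model = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=3))
model.fit(X, y)

applicant = [[7, 8, 5, 6]]          # a new user's self-assessment
print(model.predict(applicant))     # -> career of the most similar profiles
```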

Keywords: career guidance system, machine learning, career prediction, predictive decision, data mining, technical and non-technical skills

Procedia PDF Downloads 70
427 Knowledge Management Barriers: A Statistical Study of Hardware Development Engineering Teams within Restricted Environments

Authors: Nicholas S. Norbert Jr., John E. Bischoff, Christopher J. Willy

Abstract:

Knowledge Management (KM) is globally recognized as a crucial element in securing competitive advantage through building and maintaining organizational memory, codifying and protecting intellectual capital and business intelligence, and providing mechanisms for collaboration and innovation. KM frameworks and approaches have been developed and defined, identifying critical success factors for conducting KM within numerous industries ranging from scientific to business, and for organization scales from small groups to large enterprises. However, engineering and technical teams operating within restricted environments are subject to unique barriers and KM challenges which cannot be directly treated using the approaches and tools prescribed for other industries. This research identifies barriers to conducting KM within Hardware Development Engineering (HDE) teams and statistically compares the significance of those barriers against the four KM pillars of organization, technology, leadership, and learning for HDE teams. HDE teams suffer from restrictions in knowledge sharing (KS) due to classification of information (national security risks), customer proprietary restrictions (non-disclosure agreements executed for designs), types of knowledge, complexity of the knowledge to be shared, and knowledge seeker expertise. As KM evolved, leveraging information technology (IT) and web-based tools and approaches from Web 1.0 to Enterprise 2.0, KM may also seek to leverage emergent tools and analytics, including expert locators and hybrid recommender systems, to enable KS across the barriers of the technical teams. The research will test hypotheses statistically evaluating whether KM barriers for HDE teams affect the general set of expected benefits of a KM system identified through previous research. If correlations can be identified, then generalizations of success factors and approaches may also be garnered for HDE teams. Expert elicitation will be conducted using a questionnaire hosted on the internet and delivered to a panel of experts including engineering managers, principal and lead engineers, senior systems engineers, and knowledge management experts. The feedback to the questionnaire will be processed using analysis of variance (ANOVA) to identify and rank statistically significant barriers of HDE teams within the four KM pillars. Subsequently, KM approaches will be recommended for upholding the KM pillars within the restricted environments of HDE teams.
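
As a sketch of the planned ANOVA step, a one-way test comparing barrier severity ratings across the four KM pillars might look like the following (the Likert ratings are invented; the study's actual questionnaire data are not public):

```python
# Hypothetical sketch: one-way ANOVA over barrier severity ratings
# grouped by KM pillar. Ratings are invented for illustration.
from scipy.stats import f_oneway

organization = [4, 5, 4, 3, 5, 4]   # Likert ratings per respondent
technology   = [2, 3, 2, 3, 2, 3]
leadership   = [4, 4, 5, 5, 4, 3]
learning     = [3, 2, 3, 3, 4, 2]

f_stat, p_value = f_oneway(organization, technology, leadership, learning)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A small p-value suggests at least one pillar's barriers differ in severity.
```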

Keywords: engineering management, knowledge barriers, knowledge management, knowledge sharing

Procedia PDF Downloads 259
426 Polyvinyl Alcohol Incorporated with Hibiscus Extract Microcapsules as Combined Active and Intelligent Composite Film for Meat Preservation

Authors: Ahmed F. Ghanem, Marwa I. Wahba, Asmaa N. El-Dein, Mohamed A. EL-Raey, Ghada E.A. Awad

Abstract:

Numerous attempts are being made to formulate suitable packaging materials for meat products. However, to the best of our knowledge, the incorporation of free hibiscus extract or its microcapsules into a pure polyvinyl alcohol (PVA) matrix as a packaging material for meats is seldom reported. Therefore, this study aims at protecting the aqueous crude extract of hibiscus flowers utilizing a spray-drying encapsulation technique. Fourier transform infrared (FTIR), scanning electron microscope (SEM), and zetasizer results confirmed the successful formation of assembled capsules via strong interactions, spherical rough microparticles, and a particle size of ~235 nm, respectively. The obtained microcapsules also enjoy high thermal stability, unlike the free extract. The spray-dried particles were then incorporated into the casting solution of the pure PVA film at a concentration of 10 wt.%. The segregated free-standing composite films were investigated, compared to the neat matrix, with several characterization techniques such as FTIR, SEM, thermal gravimetric analysis (TGA), mechanical testing, contact angle, water vapor permeability, and oxygen transmission. The results demonstrated variations in the physicochemical properties of the PVA film after the inclusion of the free extract and the extract microcapsules. Moreover, biological studies emphasized the biocidal potential of the hybrid films against microorganisms contaminating the meat. Specifically, the microcapsules imparted not only antimicrobial but also antioxidant activities to PVA. Application of the prepared films to real meat samples displayed low bacterial growth with only a slight increase in pH over a storage time of up to 10 days at 4 °C, which further proved the meat safety. Moreover, the colors of the films did not change significantly except after 21 days, indicating the spoilage of the meat samples. No doubt, the dual functionality of the prepared composite films paves the way towards combined active/smart food packaging applications. This would play a vital role in food hygiene, including quality control and assurance.

Keywords: PVA, hibiscus, extraction, encapsulation, active packaging, smart and intelligent packaging, meat spoilage

Procedia PDF Downloads 75
425 Computational Fluid Dynamics Analysis of Sit-Ski Aerodynamics in Crosswind Conditions

Authors: Lev Chernyshev, Ekaterina Lieshout, Natalia Kabaliuk

Abstract:

Sit-skis enable individuals with limited lower limb or core movement to ski unassisted confidently. The rise in popularity of the Winter Paralympics has seen an influx of engineering innovation, especially for the Downhill and Super-Giant Slalom events, where athletes achieve speeds as high as 160 km/h. The growth in the sport has inspired recent research into sit-ski aerodynamics. Crosswinds are expected in mountain climates and can therefore greatly impact a skier's maneuverability and aerodynamics. This research investigates the impact of crosswinds on the drag force of a Paralympic sit-ski using Computational Fluid Dynamics (CFD). A Paralympic sit-ski with a model of a skier, a leg cover, a bucket seat, and a simplified suspension system was used for CFD analysis in ANSYS Fluent. The hybrid initialisation tool and the SST k–ω turbulence model were used with two tetrahedral mesh bodies of influence. Crosswinds (10, 30, and 50 km/h) acting perpendicular to the sit-ski's direction of travel were simulated, corresponding to straight-line skiing speeds of 60, 80, and 100 km/h. Following initialisation, 150 iterations for both first- and second-order steady-state solvers were used before switching to a transient solver with a computational time of 1.5 s and a time step of 0.02 s, to allow the solution to converge. CFD results were validated against wind tunnel data. The results suggested that for all crosswind and sit-ski speeds, on average, 64% of the total drag on the ski was due to the athlete's torso. The suspension made the second-largest contribution to the overall sit-ski drag force, averaging 27%, followed by the leg cover at 10%, while the seat contributed a negligible 0.5% of the total drag force, averaging 1.2 N across the conditions studied. The crosswind increased the total drag force across all skiing speeds studied, with the drag on the athlete's torso and suspension being the most sensitive to changes in crosswind magnitude. The effect of the crosswind on the ski drag diminished as the simulated skiing speed increased: for skiing at 60 km/h, the drag force on the torso increased by 154% with the increase of the crosswind from 10 km/h to 50 km/h, whereas at 100 km/h the corresponding drag force increase was halved (75%). The analysis of the flow and pressure field characteristics for a sit-ski in crosswind conditions indicated that the flow separation localisation and wake size correlated with the magnitude and directionality of the crosswind relative to straight-line skiing. The findings can inform aerodynamic improvements in sit-ski design and increase skiers' medalling chances.
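
The underlying geometry, standard aerodynamic vector addition rather than anything stated in the abstract, helps explain the reported speed dependence: the crosswind and skiing velocities combine into an apparent wind with yaw angle β.

```latex
% Apparent wind for skiing speed v_s and perpendicular crosswind v_c
% (standard aerodynamic geometry, assumed here for illustration):
\[
v_{\mathrm{app}} = \sqrt{v_s^2 + v_c^2}, \qquad
\beta = \arctan\left(\frac{v_c}{v_s}\right)
\]
% At v_s = 60 km/h and v_c = 50 km/h, beta is about 39.8 degrees;
% at v_s = 100 km/h the same crosswind gives only about 26.6 degrees,
% consistent with the weaker crosswind sensitivity at higher speeds.
```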

Keywords: sit-ski, aerodynamics, CFD, crosswind effects

Procedia PDF Downloads 60
424 Localization of Radioactive Sources with a Mobile Radiation Detection System using Profit Functions

Authors: Luís Miguel Cabeça Marques, Alberto Manuel Martinho Vale, José Pedro Miragaia Trancoso Vaz, Ana Sofia Baptista Fernandes, Rui Alexandre de Barros Coito, Tiago Miguel Prates da Costa

Abstract:

The detection and localization of hidden radioactive sources are of significant importance in countering the illicit traffic of Special Nuclear Materials (SNM) and other radioactive sources and materials. Radiation portal monitors are commonly used at airports, seaports, and international land borders for inspecting cargo and vehicles. However, such equipment can be expensive and is not available at all checkpoints. Consequently, the localization of SNM and other radioactive sources often relies on handheld equipment, which can be time-consuming. The current study presents the advantages of real-time analysis of gamma-ray count rate data from a mobile radiation detection system, based on simulated data and field tests. The incorporation of profit functions and decision criteria to optimize the detection system's path significantly enhances the radiation field information and reduces survey time during cargo inspection. For source position estimation, a maximum likelihood estimation algorithm is employed, and confidence intervals are derived using the Fisher information. The study also explores the impact of uncertainties, baselines, and thresholds on the performance of the profit function. The proposed detection system, utilizing a plastic scintillator with silicon photomultiplier sensors, boasts several benefits, including cost-effectiveness, high geometric efficiency, compactness, and lightweight design. This versatility allows for seamless integration into any mobile platform, be it air, land, maritime, or hybrid, and it can also serve as a handheld device. Furthermore, the integration of the detection system into drones, particularly multirotors, together with its affordability, enables the automation of source search and a substantial reduction in survey time, particularly when deploying a fleet of drones. While the primary focus is on inspecting maritime container cargo, the methodologies explored in this research can be applied to the inspection of other infrastructures, such as nuclear facilities or vehicles.
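
A toy sketch of the maximum likelihood step (the abstract does not give the exact likelihood; here an inverse-square count model with Poisson noise and a known strength and background is assumed, and the source position is found by grid search):

```python
# Hypothetical sketch: grid-search maximum likelihood estimate of a point
# source position from gamma counts along a detector path.
# Model (assumed): expected counts = background + strength / (distance^2 + 1).
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(0)
det_x = np.linspace(0.0, 10.0, 21)            # detector positions on path (m)
true_x, strength, background = 6.0, 400.0, 5.0
expected = background + strength / ((det_x - true_x) ** 2 + 1.0)
counts = rng.poisson(expected)                # simulated measurements

# Grid search over candidate source positions; pick max log-likelihood.
candidates = np.linspace(0.0, 10.0, 201)

def loglik(x0):
    mu = background + strength / ((det_x - x0) ** 2 + 1.0)
    return poisson.logpmf(counts, mu).sum()

best = max(candidates, key=loglik)
print(f"estimated source position: {best:.2f} m (true: {true_x} m)")
```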

Keywords: plastic scintillators, profit functions, path planning, gamma-ray detection, source localization, mobile radiation detection system, security scenario

Procedia PDF Downloads 96
423 Computational Linguistic Implications of Gender Bias: Machines Reflect Misogyny in Society

Authors: Irene Yi

Abstract:

Machine learning, natural language processing, and neural network models of language are becoming more and more prevalent in the fields of technology and linguistics today. Training data for machines are, at best, large corpora of human literature and, at worst, a reflection of the ugliness in society. Computational linguistics is a growing field dealing with such issues of data collection for technological development. Machines have been trained on millions of human books, only to find that over the course of human history, derogatory and sexist adjectives are used significantly more frequently when describing females in history and literature than when describing males. This is extremely problematic, both as training data and as the outcome of natural language processing. As machines start to handle more responsibilities, it is crucial to ensure that they do not carry forward historical sexist and misogynistic notions. This paper gathers data and algorithms from neural network models of language dealing with syntax, semantics, sociolinguistics, and text classification. Computational analysis of such linguistic data is used to find patterns of misogyny. Results are significant in showing the existing intentional and unintentional misogynistic notions used to train machines, as well as in developing better technologies that take into account the semantics and syntax of text to be more mindful and reflect gender equality. Further, this paper deals with the idea of non-binary gender pronouns and how machines can process these pronouns correctly given their semantic and syntactic context. This paper also delves into the implications of gendered grammar and its effects, cross-linguistically, on natural language processing. Languages such as French or Spanish not only have rigid gendered grammar rules but also historically patriarchal societies. The progression of society goes hand in hand not only with its language but also with how machines process those natural languages. These ideas are all extremely vital to the development of natural language models in technology, and they must be taken into account immediately.
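
A toy sketch of the kind of corpus measurement at issue, counting which words co-occur with gendered pronouns (the corpus, word lists, and window size are deliberately minimal and illustrative):

```python
# Toy sketch: frequency of descriptors appearing near gendered pronouns.
# Word lists and corpus are illustrative, not the study's actual data.
from collections import Counter
import re

corpus = ("She was hysterical and shrill. He was brilliant and decisive. "
          "She was lovely. He was ambitious.")
female, male = {"she", "her"}, {"he", "him", "his"}
tokens = re.findall(r"[a-z]+", corpus.lower())

counts = {"female": Counter(), "male": Counter()}
for i, tok in enumerate(tokens):
    gender = "female" if tok in female else "male" if tok in male else None
    if gender:                           # attribute nearby words (window = 3)
        counts[gender].update(tokens[i + 1:i + 4])

print(counts["female"].most_common(3))
print(counts["male"].most_common(3))
```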

Keywords: computational analysis, gendered grammar, misogynistic language, neural networks

Procedia PDF Downloads 105
422 Improvising Grid Interconnection Capabilities through Implementation of Power Electronics

Authors: Ashhar Ahmed Shaikh, Ayush Tandon

Abstract:

The swift depletion of fossil fuels has created a crucial need for alternative energy sources to meet vital demand. It is essential to boost alternative energy sources to cover the continuously increasing demand for energy while minimizing negative environmental impacts. Solar energy is one of the reliable sources that can generate energy. Solar energy is freely available in nature, completely eco-friendly, and considered among the most promising power-generating sources due to its easy availability and other advantages for local power generation. This paper reviews the implementation of power electronic devices through the Solar Energy Grid Integration System (SEGIS) to increase efficiency. It also concentrates on the future grid infrastructure and various other applications in order to make the grid smart. The development and implementation of power electronic devices such as PV inverters and power controllers play an important role in power supply in the modern energy economy. SEGIS opens pathways to promising solutions for new electronic and electrical components, such as advanced innovative inverter/controller topologies and their functions, economical energy management systems, innovative energy storage systems equipped with advanced control algorithms, advanced maximum-power-point tracking (MPPT) suited to all PV technologies, and the associated protocols and communications. In addition to advanced grid interconnection capabilities and features, the new hardware design results in small size, less maintenance, and higher reliability. SEGIS will make the 'advanced integrated system' and 'smart grid' evolutionary processes run in a better way. Over the last few years, there has been major development in the field of power electronics, leading to more efficient systems and a reduction of the cost per kilowatt. Inverters have become more efficient, reaching efficiencies in excess of 98%, and commercial solar modules have reached almost 21% efficiency.
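
As an illustration of one inverter function named above, here is a bare-bones perturb-and-observe MPPT loop; this is the generic textbook algorithm, not a SEGIS-specific implementation, and the PV power curve is a toy stand-in:

```python
# Generic perturb-and-observe MPPT sketch (illustrative, not SEGIS-specific).
def p_and_o_step(v, p, v_prev, p_prev, step=0.5):
    """Return the next operating-voltage setpoint for a PV panel."""
    # If the last perturbation increased power, keep moving the same way;
    # otherwise reverse direction.
    direction = 1.0 if v >= v_prev else -1.0
    if p < p_prev:
        direction = -direction
    return v + direction * step

# Toy PV curve: power peaks at 30 V (real curves come from the panel).
pv_power = lambda v: max(0.0, -0.5 * (v - 30.0) ** 2 + 450.0)

v_prev, v = 20.0, 20.5
p_prev = pv_power(v_prev)
for _ in range(100):
    p = pv_power(v)
    v_next = p_and_o_step(v, p, v_prev, p_prev)
    v_prev, p_prev, v = v, p, v_next
print(f"operating point oscillates near the maximum power point: {v:.1f} V")
```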

Keywords: solar energy grid integration systems, smart grid, advanced integrated system, power electronics

Procedia PDF Downloads 171
421 The Key Role of a Bystander Improving the Effectiveness of Cardiopulmonary Resuscitation Performed in Extra-Urban Areas

Authors: Leszek Szpakowski, Daniel Celiński, Sławomir Pilip, Grzegorz Michalak

Abstract:

The aim of the study was to analyse the usefulness of the 'E-rescuer' pilot project, planned to be implemented in a chosen area of Eastern Poland, in cases of suspected sudden cardiac arrest in extra-urban areas. The crucial assumption of the pilot project is an application that simultaneously dispatches both Medical Emergency Teams and the E-rescuer to the scene of the incident. The E-rescuer is defined as a trained person able to provide effective basic life support and to use an automated external defibrillator. Having logged in using a smartphone, the E-rescuer reports readiness online to provide cardiopulmonary resuscitation at the given location. Because the E-rescuer's location is accurately defined, his arrival time can be precisely estimated, and substantive support can be provided through displayed algorithms as well. Analysing the medical records for the years 2015-2016, cardiopulmonary resuscitation was considered effective when an early indication of circulation was achieved and the patient was taken to hospital. In that period, there were 2,291 cases of sudden cardiac arrest. Cardiopulmonary resuscitation was undertaken in 621 patients in total, including 205 people in the urban area and 416 in extra-urban areas. The effectiveness of cardiopulmonary resuscitation in the extra-urban areas was much lower (33.8%) than in the urban area (50.7%). The average ambulance arrival time was correspondingly longer in the extra-urban areas: 12.3 minutes, versus 3.3 minutes in the urban area. There was no significant difference in the average age of the studied patients (62.5 and 64.8 years old). However, the average ambulance arrival time was 7.6 minutes for effective resuscitations and 10.5 minutes for ineffective ones. Hence, the ambulance arrival time is a crucial factor influencing the effectiveness of cardiopulmonary resuscitation, especially in extra-urban areas, where it is much longer than in urban areas. Trained E-rescuers nearby, providing basic life support before the ambulance arrives, can play a key role in effectively supporting the Emergency Medical Services System in Poland.
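
A sketch of the core dispatch logic such an application implies: choosing the logged-in E-rescuer nearest the incident by great-circle distance (the coordinates and identifiers are invented):

```python
# Hypothetical sketch: pick the nearest available E-rescuer to an incident.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * asin(sqrt(a))

rescuers = {                      # id -> last reported position (invented)
    "rescuer_a": (52.10, 22.30),
    "rescuer_b": (52.05, 22.45),
    "rescuer_c": (52.20, 22.10),
}
incident = (52.08, 22.40)

nearest = min(rescuers, key=lambda r: haversine_km(*rescuers[r], *incident))
print(f"dispatch {nearest}, "
      f"{haversine_km(*rescuers[nearest], *incident):.1f} km from incident")
```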

Keywords: basic life support, bystander, effectiveness, resuscitation

Procedia PDF Downloads 188
420 Barriers for Appropriate Palliative Symptom Management: A Qualitative Research in Kazakhstan, a Medium-Income Transitional-Economy Country

Authors: Ibragim Issabekov, Byron Crape, Lyazzat Toleubekova

Abstract:

Background: Palliative care substantially improves the quality of life of terminally-ill patients. Symptom control is one of the keystones in the management of patients in palliative care settings, lowering distress as well as improving the quality of life of patients with end-stage diseases. The most common symptoms causing significant distress for patients are pain, nausea and vomiting, increased respiratory secretions, and mental health issues like depression. The aims are: 1. to identify best practices in symptom management for palliative patients in accordance with internationally approved guidelines, compare them with actual practices in Kazakhstan, and evaluate the criteria for assessing symptoms in terminally-ill patients; 2. to review the availability and utilization of pharmaceutical agents for pain control and the management of excessive respiratory secretions, nausea and vomiting, and delirium; and 3. to develop recommendations for a systematic approach to end-of-life symptom management in Kazakhstan. Methods: Qualitative research methods together with a systematic literature review have been employed to provide a rigorous research process for evaluating current approaches to symptom management of palliative patients in Kazakhstan. The qualitative methods include in-depth semi-structured interviews of the healthcare professionals involved in palliative care provision. Results: Obstacles were found in the appropriate provision of palliative care. Inadequate education and training to manage severe symptoms, poorly defined laws and regulations for palliative care provision, and a lack of algorithms and guidelines for care were major barriers to the effective provision of palliative care. Conclusion: Assessment of palliative care in this medium-income transitional-economy country is one of the first steps in the integration of palliative care into the existing health system. Achieving this requires identifying obstacles and resolving these issues.

Keywords: end-of-life care, middle income country, palliative care, symptom control

Procedia PDF Downloads 190
419 Constructing a Semi-Supervised Model for Network Intrusion Detection

Authors: Tigabu Dagne Akal

Abstract:

While advances in computer and communications technology have made the network ubiquitous, they have also rendered networked systems vulnerable to malicious attacks devised from a distance. These attacks or intrusions start with attackers infiltrating a network through a vulnerable host and then launching further attacks on the local network or intranet. Nowadays, system administrators and network professionals can attempt to prevent such attacks by developing intrusion detection tools and systems using data mining technology. In this study, the experiments were conducted following the Knowledge Discovery in Databases (KDD) process model, which starts with the selection of datasets. The dataset used in this study was taken from the Massachusetts Institute of Technology Lincoln Laboratory. The data were then pre-processed. The major pre-processing activities included filling in missing values, removing outliers, resolving inconsistencies, integrating data containing both labelled and unlabelled records, dimensionality reduction, size reduction, and data transformation activities such as discretization. A total of 21,533 intrusion records were used for training the models. For validating the performance of the selected model, a separate set of 3,397 records was used as a testing set. For building a predictive model for intrusion detection, the J48 decision tree and Naïve Bayes algorithms were tested as classification approaches, both with and without feature selection. The model created using 10-fold cross-validation with the J48 decision tree algorithm and default parameter values showed the best classification accuracy, with a prediction accuracy of 96.11% on the training dataset and 93.2% on the test dataset in classifying new instances as normal, DOS, U2R, R2L, or probe classes. The findings of this study show that data mining methods generate interesting rules that are crucial for intrusion detection and prevention in the networking industry. Future research directions are put forward to arrive at an applicable system in the area of study.
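
A rough scikit-learn analogue of the experiment described (synthetic records stand in for the Lincoln Laboratory data, and sklearn's CART tree only approximates WEKA's J48/C4.5):

```python
# Rough analogue of the paper's setup: decision tree vs. Naive Bayes with
# 10-fold cross-validation. Data here are synthetic stand-ins.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 10))        # stand-in connection features
y = rng.integers(0, 5, size=1000)      # normal, DOS, U2R, R2L, probe

for name, clf in [("decision tree", DecisionTreeClassifier()),
                  ("naive Bayes", GaussianNB())]:
    scores = cross_val_score(clf, X, y, cv=10)
    print(f"{name}: mean 10-fold accuracy = {scores.mean():.3f}")
```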

Keywords: intrusion detection, data mining, computer science

Procedia PDF Downloads 287
418 Alternative Approach to the Machine Vision System Operating for Solving Industrial Control Issue

Authors: M. S. Nikitenko, S. A. Kizilov, D. Y. Khudonogov

Abstract:

The paper considers an approach to a machine vision operating system combined with a grid of light markers. This approach is used to solve several scientific and technical problems, such as measuring the capacity of an apron feeder delivering coal from a lining return port to a conveyor in the technology of mining high coal with release to a conveyor, and prototyping an autonomous vehicle obstacle detection system. Primary verification of a method for calculating bulk material volume using three-dimensional modeling was carried out, with validation in laboratory conditions and calculation of relative errors. A method of calculating the capacity of an apron feeder based on a machine vision system is offered, together with a simplified technology for three-dimensional modeling of the examined measuring area with machine vision. The proposed method allows measuring the volume of rock mass moved by an apron feeder using machine vision. This approach solves the issue of controlling the volume of coal delivered by a feeder to a conveyor while working off high coal with longwall complexes, with accuracy suitable for practical application. The developed mathematical apparatus for measuring feeder productivity in kg/s uses only basic mathematical functions such as addition, subtraction, multiplication, and division. This simplifies software development and expands the variety of microcontrollers and microcomputers suitable for the task of calculating feeder capacity. A feature of the obstacle detection issue is correcting distortions of the laser grid, which simplifies obstacle detection. The paper presents algorithms for video camera image processing and autonomous vehicle model control based on obstacle detection machine vision systems. A sample fragment of obstacle detection at the moment of laser grid distortion is demonstrated.
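
A sketch of the volume-to-capacity arithmetic the authors emphasise, using only elementary operations (the grid values, cell pitch, belt speed, and bulk density are invented):

```python
# Hypothetical sketch: apron feeder capacity in kg/s from a machine-vision
# height map. Only +, -, *, / are needed, as the paper emphasises.
# heights[i][j]: rock-pile height (m) at each laser-grid cell.
heights = [
    [0.10, 0.20, 0.15],
    [0.25, 0.30, 0.20],
    [0.15, 0.20, 0.10],
]
cell_area_m2 = 0.05 * 0.05     # laser-grid cell footprint (assumed 5 cm pitch)
belt_speed_m_s = 0.8           # feeder speed (assumed)
bulk_density = 900.0           # loose coal density, kg/m^3 (assumed)
strip_length_m = 3 * 0.05      # length of measured strip along the feeder

volume_m3 = sum(h * cell_area_m2 for row in heights for h in row)
# Mass flow = (measured volume / strip length) * belt speed * density
capacity_kg_s = volume_m3 / strip_length_m * belt_speed_m_s * bulk_density
print(f"feeder capacity: {capacity_kg_s:.2f} kg/s")
```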

Keywords: machine vision, machine vision operating system, light markers, measuring capability, obstacle detection system, autonomous transport

Procedia PDF Downloads 101
417 Kinematic Analysis of the Calf Raise Test Using a Mobile iOS Application: Validation of the Calf Raise Application

Authors: Ma. Roxanne Fernandez, Josie Athens, Balsalobre-Fernandez, Masayoshi Kubo, Kim Hébert-Losier

Abstract:

Objectives: The calf raise test (CRT) is used in rehabilitation and sports medicine to evaluate calf muscle function. For testing, individuals stand on one leg and go up on their toes and back down to volitional fatigue. The newly developed Calf Raise application (CRapp) for iOS uses computer-vision algorithms enabling objective measurement of CRT outcomes. We aimed to validate the CRapp by examining its concurrent validity and agreement levels against laboratory-based equipment and establishing its intra- and inter-rater reliability. Methods: CRT outcomes (i.e., repetitions, positive work, total height, peak height, fatigue index, and peak power) were assessed in thirteen healthy individuals (6 males, 7 females) on three occasions and both legs using the CRapp, 3D motion capture, and force plate technologies simultaneously. Data were extracted from two markers: one placed immediately below the lateral malleolus and another on the heel. Concurrent validity and agreement measures were determined using intraclass correlation coefficients (ICC₃,ₖ), typical errors expressed as coefficients of variation (CV), and Bland-Altman methods to assess biases and precision. Reliability was assessed using ICC₃,₁ and CV values. Results: Validity of CRapp outcomes was good to excellent across measures for both markers (mean ICC ≥0.878), with precision plots showing good agreement and precision. CV ranged from 0% (repetitions) to 33.3% (fatigue index) and were, on average, better for the lateral malleolus marker. Additionally, inter- and intra-rater reliability were excellent (mean ICC ≥0.949, CV ≤5.6%). Conclusion: These results confirm the CRapp is valid and reliable within and between users for measuring CRT outcomes in healthy adults. The CRapp provides a tool to objectivise CRT outcomes in research and practice, aligning with recent advances in mobile technologies and their increased use in healthcare.
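
A minimal sketch of the Bland-Altman agreement step (numpy only; the paired measurements below are invented, not the study's data):

```python
# Hypothetical sketch: Bland-Altman bias and 95% limits of agreement
# between CRapp and motion-capture measurements of the same outcome.
import numpy as np

crapp = np.array([24.1, 19.8, 30.5, 22.0, 27.3])  # e.g. total height (cm)
mocap = np.array([23.7, 20.4, 30.1, 22.6, 26.8])

diff = crapp - mocap
bias = diff.mean()                       # systematic difference
loa = 1.96 * diff.std(ddof=1)            # half-width of 95% limits
print(f"bias = {bias:.2f}, limits of agreement = "
      f"[{bias - loa:.2f}, {bias + loa:.2f}]")
```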

Keywords: calf raise test, mobile application, validity, reliability

Procedia PDF Downloads 152
416 Study on Control Techniques for Adaptive Impact Mitigation

Authors: Rami Faraj, Cezary Graczykowski, Błażej Popławski, Grzegorz Mikułowski, Rafał Wiszowaty

Abstract:

Progress in the fields of sensors, electronics, and computing results in more and more frequent applications of adaptive techniques for dynamic response mitigation. When it comes to systems excited by mechanical impacts, the control system has to take into account the significant limitations of the actuators responsible for system adaptation. The paper provides a comprehensive discussion of the problem of appropriate design and implementation of adaptation techniques and mechanisms. Two case studies are presented in order to compare completely different adaptation schemes. The first example concerns a double-chamber pneumatic shock absorber with a fast piezoelectric valve and parameters corresponding to the suspension of a small unmanned aerial vehicle, whereas the second considered system is a safety air cushion used for the evacuation of people from heights during a fire. For both systems it is possible to ensure adaptive performance, but the realization of the system's adaptation is completely different. The reason for this is the technical limitations corresponding to the specific types of shock-absorbing devices and their parameters. Impact mitigation using a pneumatic shock absorber involves much higher pressures and small mass flow rates, which can be achieved with minimal changes of valve opening. In turn, mass flow rates in safety air cushions relate to gas release areas counted in thousands of square centimetres. Because of these facts, the two shock-absorbing systems are controlled using completely different approaches. The pneumatic shock absorber takes advantage of real-time control, with the valve opening recalculated at least every millisecond. In contrast, the safety air cushion is controlled using a semi-passive technique, where adaptation is provided using a prediction of the entire impact mitigation process. Similarities between both approaches, including the applied models, algorithms, and equipment, are discussed. The entire study is supported by numerical simulations and experimental tests, which prove the effectiveness of both adaptive impact mitigation techniques.

Keywords: adaptive control, adaptive system, impact mitigation, pneumatic system, shock-absorber

Procedia PDF Downloads 79
415 A Concept in Addressing the Singularity of the Emerging Universe

Authors: Mahmoud Reza Hosseini

Abstract:

The universe is in a continuous expansion process, resulting in the reduction of its density and temperature. Also, by extrapolating back from its current state, the universe at its early times has been studied, leading to what is known as the big bang theory. According to this theory, moments after creation, the universe was an extremely hot and dense environment. However, its rapid expansion led to a reduction in its temperature and density, as evidenced by the cosmic microwave background and the large-scale structure of the universe. However, extrapolating back further from this early state reaches a singularity, which cannot be explained by modern physics, and the big bang theory is no longer valid there. In addition, one would expect a nonuniform energy distribution across the universe from a sudden expansion; yet highly accurate measurements reveal an equal temperature mapping across the universe, which contradicts big bang principles. To resolve this issue, it is believed that cosmic inflation occurred at the very early stages of the birth of the universe. According to the cosmic inflation theory, the elements which formed the universe underwent a phase of exponential growth due to the existence of a large cosmological constant. The inflation phase allows a uniform distribution of energy, so that an equal maximum temperature could be achieved across the early universe. Also, the evidence of quantum fluctuations at this stage provides a means for studying the types of imperfections the universe would begin with. Although well-established theories such as cosmic inflation and the big bang together provide a comprehensive picture of the early universe and how it evolved into its current state, they are unable to address the singularity paradox at the time of the universe's creation. Therefore, a practical model capable of describing how the universe was initiated is needed. This research series aims at addressing the singularity issue by introducing an energy conversion mechanism. This is accomplished by establishing a state of energy called a 'neutral state', with an energy level referred to as 'base energy', capable of converting into other states. Although it follows the same principles, the unique quantum state of the base energy allows it to be distinguishable from other states and to have a uniform distribution at the ground level. Although the concept of base energy can be utilized to address the singularity issue, to establish a complete picture the origin of the base energy should also be identified. This matter is the subject of the first study in the series, 'A Conceptual Study for Investigating the Creation of Energy and Understanding the Properties of Nothing', which is discussed in detail. Therefore, the concept proposed in this research series provides a road map for enhancing our understanding of the universe's creation from nothing and its evolution, and discusses the possibility of base energy as one of the main building blocks of this universe.

Keywords: big bang, cosmic inflation, birth of universe, energy creation

Procedia PDF Downloads 77
414 Energy Usage in Isolated Areas of Honduras

Authors: Bryan Jefry Sabillon, Arlex Molina Cedillo

Abstract:

Currently, the rise in the demand for electrical energy as a consequence of technological development and population growth, as well as projections made by 'La Agencia Internacional de la Energía' (AIE) and research institutes, reveals alarming data about the expected rise in demand over the next few decades. Because of this, awareness should be raised about the rational and efficient usage of this resource. It is essential to boost alternative energy sources to cover the continuously increasing demand for energy while minimizing negative environmental impacts. Solar energy is one of the reliable sources that can generate energy. Solar energy is freely available in nature, completely eco-friendly, and considered one of the most promising power-generating sources due to its easy availability and other advantages for local power generation. This article focuses on the great potential that Honduras shows as a country looking forward to producing renewable energy due to the crisis it is living through nowadays. We present detailed research exhibiting the main necessities that rural communities face today, to allay the negative aspects of the scarcity of electrical energy. We also discuss the type of electrical generation method to be used according to the disposition, geography, climate, and, of course, accessibility of each area. Honduras is actually in the process of developing new methods for the generation of energy; therefore, it is of our concern to talk about renewable energy, the exploitation of which is a global trend. Right now, the country's main energy generation methods are hydrological, thermal, wind, biomass, and photovoltaic (one of the main sources of clean electrical generation). The use of these resources was made possible partially by the studies carried out by organizations that focus on electrical energy and its demand, such as 'La Cooperación Alemana' (GIZ), 'La Secretaria de Energía y Recursos Naturales' (SERNA), and 'El Banco Centroamericano de Integración Económica' (BCIE), which provided the complete guide to the protocol to be followed in the three stages of this type of project: 1) licenses and permits, 2) financial aspects, and 3) inscription under the Kyoto Protocol. This article takes the reader through the necessary information (according to the difficult accessibility each zone might present) about the best option for electrical generation in zones that are totally isolated from the grid, aiming to use renewable resources to generate electrical energy. We finally conclude that the usage of hybrid energy generation systems for small remote communities brings about a positive impact, not only by providing electrical energy but also through improvements in education, health, sustainable agriculture and livestock, and, of course, advances in the generation of energy, which is the main concern of this whole article.

Keywords: energy, isolated, renewable, accessibility

Procedia PDF Downloads 220
413 Preparation of Allyl BODIPY for the Click Reaction with Thioglycolic Acid

Authors: Chrislaura Carmo, Luca Deiana, Mafalda Laranjo, Abilio Sobral, Armando Cordova

Abstract:

Photodynamic therapy (PDT) is currently used for the treatment of malignancies and premalignant tumors. It is based on the uptake of a photosensitizing molecule (PS) which, when excited by light at a certain wavelength, reacts with oxygen and generates oxidizing species (radicals, singlet oxygen, triplet species) in target tissues, leading to cell death. BODIPY (4,4-difluoro-4-bora-3a,4a-diaza-s-indacene) derivatives are emerging as important candidate photosensitizers for photodynamic therapy of cancer cells due to their high triplet quantum yield. Today these dyes are relevant molecules in photovoltaic materials and fluorescent sensors. In this study, we demonstrate that BODIPY can be covalently linked to thioglycolic acid through a click reaction. Thiol−ene click chemistry has become a powerful synthesis method in materials science and surface modification. The design of biobased allyl-terminated precursors with high renewable carbon content for the construction of thiol-ene polymer networks is essential for sustainable development and green chemistry. This work aims to synthesize the BODIPY (10-(4-(allyloxy) phenyl)-2,8-diethyl-5,5-difluoro-1,3,7,9-tetramethyl-5H-dipyrrolo[1,2-c:2',1'-f] [1,3,2] diazaborinin-4-ium-5-uide) and perform its click reaction with thioglycolic acid. BODIPY was synthesized by the condensation reaction between an aldehyde and pyrrole in dichloromethane, followed by in situ complexation with BF3·OEt2 in the presence of a base. It was then functionalized with allyl bromide to introduce the double bond and thus enable the click reaction. The thiol−ene click was performed using DMPA (2,2-dimethoxy-2-phenylacetophenone) as a photo-initiator in the presence of UV light (320–500 nm) in DMF at room temperature for 24 hours. Compounds were characterized by standard analytical techniques, including UV-Vis spectroscopy, 1H, 13C, and 19F NMR, and mass spectrometry. The results of this study will be important for linking BODIPY to polymers through the thiol group, offering a diversity of applications and functionalizations. This new molecule can be tested among third-generation photosensitizers, in which the dye is targeted to cells by antibodies or nanocarriers, mainly in cancer cells, for PDT and Photodynamic Antimicrobial Chemotherapy (PACT). According to our studies, it was possible to observe a click reaction between allyl BODIPY and thioglycolic acid. Our team will also test the reaction with other thiol groups for comparison. Further, we will perform the click reaction of BODIPY with a natural polymer linked to a thiol group. The resulting compounds will be tested in PDT assays on various lung cancer cell lines.

Keywords: bodipy, click reaction, thioglycolic acid, allyl, thiol-ene click

Procedia PDF Downloads 116
412 Assessment of Hydrologic Response of a Naturalized Tropical Coastal Mangrove Ecosystem Due to Land Cover Change in an Urban Watershed

Authors: Bryan Clark B. Hernandez, Eugene C. Herrera, Kazuo Nadaoka

Abstract:

Mangrove forests thriving in intertidal zones in tropical and subtropical regions of the world offer a range of ecosystem services, including carbon storage and sequestration. They can mitigate the detrimental effects of climate change through carbon storage two to four times greater than that of mature tropical rainforests. Moreover, they are effective natural defenses against storm surges and tsunamis. However, their proliferation depends significantly on the prevailing hydroperiod at the coast. In the Philippines, these coastal ecosystems have been severely threatened, with a 50% decline in areal extent observed from 1918 to 2010. The greatest decline occurred in 1950 - 1972, when national policies encouraged the development of fisheries and aquaculture. With intensive land use conversion upstream, changes in the freshwater-saltwater envelope at the coast may considerably impact mangrove growth conditions. This study investigates a developing urban watershed in Kalibo, Aklan province, with a 220-hectare mangrove forest replanted over 30 years on coastal mudflats. Since then, the mangrove forest has been sustainably conserved and declared a protected area. A hybrid land cover classification technique was used to classify Landsat images for the years 1990, 2010, and 2017. The digital elevation model utilized was Interferometric Synthetic Aperture Radar (IFSAR) with a 5-meter resolution to delineate the watersheds. Using numerical modelling techniques, the hydrologic and hydraulic analysis of the influence of land cover change on flow and sediment dynamics was simulated. While significant land cover change occurred upland, thereby increasing runoff and sediment loads, the abundance of mangrove forests adjacent to the coast of the urban watershed was somehow sustained. However, significant alteration of the coastline was observed in Kalibo over the years, probably due to the massive land-use conversion upstream and the significant replanting of mangroves downstream. Understanding the hydrologic-hydraulic response of these watersheds to changing land cover is essential to helping local government and stakeholders facilitate better management of these mangrove ecosystems.

Keywords: coastal mangroves, hydrologic model, land cover change, Philippines

Procedia PDF Downloads 113
411 Real-Time Data Stream Partitioning over a Sliding Window in Real-Time Spatial Big Data

Authors: Sana Hamdi, Emna Bouazizi, Sami Faiz

Abstract:

In recent years, real-time spatial applications, like location-aware services and traffic monitoring, have become more and more important. Such applications result in dynamic environments where data as well as queries are continuously moving. As a result, there is a tremendous amount of real-time spatial data generated every day. The growth of the data volume seems to outspeed the advance of our computing infrastructure. For instance, in real-time spatial Big Data, users expect to receive the results of each query within a short time period regardless of the load on the system. But with a huge amount of real-time spatial data generated, system performance degrades rapidly, especially in overload situations. To solve this problem, we propose the use of data partitioning as an optimization technique. Traditional horizontal and vertical partitioning can increase the performance of the system and simplify data management, but they remain insufficient for real-time spatial Big Data: they cannot deal with real-time and stream queries efficiently. Thus, in this paper, we propose a novel data partitioning approach for real-time spatial Big Data named VPA-RTSBD (Vertical Partitioning Approach for Real-Time Spatial Big Data). This contribution is an implementation of the Matching algorithm for traditional vertical partitioning. We first find the optimal attribute sequence by using the Matching algorithm. Then, we propose a new cost model for database partitioning that keeps the data amount of each partition more balanced and provides parallel execution guarantees for the most frequent queries. VPA-RTSBD aims to obtain a real-time partitioning scheme and deals with stream data. It improves the performance of query execution by maximizing the degree of parallel execution. This improves QoS (Quality of Service) in real-time spatial Big Data, especially with a huge volume of stream data. The performance of our contribution is evaluated via simulation experiments. The results show that the proposed algorithm is both efficient and scalable and that it outperforms comparable algorithms.
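
A sketch of the Hamming-distance idea behind such attribute ordering: attributes whose query-usage vectors are close in Hamming distance are candidates for the same vertical fragment (the usage matrix below is invented, not the paper's):

```python
# Hypothetical sketch: Hamming distance between attribute usage vectors.
# usage[a][q] = 1 if query q reads attribute a. Attributes with small mutual
# Hamming distance are good candidates for the same vertical partition.
usage = {
    "obj_id":    [1, 1, 1, 1],
    "lat":       [1, 1, 0, 1],
    "lon":       [1, 1, 0, 1],
    "speed":     [0, 0, 1, 0],
    "timestamp": [0, 0, 1, 0],
}

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

attrs = list(usage)
for i, a in enumerate(attrs):
    for b in attrs[i + 1:]:
        print(a, b, hamming(usage[a], usage[b]))
# lat/lon (distance 0) group together; speed/timestamp form another fragment.
```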

Keywords: real-time spatial big data, quality of service, vertical partitioning, horizontal partitioning, matching algorithm, hamming distance, stream query

Procedia PDF Downloads 147
410 Factors Impacting Geostatistical Modeling Accuracy and Modeling Strategy of Fluvial Facies Models

Authors: Benbiao Song, Yan Gao, Zhuo Liu

Abstract:

Geostatistical modeling is the key technique for reservoir characterization; the quality of geological models greatly influences the prediction of reservoir performance, but few studies have been done to quantify the factors impacting geostatistical reservoir modeling accuracy. In this study, 16 fluvial prototype models were established to represent different degrees of geological complexity, and 6 cases ranging from 16 to 361 wells were defined to reproduce all 16 prototype models using different methodologies, including SIS, object-based, and MPFS algorithms, with different constraint parameters. A modeling accuracy ratio was defined to quantify the influence of each factor, and ten realizations were averaged to represent each accuracy ratio under the same modeling condition and parameter association. In total, 5,760 simulations were run to quantify the relative contribution of each factor to simulation accuracy, and the results can be used as a strategy guide for facies modeling under similar conditions. It is found that data density, geological trend, and geological complexity have a great impact on modeling accuracy. Modeling accuracy may reach up to 90% when channel sand width reaches 1.5 times the well spacing under any condition with the SIS and MPFS methods. When well density is low, the contribution of the geological trend may increase the modeling accuracy from 40% to 70%, while the use of a proper variogram may make only a very limited contribution for the SIS method. This implies that when well data are dense enough to cover simple geobodies, little effort is needed to construct an acceptable model; when geobodies are complex and data are insufficient, it is better to construct a set of robust geological trends than to rely on a variogram function. For the object-based method, modeling accuracy does not increase with data density as obviously as for the SIS method, but it retains a rational appearance when data density is low. MPFS methods show a similar trend to the SIS method, and the use of a proper geological trend accompanied by a rational variogram may yield better modeling accuracy. It is implied that the geological modeling strategy for a real reservoir case needs to be optimized by evaluating the dataset, geological complexity, geological constraint information, and the modeling objective.
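
A sketch of the accuracy-ratio bookkeeping implied here: the fraction of grid cells whose simulated facies match the prototype, averaged over realizations (grids and noise level are synthetic stand-ins, not the study's models):

```python
# Hypothetical sketch of the accuracy-ratio bookkeeping: fraction of grid
# cells whose simulated facies match the prototype, averaged over realizations.
import numpy as np

rng = np.random.default_rng(1)
prototype = rng.integers(0, 2, size=(100, 100))          # stand-in facies grid
realizations = [np.where(rng.random((100, 100)) < 0.9,   # 10 noisy copies
                         prototype, 1 - prototype) for _ in range(10)]

ratios = [np.mean(r == prototype) for r in realizations]
print(f"mean modeling accuracy ratio: {np.mean(ratios):.2%}")
```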

Keywords: fluvial facies, geostatistics, geological trend, modeling strategy, modeling accuracy, variogram

Procedia PDF Downloads 251
409 Site-based Internship Experiences: From Research to Implementation and Community Collaboration

Authors: Jamie Sundvall, Lisa Jennings

Abstract:

Site-based field internship learning (SBL) is an educational approach within a Master of Social Work (MSW) university field placement department that promotes a more streamlined integration of theory and evidence-based practices for social work students. The SBL model is founded on research in the field, consideration of current workforce needs, United States national trends in MSW graduates' skill and knowledge deficits, educational trends among students pursuing a master's degree in social work, and current social problems that require unique problem-solving skills. This study explores the use of site-based learning in a hybrid social work program. In this setting, site-based learning pairs online education courses with social work field education to create training opportunities for social work students within their own community and cultural context. Students engage in coursework in an online setting with both synchronous and asynchronous features that facilitate development of core competencies for MSW students. Through the SBL model, students are then partnered with faculty in a virtual course room and a university-vetted site within their community. The study explores how this model of learning creates community partnerships through which students engage in a learning loop to develop social work skills, while preparing students to address current community, social, and global issues with the engagement of technology. The goal of SBL is to equip social work students more effectively for practice according to current workforce demands, provide access to education and care to populations who have limited access, and create self-sustainable partnerships. Further, the model helps students learn to integrate evidence-based practices and helps instructors teach the integration of ethics into practice more effectively. The study found that the SBL model increases the influence and professional relevance of the social work profession and ultimately facilitates stronger approaches to integrating theory into practice. Current implementation of the practice in the United States is presented in the study. Additionally, future research conceptualization of SBL models is presented, in order to collaborate on advancing the best approaches for translating theory into practice according to the current needs of the profession and of social work students.

Keywords: collaboration, fieldwork, research, site-based learning, technology

Procedia PDF Downloads 119
408 A Risk Assessment Tool for the Contamination of Aflatoxins on Dried Figs Based on Machine Learning Algorithms

Authors: Kottaridi Klimentia, Demopoulos Vasilis, Sidiropoulos Anastasios, Ihara Diego, Nikolaidis Vasileios, Antonopoulos Dimitrios

Abstract:

Aflatoxins are highly poisonous and carcinogenic compounds produced by species of the genus Aspergillus that can infect a variety of agricultural foods, including dried figs. Biological and environmental factors, such as the population, pathogenicity, and aflatoxigenic capacity of the strains, and the topography, soil, and climate parameters of the fig orchards, are believed to have a strong effect on aflatoxin levels. Existing methods for aflatoxin detection and measurement, such as high-performance liquid chromatography (HPLC) and enzyme-linked immunosorbent assay (ELISA), can provide accurate results, but the procedures are usually time-consuming, sample-destructive, and expensive. Predicting aflatoxin levels prior to crop harvest is useful for minimizing the health and financial impact of a contaminated crop. Consequently, there is interest in developing a tool that predicts aflatoxin levels based on topography and soil analysis data of fig orchards. This paper describes the development of a risk assessment tool for aflatoxin contamination of dried figs based on the location and altitude of the fig orchards, the population of the fungus Aspergillus spp. in the soil, and soil parameters such as pH, saturation percentage (SP), electrical conductivity (EC), organic matter, particle size analysis (sand, silt, clay), the concentration of the exchangeable cations (Ca, Mg, K, Na), extractable P, and trace elements (B, Fe, Mn, Zn, and Cu), by employing machine learning methods. In particular, our proposed method integrates three machine learning techniques, i.e., dimensionality reduction on the original dataset (principal component analysis), metric learning (Mahalanobis metric for clustering), and the k-nearest neighbors learning algorithm (KNN), into an enhanced model with mean performance equal to 85% in terms of the Pearson correlation coefficient (PCC) between observed and predicted values.
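
A hedged sketch of the three-stage pipeline described above (dimensionality reduction, a Mahalanobis-style metric, KNN), using scikit-learn on invented stand-in data. The feature counts, the plain inverse-covariance metric standing in for the learned Mahalanobis Metric for Clustering, and all numbers are assumptions, not the authors' tool.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsRegressor
from sklearn.preprocessing import StandardScaler

# Hypothetical stand-in data: rows = fig orchards, columns = soil/topography
# features (pH, EC, organic matter, cations, altitude, ...); target = aflatoxin level.
rng = np.random.default_rng(42)
X = rng.normal(size=(120, 18))
y = 0.8 * X[:, 0] - 0.5 * X[:, 3] + rng.normal(scale=0.3, size=120)

# 1) Dimensionality reduction on the standardized features.
X_std = StandardScaler().fit_transform(X)
X_pca = PCA(n_components=5).fit_transform(X_std)

# 2) A Mahalanobis distance in the reduced space stands in here for the
#    learned metric (the paper uses Mahalanobis Metric for Clustering).
VI = np.linalg.inv(np.cov(X_pca, rowvar=False))

# 3) k-nearest-neighbors prediction under that metric.
knn = KNeighborsRegressor(n_neighbors=5, metric="mahalanobis",
                          metric_params={"VI": VI}, algorithm="brute")
knn.fit(X_pca, y)
print("predicted aflatoxin proxy:", knn.predict(X_pca[:3]).round(2))
```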

Keywords: aflatoxins, Aspergillus spp., dried figs, k-nearest neighbors, machine learning, prediction

Procedia PDF Downloads 173
407 Uncertainty Quantification of Corrosion Anomaly Length of Oil and Gas Steel Pipelines Based on Inline Inspection and Field Data

Authors: Tammeen Siraj, Wenxing Zhou, Terry Huang, Mohammad Al-Amin

Abstract:

High-resolution inline inspection (ILI) tools are used extensively in the pipeline industry to identify, locate, and measure metal-loss corrosion anomalies on buried oil and gas steel pipelines. Corrosion anomalies may occur singly (i.e., individual anomalies) or as clusters (i.e., colonies of corrosion anomalies). Although ILI technology has advanced immensely, there are measurement errors associated with the sizes of corrosion anomalies reported by ILI tools, due to limitations of the tools and associated sizing algorithms and to the detection threshold of the tools (i.e., the minimum detectable feature dimension). Quantifying the measurement error in ILI data is crucial for corrosion management and for developing maintenance strategies that satisfy safety and economic constraints. Studies on the measurement error associated with the length of corrosion anomalies (in the longitudinal direction of the pipeline) have scarcely been reported in the literature and are investigated in the present study. Limitations in the ILI tool and the clustering process can sometimes cause clustering error, defined as the error introduced during the clustering process by including or excluding a single anomaly or group of anomalies in or from a cluster. Clustering error has been found to be one of the biggest contributors to the relatively high uncertainties associated with ILI-reported anomaly length. As such, this study focuses on developing a consistent and comprehensive framework to quantify the measurement errors in ILI-reported anomaly length by comparing ILI data with corresponding field measurements for individual and clustered corrosion anomalies. The analysis carried out in this study is based on ILI and field measurement data for a set of anomalies collected from two segments of a buried natural gas pipeline currently in service in Alberta, Canada. The data analyses showed that the measurement error associated with the ILI-reported length of anomalies without clustering error, denoted as Type I anomalies, is markedly less than that for anomalies with clustering error, denoted as Type II anomalies. A methodology employing data mining techniques is further proposed to classify Type I and Type II anomalies based on the ILI-reported corrosion anomaly information.
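
To make the Type I / Type II comparison concrete, here is a minimal sketch that quantifies the additive measurement error (field-measured length minus ILI-reported length) for each group; all lengths are hypothetical stand-ins, not the Alberta pipeline data.

```python
import numpy as np

# Hypothetical paired measurements (mm): ILI-reported vs field-measured
# anomaly lengths, split by whether clustering error is present.
ili_type1 = np.array([32.0, 45.5, 28.0, 51.0, 39.5])   # no clustering error
fld_type1 = np.array([30.5, 47.0, 27.0, 53.5, 38.0])
ili_type2 = np.array([88.0, 120.5, 64.0, 150.0])       # clustering error
fld_type2 = np.array([60.0, 155.0, 95.0, 110.0])

def error_stats(ili, field):
    """Additive measurement error: field length minus ILI length."""
    err = field - ili
    return err.mean(), err.std(ddof=1)

for name, (i, f) in {"Type I": (ili_type1, fld_type1),
                     "Type II": (ili_type2, fld_type2)}.items():
    mu, sigma = error_stats(i, f)
    print(f"{name}: mean error = {mu:.1f} mm, std = {sigma:.1f} mm")
```

The much larger error spread for the Type II group in such a comparison is what motivates classifying anomalies by clustering error before building a measurement-error model.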

Keywords: clustered corrosion anomaly, corrosion anomaly assessment, corrosion anomaly length, individual corrosion anomaly, metal-loss corrosion, oil and gas steel pipeline

Procedia PDF Downloads 300
406 Enhanced Tensor Tomographic Reconstruction: Integrating Absorption, Refraction and Temporal Effects

Authors: Lukas Vierus, Thomas Schuster

Abstract:

A general framework is examined for dynamic tensor field tomography within an inhomogeneous medium characterized by refraction and absorption, treated as an inverse source problem for the associated transport equation. Guided by Fermat's principle, the Riemannian metric within the specified domain is determined by the medium's refractive index. While considerable literature exists on the inverse problem of reconstructing a tensor field from its longitudinal ray transform in a static Euclidean environment, few inversion formulas and algorithms are available for general Riemannian metrics and time-varying tensor fields. It is established that the formulation of tensor field tomography as an inverse source problem for a transport equation carries over to dynamic scenarios, and framing dynamic tensor tomography in this way provides a comprehensive perspective on the problem. Ensuring well-defined forward mappings requires establishing existence and uniqueness for the underlying transport equations. However, the bilinear forms of the associated weak formulations fail to satisfy the coercivity condition. Consequently, recourse is taken to viscosity solutions, whose unique existence is demonstrated within suitable Sobolev spaces (in the static case) and Sobolev-Bochner spaces (in the dynamic case), under a specific assumption restricting variations in the refractive index. Notably, the adjoint problem can also be reformulated as a transport equation, with analogous uniqueness results. Analytical solutions are expressed as integrals over geodesics, allowing the forward and adjoint operators to be evaluated more efficiently than by solving partial differential equations. Numerical experiments are conducted using a Nesterov-accelerated Landweber method, encompassing various fields, absorption coefficients, and refractive indices, thereby illustrating the enhanced reconstruction achieved through this holistic modeling approach.
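
A minimal sketch of a Nesterov-accelerated Landweber iteration for a generic linear inverse problem, with a dense matrix standing in for the discretized ray transform; the step-size rule and toy data are assumptions, and the authors instead evaluate their forward and adjoint operators via integrals over geodesics.

```python
import numpy as np

def nesterov_landweber(A, y, omega, n_iter=200):
    """Landweber update x_{k+1} = z_k + omega * A^T (y - A z_k),
    with Nesterov momentum z_k = x_k + (k-1)/(k+2) * (x_k - x_{k-1}).
    A is a matrix stand-in for the (discretized) ray transform."""
    x_prev = np.zeros(A.shape[1])
    x = np.zeros(A.shape[1])
    for k in range(1, n_iter + 1):
        z = x + (k - 1) / (k + 2) * (x - x_prev)
        x_prev, x = x, z + omega * A.T @ (y - A @ z)
    return x

# Toy problem: recover x_true from noisy data y = A x + noise.
rng = np.random.default_rng(1)
A = rng.normal(size=(80, 40))
x_true = rng.normal(size=40)
y = A @ x_true + rng.normal(scale=0.01, size=80)
omega = 1.0 / np.linalg.norm(A, 2) ** 2   # step size below 2 / ||A||^2
x_rec = nesterov_landweber(A, y, omega)
print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```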

Keywords: attenuated refractive dynamic ray transform of tensor fields, geodesics, transport equation, viscosity solutions

Procedia PDF Downloads 34
405 A First-Principles Investigation of Magnesium-Hydrogen System: From Bulk to Nano

Authors: Paramita Banerjee, K. R. S. Chandrakumar, G. P. Das

Abstract:

Bulk MgH2 has drawn much attention for hydrogen storage because of its high hydrogen storage capacity (~7.7 wt %) as well as its low cost and abundant availability. However, its practical use has been hindered by its high hydrogen desorption enthalpy (~0.8 eV/H2 molecule), which results in an undesirable desorption temperature of 300 °C at 1 bar H2 pressure. To surmount the limitations of bulk MgH2 for hydrogen storage, a detailed first-principles density functional theory (DFT) study of the structure and stability of neutral (Mgm) and positively charged (Mgm+) Mg nanoclusters of different sizes (m = 2, 4, 8 and 12), as well as their interaction with molecular hydrogen (H2), is reported here. It is found that, owing to the absence of d-electrons in the Mg atoms, hydrogen remains in molecular form even after interacting with neutral and charged Mg nanoclusters. Interestingly, the H2 molecules do not enter the interstitial positions of the nanoclusters; rather, they remain on the surface, ornamenting these nanoclusters and forming new structures with a gravimetric density higher than 15 wt %. We observe that including Grimme's DFT-D3 dispersion correction in this weakly interacting system has a significant effect on the binding of the H2 molecules to these nanoclusters. The dispersion-corrected interaction energy (IE) values (0.1-0.14 eV/H2 molecule) fall in the energy window that is ideal for hydrogen storage. These IE values are further verified by high-level coupled-cluster calculations with non-iterative triples corrections, i.e., CCSD(T), which is considered a highly accurate quantum chemical method, thereby confirming the accuracy of our dispersion-corrected DFT calculations. The significance of the polarization and dispersion energies in the binding of the H2 molecules is confirmed by energy decomposition analysis (EDA). A total of 16, 24, 32 and 36 H2 molecules can be attached to the neutral and charged nanoclusters of size m = 2, 4, 8 and 12, respectively. Ab initio molecular dynamics (AIMD) simulation shows that the outermost H2 molecules are desorbed at a rather low temperature, viz. 150 K (−123 °C), as expected; however, complete dehydrogenation of these nanoclusters occurs at around 100 °C. Most importantly, the host nanoclusters remain stable up to ~500 K (227 °C). All these results on the adsorption and desorption of molecular hydrogen on neutral and charged Mg nanocluster systems point towards the possibility of reducing the dehydrogenation temperature of bulk MgH2 by designing new Mg-based nanomaterials that adsorb molecular hydrogen via this weak Mg-H2 interaction rather than strong Mg-H bonding. Notwithstanding the fact that, in practical applications, these interactions will be further complicated by substrate effects and interactions with other clusters, the present study has implications for our fundamental understanding of this problem.
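
For readers unfamiliar with the quantity, the per-molecule interaction energy reported above is conventionally defined as follows (notation assumed, not quoted from the paper):

```latex
% Per-molecule interaction energy for cluster--H2 adsorption,
% as conventionally defined (notation is an assumption):
\[
  \mathrm{IE} \;=\; \frac{E\!\left(\mathrm{Mg}_m(\mathrm{H}_2)_n\right)
  \;-\; E\!\left(\mathrm{Mg}_m\right) \;-\; n\,E\!\left(\mathrm{H}_2\right)}{n}
\]
```

With this sign convention the raw value is negative for bound states; the 0.1-0.14 eV/H2 figures quoted above correspond to its magnitude.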

Keywords: density functional theory, DFT, hydrogen storage, molecular dynamics, molecular hydrogen adsorption, nanoclusters, physisorption

Procedia PDF Downloads 407
404 Adolescent-Parent Relationship as the Most Important Factor in Preventing Mood Disorders in Adolescents: An Application of Artificial Intelligence to Social Studies

Authors: Elżbieta Turska

Abstract:

Introduction: Adolescence is one of the most difficult times in a person's life, and the experiences of this period may shape that person's future life to a large extent. Many young people experience sadness, dejection, hopelessness, a sense of worthlessness, and a loss of interest in various activities and social relationships, all of which are often classified as mood disorders. As many as 15-40% of adolescents experience depressed moods; for most, these resolve and are not carried into adulthood. However, 5-6% of those affected by mood disorders develop the depressive syndrome, and as many as 1-3% develop full-blown clinical depression. Materials: A large questionnaire was given to 2,508 students aged 13-16, and one of its parts was the Burns checklist, i.e., the standard test for identifying depressed mood. The questionnaire asked about many aspects of the student's life and included a total of 53 questions, most of which had subquestions. It is important to note that the data suffered from several problems, the most important of which were missing data and collinearity. Aim: In order to identify the correlates of mood disorders, we built predictive models which were then trained and validated. Our aim was not to predict which students suffer from mood disorders but rather to explore the factors influencing mood disorders. Methods: The data problems described above practically ruled out all classical statistical methods. For this reason, we used the following artificial intelligence (AI) methods: classification trees with surrogate variables, random forests, and xgboost. All analyses were carried out with the mlr package for the R programming language. Results: The predictive model built by the classification tree algorithm outperformed the other algorithms by a large margin. As a result, we were able to rank the variables (questions and subquestions from the questionnaire) from the most to the least influential as far as protection against mood disorder is concerned. Thirteen of the twenty most important variables reflect relationships with parents. This seems to be a significant result both from the cognitive point of view and from the practical point of view, i.e., as far as interventions to correct mood disorders are concerned.
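
As an illustration of the variable-ranking step, the sketch below fits a single classification tree and ranks questionnaire items by impurity-based importance. It uses Python's scikit-learn on invented data rather than the authors' R/mlr pipeline, and scikit-learn trees do not use surrogate splits, so this is a simplified stand-in.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical stand-in for the questionnaire: 40 predictor columns
# (e.g., parent-relationship items on a 1-5 scale) and a binary
# depressed-mood label derived from a checklist score.
rng = np.random.default_rng(7)
X = rng.integers(1, 6, size=(500, 40)).astype(float)
y = (X[:, 0] + X[:, 1] + rng.normal(scale=2, size=500) > 7).astype(int)

# Fit a shallow classification tree, then rank the items by how much
# each one contributes to the tree's splits.
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)
ranking = np.argsort(tree.feature_importances_)[::-1]
print("most influential items:", ranking[:5])
```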

Keywords: mood disorders, adolescents, family, artificial intelligence

Procedia PDF Downloads 92
403 Next Generation Radiation Risk Assessment and Prediction Tools Generation Applying AI-Machine (Deep) Learning Algorithms

Authors: Selim M. Khan

Abstract:

Indoor air quality is strongly influenced by the presence of radioactive radon (222Rn) gas. Exposure to high 222Rn concentrations is unequivocally linked to DNA damage and lung cancer and is a worsening issue in North American and European built environments, having increased over time within newer housing stocks as a function of as yet unclear variables. Indoor air radon concentration can be influenced by a wide range of environmental, structural, and behavioral factors. As some of these factors are quantitative while others are qualitative, no single statistical model can determine indoor radon levels precisely while simultaneously considering all these variables across a complex and highly diverse dataset. The ability of machine (deep) learning to analyze multiple quantitative and qualitative features simultaneously makes it suitable for predicting radon with a high degree of precision. Using Canadian and Swedish long-term indoor air radon exposure data, we are using artificial deep neural network models with random weights and polynomial statistical models in MATLAB to assess and predict the radon health risk to humans as a function of geospatial, human behavioral, and built environmental metrics. Our initial artificial neural network with random weights, run with sigmoid activation, tested different combinations of variables and showed the highest prediction accuracy (>96%) within a reasonable number of iterations. Here, we present details of these emerging methods and discuss their strengths and weaknesses compared to the traditional artificial neural network and statistical methods commonly used to predict indoor air quality in different countries. We propose an artificial deep neural network with random weights as a highly effective method for assessing and predicting indoor radon.
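
A neural network with random weights, as described above, typically fixes the hidden layer and fits only the output weights. The numpy sketch below shows that pattern on invented data; the MATLAB models, feature set, and accuracy figures from the abstract are not reproduced here.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Hypothetical predictors standing in for geospatial, behavioral, and
# built-environment metrics; target stands in for indoor radon level.
X = rng.normal(size=(1000, 12))
y = np.sin(X[:, 0]) + 0.5 * X[:, 3] + rng.normal(scale=0.1, size=1000)

# Hidden layer: random weights and biases, drawn once and never trained.
n_hidden = 200
W = rng.normal(size=(X.shape[1], n_hidden))
b = rng.normal(size=n_hidden)
H = sigmoid(X @ W + b)          # random sigmoid features

# Only the output weights are fitted, via linear least squares.
beta, *_ = np.linalg.lstsq(H, y, rcond=None)
pred = H @ beta
print("R^2:", 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2))
```

The appeal of this design is speed: with the hidden layer frozen, training reduces to a single least-squares solve instead of iterative backpropagation.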

Keywords: radon, radiation protection, lung cancer, AI-machine deep learning, risk assessment, risk prediction, Europe, North America

Procedia PDF Downloads 86
402 Photophysics of a Coumarin Molecule in Graphene Oxide Containing Reverse Micelle

Authors: Aloke Bapli, Debabrata Seth

Abstract:

Graphene oxide (GO) is the two-dimensional (2D) nanoscale allotrope of carbon. Its physicochemical properties, such as high mechanical strength, high surface area, and strong thermal and electrical conductivity, make it an important candidate for various modern applications such as drug delivery, supercapacitors, and sensors. GO has also been used in the photothermal treatment of cancers and Alzheimer's disease, among others. We chose GO for this work because it is a surface-active molecule: it carries a large number of hydrophilic functional groups, such as carboxylic acid, hydroxyl, and epoxide, on its surface and in its basal plane, so it can easily interact with organic fluorophores through hydrogen bonding or other kinds of interaction and thereby modulate the photophysics of probe molecules. We used several spectroscopic techniques in this work. Ground-state absorption spectra and steady-state fluorescence emission spectra were measured using a UV-Vis spectrophotometer from Shimadzu (model UV-2550) and a spectrofluorometer from Horiba Jobin Yvon (model Fluoromax 4P), respectively. All fluorescence lifetime and anisotropy decays were collected using a time-correlated single photon counting (TCSPC) setup from Edinburgh Instruments (model LifeSpec-II, U.K.). Herein, we describe the photophysics of the hydrophilic molecule 7-(N,N-diethylamino)coumarin-3-carboxylic acid (7-DCCA) in reverse micelles containing GO. The photophysics of the dye is modulated in the presence of GO compared with its photophysics in reverse micelles without GO. We report the solvent relaxation and rotational relaxation times in GO-containing reverse micelles and compare them with those in normal reverse micelles (i.e., reverse micelles without GO) using the 7-DCCA molecule. The absorption maximum of 7-DCCA is blue-shifted and the emission maximum is red-shifted in GO-containing reverse micelles compared with normal reverse micelles. Rotational relaxation in GO-containing reverse micelles is always faster than in normal reverse micelles. At lower w₀ values, solvent relaxation is always slower in GO-containing reverse micelles than in normal reverse micelles, while at higher w₀ the solvent relaxation time in GO-containing reverse micelles becomes almost equal to that in normal reverse micelles. The emission maximum of 7-DCCA exhibits a bathochromic shift in GO-containing reverse micelles compared with normal reverse micelles because the presence of GO increases the polarity of the system, and as polarity increases the emission maximum is red-shifted. The average decay time in GO-containing reverse micelles is shorter than in normal reverse micelles. In GO-containing reverse micelles, the quantum yield, decay time, rotational relaxation time, and solvent relaxation time at λₑₓ = 375 nm are always higher than at λₑₓ = 405 nm, demonstrating the excitation-wavelength-dependent photophysics of 7-DCCA in GO-containing reverse micelles.
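
For reference, the anisotropy decays mentioned above are conventionally analyzed with the textbook relations below (standard definitions, not equations quoted from the paper):

```latex
% Time-resolved fluorescence anisotropy from parallel- and
% perpendicular-polarized decays, and its single-exponential model
% with rotational relaxation time \theta_r (standard definitions):
\[
  r(t) \;=\; \frac{I_{\parallel}(t) - I_{\perp}(t)}
                  {I_{\parallel}(t) + 2\,I_{\perp}(t)},
  \qquad
  r(t) \;=\; r_0\, e^{-t/\theta_r}
\]
```

A faster decay of r(t), i.e., a smaller θ_r, corresponds to the faster rotational relaxation reported for the GO-containing reverse micelles.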

Keywords: photophysics, reverse micelle, rotational relaxation, solvent relaxation

Procedia PDF Downloads 145