Search results for: deep Boltzmann machines
Paper Count: 2698

508 An Evaluation of the Artificial Neural Network and Adaptive Neuro Fuzzy Inference System Predictive Models for the Remediation of Crude Oil-Contaminated Soil Using Vermicompost

Authors: Precious Ehiomogue, Ifechukwude Israel Ahuchaogu, Isiguzo Edwin Ahaneku

Abstract:

Vermicompost is the product of the decomposition process using various species of worms to create a mixture of decomposing vegetable or food waste, bedding materials, and vermicast. This process is called vermicomposting, while the rearing of worms for this purpose is called vermiculture. Several works have verified the adsorption of toxic metals using vermicompost, but its application to the retention of organic compounds is still scarce. This research demonstrates the effectiveness of earthworm waste (vermicompost) for the remediation of crude oil-contaminated soils. The remediation methods adopted in this study were two soil washing methods, namely batch and column processes, which represent laboratory and in-situ remediation. Characterization of the vermicompost and crude oil-contaminated soil was performed before and after the soil washing using Fourier transform infrared spectroscopy (FTIR), scanning electron microscopy (SEM), X-ray fluorescence (XRF), X-ray diffraction (XRD), and atomic absorption spectrometry (AAS). The optimization of washing parameters, using response surface methodology (RSM) based on the Box-Behnken design, was performed on the responses from the laboratory experimental results. This study also investigated the application of machine learning models, namely the artificial neural network (ANN) and the adaptive neuro fuzzy inference system (ANFIS); both were evaluated using the coefficient of determination (R²) and the mean square error (MSE). Removal efficiency obtained from the Box-Behnken design experiment ranged from 29% to 98.9% for batch process remediation. Optimization of the experimental factors, carried out using numerical optimization techniques by applying the desirability function method of RSM, produced the highest removal efficiency of 98.9% at an adsorbent dosage of 34.53 grams, adsorbate concentration of 69.11 g/ml, contact time of 25.96 min, and pH value of 7.71, respectively. Removal efficiency obtained from the multilevel general factorial design experiment ranged from 56% to 92% for column process remediation. The coefficient of determination (R²) for ANN was 0.9974 and 0.9852 for the batch and column processes, respectively, showing the agreement between experimental and predicted results. For the batch and column processes, respectively, the coefficient of determination (R²) for RSM was 0.9712 and 0.9614, which also demonstrates agreement between experimental and predicted findings. For the batch and column processes, the ANFIS coefficient of determination was 0.7115 and 0.9978, respectively. It can be concluded that machine learning models can predict the removal of crude oil from polluted soil using vermicompost. Therefore, it is recommended to use machine learning models to predict the removal of crude oil from contaminated soil using vermicompost.
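
To make the model-evaluation step concrete, the minimal sketch below computes the coefficient of determination (R²) and mean square error (MSE) for predicted versus measured removal efficiencies. The array values are hypothetical placeholders, not data from the study.

```python
import numpy as np

def r_squared(measured, predicted):
    """Coefficient of determination between measured and predicted values."""
    measured, predicted = np.asarray(measured), np.asarray(predicted)
    ss_res = np.sum((measured - predicted) ** 2)        # residual sum of squares
    ss_tot = np.sum((measured - measured.mean()) ** 2)  # total sum of squares
    return 1.0 - ss_res / ss_tot

def mse(measured, predicted):
    """Mean square error between measured and predicted values."""
    measured, predicted = np.asarray(measured), np.asarray(predicted)
    return np.mean((measured - predicted) ** 2)

# Hypothetical removal efficiencies (%) for a batch-process run
measured_batch = [29.0, 45.2, 61.8, 74.5, 88.3, 98.9]
predicted_ann  = [30.1, 44.7, 60.9, 75.2, 87.6, 98.1]

print("ANN R^2 =", round(r_squared(measured_batch, predicted_ann), 4))
print("ANN MSE =", round(mse(measured_batch, predicted_ann), 4))
```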

Keywords: ANFIS, ANN, crude oil, contaminated soil, remediation, vermicompost

Procedia PDF Downloads 90
507 Adapting Cyber Physical Production Systems to Small and Mid-Size Manufacturing Companies

Authors: Yohannes Haile, Dipo Onipede, Jr., Omar Ashour

Abstract:

The main thrust of our research is to determine the Industry 4.0 readiness of small and mid-size manufacturing companies in our region and to assist them in implementing Cyber Physical Production System (CPPS) capabilities. Adopting CPPS capabilities will help organizations realize improved quality, order delivery, throughput, new value creation, and reduced idle time of machines and work centers in their manufacturing operations. The key metrics for the assessment include the level of intelligence, internal and external connections, responsiveness to internal and external environmental changes, capabilities for customization of products with reference to cost, level of additive manufacturing, automation, and robotics integration, and capabilities to manufacture hybrid products in the near term, where near term is defined as 0 to 18 months. In our initial evaluation of several manufacturing firms which are profitable and successful in what they do, we found a low level of the Physical-Digital-Physical (PDP) loop in their manufacturing operations, whereas 100% of the firms included in this research have specialized manufacturing core competencies that have differentiated them from their competitors. The level of automation and robotics integration is in the low to medium range, where low is defined as less than 30%, and medium as 30 to 70%, of manufacturing operations including automation and robotics. However, there is a significant drive to include these capabilities at the present time. The intelligence and connection of manufacturing systems are observed to be low, with significant variance in tying manufacturing operations management to Enterprise Resource Planning (ERP). Furthermore, the integration of additive manufacturing in general, and 3D printing in particular, is observed to be low, but with significant upside for integrating it into manufacturing operations in the near future. To hasten the readiness of the local and regional manufacturing companies for Industry 4.0 and the transition towards CPPS capabilities, our working group (ADMAR Working Group), in partnership with our university, has been engaged with the local and regional manufacturing companies. The goal is to increase awareness, share know-how and capabilities, initiate joint projects, and investigate the possibility of establishing the Center for Cyber Physical Production Systems Innovation (C2P2SI). The center is intended to support local and regional university-industry research on implementing intelligent factories, enhance new value creation through disruptive innovations, the development of hybrid and data-enhanced products, and the creation of digital manufacturing enterprises. All these efforts will enhance local and regional economic development and educate students who have well-developed knowledge and applications of cyber physical manufacturing systems and Industry 4.0.

Keywords: automation, cyber-physical production system, digital manufacturing enterprises, disruptive innovation, new value creation, physical-digital-physical loop

Procedia PDF Downloads 116
506 Studying the Effect of Heartfulness Meditation on Brain Activity

Authors: Norman Farb, Anirudh Kumar, Abdul Subhan, Pallavi Gupta, Jahnavi Mundluru, Abdul Subhan, Shankar Pathmakanthan

Abstract:

Long-term meditation practice is increasingly recognized for its health benefits. Among a diversity of contemplative traditions, Heartfulness meditation represents a quickly growing set of practices that is largely unstudied. Heartfulness is unique in that it is a meditation practice that focuses on the heart. It helps individuals connect to themselves and find inner peace while meditating. In order to deepen one's meditation on the heart, the element of yogic energy ('pranahuti') is used as an aid during meditation. The purpose of this study was to determine whether consistent EEG effects of Heartfulness meditation could be observed in sixty experienced Heartfulness meditators, each of whom attended 6 testing sessions. In each session, participants performed three conditions: a set of cognitive tasks, Heartfulness guided relaxation, and Heartfulness meditation. To measure EEG, the MUSE EEG headband (a product of Interaxon Inc.) was used. During the cognitive portion, participants were required to answer questions that tested their logical thinking (Cognitive Reflective Test) and creative thinking skills (Random Associative Test). The order of conditions was randomly counterbalanced across the six sessions. It was hypothesized that Heartfulness meditation would bring increased alpha (8-12 Hz) brain activity during meditation and better cognitive task scores in sessions where the tasks followed meditation. Results show that cognitive task scores were higher after meditation in both the CRT and the RAT, suggesting stronger right-brain and left-brain activation. Heartfulness meditation produces a significant decrease in brain activity (as indexed by higher levels of alpha) during the early stages of meditation. As the meditation progressed, a deep meditative state (as indexed by higher levels of delta) was observed until the end of the condition. This led to the conclusion that Heartfulness meditation produces a state that is clearly distinguishable from effortful problem solving.
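
For readers unfamiliar with how band-specific activity of this kind is quantified, the sketch below estimates relative alpha (8-12 Hz) and delta (1-4 Hz) power from a single EEG channel using Welch's method. The sampling rate and the signal are placeholders; the MUSE headband and the study's actual processing pipeline are not reproduced here.

```python
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, f_lo, f_hi):
    """Approximate power in the [f_lo, f_hi] Hz band using Welch's PSD."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[mask].sum() * (freqs[1] - freqs[0])   # rectangle-rule integral

fs = 256                                  # assumed sampling rate (Hz)
eeg = np.random.randn(fs * 60)            # placeholder: 60 s of one EEG channel

alpha = band_power(eeg, fs, 8, 12)
delta = band_power(eeg, fs, 1, 4)
total = band_power(eeg, fs, 1, 40)

print(f"relative alpha power: {alpha / total:.3f}")
print(f"relative delta power: {delta / total:.3f}")
```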

Keywords: heartfulness meditation, neuroplasticity, brain activity, relaxation response

Procedia PDF Downloads 310
505 Digi-Buddy: A Smart Cane with Artificial Intelligence and Real-Time Assistance

Authors: Amaladhithyan Krishnamoorthy, Ruvaitha Banu

Abstract:

Vision is considered the most important sense in humans, without which leading a normal life can often be difficult. There are many existing smart canes for the visually impaired with obstacle detection using ultrasonic transducers to help them navigate. Though the basic smart cane increases the safety of the users, it does not help in filling the void of visual loss. This paper introduces the concept of Digi-Buddy, which is an evolved smart cane for the visually impaired. The cane consists of several modules; apart from the basic obstacle detection features, Digi-Buddy assists the user by capturing video/images with a wide-angled camera and streaming them to the server, which then detects the objects using a deep convolutional neural network. In addition to determining what the particular image/object is, the distance of the object is assessed by the ultrasonic transducer. The sound generation application, modelled with the help of natural language processing, is used to convert the processed images/objects into audio. The detected object is signified by its name, which is transmitted to the user through Bluetooth earphones. The object detection is extended to facial recognition, which matches the faces of the people the user meets against a database of face images and alerts the user about the person. Another crucial function is an automatic intimation alarm, which is triggered when the user is in an emergency. If the user recovers within a set time, a button provisioned in the cane can stop the alarm; otherwise, an automatic intimation is sent to friends and family about the whereabouts of the user using GPS. In addition to the safety and security provided by existing smart canes, the proposed concept, to be implemented as a prototype, helps the visually impaired visualize their surroundings through audio in a more amicable way.

Keywords: artificial intelligence, facial recognition, natural language processing, internet of things

Procedia PDF Downloads 330
504 Getting to Know ICU Nurses and Their Duties

Authors: Masih Nikgou

Abstract:

ICU nurses, or intensive care nurses, are highly specialized and trained healthcare personnel. These nurses provide nursing care for patients with life-threatening illnesses or conditions. They provide the experience, knowledge, and specialized skills that patients need to survive and recover. Intensive care (ICU) nurses are trained to make momentary decisions and act quickly when the patient's condition changes. Their primary work environment is in the hospital in intensive care units. Typically, ICU patients require a high level of care. ICU nurses work in challenging and complex fields of the nursing profession. They have the primary duty of caring for and saving patients who are fighting for their lives. Intensive care (ICU) nurses are highly trained to provide exceptional care to patients who depend on 24/7 nursing care. A patient in the ICU is often on a ventilator, intubated, and connected to several life support machines and medical equipment. Intensive care (ICU) nurses have full expertise in all aspects of bringing their patients back to health. Some of the specific responsibilities of ICU nurses include (a) assessing and monitoring the patient's progress and identifying any sudden changes in the patient's medical condition; (b) administering drugs intravenously by injection or through gastric tubes; (c) providing regular updates on patient progress to physicians, patients, and their families; (d) performing approved diagnostic or treatment procedures according to the clinical condition of the patient; (e) informing the relevant doctors in case of a health emergency; (f) evaluating laboratory data and vital signs of patients to determine the need for emergency interventions; (g) caring for patient needs during recovery in the ICU; (h) providing emotional support to patients and their families; (i) regulating and monitoring medical equipment and devices such as medical ventilators, oxygen delivery devices, transducers, and pressure lines; (j) assessing the pain level and sedation needs of patients; and (k) maintaining patient reports and records. As the name suggests, critical care nurses work primarily in ICU healthcare units. ICUs are clean, properly lit units that adhere strictly to the health and safety standards of medical centers. ICU nurses usually move between the intensive care unit, the emergency department, the operating room, and other special departments of the hospital. ICU nurses usually follow a standard shift schedule that includes morning, afternoon, and night shifts, and there are other rotation arrangements depending on the hospital and region. Nurses who are passionate about data and managing a patient's condition and outcomes typically do well as ICU nurses. An inquisitive mind and attention to processes are equally important. ICU nurses are deeply compassionate and are not afraid to advocate for their patients and family members who are distressed.

Keywords: nursing, intensive care unit, pediatric intensive care unit, mobile intensive care unit, surgical intensive care unit

Procedia PDF Downloads 52
503 Brain Connectome of Glia, Axons, and Neurons: Cognitive Model of Analogy

Authors: Ozgu Hafizoglu

Abstract:

An analogy is an essential tool of human cognition that enables connecting diffuse and diverse systems through the physical, behavioral, and principal relations that are essential to learning, discovery, and innovation. The Cognitive Model of Analogy (CMA) leads and creates patterns of pathways to transfer information within and between domains in science, just as happens in the brain. The connectome of the brain shows how the brain operates with mental leaps between domains and mental hops within domains, and how the analogical reasoning mechanism operates. This paper demonstrates the CMA as an evolutionary approach to science, technology, and life. The model puts forward the challenges of deep uncertainty about the future, emphasizing the need for flexibility of the system in order to enable the reasoning methodology to adapt to changing conditions in the new era, especially post-pandemic. In this paper, we will reveal how to draw an analogy to scientific research to discover new systems that reveal the fractal schema of analogical reasoning within and between systems, as within and between brain regions. The problem-solving process is divided into distinct phases: stimulus, encoding, mapping, inference, and response. Based on the brain research so far, the system is shown to be relevant to brain activation in each of these phases, with an emphasis on achieving a better visualization of the brain's mechanism in the macro context (brain and spinal cord) and the micro context (glia and neurons), relative to matching conditions of analogical reasoning and relational information, encoding, mapping, inference and response processes, and verification of perceptual responses in four-term analogical reasoning. Finally, we will relate all these terminologies to mental leaps, mental maps, mental hops, and mental loops to make the mental model of the CMA clear.

Keywords: analogy, analogical reasoning, brain connectome, cognitive model, neurons and glia, mental leaps, mental hops, mental loops

Procedia PDF Downloads 148
502 Stability of Pump Station Cavern in Chagrin Shale with Time

Authors: Mohammad Moridzadeh, Mohammad Djavid, Barry Doyle

Abstract:

An assessment of the long-term stability of a cavern in Chagrin shale excavated by the sequential excavation method was performed during and after construction. During the excavation of the cavern, deformations of the rock mass were measured at the surface of the excavation and within the rock mass by surface and deep measurement instruments. Rock deformations measured during construction appeared to result from the as-built excavation sequence that had potentially disturbed the rock and its behavior. Some additional time-dependent rock deformations were also observed during and after excavation. Several opinions have been expressed to explain this time-dependent deformation, including stress changes induced by excavation, strain softening (or creep) in the beddings with and without clay, and creep of the shaly rock under compressive stresses. In order to analyze and replicate the rock behavior observed during excavation, including current and post-excavation elastic, plastic, and time-dependent deformation, finite element analysis (FEA) was performed. The analysis was also intended to estimate the long-term deformation of the rock mass around the excavation. Rock mass behavior, including time-dependent deformation, was measured by means of rock surface convergence points, MPBXs, extended creep testing on the long anchors, and load history data from load cells attached to several long anchors. Direct creep testing of Chagrin shale was performed on core samples from the wall of the Pump Room. Results of these measurements were used to calibrate the FEA of the excavation. These analyses incorporate time-dependent constitutive modeling of the rock to evaluate the potential long-term movement in the roof, walls, and invert of the cavern. The modeling was performed due to concerns regarding the unanticipated behavior of the rock mass, as well as to forecast the long-term deformation and stability of the rock around the excavation.

Keywords: cavern, Chagrin shale, creep, finite element

Procedia PDF Downloads 330
501 Bioecological Assessment of Cage Farming on the Soft Bottom Benthic Communities of the Vlora Gulf (Albania)

Authors: Ina Nasto, Denada Sota, Pudrila Haskoçelaj, Mariola Ismailaj, Hajdar Kicaj

Abstract:

Most of the fishing areas of the Mediterranean Sea are considered to be overfished; consequently, fishing has decreased or is static. Considering the continuous increase in demand for fish, aquaculture production has grown steadily in recent decades. The environmental impact of aquaculture on the marine ecosystem has been a subject of study for several years in the Mediterranean. Albanian waters, and in particular the Gulf of Vlora, have seen progressively growing aquaculture activity over the last twenty years. Given its convenient location, secluded from tourist activities, the bay of Ragusa was considered the most suitable area to install the aquaculture cage system for the breeding of sea bass and sea bream. The impact of aquaculture on the soft-bottom benthic communities has been assessed at the largest commercial fish farm (Alb-Adriatico Sh.P.K), established in the coastal waters of Ragusa bay, 30–50 m deep, in the southern part of the Gulf of Vlora. In order to determine whether there is a possible impact of the aquaculture cages on benthic communities, a comparative analysis was undertaken between transects and samples at different distances from one another and along a gradient of distance from the fish cages. A total of 275 taxa were identified (1 Foraminifera, 1 Porifera, 3 Cnidaria, 2 Platyhelminthes, 2 Nemertea, 1 Bryozoa, 171 Mollusca, 39 Annelida, 35 Crustacea, 14 Echinodermata, 1 Hemichordata, and 5 Tunicata). The analysis showed three main habitats in the area: the biocoenosis of terrigenous mud, residual areas with Posidonia oceanica, and residual coralligenous algal assemblages. Four benthic biotic indices were calculated (Shannon H', BENTIX, Simpson's diversity, and Pielou's J'), as well as benthic indicators such as total abundance, number of taxa, and species frequency, to evaluate the possible ecological impact of the fish cages in Ragusa bay.
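
For readers unfamiliar with the biotic indices mentioned, the sketch below shows how Shannon H', Simpson's diversity, and Pielou's evenness J' are typically computed from taxon abundance counts at a sampling station. The counts are made-up placeholders, not data from this survey.

```python
import numpy as np

def shannon_h(counts):
    """Shannon diversity H' (natural log) from abundance counts."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return -np.sum(p * np.log(p))

def simpson_diversity(counts):
    """Simpson's diversity: 1 - sum(p_i^2)."""
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    return 1.0 - np.sum(p ** 2)

def pielou_j(counts):
    """Pielou's evenness J' = H' / ln(S), where S is the number of taxa."""
    s = np.count_nonzero(counts)
    return shannon_h(counts) / np.log(s)

# Hypothetical abundances of eight taxa at one station near the cages
station_counts = [120, 45, 30, 12, 8, 5, 3, 1]

print("H'      =", round(shannon_h(station_counts), 3))
print("Simpson =", round(simpson_diversity(station_counts), 3))
print("J'      =", round(pielou_j(station_counts), 3))
```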

Keywords: BENTIX index, benthic community, invertebrates, aquaculture, Ragusa bay

Procedia PDF Downloads 82
500 Effect of Duration and Frequency on Ground Motion: Case Study of Guwahati City

Authors: Amar F. Siddique

Abstract:

The Guwahati city is one of the fastest growing cities of the north-eastern region of India; situated on the south bank of the Brahmaputra River, it falls in the highest seismic zone, level V. The city has witnessed many high magnitude earthquakes in the past decades. The Assam earthquake of August 15, 1950, of moment magnitude 8.7 and epicentered near Rima, Tibet, was one of the major earthquakes, causing serious structural damage and widespread soil liquefaction in and around the region. Hence the study of the ground motion characteristics of Guwahati city is very essential. In the present work, 1D equivalent linear ground response analysis (GRA) has been adopted using the DEEPSOIL software. The analysis has been done for two typical sites, namely Panbazar and Azara, comprising a total of four borehole locations in Guwahati city, India. GRA of the sites is carried out using input motions recorded at the Nongpoh station (recorded PGA 0.048g) and the Nongstoin station (recorded PGA 0.047g) during the 1997 Indo-Burma earthquake. In comparison to the motion recorded at Nongpoh, different amplifications of bedrock peak ground acceleration (PGA) are obtained for all the boreholes with the motion recorded at the Nongstoin station, although the Fourier amplitude ratios (FAR) and fundamental frequencies remain almost the same. The difference in recorded duration and frequency content of the two motions mainly influences the amplification of the motions, thus giving different surface PGA and amplification factors for a constant bedrock PGA. From the results of the response spectra, it is found that for periods of less than 0.2 sec, the ground motion recorded at the Nongpoh station will impose a higher spectral acceleration (SA) on structures than that at the Nongstoin station. Conversely, for periods greater than 0.2 sec, the ground motion recorded at the Nongstoin station will impose a higher SA on structures than that at the Nongpoh station.
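
To illustrate the two quantities compared above, the short sketch below computes a PGA amplification factor and the Fourier amplitude ratio between a surface and a bedrock acceleration record. The records here are synthetic placeholders, not the Nongpoh or Nongstoin motions.

```python
import numpy as np

def amplification_factor(surface_acc, bedrock_acc):
    """Ratio of surface PGA to bedrock PGA."""
    return np.max(np.abs(surface_acc)) / np.max(np.abs(bedrock_acc))

def fourier_amplitude_ratio(surface_acc, bedrock_acc, dt):
    """Fourier amplitude ratio (surface / bedrock) versus frequency."""
    freqs = np.fft.rfftfreq(len(surface_acc), d=dt)
    far = np.abs(np.fft.rfft(surface_acc)) / (np.abs(np.fft.rfft(bedrock_acc)) + 1e-12)
    return freqs, far

dt = 0.005                                   # time step of the records (s), assumed
t = np.arange(0.0, 40.0, dt)
bedrock = 0.047 * 9.81 * np.sin(2 * np.pi * 2.0 * t) * np.exp(-0.1 * t)  # synthetic motion
surface = 1.8 * bedrock                      # placeholder amplified surface motion

print("amplification factor:", round(amplification_factor(surface, bedrock), 2))
freqs, far = fourier_amplitude_ratio(surface, bedrock, dt)
```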

Keywords: fourier amplitude ratio, ground response analysis, peak ground acceleration, spectral acceleration

Procedia PDF Downloads 162
499 The Impact of Emotional Intelligence on Organizational Performance

Authors: El Ghazi Safae, Cherkaoui Mounia

Abstract:

Within companies, emotions have been forgotten as key elements of successful management systems, seen as factors which disturb judgment, lead to reckless acts, or negatively affect decision-making. This is because management systems were influenced by the Taylorist image of the worker, which made work regular and plain and considered employees as executing machines. Recently, however, in a globalized economy characterized by a variety of uncertainties, emotions have proved to be useful, even necessary, elements for attaining high-level management. The work of Elton Mayo and Kurt Lewin reveals the importance of emotions, and since then emotions have attracted considerable attention. These studies have shown that emotions influence, directly or indirectly, many organizational processes, for example, the quality of interpersonal relationships, job satisfaction, absenteeism, stress, leadership, performance, and team commitment. Emotions have become fundamental and indispensable to individual performance and, in turn, to management efficiency. The idea that a person's potential is associated with intellectual intelligence, measured by IQ, as the main factor of social, professional, and even sentimental success, is the main assumption that needs to be questioned. The literature on emotional intelligence has made clear that success at work does not depend only on intellectual intelligence but also on other factors. Several studies investigating the impact of emotional intelligence on performance showed that emotionally intelligent managers perform better, attain remarkable results, are able to achieve organizational objectives, influence the mood of their subordinates, and create a friendly work environment. An improvement in the emotional intelligence of managers is therefore linked to the professional development of the organization and not only to the personal development of the manager. In this context, it would be interesting to question the importance of emotional intelligence. Does it impact organizational performance? What is the importance of emotional intelligence, and how does it impact organizational performance? The literature highlights that the measurement and conceptualization of emotional intelligence are difficult to define. Efforts to measure emotional intelligence have identified three models that are most prominent: the mixed model, the ability model, and the trait model. The first mixes emotional skills with personality-related aspects, the second treats emotional intelligence as a cognitive skill, and the third is intertwined with personality traits. But despite strong claims about the importance of emotional intelligence in the workplace, few studies have empirically examined its impact on organizational performance, because even though the concept of performance is at the heart of all evaluation processes of companies and organizations, performance remains a multidimensional concept, and many authors insist on the vagueness that surrounds it. Given the above, this article provides an overview of the research related to emotional intelligence, particularly focusing on studies that investigated the impact of emotional intelligence on organizational performance, in order to contribute to the emotional intelligence literature, highlight its importance, and show how it impacts companies' performance.

Keywords: emotions, performance, intelligence, firms

Procedia PDF Downloads 89
498 Aeromagnetic Data Interpretation and Source Body Evaluation Using Standard Euler Deconvolution Technique in Obudu Area, Southeastern Nigeria

Authors: Chidiebere C. Agoha, Chukwuebuka N. Onwubuariri, Collins U. Amasike, Tochukwu I. Mgbeojedo, Joy O. Njoku, Lawson J. Osaki, Ifeyinwa J. Ofoh, Francis B. Akiang, Dominic N. Anuforo

Abstract:

In order to interpret the airborne magnetic data and evaluate the approximate location, depth, and geometry of the magnetic sources within the Obudu area using the standard Euler deconvolution method, very high-resolution aeromagnetic data over the area were acquired, processed digitally, and analyzed using Oasis Montaj 8.5 software. Data analysis and enhancement techniques, including reduction to the equator, horizontal derivative, first and second vertical derivatives, upward continuation, and regional-residual separation, were carried out for the purpose of detailed data interpretation. Standard Euler deconvolution for structural indices of 0, 1, 2, and 3 was also carried out, and the respective maps were obtained using the Euler deconvolution algorithm. Results show that the total magnetic intensity ranges from -122.9 nT to 147.0 nT, the regional intensity varies between -106.9 nT and 137.0 nT, while the residual intensity ranges between -51.5 nT and 44.9 nT, clearly indicating the masking effect of deep-seated structures over surface and shallow subsurface magnetic materials. Results also indicated that the positive residual anomalies have an NE-SW orientation, which coincides with the trend of major geologic structures in the area. Euler deconvolution for all the considered structural indices yields depths to magnetic sources ranging from the surface to more than 2000 m. Interpretation of the various structural indices revealed the locations and depths of the source bodies and the existence of geologic models, including sills, dykes, pipes, and spherical structures. The area is characterized by intrusive and very shallow basement materials and represents an excellent prospect for solid mineral exploration and development.
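
As background on the method, standard Euler deconvolution solves (x − x0)∂T/∂x + (y − y0)∂T/∂y + (z − z0)∂T/∂z = N(B − T) in a least-squares sense over a moving data window, where N is the structural index and B the background field. The sketch below is a minimal single-window solver on a toy profile; the anomaly values and derivatives are placeholders, not the Obudu data or the Oasis Montaj implementation.

```python
import numpy as np

def euler_deconvolution(x, y, z, T, Tx, Ty, Tz, n_index):
    """Least-squares Euler solution for one window: returns (x0, y0, z0, B)."""
    A = np.column_stack([Tx, Ty, Tz, np.full_like(T, n_index)])
    b = x * Tx + y * Ty + z * Tz + n_index * T
    (x0, y0, z0, B), *_ = np.linalg.lstsq(A, b, rcond=None)
    return x0, y0, z0, B

# Toy window of anomaly values along a profile and their derivatives
x = np.linspace(0.0, 900.0, 10)
y = np.zeros_like(x)
z = np.zeros_like(x)
T = 100.0 / ((x - 450.0) ** 2 + 300.0 ** 2)   # placeholder anomaly profile (nT)
Tx = np.gradient(T, x)                        # horizontal derivative
Ty = np.zeros_like(T)
Tz = np.gradient(T, x)                        # stand-in for the vertical derivative

print(euler_deconvolution(x, y, z, T, Tx, Ty, Tz, n_index=1))
```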

Keywords: Euler deconvolution, horizontal derivative, Obudu, structural indices

Procedia PDF Downloads 55
497 X-Ray Detector Technology Optimization in CT Imaging

Authors: Aziz Ikhlef

Abstract:

Most multi-slice CT scanners are built with detectors composed of scintillator-photodiode arrays. The photodiode arrays are mainly based on front-illuminated technology for detectors under 64 slices and on back-illuminated photodiodes for systems of 64 slices or more. Designs based on back-illuminated photodiodes were investigated for CT machines to overcome the challenge of the higher number of runs and connections required by front-illuminated diodes. In backlit diodes, the electronic noise has already been improved because of the reduction of the load capacitance due to the reduced routing. This translates into better image quality in low-signal applications, improving low-dose imaging in a large patient population. With the fast development of multi-detector-row CT (MDCT) scanners and the increasing number of examinations, both the clinical and regulatory communities have raised significant concerns about the radiation dose received by the patient. In order to reduce individual exposure, and in response to the recommendations of the International Commission on Radiological Protection (ICRP), which suggests that all exposures should be kept as low as reasonably achievable (ALARA), every manufacturer is trying to implement strategies and solutions to optimize dose efficiency and image quality based on x-ray emission and scanning parameters. Added demands on CT detector performance also come from the increased utilization of spectral CT, or dual-energy CT, in which projection data at two different tube potentials are collected. One of the approaches utilizes a technology called fast-kVp switching, in which the tube voltage is switched between 80 kVp and 140 kVp in a fraction of a millisecond. To reduce the cross-contamination of signals, the temporal response of the scintillator-based detector has to be extremely fast to minimize the residual signal from previous samples. In addition, this paper will present an overview of detector technologies and image chain improvements which have been investigated in the last few years to improve the signal-to-noise ratio and the dose efficiency of CT scanners in regular examinations and in energy discrimination techniques. Several parameters of the image chain in general, and of the detector technology in particular, contribute to the optimization of the final image quality. We will go through the properties of the post-patient collimation to improve the scatter-to-primary ratio; the scintillator material properties such as light output, afterglow, primary speed, and crosstalk to improve spectral imaging; the photodiode design characteristics; and the data acquisition system (DAS), to optimize for crosstalk, noise, and temporal/spatial resolution.

Keywords: computed tomography, X-ray detector, medical imaging, image quality, artifacts

Procedia PDF Downloads 242
496 Evaluation of Gesture-Based Password: User Behavioral Features Using Machine Learning Algorithms

Authors: Lakshmidevi Sreeramareddy, Komalpreet Kaur, Nane Pothier

Abstract:

Graphical-based passwords have existed for decades. Their major advantage is that they are easier to remember than an alphanumeric password. However, their disadvantage (especially for recognition-based passwords) is the smaller password space, making them more vulnerable to brute force attacks. Graphical passwords are also highly susceptible to shoulder-surfing. The gesture-based password method that we developed is a grid-free, template-free method. In this study, we evaluated gesture-based passwords for usability and vulnerability. The results of the study are significant. We developed a gesture-based password application for data collection. Two modes of data collection were used: creation mode and replication mode. In creation mode (Session 1), users were asked to create six different passwords and re-enter each password five times. In replication mode, users saw a password image created by some other user for a fixed duration of time. Three different durations, 5 seconds (Session 2), 10 seconds (Session 3), and 15 seconds (Session 4), were used to mimic a shoulder-surfing attack. After the timer expired, the password image was removed, and users were asked to replicate the password. In total, 74, 57, 50, and 44 users participated in Sessions 1, 2, 3, and 4, respectively. In this study, machine learning algorithms were applied to determine whether the person is a genuine user or an imposter based on the password entered. Five different machine learning algorithms were deployed to compare their performance in user authentication: Decision Trees, Linear Discriminant Analysis, Naive Bayes Classifier, Support Vector Machines (SVMs) with a Gaussian radial basis kernel function, and K-Nearest Neighbor. Gesture-based password features vary from one entry to the next, so it is difficult to distinguish between a creator and an intruder for authentication. For each password entered by the user, four features were extracted: password score, password length, password speed, and password size. All four features were normalized before being fed to a classifier. Three different classifiers were trained using data from all four sessions. Classifiers A, B, and C were trained and tested using data from the password creation session and the password replication sessions with timers of 5 seconds, 10 seconds, and 15 seconds, respectively. The classification accuracies for Classifier A using the five ML algorithms are 72.5%, 71.3%, 71.9%, 74.4%, and 72.9%, respectively. The classification accuracies for Classifier B using the five ML algorithms are 69.7%, 67.9%, 70.2%, 73.8%, and 71.2%, respectively. The classification accuracies for Classifier C using the five ML algorithms are 68.1%, 64.9%, 68.4%, 71.5%, and 69.8%, respectively. SVMs with a Gaussian radial basis kernel outperform the other ML algorithms for gesture-based password authentication. The results confirm that the shorter the duration of the shoulder-surfing attack, the higher the authentication accuracy. In conclusion, behavioral features extracted from gesture-based passwords lead to less vulnerable user authentication.
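
A minimal sketch of the classification step described above is given below, assuming each password entry has been reduced to the four features (score, length, speed, size) with a genuine/imposter label. The feature values are fabricated placeholders, and the pipeline is only illustrative of normalization followed by an RBF-kernel SVM.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical feature matrix: [password score, length, speed, size]
X = np.random.rand(200, 4)
y = np.random.randint(0, 2, 200)             # 1 = genuine user, 0 = imposter

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

scaler = StandardScaler().fit(X_train)       # normalize features before classification
clf = SVC(kernel="rbf", gamma="scale")       # SVM with Gaussian radial basis kernel
clf.fit(scaler.transform(X_train), y_train)

pred = clf.predict(scaler.transform(X_test))
print("authentication accuracy:", round(accuracy_score(y_test, pred), 3))
```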

Keywords: authentication, gesture-based passwords, machine learning algorithms, shoulder-surfing attacks, usability

Procedia PDF Downloads 83
495 Historical Evolution of Islamic Law and Its Application to the Islamic Finance

Authors: Malik Imtiaz Ahmad

Abstract:

The prime sources of Islamic law, or Shariah, are the Quran and the Sunnah, and it is applied to the personal and public affairs of Muslims. Islamic law is deemed to be divine and furnishes a complete code of conduct based upon universal values to build honesty, trust, righteousness, piety, charity, and social justice. The primary focus of this paper was to examine the development of Islamic jurisprudence (Fiqh) over time and its relevance to the field of Islamic finance. This encompassed a comprehensive analysis of the historical context, key legal principles, and their application in contemporary financial systems adhering to Islamic principles. This study aimed to elucidate the deep-rooted connection between Islamic law and finance, offering valuable insights for practitioners and policymakers in the Islamic finance sector. Understanding the historical context and legal underpinnings is crucial for ensuring the compliance and ethicality of modern financial systems adhering to Islamic principles. Fintech solutions are a developing field that accelerates the digitalization of Islamic finance products and services in harmonization with the mandates of global investors. Through this study, we focus on institutional governance that will improve Sharia compliance, efficiency, transparency in decision-making, and Islamic finance's contribution to humanity through the SDGs program. The research paper employed an extensive literature review, historical analysis, examination of legal principles, and case studies to trace the evolution of Islamic law and its contemporary application in Islamic finance, providing a concise yet comprehensive understanding of this intricate relationship. Through these research methodologies, the aim was to provide a comprehensive and insightful exploration of the historical evolution of Islamic law and its relevance to contemporary Islamic finance, thereby contributing to a deeper understanding of this unique and growing sector of the global financial industry.

Keywords: sharia, sequencing Islamic jurisprudence, Islamic congruent marketing, social development goals of Islamic finance

Procedia PDF Downloads 47
494 Effect of Size and Soil Characteristic on Contribution of Side and Tip Resistance of the Drilled Shafts Axial Load Carrying Capacity

Authors: Mehrak Zargaryaeghoubi, Masood Hajali

Abstract:

Drilled shafts are the most popular type of deep foundation because a single shaft can easily carry the entire load of a large column from a bridge or tall building. A drilled shaft may be an economical alternative to pile foundations because a pile cap is not needed, which not only reduces expense but also provides a rough surface at the border of soil and concrete to carry more axial load. Due to their larger construction sizes, drilled shafts have excellent axial load carrying capacity. Part of the axial load carrying capacity of the drilled shaft is resisted by the soil below the tip of the shaft, which is the tip resistance, and the other part is resisted by the friction developed around the drilled shaft, which is the side resistance. The condition at the bottom of the excavation can affect the end bearing capacity of the drilled shaft. Also, the type of soil and the size of the drilled shaft can affect the frictional resistance. The main loads applied on drilled shafts are axial compressive loads. It is important to know what percentage of the maximum applied load will be shed in side friction and how much will be transferred to the base. The axial capacity of the drilled shaft foundation is influenced by the size of the drilled shaft and the soil characteristics. In this study, the effects of size and soil characteristics on the contributions of side resistance and end-bearing capacity are investigated. The study also presents three-dimensional finite element modeling of a drilled shaft subjected to axial load using ANSYS. The top displacement and settlement of the drilled shaft are verified with analytical results. The soil profile is considered as given in Table 1, and for a drilled shaft with a 7 ft diameter and 95 ft length, the stresses in the z-direction are calculated through the length of the shaft. From the stresses in the z-direction along the length of the shaft, the side resistance can be calculated, and from the z-direction stress at the tip, the tip resistance can be calculated. The resulting side and tip resistances for this drilled shaft are compared with the analytical results.
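
As a rough illustration of the final step described above, the sketch below integrates an assumed unit side shear profile over the shaft perimeter to obtain the side resistance and multiplies an assumed tip stress by the base area for the tip resistance. All stress values are placeholders, not results of the ANSYS model.

```python
import numpy as np

D = 7.0                        # shaft diameter (ft)
L = 95.0                       # shaft length (ft)
perimeter = np.pi * D
base_area = np.pi * D ** 2 / 4.0

depth = np.linspace(0.0, L, 20)            # depths along the shaft (ft)
side_shear = 0.5 + 0.01 * depth            # assumed unit side shear profile (ksf)
tip_stress = 12.0                          # assumed z-direction stress at the tip (ksf)

# Trapezoidal integration of side shear over depth, then over the perimeter
dz = np.diff(depth)
side_resistance = np.sum(0.5 * (side_shear[1:] + side_shear[:-1]) * dz) * perimeter  # kips
tip_resistance = tip_stress * base_area                                              # kips

total = side_resistance + tip_resistance
print(f"side resistance: {side_resistance:.0f} kips ({100 * side_resistance / total:.0f}% of total)")
print(f"tip resistance:  {tip_resistance:.0f} kips ({100 * tip_resistance / total:.0f}% of total)")
```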

Keywords: drilled shaft foundation, size and soil characteristics, axial load capacity, finite element

Procedia PDF Downloads 365
493 Improving Fingerprinting-Based Localization System Using Generative Artificial Intelligence

Authors: Getaneh Berie Tarekegn

Abstract:

A precise localization system is crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities. These applications include traffic monitoring, emergency alarming, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. The most common method for providing continuous positioning services in outdoor environments is a global navigation satellite system (GNSS). Due to non-line-of-sight, multipath, and weather conditions, GNSS systems do not perform well in dense urban, urban, and suburban areas. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. In this article, we present a novel semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization. We also employ a reliable signal fingerprint feature extraction method with t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduced the workload of site surveying required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of GAILoc is less than 39 cm, and more than 90% of the errors are less than 82 cm. That is, the numerical results proved that, in comparison to traditional methods, the proposed method can significantly improve positioning performance and reduce radio map construction costs.
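
The S-DCGAN and t-SNE pipeline cannot be reproduced from the abstract alone, but the sketch below shows the basic fingerprinting idea it builds on: matching a measured RSS vector against a stored radio map with a weighted k-nearest-neighbour estimate. All signal values and coordinates are synthetic placeholders, and this baseline is not the paper's method.

```python
import numpy as np

def knn_localize(radio_map_rss, radio_map_xy, measured_rss, k=3):
    """Weighted k-nearest-neighbour position estimate from an RSS fingerprint."""
    d = np.linalg.norm(radio_map_rss - measured_rss, axis=1)   # distance in signal space
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-6)                                  # inverse-distance weights
    return (radio_map_xy[idx] * w[:, None]).sum(axis=0) / w.sum()

rng = np.random.default_rng(0)
radio_map_xy = rng.uniform(0, 50, size=(500, 2))     # reference point coordinates (m)
radio_map_rss = rng.normal(-70, 8, size=(500, 6))    # RSS from 6 WLAN/LTE sources (dBm)

true_xy = radio_map_xy[10]
measured = radio_map_rss[10] + rng.normal(0, 1, 6)   # noisy online measurement

est = knn_localize(radio_map_rss, radio_map_xy, measured)
print("positioning error (m):", round(np.linalg.norm(est - true_xy), 2))
```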

Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine

Procedia PDF Downloads 47
492 Control of Oil Content of Fried Zucchini Slices by Partial Predrying and Process Optimization

Authors: E. Karacabey, Ş. G. Özçelik, M. S. Turan, C. Baltacıoğlu, E. Küçüköner

Abstract:

The main concern about deep-fat-fried food materials is their high final oil content, absorbed during the frying process and/or the cooling period, since a diet including a high content of oil is considered unhealthy by consumers. Different methods have been evaluated to decrease the oil content of fried foodstuffs. One promising method is partial drying of the food material before frying. In the present study, the aim was to control and decrease the final oil content of zucchini slices by means of partial drying and to optimize the process conditions. Conventional oven drying was used to decrease the moisture content of zucchini slices to a certain extent. Process performance in terms of oil uptake was evaluated by comparing the oil content of predried and then fried zucchini slices with that determined for directly fried ones. For the predrying and frying processes, the controlled variable pairs were oven temperature and weight loss, and frying oil temperature and time, respectively. Zucchini slices were also directly fried for sensory evaluations revealing the preferred properties of the final product in terms of surface color, moisture content, texture, and taste. The properties of the directly fried zucchini slices receiving the highest score at the end of the sensory evaluation were determined and used as targets in the optimization procedure. Response surface methodology was used for process optimization. The properties determined after the sensory evaluation were selected as targets, while the oil content was to be minimized. Results indicated that the final oil content of zucchini slices could be reduced from 58% to 46% by controlling the conditions of the predrying and frying processes. As a result, it was suggested that predrying could be one option to reduce the oil content of fried zucchini slices for a healthier diet. This project (113R015) has been supported by TUBITAK.

Keywords: health process, optimization, response surface methodology, oil uptake, conventional oven

Procedia PDF Downloads 355
491 Analysis of Bridge-Pile Foundation System in Multi-layered Non-Linear Soil Strata Using Energy-Based Method

Authors: Arvan Prakash Ankitha, Madasamy Arockiasamy

Abstract:

The increasing demand for adopting pile foundations in bridges has pointed towards the need to constantly improve the existing analytical techniques for a better understanding of the behavior of such foundation systems. This study presents a simple approach using the energy-based method to assess the displacement responses of piles subjected to general loading conditions: axial load, lateral load, and a bending moment. The governing differential equations and the boundary conditions for a bridge pile embedded in multi-layered soil strata subjected to the general loading conditions are obtained using Hamilton's principle, employing variational principles and the minimization of energies. The soil non-linearity has been incorporated through simple constitutive relationships that account for the degradation of soil moduli with increasing strain values. A simple power law based on the published literature is used, where the soil is assumed to be nonlinear-elastic and perfectly plastic. A Tresca yield surface is assumed to develop the soil stiffness variation with different strain levels that defines the non-linearity of the soil strata. This numerical technique has been applied to a pile foundation in two-layered soil strata for a pier supporting the bridge and solved using the software MATLAB R2019a. The analysis yields the bridge pile displacements at any depth along the length of the pile. The results of the analysis are in good agreement with published field data and with three-dimensional finite element analysis results obtained using the software ANSYS 2019 R3. The methodology can be extended to study the response of multi-strata soil supporting group piles underneath the bridge piers.

Keywords: pile foundations, deep foundations, multilayer soil strata, energy based method

Procedia PDF Downloads 112
490 A Benchtop Experiment to Study Changes in Tracer Distribution in the Subarachnoid Space

Authors: Smruti Mahapatra, Dipankar Biswas, Richard Um, Michael Meggyesy, Riccardo Serra, Noah Gorelick, Steven Marra, Amir Manbachi, Mark G. Luciano

Abstract:

Intracranial pressure (ICP) is profoundly regulated by the effects of cardiac pulsation and the volume of the incoming blood. Furthermore, these effects on ICP are amplified by the presence of a rigid skull that does not allow for changes in total volume during the cardiac cycle. These factors play a pivotal role in cerebrospinal fluid (CSF) dynamics and distribution, with consequences that are not well understood to this date and that may have a deep effect on the functioning of the Central Nervous System (CNS). We designed this study with two specific aims: (a) to study how pulsatility influences local CSF flow, and (b) to study how modulating intracranial pressure affects drug distribution throughout the subarachnoid space (SAS) globally. In order to achieve these aims, we built an elaborate in-vitro model of the SAS closely mimicking the dimensions and flow rates of physiological systems. To modulate intracranial pressure, we used an intracranially implanted, cardiac-gated, volume-oscillating balloon (CADENCE device). Commercially available dye was used to visualize changes in CSF flow. We first implemented two control cases, observing how the tracer behaves in the presence of pulsations from the brain phantom and the balloon individually. After establishing the controls, we tested two cases, having the brain and the balloon pulsate together in sync and out of sync. We then analyzed the distribution area using image processing software. The in-sync case produced a significant, fivefold (5x) increase in the tracer distribution area relative to the out-of-sync case. Assuming that the tracer fluid would mimic blood flow movement, a drug introduced in the SAS with such a system in place would enhance drug distribution and increase the bioavailability of therapeutic drugs to a wider spectrum of brain tissue.

Keywords: blood-brain barrier, cardiac-gated, cerebrospinal fluid, drug delivery, neurosurgery

Procedia PDF Downloads 165
489 Legal Judgment Prediction through Indictments via Data Visualization in Chinese

Authors: Kuo-Chun Chien, Chia-Hui Chang, Ren-Der Sun

Abstract:

Legal Judgment Prediction (LJP) is a subtask of legal AI. Its main purpose is to use the facts of a case to predict the judgment result. In Taiwan's criminal procedure, when prosecutors complete the investigation of a case, they decide whether to prosecute the suspect and which article of criminal law should be applied based on the facts and evidence of the case. In this study, we collected 305,240 indictments from the public inquiry system of the procuratorate of the Ministry of Justice, which included 169 charges and 317 articles from 21 laws. We take the crime facts in the indictments as the main input to jointly learn the prediction model for law source, article, and charge simultaneously based on the pre-trained BERT model. For single-article cases where the frequency of the charge and article is greater than 50, the prediction performance for law sources, articles, and charges reaches 97.66, 92.22, and 60.52 macro-F1, respectively. To understand the large performance gap between articles and charges, we used a bipartite graph to visualize the relationship between articles and charges, and found that the poor prediction performance was actually due to the wording precision of charge names. Some charges use the simplest words, while others may include the perpetrator or the result to make the charge more specific. For example, Article 284 of the Criminal Law may be indicted as “negligent injury”, "negligent death”, "business injury", "driving business injury", or "non-driving business injury". As another example, Article 10 of the Drug Hazard Control Regulations can be charged as “Drug Control Regulations” or “Drug Hazard Control Regulations”. In order to solve the above problems and more accurately predict the article and charge, we plan to include the article content or charge names in the input and use the sentence-pair classification method for question-answer problems in the BERT model to improve the performance. We will also consider a sequence-to-sequence approach to charge prediction.
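
To make the joint-prediction setup concrete, the sketch below wires a shared BERT encoder to three classification heads (law source, article, charge). The encoder name "bert-base-chinese", the example fact text, and the head sizes taken from the abstract are assumptions for illustration; the authors' actual training code is not reproduced here.

```python
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizerFast

class JointJudgmentClassifier(nn.Module):
    """Shared BERT encoder with separate heads for law source, article, and charge."""
    def __init__(self, n_sources=21, n_articles=317, n_charges=169,
                 encoder="bert-base-chinese"):
        super().__init__()
        self.bert = BertModel.from_pretrained(encoder)
        hidden = self.bert.config.hidden_size
        self.source_head = nn.Linear(hidden, n_sources)
        self.article_head = nn.Linear(hidden, n_articles)
        self.charge_head = nn.Linear(hidden, n_charges)

    def forward(self, **inputs):
        pooled = self.bert(**inputs).pooler_output        # [CLS]-based representation
        return (self.source_head(pooled),
                self.article_head(pooled),
                self.charge_head(pooled))

tokenizer = BertTokenizerFast.from_pretrained("bert-base-chinese")
model = JointJudgmentClassifier()

facts = ["被告於夜間駕車，因過失撞傷被害人。"]              # placeholder crime-fact text
batch = tokenizer(facts, padding=True, truncation=True, max_length=512,
                  return_tensors="pt")
source_logits, article_logits, charge_logits = model(**batch)
# Training would sum three cross-entropy losses, one per head.
```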

Keywords: legal judgment prediction, deep learning, natural language processing, BERT, data visualization

Procedia PDF Downloads 102
488 Dielectric Response Analysis Measurement for Diagnostic Oil-Paper Insulation System on Aged Inter Bus Transformer 3x10 MVA

Authors: Eki Farlen, Akas

Abstract:

Condition assessment of oil-paper-insulated power transformers, particularly of their water content, is becoming increasingly important for aged transformers. As insulation ages, it can produce water, which reduces its dielectric strength, accelerates the cellulose ageing process, and causes gas bubbles to form at high temperatures. This paper mainly assesses the condition of the oil-paper insulation system of an Inter Bus Transformer (IBT) 30 MVA, 150/30 kV at the PT PLN Jelok substation that has been operating for 41 years, since 1974. Valuable information about the condition of high voltage insulation may be obtained by measuring its dielectric response. This paper describes in detail the interpretation of Dielectric Response Analysis (DIRANA) measurements, and the test results are compared to other insulation tests, such as the tan delta test, oil characteristic tests, and Dissolved Gas Analysis (DGA), to obtain deeper diagnostic information. The paper mainly discusses the relationships between moisture content, water content, acidity, oil conductivity, and dissipation factor. The results and analysis show that phases U and W of the IBT 30 MVA Jelok have aged due to a high acidity level (>0.2 mg KOH/g), which causes high moisture in the cellulose/paper, in the wet category at about 4.7% and 5%, and water content in oil of about 3.13 ppm and 3.33 ppm at a temperature of 20°C. A high acidity level can drive the oxidation process and produce water and particles in the paper, which can decrease the interfacial tension (IFT) below 22 mN/m (poor category) for both phases U and W. Even though the paper insulation of the transformer is in a wet condition, the dissipation factor and capacitance at the same frequency (50 Hz) from both the DIRANA and tangent delta tests give almost the same results, 0.69% and 0.71% (<1%); this may be acceptable and need not be investigated further. The DGA results show that the TDCG is at level one (1) and no key gases were found, which means the transformer experienced no failure during operation such as arcing, partial discharge, or thermal faults in the oil or cellulose.

Keywords: diagnostic, inter-bus transformer, oil-paper insulation, moisture, dissipation factor

Procedia PDF Downloads 262
487 Bilateral Thalamic Hypodense Lesions in Computing Tomography

Authors: Angelis P. Barlampas

Abstract:

Purpose of Learning Objective: This case depicts the need for cooperation between the emergency department and the radiologist to achieve the best diagnostic result for the patient. The clinical picture must correlate well with the radiology report, and when it does not, this is not necessarily someone's fault. Careful interpretation and good knowledge of the limitations, advantages, and disadvantages of each imaging procedure are essential for the final diagnostic goal. Methods or Background: A patient was brought to the emergency department by his relatives. He was suddenly confused, and his mental status was altered. He had no history of mental illness and was otherwise healthy. A computed tomography scan without contrast was done, but it was unremarkable. Because of high clinical suspicion of probable neurologic disease, he was admitted to the hospital. Results or Findings: Another CT was done after 48 hours. It showed a hypodense region in both thalamic areas. Taking into account that the first CT was normal, but the initial clinical picture of the patient was alarming, the repeat CT exam is highly suggestive of a probable diagnosis of bilateral thalamic infarctions. Differential diagnosis: primary bilateral thalamic glioma, Wernicke encephalopathy, osmotic myelinolysis, Fabry disease, Wilson disease, Leigh disease, West Nile encephalitis, Creutzfeldt-Jakob disease, top of the basilar syndrome, deep venous thrombosis, mild to moderate cerebral hypotension, posterior reversible encephalopathy syndrome, neurofibromatosis type 1. Conclusion: As with any imaging procedure, CT has its limitations. An acute ischemic attack cannot be depicted on CT; a period of 24 to 48 hours has to elapse before any abnormality can be seen. So, despite the fact that there are no obvious findings of an ischemic episode, such as paresis or hemiparesis, one must be careful not to attribute the patient's clinical signs to other conditions, such as toxic effects, metabolic disorders, psychiatric symptoms, etc. Further investigation with MRI, or at least a repeated CT, must be done.

Keywords: CNS, CT, thalamus, emergency department

Procedia PDF Downloads 92
486 Trends and Inequalities in Distance to and Use of Nearest Natural Space in the Context of the 20-Minute Neighbourhood: A 4-Wave National Repeat Cross-Sectional Study, 2013 to 2019

Authors: Jonathan R. Olsen, Natalie Nicholls, Jenna Panter, Hannah Burnett, Michael Tornow, Richard Mitchell

Abstract:

The 20-minute neighbourhood is a policy priority for governments worldwide, and a key feature of this policy is providing access to natural space within 800 meters of home. The study aims were to (1) examine the association between distance to the nearest natural space and frequent use over time and (2) examine whether frequent use and changes in use were patterned by income and housing tenure over time. Biennial Scottish Household Survey data were obtained for 2013 to 2019 (n = 42,128, aged 16+). Adults were asked the walking distance to their nearest natural space, the frequency of visits to this space, and their housing tenure, as well as their age, sex, and income. We examined the association between the distance from home of the nearest natural space, housing tenure, and the likelihood of frequent natural space use (visited once a week or more). Two-way interaction terms were further applied to explore variation in the association between tenure and frequent natural space use over time. We found that 87% of respondents lived within a 10-minute walk of a natural space, meeting the policy specification for a 20-minute neighbourhood. Greater proximity to natural space was associated with increased use; individuals living a 6 to 10 minute walk away and over a 10 minute walk away were respectively 53% and 78% less likely to report frequent natural space use than those living within a 5 minute walk. Housing tenure was an important predictor of frequent natural space use; private renters and homeowners were more likely to report frequent natural space use than social renters. Our findings provide evidence that proximity to natural space is a strong predictor of frequent use. Our study provides important evidence that time-based access measures alone do not consider deep-rooted socioeconomic variation in the use of natural space. Policy makers should ensure a nuanced lens is applied to operationalising and monitoring the 20-minute neighbourhood to safeguard against exacerbating existing inequalities.
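
Likelihood estimates of this kind are the sort of output produced by a logistic regression of frequent use on distance band and housing tenure. A minimal sketch of such a model is given below; the rows are fabricated stand-ins for survey responses, not Scottish Household Survey data, and the effect sizes are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
distance = rng.choice(["<=5min", "6-10min", ">10min"], size=n)
tenure = rng.choice(["owner", "private", "social"], size=n)

# Fabricated outcome: frequent use becomes less likely with distance and for social renters
p = 0.7 - 0.2 * (distance == "6-10min") - 0.4 * (distance == ">10min") - 0.15 * (tenure == "social")
frequent_use = rng.binomial(1, p)

df = pd.DataFrame({"frequent_use": frequent_use, "distance": distance, "tenure": tenure})

# Logistic regression with the nearest distance band and owner-occupiers as reference groups
model = smf.logit("frequent_use ~ C(distance, Treatment('<=5min')) + "
                  "C(tenure, Treatment('owner'))", data=df).fit(disp=0)
print(np.exp(model.params))   # odds ratios; values below 1 mean lower odds of frequent use
```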

Keywords: natural space, housing, inequalities, 20-minute neighbourhood, urban design

Procedia PDF Downloads 98
485 Tribological Aspects of Advanced Roll Material in Cold Rolling of Stainless Steel

Authors: Mohammed Tahir, Jonas Lagergren

Abstract:

Vancron 40, a nitrided powder-metallurgical tool steel, is used in cold work applications where the predominant failure mechanisms are adhesive wear or galling. Typical applications of Vancron 40 include fine blanking, cold extrusion, deep drawing and cold work rolls for cluster mills. The positive results obtained with Vancron 40 for cold work rolls in cluster mills, and as a tool in several severe metal forming processes, make it competitive with other types of work rolls for applications that require higher precision, among others the cold rolling of thin stainless steel, which requires high surface finish quality. In this project, three roll materials for cold rolling of stainless steel strip were examined: Vancron 40, Narva 12B (a high-carbon, high-chromium tool steel alloyed with tungsten) and Supra 3 (a chromium-molybdenum-tungsten-vanadium alloyed high-speed steel). The purpose of this project was to study the depth profiles of the ironed stainless steel strips, the emergence of galling, and the performance of the lubricants used by steel industries. Laboratory experiments were conducted to examine scratching of the strip, galling, and the surface roughness of the roll materials under severe tribological conditions. The critical sliding length for the onset of galling was estimated for stainless steel with four different lubricants. The laboratory experiments also provided a performance evaluation of the rolls' resistance to adhesive wear under severe conditions at low and high reductions. Vancron 40 in combination with a cold rolling lubricant gave good surface quality, prevented galling of the metal surfaces, and showed good bearing capacity.

Keywords: Vancron 40, cold rolling, adhesive wear, galling, surface finish, lubricant, stainless steel

Procedia PDF Downloads 513
484 The Impact of the Flipped Classroom Instructional Model on MPharm Students in Two Pharmacy Schools in the UK

Authors: Mona Almanasef, Angel Chater, Jane Portlock

Abstract:

Introduction: A 'flipped classroom' uses technology to shift the traditional lecture outside the scheduled class time and uses the face-to-face time to engage students in interactive activities. Aim of the Study: To assess the feasibility, acceptability, and effectiveness of using the 'flipped classroom' teaching format with MPharm students in two pharmacy schools in the UK: UCL School of Pharmacy and the School of Pharmacy and Biomedical Sciences at the University of Portsmouth. Methods: An experimental mixed methods design was employed with final year MPharm students in two phases: 1) a qualitative study using focus groups; 2) a quasi-experiment measuring knowledge acquisition and satisfaction by delivering a session on rheumatoid arthritis in two teaching formats: the flipped classroom and the traditional lecture. Results: The flipped classroom approach was preferred over the traditional lecture for delivering a pharmacy practice topic, and it was comparable to or better than the traditional lecture with respect to knowledge acquisition. In addition, this teaching approach was found to overcome the perceived challenges of the traditional lecture method, such as fast-paced instruction, student disengagement, and boredom due to a lack of activities and/or social anxiety. However, high workload and difficult or new concepts could be barriers to pre-class preparation, and therefore to a successful flipped classroom. The flipped classroom encouraged learning scaffolding, where students could benefit from the application of knowledge and interaction with peers and the lecturer, which might, in turn, facilitate learning consolidation and deep understanding. This research indicated that the flipped classroom was beneficial for all learning styles. Conclusion: Implementing the flipped classroom at both pharmacy institutions was successful and well received by final year MPharm students. Given the attention now being put on the Teaching Excellence Framework (TEF), understanding effective methods of teaching to enhance student achievement and satisfaction is now more valuable than ever.

Keywords: blended learning, flipped classroom, inverted classroom, pharmacy education

Procedia PDF Downloads 118
483 X-Ray Detector Technology Optimization in Computed Tomography

Authors: Aziz Ikhlef

Abstract:

Most multi-slice Computed Tomography (CT) scanners are built with detectors composed of scintillator-photodiode arrays. The photodiode arrays are mainly based on front-illuminated technology for detectors under 64 slices and on back-illuminated photodiodes for systems of 64 slices or more. Designs based on back-illuminated photodiodes were investigated for CT machines to overcome the challenge of the higher number of routing runs and connections required by front-illuminated diodes. In back-lit diodes, the electronic noise is improved because the load capacitance is reduced along with the routing. This translates into better image quality in low-signal applications, improving low-dose imaging across a large patient population. With the fast development of multi-detector-row CT (MDCT) scanners and the increasing number of examinations, the clinical and regulatory communities have raised significant concerns about the radiation dose received by the patient. In order to reduce individual exposure, and in response to the recommendations of the International Commission on Radiological Protection (ICRP), which suggest that all exposures should be kept as low as reasonably achievable (ALARA), every manufacturer is trying to implement strategies and solutions to optimize dose efficiency and image quality based on x-ray emission and scanning parameters. Added demands on CT detector performance also come from the increased utilization of spectral CT, or dual-energy CT, in which projection data at two different tube potentials are collected. One approach utilizes a technology called fast-kVp switching, in which the tube voltage is switched between 80 kVp and 140 kVp in a fraction of a millisecond. To reduce the cross-contamination of signals, the temporal response of the scintillator-based detector has to be extremely fast to minimize the residual signal from previous samples. In addition, this paper will present an overview of detector technologies and image chain improvements which have been investigated in the last few years to improve the signal-to-noise ratio and the dose efficiency of CT scanners in regular examinations and in energy discrimination techniques. Several parameters of the image chain in general, and of the detector technology in particular, contribute to the optimization of the final image quality. We will go through the properties of the post-patient collimation to improve the scatter-to-primary ratio; the scintillator material properties such as light output, afterglow, primary speed, and crosstalk to improve spectral imaging; the photodiode design characteristics; and the data acquisition system (DAS), optimized for crosstalk, noise and temporal/spatial resolution.
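
To make the temporal-response requirement concrete, the short sketch below estimates how much residual (afterglow) signal from the previous view contaminates the next one when the scintillator response is modelled as a single exponential decay. The decay times and view period used are purely illustrative assumptions, not figures from the paper.

# Sketch: residual signal carried into the next view under fast-kVp switching,
# modelling the scintillator response as a single exponential decay.
# The numbers below are illustrative assumptions, not measured values.
import math

def residual_fraction(decay_time_us: float, view_period_us: float) -> float:
    """Fraction of the previous view's signal remaining after one view period."""
    return math.exp(-view_period_us / decay_time_us)

view_period_us = 200.0  # roughly 5000 views per second (assumed)
for decay_time_us in (1.0, 3.0, 30.0):
    f = residual_fraction(decay_time_us, view_period_us)
    print(f"decay {decay_time_us:5.1f} us -> residual {f:.2e} of the previous sample")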

Keywords: computed tomography, X-ray detector, medical imaging, image quality, artifacts

Procedia PDF Downloads 176
482 Aerial Survey and 3D Scanning Technology Applied to the Survey of Cultural Heritage of Su-Paiwan, an Aboriginal Settlement, Taiwan

Authors: April Hueimin Lu, Liangj-Ju Yao, Jun-Tin Lin, Susan Siru Liu

Abstract:

This paper discusses the application of aerial survey technology and 3D laser scanning technology in the surveying and mapping of the settlements and slate houses of the old Taiwanese aborigines. The relics of the old Taiwanese aborigines, with thousands of years of history, are widely distributed in the deep mountains of Taiwan, covering a vast area with inconvenient transportation. When constructing the basic data of cultural assets, it is necessary to apply new technology to carry out efficient and accurate settlement mapping. In this paper, taking the old Paiwan settlement as an example, an aerial survey of the settlement of about 5 hectares and a 3D laser scan of a slate house were carried out. The obtained orthophoto image was used as an important basis for drawing the settlement map. The 3D landscape data of topography and buildings derived from the aerial survey are important for subsequent preservation planning, while the 3D scan of the building provides a more detailed record of architectural forms and materials. The 3D settlement data from the aerial survey can be further applied to a 3D virtual model and animation of the settlement for virtual presentation. The information from the 3D scan of the slate house can also be used for further digital archives and data queries through network resources. The results of this study show that, in large-scale settlement surveys, aerial surveying technology can construct the topography of settlements, with buildings and the spatial information of the landscape, while 3D scanning provides small-scale records of individual buildings. This application of 3D technology greatly increases the efficiency and accuracy of the survey and mapping of aboriginal settlements and is very helpful for further preservation planning and rejuvenation of aboriginal cultural heritage.
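
As an illustration of the feature-matching step that underlies the SfM/MVS photogrammetry workflow referenced in the keywords, the sketch below matches SIFT keypoints between two overlapping aerial photographs with OpenCV. It is only the first stage of a reconstruction pipeline, and the file names are hypothetical.

# Sketch: SIFT keypoint matching between two overlapping aerial photos,
# the first step of a structure-from-motion reconstruction.
# File names are hypothetical placeholders.
import cv2

img1 = cv2.imread("aerial_001.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("aerial_002.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Brute-force matching with Lowe's ratio test to keep distinctive matches.
matcher = cv2.BFMatcher()
good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.75 * n.distance]
print(f"{len(good)} good matches from {len(kp1)} and {len(kp2)} keypoints")

# These correspondences would feed camera pose estimation and dense
# multi-view stereo in a full photogrammetry pipeline.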

Keywords: aerial survey, 3D scanning, aboriginal settlement, settlement architecture cluster, ecological landscape area, old Paiwan settlements, slate house, photogrammetry, SfM, MVS, point cloud, SIFT, DSM, 3D model

Procedia PDF Downloads 139
481 Predicting Radioactive Waste Glass Viscosity, Density and Dissolution with Machine Learning

Authors: Joseph Lillington, Tom Gout, Mike Harrison, Ian Farnan

Abstract:

The vitrification of high-level nuclear waste within borosilicate glass and its incorporation within a multi-barrier repository deep underground is widely accepted as the preferred disposal method. However, for this to happen, any safety case will require validation that the initially localized radionuclides will not be released to a considerable extent into the near/far-field. Therefore, accurate mechanistic models are necessary to predict glass dissolution, and these should be robust to a variety of incorporated waste species and leaching test conditions, particularly given substantial variations across international waste-streams. Here, machine learning is used to predict glass material properties (viscosity, density) and glass leaching model parameters from large-scale industrial data. A variety of machine learning algorithms have been compared to assess performance. Density was predicted solely from composition, whereas viscosity additionally considered temperature. To predict suitable glass leaching model parameters, a large simulated dataset was created by coupling MATLAB and the chemical reactive-transport code HYTEC, considering the state-of-the-art GRAAL model (glass reactivity in allowance of the alteration layer). The trained models were then applied to the large-scale industrial, experimental data to identify potentially appropriate model parameters. Results indicate that ensemble methods can accurately predict viscosity as a function of temperature and composition across all three industrial datasets. Glass density prediction shows reliable learning performance, with predictions primarily falling within the experimental uncertainty of the test data. Furthermore, machine learning can predict the behaviour of glass dissolution model parameters, demonstrating potential value in GRAAL model development and in assessing suitable model parameters for large-scale industrial glass dissolution data.
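
As a rough sketch of the ensemble approach described above (not the authors' exact pipeline), the snippet below trains a gradient-boosted regressor to predict viscosity from oxide composition and temperature. The data file, oxide columns and target name are hypothetical placeholders.

# Sketch: ensemble regression of glass viscosity from composition and temperature.
# Column names and the data file are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("glass_viscosity.csv")  # oxide wt% columns + temperature + viscosity
X = df[["SiO2", "B2O3", "Na2O", "Al2O3", "Li2O", "temperature_C"]]
y = df["log10_viscosity"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingRegressor(n_estimators=500, learning_rate=0.05, max_depth=3)
model.fit(X_train, y_train)

print("R2 on held-out data:", r2_score(y_test, model.predict(X_test)))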

Keywords: machine learning, predictive modelling, pattern recognition, radioactive waste glass

Procedia PDF Downloads 96
480 GAILoc: Improving Fingerprinting-Based Localization System Using Generative Artificial Intelligence

Authors: Getaneh Berie Tarekegn

Abstract:

A precise localization system is crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities. These applications include traffic monitoring, emergency alarming, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. The most common method for providing continuous positioning services in outdoor environments is a global navigation satellite system (GNSS). Due to non-line-of-sight, multipath, and weather conditions, GNSS systems do not perform well in dense urban, urban, and suburban areas. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. We present a novel semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization. We also employ a reliable signal fingerprint feature extraction method based on t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduced the workload of site surveying required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of GAILoc is less than 39 cm, and more than 90% of the errors are less than 82 cm. That is, the numerical results proved that, in comparison to traditional methods, the proposed GAILoc method can significantly improve positioning performance and reduce radio map construction costs.
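
A minimal sketch of the t-SNE feature-extraction step described above is given below, applied to synthetic hybrid WLAN/LTE RSSI fingerprints. The data are random placeholders, and the S-DCGAN radio-map construction itself is not reproduced here.

# Sketch: embedding hybrid WLAN/LTE RSSI fingerprints with t-SNE.
# The fingerprints below are synthetic placeholders, not survey data.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
n_points, n_aps = 500, 40  # reference points x access points / LTE cells
fingerprints = rng.uniform(-100, -30, size=(n_points, n_aps))  # RSSI in dBm

embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(fingerprints)
print(embedding.shape)  # (500, 2): low-dimensional features for the localization model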

Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine

Procedia PDF Downloads 48
479 Characterization of Lahar Sands for Reclamation Projects in the Manila Bay, Philippines

Authors: Julian Sandoval, Philipp Schober

Abstract:

Lahar sand (lahars) is a material that originates from volcanic debris flows. During and after a volcanic eruption, lahars can move at speeds of up to 22 meters per second or more, so they can easily cover extensive areas and destroy any structure in their path. The Mount Pinatubo eruption (1991) brought lahars to its vicinity, and their use has been a matter of research ever since. Lahars are often disposed of in land reclamation projects in Manila Bay, Philippines. After reclamation, some deep loose deposits may still be present, and these are prone to liquefaction. To mitigate the risk of liquefaction of such deposits, vibro compaction has been proposed and used as a ground improvement technique. Cone penetration testing (CPT) campaigns are usually initiated to monitor the effectiveness of the ground improvement works by vibro compaction. The CPT cone resistance is used to analyse the in-situ relative density of the reclaimed sand before and after compaction. Available correlations between the CPT cone resistance and the relative density are only valid for non-crushable sands. Due to the partially crushable nature of lahars, the CPT data require adjustment to allow for a correct interpretation. The objective of this paper is to characterize the chemical and mechanical properties of the lahar sands used for an ongoing project in the Port of Manila, which comprises reclamation activities using lahars from the east of Mount Pinatubo, and to investigate their effect on the proposed correction factor. Additionally, numerous CPTs were carried out in a test trial and during the execution of the project. Based on these data, the influence of the grid spacing, the compaction steps, and the holding time on the compaction results is analyzed. Moreover, the so-called "aging effect" of the lahars is studied by comparing the results of the CPT testing campaign at different times after the vibro compaction activities. A considerable increase in the tip resistance of the CPT was observed over time.
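
To illustrate why a correction factor matters when interpreting CPT data in crushable sands, the sketch below applies a widely cited silica-sand correlation (the Baldi et al. 1986 form calibrated on Ticino sand) with an illustrative shell-type correction factor applied to the measured cone resistance. The constants, the example CPT reading and the correction values are assumptions for demonstration, not the factor derived in this study.

# Sketch: relative density from CPT cone resistance using the Baldi et al. (1986)
# correlation for silica (Ticino) sand, with an illustrative correction factor
# for partially crushable lahar sands. Constants and factors are assumptions.
import math

C0, C1, C2 = 157.0, 0.55, 2.41  # Baldi et al. (1986); stresses in kPa

def relative_density(qc_kpa: float, sigma_v_eff_kpa: float, correction: float = 1.0) -> float:
    """Relative density (fraction); the measured qc is scaled by the correction factor."""
    qc_equivalent = qc_kpa * correction
    return (1.0 / C2) * math.log(qc_equivalent / (C0 * sigma_v_eff_kpa ** C1))

qc, sigma_v = 12_000.0, 100.0   # example post-compaction CPT reading (assumed)
for factor in (1.0, 1.25, 1.5): # 1.0 = no crushability correction
    dr = relative_density(qc, sigma_v, factor)
    print(f"correction factor {factor:.2f} -> Dr = {100 * dr:.0f}%")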

Keywords: vibro compaction, CPT, lahar sands, correction factor, chemical composition

Procedia PDF Downloads 189