Search results for: analytic network process
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 18936

16026 Fairness in Grading of Work-Integrated Learning Assessment: Key Stakeholders’ Challenges and Solutions

Authors: Geraldine O’Neill

Abstract:

Work-integrated learning is a valuable learning experience for students in higher education. However, the fairness of the assessment process has been identified as a challenge. This study explored solutions to this challenge through interviews with expert authors in the field and workshops across nine different disciplines in Ireland. In keeping with the use of a participatory and action research methodology, the key stakeholders in the process (the students, educators, and practitioners) identified some solutions. The solutions included the need to: clarify the assessments' expectations; enhance the flexibility of the competencies; reduce the number of competencies; use grading scales with lower specificity; support practitioner training; and empower students in the assessment process. The results are discussed as they relate to interactional, procedural, and distributive fairness.

Keywords: competencies, fairness, grading scales, work-integrated learning

Procedia PDF Downloads 110
16025 Performance Based Road Asset Evaluation

Authors: Kidus Dawit Gedamu

Abstract:

The Addis Ababa City Roads Authority is responsible for managing the city's road network and evaluating its performance using the International Roughness Index (IRI). This allows the authority to conduct pavement condition assessments of asphalt roads each year to determine the health status, or level of service (LOS), of the roadway network and to plan programmed improvements such as maintenance, resurfacing and rehabilitation. For a lower IRI limit, an economical and acceptable maintenance strategy may be selected from a number of maintenance alternatives. The Highway Development and Management (HDM-4) tool supports such decisions by evaluating the economic and structural conditions of each option. This paper specifically addresses flexible pavement, covering two principal arterial streets under the administration of the Addis Ababa City Roads Authority: the road from Megenagna Interchange to Ayat Square and the road from Ayat Square to Tafo RA. First, the procedures followed by the city's road authority to develop appropriate road maintenance strategies were assessed; questionnaire surveys and interviews were used to collect information from the city's road maintenance departments. Second, a project analysis was performed for the functional and economic comparison of different maintenance alternatives using HDM-4.
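
To illustrate the IRI-based selection step described above, here is a minimal Python sketch; the IRI bands and treatments are hypothetical placeholders, not values taken from the Authority or from HDM-4:

import pandas as pd

# Hypothetical IRI (m/km) bands for asphalt roads -- illustrative only, not the
# thresholds used by the Addis Ababa City Roads Authority or by HDM-4.
def level_of_service(iri):
    """Map a measured IRI value to a condition rating and a candidate treatment."""
    if iri < 4.0:
        return "good", "routine maintenance"
    if iri < 6.0:
        return "fair", "resurfacing"
    return "poor", "rehabilitation"

segments = pd.DataFrame({
    "segment": ["Megenagna-Ayat km 1", "Megenagna-Ayat km 2", "Ayat-Tafo km 1"],
    "iri":     [3.2, 5.1, 7.8],
})
ratings = segments["iri"].apply(level_of_service)
segments["condition"] = [c for c, _ in ratings]
segments["treatment"] = [t for _, t in ratings]
print(segments)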

Keywords: appropriate maintenance strategy, cost stream, road deterioration, maintenance alternative

Procedia PDF Downloads 42
16024 Strengthening by Assessment: A Case Study of Rail Bridges

Authors: Evangelos G. Ilias, Panagiotis G. Ilias, Vasileios T. Popotas

Abstract:

The United Kingdom has one of the oldest railway networks in the world, dating back to 1825 when the world's first passenger railway was opened. The network has some 40,000 bridges of various construction types using a wide range of materials, including masonry, steel, cast iron, wrought iron, concrete and timber. It is commonly accepted that the successful operation of the network is vital for the economy of the United Kingdom; consequently, the cost-effective maintenance of the existing infrastructure is a high priority to maintain the operability of the network, prevent deterioration and extend the life of the assets. Every bridge on the railway network is required to be assessed every eighteen years, and a structured approach to assessments is adopted, with three main types of progressively more detailed assessments used. These assessment types include Level 0 (standardized spreadsheet assessment tools), Level 1 (analytical hand calculations) and Level 2 (generally finite element analyses). There is a degree of conservatism in the first two types of assessment, dictated to some extent by the relevant standards, which can lead to some structures not achieving the required load rating. In these situations, a Level 2 Assessment is often carried out using finite element analysis to uncover 'latent strength' and improve the load rating. If successful, the more sophisticated analysis can save on costly strengthening or replacement works and avoid disruption to the operational railway. This paper presents the 'strengthening by assessment' achieved by Level 2 analyses. The use of more accurate analysis assumptions and the implementation of non-linear modelling and functions (material, geometric and support) to better understand buckling modes and the structural behaviour of historic construction details that are not specifically covered by assessment codes are outlined. Metallic bridges, which are susceptible to loss of section size through corrosion, have the largest scope for improvement under the Level 2 Assessment methodology. Three case studies are presented, demonstrating the effectiveness of the sophisticated Level 2 Assessment methodology using finite element analysis against the conservative approaches employed for Level 0 and Level 1 Assessments. One rail overbridge and two rail underbridges that did not achieve the required load rating by means of a Level 1 Assessment, due to the inadequate restraint provided by U-frame action, are examined, and the increase in assessed capacity given by the Level 2 Assessment is outlined.

Keywords: assessment, bridges, buckling, finite element analysis, non-linear modelling, strengthening

Procedia PDF Downloads 296
16023 Development and Implementation of a Curvature-Dependent Force Correction Algorithm for the Planning of Force-Controlled Robotic Grinding

Authors: Aiman Alshare, Sahar Qaadan

Abstract:

A curvature-dependent force correction algorithm for planning a force-controlled grinding process with off-line programming flexibility is designed for an ABB industrial robot, in order to avoid manual intervention during the process. The machining path utilizes a spline curve fit constructed from the CAD data of the workpiece. The fitted spline has second-order continuity to assure path smoothness. The implemented algorithm computes uniform forces normal to the grinding surface of the workpiece by constructing a curvature path in the spatial coordinates using the spline method.
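
A minimal Python sketch of the idea, using a scipy cubic spline (which is C2-continuous) over hypothetical CAD sample points; the force target and curvature-correction gain are illustrative assumptions, not the paper's parameters:

import numpy as np
from scipy.interpolate import CubicSpline

# Illustrative planar path built from hypothetical CAD sample points.
pts = np.array([[0.0, 0.0], [10.0, 2.0], [20.0, 1.0], [30.0, 4.0], [40.0, 3.0]])
t = np.linspace(0.0, 1.0, len(pts))
sx, sy = CubicSpline(t, pts[:, 0]), CubicSpline(t, pts[:, 1])

def normal_force_setpoints(n=50, f_target=20.0, k_gain=5.0):
    """Return path points, unit normals and curvature-corrected force magnitudes.

    f_target is the nominal normal force (N); k_gain is a hypothetical
    curvature-correction gain, not a value from the paper.
    """
    u = np.linspace(0.0, 1.0, n)
    dx, dy = sx(u, 1), sy(u, 1)            # first derivatives of the spline
    ddx, ddy = sx(u, 2), sy(u, 2)          # second derivatives
    speed = np.hypot(dx, dy)
    kappa = np.abs(dx * ddy - dy * ddx) / speed**3   # planar curvature
    normals = np.column_stack([-dy, dx]) / speed[:, None]  # unit surface normals
    forces = f_target * (1.0 + k_gain * kappa)       # curvature-dependent correction
    return np.column_stack([sx(u), sy(u)]), normals, forces

path, normals, forces = normal_force_setpoints()
print(forces[:5])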

Keywords: ABB industrial robot, grinding process, offline programming, CAD data extraction, force correction algorithm

Procedia PDF Downloads 347
16022 Explanation Conceptual Model of the Architectural Form Effect on Structures in Building Aesthetics

Authors: Fatemeh Nejati, Farah Habib, Sayeh Goudarzi

Abstract:

Architecture and structure have always been closely interrelated, so that they should be integrated into a unified, coherent and beautiful whole, while in the contemporary era structure and architecture often proceed separately. The purpose of architecture is the art of creating form, space and order in the service of people, while the goal of the structural engineer is the transfer of loads through the structure. This research pursues this goal by examining the relationship between architectural form and structure from its inception to the present day. Finally, by identifying the main components of structural design in interaction with the architectural form, an effective step is taken towards professional training and towards offering solutions to practitioners. Therefore, after reviewing the evolution of structural and architectural coordination in various historical periods, as well as how the form of the structure has been arrived at in different times and places, the required components are identified so that they can be tested and the final theory presented. Finally, this research indicates that architectural form and structure have an aesthetic link, which is influenced by a number of components that can be identified and which follows a recognisable order throughout history. The research methodology is analytic and comparative, using analytical and matrix diagrams, with library research and interviews as the data collection tools.

Keywords: architecture, structural form, structural and architectural coordination, effective components, aesthetics

Procedia PDF Downloads 200
16021 Fabrication of Wollastonite/Hydroxyapatite Coatings on Zirconia by Room Temperature Spray Process

Authors: Jong Kook Lee, Sangcheol Eum, Jaehong Kim

Abstract:

Wollastonite/hydroxyapatite composite coatings on zirconia were obtained by a room temperature spray process. Wollastonite powder was synthesized by a solid-state reaction between calcite and silica powders. Hydroxyapatite powder was prepared from bovine bone by calcination at 1200 °C for 1 h. From the two starting raw powders, three kinds of powder mixture were obtained by ball milling for 24 h. Using these powders, wollastonite/hydroxyapatite coatings were fabricated on zirconia substrates by a room temperature spray process, and their microstructure and biological behavior were investigated and compared with those of pure wollastonite and hydroxyapatite coatings. The wollastonite/hydroxyapatite coatings on zirconia substrates were homogeneous in microstructure and had a nanoscale grain size. The phase composition of the resultant wollastonite/hydroxyapatite coatings was similar to that of the starting powders; however, the grain size of the wollastonite and hydroxyapatite particles was reduced to about 100 nm because the coating forms by particle impaction and fracture. The wollastonite/hydroxyapatite coating layer exhibited bioactivity in a simulated body fluid (SBF), forming new hydroxyapatite precipitates of about 25 nm during the in vitro test in the SBF solution, an ability that was enhanced by increasing the wollastonite content.

Keywords: wollastonite, hydroxyapatite composite coatings, room temperature spray process, zirconia

Procedia PDF Downloads 466
16020 Globally Attractive Mild Solutions for Non-Local in Time Subdiffusion Equations of Neutral Type

Authors: Jorge Gonzalez Camus, Carlos Lizama

Abstract:

In this work, the existence of at least one globally attractive mild solution to the Cauchy problem is proved for a fractional evolution equation of neutral type involving the fractional derivative in the Caputo sense. An almost sectorial operator on a Banach space X and a kernel belonging to a large class appear in the equation, which covers many relevant cases from physics applications, in particular the important case of time-fractional evolution equations of neutral type. The main tools used in this work were the Hausdorff measure of noncompactness and fixed point theorems, specifically of Darbo type. Initially, the equation is posed as a Cauchy problem involving a fractional derivative in the Caputo sense. Then the equivalent integral version is formulated and, by defining a convenient functional using the analytic integral resolvent operator and verifying the hypotheses of the Darbo-type fixed point theorem, the existence of a mild solution to the initial problem is obtained. Furthermore, each mild solution is globally attractive, a property that is desirable in the asymptotic behavior of such solutions.
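
For reference, the Caputo fractional derivative invoked above has the standard textbook definition (with n the smallest integer greater than α); the specific kernel class treated in the paper is not reproduced here:

$$
{}^{C}D_t^{\alpha}u(t) = \frac{1}{\Gamma(n-\alpha)} \int_0^{t} (t-s)^{\,n-\alpha-1}\, u^{(n)}(s)\, ds, \qquad n-1 < \alpha < n.
$$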

Keywords: attractive mild solutions, integral Volterra equations, neutral type equations, non-local in time equations

Procedia PDF Downloads 139
16019 Capability Prediction of Machining Processes Based on Uncertainty Analysis

Authors: Hamed Afrasiab, Saeed Khodaygan

Abstract:

Prediction of machining process capability at the design stage plays a key role in achieving precision in the design and manufacturing of mechanical products. Inaccuracies in the machining process lead to errors in the position and orientation of machined features on the part and strongly affect the process capability and the final quality of the product. In this paper, an efficient systematic approach is given to investigate machining errors in order to predict the manufacturing errors of the parts and the capability of the corresponding machining processes. A mathematical formulation of fixture locator modeling is presented to establish the relationship between the part errors and their sources. Based on this method, the final machining errors of the part can be accurately estimated by relating them to the combined dimensional and geometric tolerances of the workpiece-fixture system. The method is developed for uncertainty analysis based on both worst-case and statistical approaches. The application of the presented method is illustrated through an example, and the computational results are compared with Monte Carlo simulation results.
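
A simplified Python sketch contrasting the two uncertainty treatments on a hypothetical one-dimensional tolerance stack; it is not the paper's fixture-locator formulation, only an illustration of worst-case versus Monte Carlo (statistical) estimation:

import numpy as np

# Hypothetical 1-D stack of three dimensions (mm) with symmetric tolerances.
nominals   = np.array([25.0, 40.0, 12.5])
tolerances = np.array([0.05, 0.08, 0.03])

# Worst-case approach: tolerances add linearly.
worst_case = tolerances.sum()

# Statistical approach (Monte Carlo): treat each tolerance as +/-3 sigma of a normal.
rng = np.random.default_rng(0)
samples = rng.normal(nominals, tolerances / 3.0, size=(100_000, 3)).sum(axis=1)
mc_spread = 3.0 * samples.std()      # +/-3 sigma of the resulting stack dimension

print(f"worst-case stack    : +/-{worst_case:.3f} mm")
print(f"Monte Carlo (3 sig) : +/-{mc_spread:.3f} mm")   # close to root-sum-square, smaller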

Keywords: process capability, machining error, dimensional and geometrical tolerances, uncertainty analysis

Procedia PDF Downloads 297
16018 Automatic Product Identification Based on Deep-Learning Theory in an Assembly Line

Authors: Fidel Lòpez Saca, Carlos Avilés-Cruz, Miguel Magos-Rivera, José Antonio Lara-Chávez

Abstract:

Automated object recognition and identification systems are widely used throughout the world, particularly in assembly lines, where they perform quality control and automatic part selection tasks. This article presents the design and implementation of an object recognition system in an assembly line. The proposed shape and color recognition system is based on deep learning theory and a specially designed convolutional network architecture. The methodology involves stages such as image capturing, color filtering, location of object mass centers, determination of horizontal and vertical object boundaries, and object clipping. Once the objects are cut out, they are sent to a convolutional neural network, which automatically identifies the type of figure. The identification system works in real time. The implementation was done on a Raspberry Pi 3 system and on a Jetson Nano device. The proposal is used in an assembly course of the bachelor's degree in industrial engineering. The results presented include the recognition efficiency and the processing time.
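
A minimal PyTorch sketch of the classification stage; the layer sizes, image resolution and number of classes are illustrative assumptions, not the authors' network:

import torch
import torch.nn as nn

class PartClassifier(nn.Module):
    """Small CNN classifying clipped 64x64 RGB object images (illustrative only)."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * 8 * 8, 128), nn.ReLU(), nn.Linear(128, n_classes)
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = PartClassifier()
dummy_batch = torch.randn(8, 3, 64, 64)      # stand-in for 8 clipped object images
logits = model(dummy_batch)
print(logits.argmax(dim=1))                   # predicted figure type per image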

Keywords: deep-learning, image classification, image identification, industrial engineering

Procedia PDF Downloads 144
16017 Reading and Writing Memories in Artificial and Human Reasoning

Authors: Ian O'Loughlin

Abstract:

Memory networks aim to integrate some of the recent successes in machine learning with a dynamic memory base that can be updated and deployed in artificial reasoning tasks. These models involve training networks to identify, update, and operate over stored elements in a large memory array in order, for example, to ably perform question and answer tasks parsing real-world and simulated discourses. This family of approaches still faces numerous challenges: the performance of these network models in simulated domains remains considerably better than in open, real-world domains, wide-context cues remain elusive in parsing words and sentences, and even moderately complex sentence structures remain problematic. This innovation, employing an array of stored and updatable ‘memory’ elements over which the system operates as it parses text input and develops responses to questions, is a compelling one for at least two reasons: first, it addresses one of the difficulties that standard machine learning techniques face, by providing a way to store a large bank of facts, offering a way forward for the kinds of long-term reasoning that, for example, recurrent neural networks trained on a corpus have difficulty performing. Second, the addition of a stored long-term memory component in artificial reasoning seems psychologically plausible; human reasoning appears replete with invocations of long-term memory, and the stored but dynamic elements in the arrays of memory networks are deeply reminiscent of the way that human memory is readily and often characterized. However, this apparent psychological plausibility is belied by a recent turn in the study of human memory in cognitive science. In recent years, the very notion that there is a stored element which enables remembering, however dynamic or reconstructive it may be, has come under deep suspicion. In the wake of constructive memory studies, amnesia and impairment studies, and studies of implicit memory—as well as following considerations from the cognitive neuroscience of memory and conceptual analyses from the philosophy of mind and cognitive science—researchers are now rejecting storage and retrieval, even in principle, and instead seeking and developing models of human memory wherein plasticity and dynamics are the rule rather than the exception. In these models, storage is entirely avoided by modeling memory using a recurrent neural network designed to fit a preconceived energy function that attains zero values only for desired memory patterns, so that these patterns are the sole stable equilibrium points in the attractor network. So although the array of long-term memory elements in memory networks seem psychologically appropriate for reasoning systems, they may actually be incurring difficulties that are theoretically analogous to those that older, storage-based models of human memory have demonstrated. The kind of emergent stability found in the attractor network models more closely fits our best understanding of human long-term memory than do the memory network arrays, despite appearances to the contrary.
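
The storage-free, attractor-based account contrasted here with memory-network arrays can be illustrated with a minimal Hopfield-style sketch in Python, in which 'memories' are stable fixed points of the dynamics rather than items retrieved from a stored array; the patterns and sizes are arbitrary:

import numpy as np

# Minimal Hopfield-style attractor network: remembering is relaxation to an
# equilibrium of the dynamics, not lookup of a stored element.
patterns = np.array([
    [ 1, -1,  1, -1,  1, -1,  1, -1],
    [ 1,  1,  1,  1, -1, -1, -1, -1],
])
n = patterns.shape[1]
W = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(W, 0.0)                     # Hebbian weights, no self-connections

def settle(state, sweeps=5):
    """Asynchronously update units until the state relaxes to an attractor."""
    state = state.copy()
    rng = np.random.default_rng(1)
    for _ in range(sweeps):
        for i in rng.permutation(n):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

noisy = patterns[0].copy()
noisy[1] *= -1                               # corrupt one unit
print(settle(noisy))                          # relaxes back to the first pattern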

Keywords: artificial reasoning, human memory, machine learning, neural networks

Procedia PDF Downloads 255
16016 Using Hidden Markov Chain for Improving the Dependability of Safety-Critical Wireless Sensor Networks

Authors: Issam Alnader, Aboubaker Lasebae, Rand Raheem

Abstract:

Wireless sensor networks (WSNs) are distributed network systems used in a wide range of applications, including safety-critical systems. The latter provide critical services, often concerned with human life or assets. Therefore, ensuring the dependability requirements of safety-critical systems is of paramount importance. The purpose of this paper is to utilize the Hidden Markov Model (HMM) to extend the service availability of WSNs by increasing the time it takes a node to become obsolete, via optimal load balancing. We propose an HMM algorithm that, given a WSN, analyses and predicts undesirable situations, notably nodes dying unexpectedly or prematurely. We apply this technique to improve on C. Liu's algorithm, a scheduling-based algorithm which has served to improve the lifetime of WSNs. Our experiments show that our HMM technique improves the lifetime of the network by detecting nodes that die early and rebalancing their load. Our technique can also be used for diagnosis and can provide maintenance warnings to WSN system administrators. Finally, our technique can be used to improve algorithms other than C. Liu's.
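
A minimal Python sketch of the HMM idea: a two-state model (healthy versus at-risk node) scored with the forward algorithm over discretised energy readings; all probabilities and the alarm threshold are hypothetical, not the paper's parameters:

import numpy as np

# Hypothetical 2-state HMM for a sensor node: state 0 = healthy, 1 = at-risk.
# Observations are discretised residual-energy readings: 0 = high, 1 = medium, 2 = low.
A  = np.array([[0.90, 0.10],        # state transition probabilities
               [0.20, 0.80]])
B  = np.array([[0.70, 0.25, 0.05],  # emission probabilities per state
               [0.10, 0.30, 0.60]])
pi = np.array([0.8, 0.2])           # initial state distribution

def forward(obs):
    """Forward algorithm: P(current hidden state | observation sequence)."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha / alpha.sum()

readings = [0, 1, 1, 2, 2, 2]       # a node draining faster than expected
posterior = forward(readings)
if posterior[1] > 0.7:              # hypothetical alarm threshold
    print("node flagged for load rebalancing:", posterior)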

Keywords: wireless sensor networks, IoT, dependability of safety WSNs, energy conservation, sleep awake schedule

Procedia PDF Downloads 85
16015 DTI Connectome Changes in the Acute Phase of Aneurysmal Subarachnoid Hemorrhage Improve Outcome Classification

Authors: Sarah E. Nelson, Casey Weiner, Alexander Sigmon, Jun Hua, Haris I. Sair, Jose I. Suarez, Robert D. Stevens

Abstract:

Graph-theoretical information from structural connectomes indicated significant connectivity changes and improved acute prognostication in a Random Forest (RF) model in aneurysmal subarachnoid hemorrhage (aSAH), a condition that can lead to significant morbidity and mortality and for which outcome prediction has traditionally been poor. This study's hypothesis was that structural connectivity changes occur in canonical brain networks of acute aSAH patients and that these changes are associated with functional outcome at six months. In a prospective cohort of patients admitted to a single institution for management of acute aSAH, patients underwent diffusion tensor imaging (DTI) as part of a multimodal MRI scan. A weighted undirected structural connectome was created from each patient's images using Constant Solid Angle (CSA) tractography, with 176 regions of interest (ROIs) defined by the Johns Hopkins Eve atlas. ROIs were sorted into four networks: Default Mode Network, Executive Control Network, Salience Network, and Whole Brain. The resulting nodes and edges were characterized using graph-theoretic features, including Node Strength (NS), Betweenness Centrality (BC), Network Degree (ND), and Connectedness (C). Clinical features (including demographics and the World Federation of Neurosurgical Societies scale) and graph features were used separately and in combination to train RF and Logistic Regression classifiers to predict two outcomes: dichotomized modified Rankin Score (mRS) at discharge and at six months after discharge (favorable outcome mRS 0-2, unfavorable outcome mRS 3-6). A total of 56 aSAH patients underwent DTI a median of 7 days (IQR = 8.5) after admission. The best performing model (RF), combining clinical and DTI graph features, had a mean Area Under the Receiver Operating Characteristic Curve (AUROC) of 0.88 ± 0.00 and Area Under the Precision-Recall Curve (AUPRC) of 0.95 ± 0.00 over 500 trials. The combined model performed better than the clinical model alone (AUROC 0.81 ± 0.01, AUPRC 0.91 ± 0.00). The highest-ranked graph features for prediction were NS, BC, and ND. These results indicate reorganization of the connectome early after aSAH. The performance of clinical prognostic models was increased significantly by the inclusion of DTI-derived graph connectivity metrics. This methodology could significantly improve prognostication in aSAH.
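
A simplified Python sketch of the feature-extraction and classification chain (node strength and betweenness centrality from a weighted connectome, combined with a clinical covariate and fed to a Random Forest); the connectomes, labels and feature summaries below are synthetic placeholders, not the study data:

import numpy as np
import networkx as nx
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def graph_features(conn):
    """Summaries of node strength and betweenness centrality for a weighted connectome.
    (In practice connectome weights are usually converted to distances for betweenness.)"""
    G = nx.from_numpy_array(conn)
    strength = np.array([d for _, d in G.degree(weight="weight")])
    betweenness = np.array(list(nx.betweenness_centrality(G, weight="weight").values()))
    return np.array([strength.mean(), strength.std(),
                     betweenness.mean(), betweenness.max()])

# Synthetic stand-ins: 56 patients, 20-node weighted connectomes, a WFNS grade, an outcome.
X, y = [], []
for _ in range(56):
    conn = rng.random((20, 20)); conn = (conn + conn.T) / 2; np.fill_diagonal(conn, 0)
    wfns = rng.integers(1, 6)
    X.append(np.append(graph_features(conn), wfns))
    y.append(rng.integers(0, 2))   # placeholder: 0 = favourable mRS, 1 = unfavourable

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, np.array(X), np.array(y), cv=5, scoring="roc_auc"))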

Keywords: connectomics, diffusion tensor imaging, graph theory, machine learning, subarachnoid hemorrhage

Procedia PDF Downloads 175
16014 Enhancing the Recruitment Process through Machine Learning: An Automated CV Screening System

Authors: Kaoutar Ben Azzou, Hanaa Talei

Abstract:

Human resources is an important department in every organization, as it manages the life cycle of employees from recruitment and training to retirement or termination of contracts. The recruitment process starts with a job opening, followed by the selection of the best-fit candidates from all applicants. Matching the best profile to a job position requires manually reviewing many CVs, which takes hours of work and can sometimes lead to choosing a less suitable profile. The work presented in this paper aims at reducing the workload of HR personnel by automating the preliminary stages of the candidate screening process, thereby fostering a more streamlined recruitment workflow. This tool introduces an automated system designed to help with the recruitment process by scanning candidates' CVs, extracting pertinent features, and employing machine learning algorithms to decide the most fitting job profile for each candidate. Our work employs natural language processing (NLP) techniques to identify and extract key features, such as education, work experience, and skills, from the unstructured text extracted from a CV. Subsequently, the system utilizes these features to match candidates with job profiles, leveraging the power of classification algorithms.
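
A minimal Python sketch of the screening idea using TF-IDF features over extracted CV text and a linear classifier; the training texts, job profiles and pipeline choices are invented placeholders, not the authors' implementation:

from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Placeholder training data: raw text extracted from CVs and the matched job profile.
cv_texts = [
    "BSc computer science, 3 years Python, machine learning, SQL",
    "MSc finance, financial analysis, Excel modelling, auditing",
    "BEng electrical, PLC programming, maintenance, AutoCAD",
    "BA marketing, social media campaigns, SEO, content writing",
]
job_profiles = ["data_scientist", "financial_analyst", "electrical_engineer", "marketer"]

screener = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), stop_words="english")),
    ("clf", LogisticRegression(max_iter=1000)),
])
screener.fit(cv_texts, job_profiles)

new_cv = "4 years Python and SQL, built machine learning models for churn prediction"
print(screener.predict([new_cv])[0])   # expected to match the data_scientist profile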

Keywords: automated recruitment, candidate screening, machine learning, human resources management

Procedia PDF Downloads 37
16013 Parametric Study of Underground Opening Stability under Uncertainty Conditions

Authors: Aram Yakoby, Yossef H. Hatzor, Shmulik Pinkert

Abstract:

This work presents an applied engineering method for evaluating the stability of underground openings under conditions of uncertainty. The developed method is demonstrated by a comprehensive parametric study on a case of large-diameter vertical borehole stability analysis, with uncertainties regarding the in-situ stress distribution. To this aim, a safety factor analysis is performed for the stability of both supported and unsupported boreholes. In the analysis, we used analytic geomechanical calculations and advanced numerical modeling to evaluate the estimated stress field. In addition, the work presents the development of a boundary condition for the numerical model that fits the nature of the problem and yields excellent accuracy. The borehole stability analysis is studied in terms of (1) the stress ratio in the vertical and horizontal directions, (2) the mechanical properties and geometry of the support system, and (3) the parametric sensitivity. The method's results are studied in light of a real case study of an underground waste disposal site. The conclusions of this study focus on the developed method for capturing the parametric uncertainty, the definition of critical geological depths, the criteria for implementing structural support, and the effectiveness of further in-situ investigations.
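
A minimal Python sketch of the stress-ratio parametric sweep, using the classical Kirsch expression for the maximum hoop stress at the wall of a dry, unsupported circular opening; the strength, stresses and acceptance criterion are hypothetical, not the site values of the study:

import numpy as np

# Hypothetical inputs -- not the values from the underground waste disposal case study.
ucs      = 60.0          # unconfined compressive strength of the rock (MPa)
sigma_h  = 12.0          # minor horizontal in-situ stress (MPa)
k_ratios = np.linspace(1.0, 3.0, 9)   # uncertainty range of sigma_H / sigma_h

for k in k_ratios:
    sigma_H = k * sigma_h
    # Kirsch solution: maximum compressive hoop stress at the wall of an
    # unsupported circular opening (dry, linear-elastic sketch).
    sigma_theta_max = 3.0 * sigma_H - sigma_h
    fos = ucs / sigma_theta_max
    flag = "support recommended" if fos < 1.3 else "stable"   # hypothetical criterion
    print(f"k = {k:.2f}  hoop stress = {sigma_theta_max:5.1f} MPa  FoS = {fos:.2f}  {flag}")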

Keywords: borehole stability, in-situ stress, parametric study, factor of safety

Procedia PDF Downloads 47
16012 An Evaluation of Impact of Video Billboard on the Marketing of GSM Services in Lagos Metropolis

Authors: Shola Haruna Adeosun, F. Adebiyi Ajoke, Odedeji Adeoye

Abstract:

The study of video billboard advertising by networks and brand switching was conceived out of curiosity about the huge billboard advertising expenditures made by the three major GSM network operators in Nigeria. The study was anchored in the Lagos State metropolis, with a current census population of over 1,000,000. From this population, a purposive sample of 400 was adopted, and the questionnaire designed for the survey was carefully allocated to members of this sample across the five geographical zones of the city so that each rung of society was well represented. The data obtained were analyzed using tables and simple percentages. The results showed that subscribers of these networks were hardly influenced by the video billboard advertisements. They overwhelmingly indicated that, rather than the slogans of the GSM networks carried on the video billboards, it was the incentives to subscribers as well as the promotional strategies of these organizations that moved them to switch from one network to another. This switching lasted only as long as the incentives and promotions were in effect. The results of the study also seemed to rekindle the age-old debate on media effects among the unyielding schools of the 'all-powerful media', 'limited effects media', 'controlled effects media' and 'negotiated media influence' theories.

Keywords: evaluation, impact, video billboard, marketing, services

Procedia PDF Downloads 236
16011 Analyzing and Predicting the CL-20 Detonation Reaction Mechanism Based on Artificial Intelligence Algorithm

Authors: Kaining Zhang, Lang Chen, Danyang Liu, Jianying Lu, Kun Yang, Junying Wu

Abstract:

In order to address the large computational effort and limited simulation scale of first-principles molecular dynamics simulations of energetic material detonation reactions, we established an artificial intelligence model for analyzing and predicting the detonation reaction mechanism of CL-20, based on first-principles molecular dynamics simulations using the multiscale shock technique (MSST). We employed principal component analysis to identify the dominant charge features governing molecular reactions. We adopted the K-means clustering algorithm to cluster the reaction paths and screen out the key reactions. We introduced a neural network algorithm to construct the mapping relationship between the charge characteristics of the molecular structure and the key reaction characteristics, so as to establish a calculation method for predicting detonation reactions based on the charge characteristics of CL-20 and to realize rapid analysis of the reaction mechanism of energetic materials.
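
A minimal Python sketch of the analysis chain described (PCA on charge features, K-means clustering of reaction paths, and a neural network mapping charge features to key-reaction characteristics); the arrays stand in for the MD-derived data and all sizes are arbitrary:

import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Placeholder MD-derived data: per-snapshot charge descriptors and reaction-path features.
charges       = rng.normal(size=(500, 30))   # 500 snapshots x 30 charge features
path_features = rng.normal(size=(500, 8))    # descriptors of the observed reaction paths

# 1) Dominant charge features governing the reactions.
pca = PCA(n_components=5).fit(charges)
charge_pcs = pca.transform(charges)

# 2) Cluster the reaction paths; dominant clusters play the role of "key reactions".
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(path_features)

# 3) Learn the mapping from charge structure to key-reaction characteristics.
targets = kmeans.transform(path_features)    # distance to each key-reaction centre
model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0)
model.fit(charge_pcs, targets)
print("explained charge variance:", pca.explained_variance_ratio_.round(2))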

Keywords: energetic material detonation reaction, first-principle molecular dynamics simulation of multiscale shock technique, neural network, CL-20

Procedia PDF Downloads 91
16010 Investigation of Oscillation Mechanism of a Large-scale Solar Photovoltaic and Wind Hybrid Power Plant

Authors: Ting Kai Chia, Ruifeng Yan, Feifei Bai, Tapan Saha

Abstract:

This research presents a real-world power system oscillation incident that occurred in 2022, originating from a hybrid solar photovoltaic (PV) and wind renewable energy farm with a rated capacity of approximately 300 MW in Australia. The voltage and reactive power outputs recorded at the point of common coupling (PCC) oscillated in the sub-synchronous frequency region, and the oscillation was sustained for approximately five hours in the network. The reactive power oscillation gradually increased over time and reached a recorded maximum of approximately 250 MVar peak-to-peak (from inductive to capacitive). The network service provider was not able to quickly identify the location of the oscillation source because the issue was widespread across the network. After the incident, the original equipment manufacturer (OEM) concluded that the oscillation problem was caused by the incorrect setting recovery of the hybrid power plant controller (HPPC) in the voltage and reactive power control loop after a loss-of-communication event. The voltage controller normally outputs a reactive power (Q) reference value to the Q controller, which controls the Q dispatch setpoint of the PV and wind plants in the hybrid farm. Meanwhile, a feed-forward (FF) configuration is used to bypass the Q controller in case of a loss of communication. Further study found that the FF control mode was still engaged when communication was re-established, which ultimately resulted in the oscillation event. However, there was no detailed explanation of why the FF control mode can cause instability in the hybrid farm, and the event had not been reproduced in simulation to analyze the root cause of the oscillation. Therefore, this research aims to model and replicate the oscillation event in a simulation environment and to investigate the underlying behavior of the HPPC and the consequent oscillation mechanism during the incident. The outcome of this research will provide significant benefits to the safe operation of large-scale renewable energy generators and power networks.

Keywords: PV, oscillation, modelling, wind

Procedia PDF Downloads 14
16009 Impact of Instagram Food Bloggers on Consumer (Generation Z) Decision Making Process in Islamabad, Pakistan

Authors: Tabinda Sadiq, Tehmina Ashfaq Qazi, Hoor Shumail

Abstract:

Recently, the advent of emerging technology has created a new generation of restaurant marketing. This study explores the aspects that influence customers' decision-making process when selecting a restaurant after reading food bloggers' reviews online. The motivation behind this research is to investigate the correlation between the credibility of the source and consumers' attitude toward restaurant visits. The researcher collected the data by distributing a survey questionnaire through Google Forms, employing source credibility theory. A non-probability purposive sampling technique was used to collect the data. The questionnaire used a pre-developed and validated scale by Ohanian to measure the relationship. The researcher collected data from 250 respondents in order to investigate the influence of food bloggers on Generation Z's decision-making process. SPSS Statistics version 26 was used for statistical testing and analyzing the data. The findings of the survey revealed that there is a moderate positive correlation between the variables. It can therefore be concluded that food bloggers do have an impact on Generation Z's decision-making process.

Keywords: credibility, decision making, food bloggers, generation z, e-wom

Procedia PDF Downloads 58
16008 F-VarNet: Fast Variational Network for MRI Reconstruction

Authors: Omer Cahana, Maya Herman, Ofer Levi

Abstract:

Magnetic resonance imaging (MRI) is a lengthy medical scan owing to its long acquisition time. This length is mainly due to the traditional sampling theorem, which defines a lower bound on sampling. However, it is still possible to accelerate the scan by using a different approach, such as compressed sensing (CS) or parallel imaging (PI). These two complementary methods can be combined to achieve a faster scan with high-fidelity imaging. In order to achieve that, two properties have to hold: i) the signal must be sparse under a known transform domain, and ii) the sampling method must be incoherent. In addition, a nonlinear reconstruction algorithm needs to be applied to recover the signal. Despite the rapid advances in the deep learning (DL) field, which has demonstrated tremendous success in various computer vision tasks, the field of MRI reconstruction is still at an early stage. In this paper, we present an extension of the state-of-the-art model in MRI reconstruction, VarNet. We extend VarNet by using dilated convolutions at different scales, which enlarges the receptive field to capture more contextual information. Moreover, we simplified the sensitivity map estimation (SME), which holds many unnecessary layers for this task. These improvements yield significant decreases in computation cost as well as higher accuracy.
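
A minimal PyTorch sketch of the multi-scale dilated-convolution idea; the channel count and dilation rates are illustrative, not the F-VarNet configuration:

import torch
import torch.nn as nn

class MultiScaleDilatedBlock(nn.Module):
    """Parallel dilated convolutions widen the receptive field without extra pooling."""
    def __init__(self, channels=32, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        ])
        self.fuse = nn.Conv2d(channels * len(dilations), channels, kernel_size=1)

    def forward(self, x):
        multi_scale = torch.cat([branch(x) for branch in self.branches], dim=1)
        return torch.relu(self.fuse(multi_scale)) + x   # residual connection

block = MultiScaleDilatedBlock()
feature_map = torch.randn(1, 32, 128, 128)   # placeholder intermediate feature map
print(block(feature_map).shape)               # torch.Size([1, 32, 128, 128])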

Keywords: MRI, deep learning, variational network, computer vision, compressed sensing

Procedia PDF Downloads 138
16007 Public Space Appropriation of a Public Peripheric Library in El Agustino, Lima Metropolitana: A Qualitative Study

Authors: Camila Freire Barrios, Gonzalo Rivera Talavera

Abstract:

The importance of public spaces has been demonstrated for many years and across different disciplines, one example being their role in developing a sustainable social environment, especially in megacities like Lima. The aim of this study was to explore the process of space appropriation that occurs in the Peripheral Library of the district of El Agustino in Lima, Peru. Space appropriation is a process by which people develop a bond with a place within a specific sociocultural context. This process has been related to positive outcomes, such as participation and the development of caring behaviors toward these places. To achieve the purpose of the research, a qualitative design was selected because it allowed exploring the process in depth in a specific context. The study interviewed six adults, all of whom were deliberately chosen for having the longest residence time in the district and for using the library the most. In a complementary manner, two children and one adolescent were interviewed. Likewise, two observations were made, on a weekday and on a weekend, and publicly available documentation was collected. As a result, five categories linked to this process were identified. It was found that the process of space appropriation begins with the needs of the people who arrive at the library, which benefits these people by fulfilling those needs. Next in the process, through the construction of meanings, the library is valued as a pleasant, productive, safe and regulated place; as a result, people come to identify with the library. The identification generated is subsequently reflected in the level of participation that the person has in the library, which may range along a continuum from not participating at all to more direct involvement in library activities, as well as voluntary and altruistic work. Finally, this process leads to the library becoming part of the neighborhood. This study provides a better understanding of how sociospatial processes work in a Latin American context and in cities like Lima, where a third of the country's population lives. Lima has also grown excessively and with little planning over the past 50 years. Therefore, these results raise new research questions and highlight the importance of learning how to design public spaces in order to promote the development of such processes.

Keywords: bond with the place, place identity, public spaces, space appropriation

Procedia PDF Downloads 218
16006 Character and Evolution of Electronic Waste: A Technologically Developing Country's Experience

Authors: Karen C. Olufokunbi, Odetunji A. Odejobi

Abstract:

This paper examines the generation, accumulation and growth of e-waste in a developing country. Images and other data about computer e-waste were collected using a digital camera, 290 copies of a questionnaire and three structured interviews, using the Obafemi Awolowo University (OAU), Ile-Ife, Nigeria environment as a case study. The numerical data were analysed using the R data analysis and processing tool. Automata-based techniques and a Petri net modelling tool were used to design and simulate a computational model for the recovery of saleable materials from e-waste. The R analysis showed that, at a 95 percent confidence level, the computer equipment that will be disposed of by 2020 will be 417 units. Compared to the 800 units in circulation in 2014, 50 percent of personal computer components will have become e-waste. This indicates that personal computer components were in high demand due to their low cost and will be disposed of more rapidly when replaced by new computer equipment. Also, 57 percent of the respondents discarded their computer e-waste by throwing it into the garbage bin or by dumping it. The model simulated with the Coloured Petri net modelling tool showed that the e-waste dynamics form a forward sequential process in the form of a pipeline, meaning that the recovery of saleable materials from e-waste occurs in identifiable discrete stages, and indicating that e-waste will continue to accumulate and grow in volume over time.

Keywords: Coloured Petri net, computational modelling, electronic waste, electronic waste process dynamics

Procedia PDF Downloads 152
16005 Supergrid Modeling and Operation and Control of Multi Terminal DC Grids for the Deployment of a Meshed HVDC Grid in South Asia

Authors: Farhan Beg, Raymond Moberly

Abstract:

The Indian subcontinent is facing a massive challenge with regard to energy security in its member countries: providing reliable electricity to facilitate development across various sectors of the economy and consequently achieve the developmental targets. The instability of the current precarious situation is observable in the frequent system failures and blackouts. The deployment of an interconnected electricity 'Supergrid' designed to carry huge quanta of power across the Indian subcontinent is proposed in this paper. Besides enabling energy security in the subcontinent, it will also provide a platform for the integration of Renewable Energy Sources (RES). This paper assesses the need and conditions for a Supergrid deployment and consequently proposes a meshed topology based on Voltage Source Converter High Voltage Direct Current (VSC-HVDC) technology for the Supergrid modeling. Various control schemes for voltage and power are utilized for the regulation of the network parameters. A three-terminal Multi-Terminal Direct Current (MTDC) network is used for the simulations.

Keywords: super grid, wind and solar energy, high voltage direct current, electricity management, load flow analysis

Procedia PDF Downloads 416
16004 A Safety Analysis Method for Multi-Agent Systems

Authors: Ching Louis Liu, Edmund Kazmierczak, Tim Miller

Abstract:

Safety analysis for multi-agent systems is complicated by the potentially nonlinear interactions between agents. This paper proposes a method for analyzing the safety of multi-agent systems by explicitly focusing on interactions and on the accident data of systems that are similar in structure and function to the system being analyzed. The method creates a Bayesian network using the accident data from similar systems. A feature of our method is that the events in the accident data are labeled with HAZOP guide words. Our method uses an ontology to abstract away from the details of a particular multi-agent implementation. Using the ontology, our method then constructs an 'interaction map', a graphical representation of the patterns of interactions between agents and other artifacts. Interaction maps combined with statistical data from accidents and the HAZOP classifications of events can be converted into a Bayesian network. Bayesian networks allow designers to explore 'what if' scenarios and make design trade-offs that maintain safety. We show how to use the Bayesian networks and the interaction maps to improve multi-agent system designs.
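
A minimal Python sketch of the kind of conditional probability table a node of such a Bayesian network would be estimated from, using HAZOP-labelled interaction records; the records below are invented:

import pandas as pd

# Invented accident records from "similar" systems: each row is one recorded event
# labelled with a HAZOP guide word and the interaction type from the interaction map.
records = pd.DataFrame({
    "guide_word":  ["NO", "MORE", "LATE", "NO", "MORE", "LATE", "NO", "MORE"],
    "interaction": ["agent-agent", "agent-sensor", "agent-agent", "agent-sensor",
                    "agent-agent", "agent-sensor", "agent-agent", "agent-agent"],
    "accident":    [1, 1, 0, 0, 1, 1, 0, 1],
})

# Conditional probability table P(accident | guide_word, interaction) by counting,
# i.e. the maximum-likelihood estimate a Bayesian network node would use.
cpt = records.groupby(["guide_word", "interaction"])["accident"].mean()
print(cpt)

# "What if" query: probability of an accident given a MORE deviation
# in an agent-agent interaction.
print(cpt.loc[("MORE", "agent-agent")])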

Keywords: multi-agent system, safety analysis, safety model, interaction map

Procedia PDF Downloads 401
16003 DNpro: A Deep Learning Network Approach to Predicting Protein Stability Changes Induced by Single-Site Mutations

Authors: Xiao Zhou, Jianlin Cheng

Abstract:

A single amino acid mutation can have a significant impact on the stability of protein structure. Thus, the prediction of protein stability change induced by single site mutations is critical and useful for studying protein function and structure. Here, we presented a deep learning network with the dropout technique for predicting protein stability changes upon single amino acid substitution. While using only protein sequence as input, the overall prediction accuracy of the method on a standard benchmark is >85%, which is higher than existing sequence-based methods and is comparable to the methods that use not only protein sequence but also tertiary structure, pH value and temperature. The results demonstrate that deep learning is a promising technique for protein stability prediction. The good performance of this sequence-based method makes it a valuable tool for predicting the impact of mutations on most proteins whose experimental structures are not available. Both the downloadable software package and the user-friendly web server (DNpro) that implement the method for predicting protein stability changes induced by amino acid mutations are freely available for the community to use.
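
A minimal PyTorch sketch of a dropout-regularised feed-forward classifier over sequence-derived features; the input encoding and layer sizes are illustrative assumptions, not the DNpro architecture:

import torch
import torch.nn as nn

class StabilityNet(nn.Module):
    """Dropout-regularised network predicting stabilising vs destabilising mutations."""
    def __init__(self, n_features=60, p_drop=0.3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(128, 64), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(64, 2),                 # stabilising / destabilising
        )

    def forward(self, x):
        return self.net(x)

# Placeholder encoding: e.g. a window of sequence context around the mutation site
# plus wild-type/mutant amino-acid indicators, flattened to 60 numbers per sample.
features = torch.randn(16, 60)
model = StabilityNet()
logits = model(features)
print(logits.softmax(dim=1)[:3])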

Keywords: bioinformatics, deep learning, protein stability prediction, biological data mining

Procedia PDF Downloads 444
16002 Design of Process Parameters in Electromagnetic Forming Apparatus by FEM

Authors: Hyeong-Gyu Park, Hak-Gon Noh, Beom-Soo Kang, Jeong Kim

Abstract:

The electromagnetic forming (EMF) process is a high-speed forming process that uses an electromagnetic body (Lorentz) force to deform the workpiece. The advantages of EMF include improved formability, reduced wrinkling, and non-contact forming. In this study, a spiral coil is considered in order to evaluate formability in terms of the pressure distribution of the forming process. Forming results from a numerical analysis using the ANSYS code are also presented. In the numerical simulation, an RLC circuit coupled with the spiral coil was modeled to account for design parameters such as the system input current and the electromagnetic force. The simulation results show that even though the peak input current is at the same level in each case, the forming condition is certainly different because of the frequency of the input current and the magnitudes of the current density and magnetic flux density. Finally, the simulation results indicate that the electromagnetic forming force is markedly affected by the input current frequency, which determines the magnitude of the current density and magnetic flux density.
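
A minimal Python sketch of the underdamped series-RLC discharge that drives the coil, showing how R, L and C jointly set the discharge frequency and peak current; the circuit values are hypothetical, not those of the study:

import numpy as np

# Hypothetical capacitor-bank discharge into the forming coil (series RLC, underdamped).
V0 = 8e3        # initial capacitor voltage (V)
C  = 160e-6     # capacitance (F)
L  = 4e-6       # total circuit inductance (H)
R  = 10e-3      # total circuit resistance (ohm)

alpha   = R / (2 * L)                              # damping coefficient
omega_d = np.sqrt(1.0 / (L * C) - alpha**2)        # damped angular frequency
t = np.linspace(0, 400e-6, 2000)
i = (V0 / (omega_d * L)) * np.exp(-alpha * t) * np.sin(omega_d * t)

print(f"discharge frequency : {omega_d / (2 * np.pi) / 1e3:.1f} kHz")
print(f"peak coil current   : {i.max() / 1e3:.1f} kA")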

Keywords: electromagnetic forming, high-speed forming, RLC circuit, Lorentz force

Procedia PDF Downloads 443
16001 Influence of Decolourisation Condition on the Physicochemical Properties of Shea (Vitellaria paradoxa Gaertner F) Butter

Authors: Ahmed Mohammed Mohagir, Ahmat-Charfadine Mahamat, Nde Divine Bup, Richard Kamga, César Kapseu

Abstract:

In this investigation, kinetic studies of the adsorption of the colour material of shea butter showed a peak at a wavelength of 440 nm, and the equilibrium time was found to be 30 min. Response surface methodology applying a Doehlert experimental design was used to investigate the decolourisation parameters of crude shea butter. The decolourisation process was significantly influenced by three independent parameters: contact time, decolourisation temperature and adsorbent dose. The responses of the process were oil loss, acid value, peroxide value and colour index. Response surface plots were successfully generated to visualise the effect of the independent parameters on the responses of the process.
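
A minimal Python sketch of fitting a second-order response surface to design runs; the coded design points and responses below are placeholders (a real Doehlert design fixes the run locations), not the experimental data:

import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

# Placeholder design matrix (contact time, temperature, adsorbent dose in coded units)
# and one response (colour index).
X = np.array([[ 0,    0,     0    ], [ 1,    0,     0    ], [-1,    0,     0    ],
              [ 0.5,  0.866, 0    ], [-0.5, -0.866, 0    ], [ 0.5, -0.866, 0    ],
              [-0.5,  0.866, 0    ], [ 0.5,  0.289, 0.816], [-0.5, -0.289,-0.816],
              [ 0,    0.577,-0.816]])
colour_index = np.array([2.1, 1.6, 2.8, 1.7, 2.9, 2.3, 2.2, 1.5, 3.0, 2.4])

quadratic = PolynomialFeatures(degree=2, include_bias=False)
model = LinearRegression().fit(quadratic.fit_transform(X), colour_index)

# Predict the response at a new operating point (coded factor levels).
new_point = quadratic.transform([[0.25, 0.5, 0.4]])
print(model.predict(new_point))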

Keywords: decolourisation, doehlert experimental design, physicochemical characterisation, RSM, shea butter

Procedia PDF Downloads 399
16000 Study of Self-Assembled Photocatalyst by Metal-Terpyridine Interactions in Polymer Network

Authors: Dong-Cheol Jeong, Jookyung Lee, Yu Hyeon Ro, Changsik Song

Abstract:

The design and synthesis of photo-active polymeric systems are important with regard to solar energy harvesting and utilization. In this study, we synthesized a photo-active polymer, thin films, and a polymer gel via iterative self-assembly using reversible metal-terpyridine (M-tpy) interactions. The photocurrent generated in the polymeric thin films with Zn(II) was much higher than that in the other films. The apparent diffusion rate constant (kapp) for the electron hopping process was measured via potential-step chronoamperometry. As a result, the kapp for the polymeric thin films with Zn(II) was almost two times larger than that for films with other metal ions. We found that the anodic photocurrents increased with the inclusion of a multi-walled carbon nanotube (MWNT) layer, as MWNTs can provide efficient electron transfer pathways. In addition, the polymer gel based on interactions between terpyridine and metal ions showed photocatalytic activity. Interestingly, in the Mg-terpyridine gel, the photo-oxidative coupling of benzylamine to the imine was faster than in the Fe-terpyridine gel, because the Mg-terpyridine gel has a two-step electron transfer pathway whereas the Fe-terpyridine gel has a three-step electron transfer pathway.

Keywords: terpyridine, photocatalyst, self-assembly, metal-ligand

Procedia PDF Downloads 294
15999 AI-Enabled Smart Contracts for Reliable Traceability in Industry 4.0

Authors: Harris Niavis, Dimitra Politaki

Abstract:

The manufacturing industry has been collecting vast amounts of data for monitoring product quality thanks to advances in the ICT sector, and dedicated IoT infrastructure is deployed to track and trace the production line. However, industries have not yet managed to unleash the full potential of these data due to defective data collection methods and untrusted data storage and sharing. Blockchain is gaining increasing ground as a key technology enabler for Industry 4.0 and the smart manufacturing domain, as it enables the secure storage and exchange of data between stakeholders. On the other hand, AI techniques are increasingly used to detect anomalies in batch and time-series data, enabling the identification of unusual behaviors. The proposed scheme is based on smart contracts to enable automation and transparency in the data exchange, coupled with anomaly detection algorithms to enable reliable data ingestion into the system. Before sensor measurements are fed to the blockchain component and the smart contracts, the anomaly detection mechanism uniquely combines artificial intelligence models to effectively detect unusual values such as outliers and extreme deviations in the incoming data. Specifically, autoregressive integrated moving average models, Long Short-Term Memory (LSTM) and dense autoencoders, as well as Generative Adversarial Network (GAN) models, are used to detect both point and collective anomalies. Towards the goal of preserving the privacy of industries' information, the smart contracts employ techniques to ensure that only anonymized pointers to the actual data are stored on the ledger, while sensitive information remains off-chain. In the same spirit, blockchain technology guarantees the security of the data storage through strong cryptography, as well as the integrity of the data through the decentralization of the network and the execution of the smart contracts by the majority of the blockchain network actors. The blockchain component of the Data Traceability Software is based on the Hyperledger Fabric framework, which lays the ground for the deployment of smart contracts and APIs to expose the functionality to the end users. The results of this work demonstrate that such a system can increase the quality of the end products and the trustworthiness of the monitoring process in the smart manufacturing domain. The proposed AI-enabled data traceability software can be employed by industries to accurately trace and verify quality records through the entire production chain and to take advantage of the multitude of monitoring records in their databases.
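
A minimal Python sketch of one ingredient of the scheme, an autoencoder that flags sensor readings with a large reconstruction error before they would be anchored to the ledger; the data, architecture and threshold are synthetic, and the paper's ARIMA, LSTM and GAN detectors are not reproduced here:

import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(2000, 8)).astype(np.float32)   # normal sensor windows
x = torch.from_numpy(normal)

autoencoder = nn.Sequential(
    nn.Linear(8, 4), nn.ReLU(), nn.Linear(4, 2),      # encoder
    nn.Linear(2, 4), nn.ReLU(), nn.Linear(4, 8),      # decoder
)
optimiser = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
for _ in range(200):                                   # train on normal data only
    optimiser.zero_grad()
    loss = nn.functional.mse_loss(autoencoder(x), x)
    loss.backward()
    optimiser.step()

def is_anomalous(reading, threshold=None):
    """Flag a reading whose reconstruction error is far above the training error."""
    with torch.no_grad():
        errors = ((autoencoder(x) - x) ** 2).mean(dim=1)
        limit = threshold or (errors.mean() + 3 * errors.std()).item()
        r = torch.as_tensor(reading, dtype=torch.float32)
        err = ((autoencoder(r) - r) ** 2).mean().item()
    return err > limit

print(is_anomalous(np.zeros(8, dtype=np.float32)))        # typical reading, expect False
print(is_anomalous(np.full(8, 6.0, dtype=np.float32)))    # extreme deviation, expect True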

Keywords: blockchain, data quality, Industry 4.0, product quality

Procedia PDF Downloads 167
15998 Selecting Graduates for the Interns’ Award by Using Multisource Feedback Process: Does It Work?

Authors: Kathryn Strachan, Sameer Otoom, Amal AL-Gallaf, Ahmed Al Ansari

Abstract:

Introduction: Introducing a reliable method to select graduates for an award in higher education can be challenging but is not impossible. Multisource feedback (MSF) is a popular assessment tool that relies on evaluations by different groups of people, including physicians and non-physicians. It is useful for assessing several domains, including professionalism, communication and collaboration, and may be useful for selecting the best interns to receive a university award. Methods: 16 graduates responded to an invitation to participate in the student award, which was conducted by the Royal College of Surgeons in Ireland - Medical University of Bahrain (RCSI Bahrain) using the MSF process. Five individuals from the following categories rated each participant: physicians, nurses, and fellow students. RCSI Bahrain graduates were assessed in the following domains: professionalism, communication, and collaboration. Means and standard deviations were calculated, and the award was given to the graduate who scored the highest among his/her colleagues. Cronbach's coefficient was used to determine the questionnaire's internal consistency and reliability. Factor analysis was conducted to examine construct validity. Results: 16 graduates participated in the RCSI Bahrain interns' award based on the MSF process, giving a 16.5% response rate. The instrument was found to be suitable for factor analysis and showed a three-factor solution representing 79.3% of the total variance. Reliability analysis indicated that the full scale of the instrument had high internal consistency (Cronbach's α = 0.98). Conclusion: This study found the MSF process to be reliable and valid for selecting the best graduates for the interns' award. However, the low response rate may suggest that the process is not feasible for allowing the majority of the students to participate in the selection process. Further research studies may be required to support the feasibility of the MSF process in selecting graduates for the university award.
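
A minimal Python sketch of the Cronbach's α computation reported above; the rating matrix is invented for illustration:

import numpy as np

def cronbach_alpha(ratings):
    """Cronbach's alpha for a respondents x items matrix of scores."""
    ratings = np.asarray(ratings, dtype=float)
    k = ratings.shape[1]
    item_vars = ratings.var(axis=0, ddof=1).sum()
    total_var = ratings.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

# Invented example: 6 raters scoring one graduate on 9 questionnaire items (1-5 scale).
scores = [
    [5, 4, 5, 4, 5, 5, 4, 5, 5],
    [4, 4, 4, 3, 4, 4, 4, 4, 4],
    [5, 5, 5, 5, 5, 5, 5, 5, 5],
    [3, 3, 4, 3, 3, 4, 3, 3, 3],
    [4, 5, 4, 4, 5, 4, 4, 5, 4],
    [5, 4, 5, 5, 4, 5, 5, 4, 5],
]
print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")   # high internal consistency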

Keywords: MSF, RCSI, validity, Bahrain

Procedia PDF Downloads 326
15997 Internal Assessment of Satisfaction with the Quality of the Learning Process

Authors: Bulatbayeva A. A., Maxutova I. O., Ergalieva A. N.

Abstract:

This article presents a study of the practice of self-assessment of the quality of cadet training in a military higher specialized educational institution. The research was carried out by means of a questionnaire survey aimed at identifying the degree of satisfaction of cadets with the organization of the educational process, the quality of teaching, the quality of the organization of independent work, and the system by which they are assessed. In general, the results of the study are of an intermediate nature. Proven tools will be incorporated into the planning and effective management of the learning process. The results of the study can be useful to administrators and managers of the military education system and to teachers of military higher educational institutions for adjusting the content and technologies of training future specialists. The publication was prepared as part of applied grant research for 2020-2022 by order of the Ministry of Education and Science of the Republic of Kazakhstan on the topic "Development of a comprehensive methodology for assessing the quality of education of graduates of military special educational institutions."

Keywords: teaching quality, quality satisfaction, learning management, quality management, process approach, classroom learning, interactive technologies

Procedia PDF Downloads 115