Search results for: lexical and numerical error-recognition tasks
850 Relativity in Toddlers' Understanding of the Physical World as Key to Misconceptions in the Science Classroom
Authors: Michael Hast
Abstract:
Within their first year, infants can differentiate between objects based on their weight. By at least 5 years of age, children hold consistent weight-related misconceptions about the physical world, such as that heavy things fall faster than lighter ones because of their weight. Such misconceptions are seen as a challenge for science education since they are often highly resistant to change through instruction. Understanding the time point of emergence of such ideas could, therefore, be crucial for early science pedagogy. The paper thus discusses two studies that jointly address the issue by examining young children's search behaviour in hidden displacement tasks under consideration of relative object weight. In both studies, children were tested with a heavy or a light ball, and they either had information about one of the balls only or about both. In Study 1, 88 toddlers aged 2 to 3½ years watched a ball being dropped into a curved tube and were then allowed to search for the ball in three locations – one straight beneath the tube entrance, one where the curved tube led to, and one that corresponded to neither of the previous outcomes. Success and failure at the task were not impacted by the weight of the balls alone in any particular way. However, from around 3 years onwards, relative lightness, gained through having tactile experience of both balls beforehand, enhanced search success. Conversely, relative heaviness increased search errors such that children increasingly searched in the location immediately beneath the tube entry – known as the gravity bias. In Study 2, 60 toddlers aged 2, 2½ and 3 years watched a ball roll down a ramp and behind a screen with four doors, with a barrier placed along the ramp after one of the four doors. Toddlers were allowed to open the doors to find the ball. While search accuracy generally increased with age, relative weight did not play a role in 2-year-olds' search behaviour. Relative lightness improved 2½-year-olds' searches. At 3 years, both relative lightness and relative heaviness had a significant impact, with the former improving search accuracy and the latter reducing it. Taken together, both studies suggest that between 2 and 3 years of age, relative object weight is increasingly taken into consideration in navigating naïve physical concepts. In particular, it appears to contribute to the early emergence of misconceptions relating to object weight. This insight from developmental psychology research may have consequences for early science education and related pedagogy towards early conceptual change.
Keywords: conceptual development, early science education, intuitive physics, misconceptions, object weight
Procedia PDF Downloads 190
849 Mechanisms Underlying Comprehension of Visualized Personal Health Information: An Eye Tracking Study
Authors: Da Tao, Mingfu Qin, Wenkai Li, Tieyan Wang
Abstract:
While the use of electronic personal health portals has gained increasing popularity in the healthcare industry, users usually experience difficulty in comprehending and correctly responding to personal health information, partly due to inappropriate or poor presentation of the information. The way personal health information is visualized may affect how users perceive and assess their personal health information. This study was conducted to examine the effects of information visualization format and visualization mode on the comprehension and perceptions of personal health information among personal health information users with eye tracking techniques. A two-factor within-subjects experimental design was employed, where participants were instructed to complete a series of personal health information comprehension tasks under two visualization modes (i.e., whether the information visualization is static or dynamic) and three visualization formats (i.e., bar graph, instrument-like graph, and text-only format). Data on a set of measures, including comprehension performance, perceptions, and eye movement indicators, were collected during task completion in the experiment. Repeated-measures analyses of variance (RM-ANOVAs) were used for data analysis. The results showed that while the visualization format yielded no effects on comprehension performance, it significantly affected users' perceptions (such as perceived ease of use and satisfaction). The two graphic visualizations yielded significantly more favorable scores on subjective evaluations than the text format. While visualization mode showed no effects on users' perception measures, it significantly affected users' comprehension performance in that dynamic visualization significantly reduced users' information search time. Both visualization format and visualization mode had significant main effects on eye movement behaviors, and their interaction effects were also significant. While the bar graph format and text format had similar times to first fixation across dynamic and static visualizations, the instrument-like graph format had a longer time to first fixation for dynamic visualization than for static visualization. The two graphic visualization formats yielded shorter total fixation durations compared with the text-only format, indicating their ability to improve information comprehension efficiency. The results suggest that dynamic visualization can improve efficiency in comprehending important health information, and that graphic visualization formats were favored more by users. The findings help clarify the underlying comprehension mechanisms of visualized personal health information and provide important implications for the optimal design and visualization of personal health information.
Keywords: eye tracking, information comprehension, personal health information, visualization
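A minimal sketch of the kind of repeated-measures ANOVA described above, written with statsmodels. The CSV file and column names are hypothetical stand-ins for the study's data layout.

```python
# Minimal sketch of a repeated-measures ANOVA on an eye-tracking outcome.
# The CSV file and column names are hypothetical, not the study's data.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Long-format data: one row per participant x mode x format trial.
# Expected columns: participant, mode ("static"/"dynamic"),
# format ("bar"/"instrument"/"text"), total_fixation_duration
df = pd.read_csv("eye_tracking_trials.csv")

model = AnovaRM(
    data=df,
    depvar="total_fixation_duration",
    subject="participant",
    within=["mode", "format"],
    aggregate_func="mean",   # average repeated trials within each cell
)
result = model.fit()
print(result)  # F statistics and p-values for main effects and the interaction
```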
Procedia PDF Downloads 109
848 Optimization of Economic Order Quantity of Multi-Item Inventory Control Problem through Nonlinear Programming Technique
Authors: Prabha Rohatgi
Abstract:
To obtain efficient control over the huge inventory of drugs in the pharmacy department of any hospital, the medicines are generally categorized first on the basis of their cost using 'ABC' analysis (Always Better Control) and then on the basis of their criticality using 'VED' analysis (Vital, Essential, Desirable) for prioritization. About one-third of the annual expenditure of a hospital is spent on medicines. To minimize the inventory investment, the hospital management may like to keep the medicines inventory low, as medicines are perishable items. The main aim of each and every hospital is to provide better services to patients with limited resources. To achieve a satisfactory level of health care services for outpatients, a hospital has to keep an eye on the wastage of medicines, because the expiry of medicines causes a great loss of money from a budget that is limited and allocated for a particular period of time. The objective of this study is to identify the categories of medicines requiring intensive managerial control. In this paper, to minimize the total inventory cost and the cost associated with the wastage of money due to expiry of medicines, an inventory control model is used as an estimation tool, and a nonlinear programming technique is then applied under a limited budget and a fixed number of orders to be placed in a limited time period. Numerical computations are given and show that by using scientific methods in hospital services, inventory can be managed more effectively under limited resources and better health care services can be provided. Secondary data have been collected from a hospital to give empirical evidence.
Keywords: ABC-VED inventory classification, multi item inventory problem, nonlinear programming technique, optimization of EOQ
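A minimal sketch of a multi-item EOQ problem with a budget limit and a cap on the total number of orders, solved as a nonlinear program with SciPy. All item data and limits are hypothetical illustration values, not the hospital's dataset.

```python
# Multi-item EOQ with a budget constraint and a cap on total orders,
# solved as a nonlinear program. All numbers are hypothetical.
import numpy as np
from scipy.optimize import minimize

demand = np.array([1200.0, 800.0, 400.0])   # annual demand per drug
order_cost = np.array([50.0, 40.0, 60.0])   # cost per order placed
unit_cost = np.array([5.0, 12.0, 30.0])     # purchase price per unit
holding_rate = 0.2                          # holding cost as fraction of unit cost
budget = 4000.0                             # limit on average inventory investment
max_orders = 40.0                           # limit on total orders per year

def total_cost(q):
    ordering = np.sum(order_cost * demand / q)
    holding = np.sum(holding_rate * unit_cost * q / 2.0)
    return ordering + holding

constraints = [
    # average investment (q/2 * unit cost) must stay within the budget
    {"type": "ineq", "fun": lambda q: budget - np.sum(unit_cost * q / 2.0)},
    # total number of orders per year must not exceed max_orders
    {"type": "ineq", "fun": lambda q: max_orders - np.sum(demand / q)},
]

# unconstrained EOQ as the starting point
q0 = np.sqrt(2.0 * demand * order_cost / (holding_rate * unit_cost))
res = minimize(total_cost, q0, bounds=[(1.0, None)] * len(demand),
               constraints=constraints, method="SLSQP")
print("order quantities:", res.x)
print("total inventory cost:", res.fun)
```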
Procedia PDF Downloads 255
847 Learning with Music: The Effects of Musical Tension on Long-Term Declarative Memory Formation
Authors: Nawras Kurzom, Avi Mendelsohn
Abstract:
The effects of background music on learning and memory are inconsistent, partly due to the intrinsic complexity and variety of music and partly to individual differences in music perception and preference. A prominent musical feature that is known to elicit strong emotional responses is musical tension. Musical tension can be brought about by building anticipation of rhythm, harmony, melody, and dynamics. Delaying the resolution of dominant-to-tonic chord progressions, as well as using dissonant harmonics, can elicit feelings of tension, which can, in turn, affect memory formation of concomitant information. The aim of the presented studies was to explore how forming declarative memory is influenced by musical tension, brought about within continuous music as well as in the form of isolated chords with varying degrees of dissonance/consonance. The effects of musical tension on long-term memory of declarative information were studied in two ways: 1) by evoking tension within continuous music pieces by delaying the release of harmonic progressions from dominant to tonic chords, and 2) by using isolated single complex chords with various degrees of dissonance/roughness. Musical tension was validated through subjective reports of tension, as well as physiological measurements of skin conductance response (SCR) and pupil dilation responses to the chords. In addition, music information retrieval (MIR) was used to quantify musical properties associated with tension and its release. Each experiment included an encoding phase, wherein individuals studied stimuli (words or images) with different musical conditions. Memory for the studied stimuli was tested 24 hours later via recognition tasks. In three separate experiments, we found positive relationships between tension perception and physiological measurements of SCR and pupil dilation. As for memory performance, we found that background music, in general, led to superior memory performance as compared to silence. We detected a trade-off effect between tension perception and memory, such that individuals who perceived musical tension as such displayed reduced memory performance for images encoded during musical tension, whereas tense music benefited memory for those who were less sensitive to the perception of musical tension. Musical tension exerts complex interactions with perception, emotional responses, and cognitive performance on individuals with and without musical training. Delineating the conditions and mechanisms that underlie the interactions between musical tension and memory can benefit our understanding of musical perception at large and the diverse effects that music has on ongoing processing of declarative information.Keywords: musical tension, declarative memory, learning and memory, musical perception
Procedia PDF Downloads 98
846 Finite Difference Modelling of Temperature Distribution around Fire Generated Heat Source in an Enclosure
Authors: A. A. Dare, E. U. Iniegbedion
Abstract:
Industrial furnaces generally involve enclosures of fire typically initiated by the combustion of gases. The fire leads to a temperature distribution inside the enclosure. A proper understanding of the temperature and velocity distribution within the enclosure is often required for optimal design and use of the furnace. This study was therefore directed at numerical modeling of the temperature distribution inside an enclosure, as is typical in a furnace. A mathematical model was developed from the conservation of mass, momentum and energy. The stream function-vorticity formulation of the governing equations was solved by an alternating direction implicit (ADI) finite difference technique. The finite difference formulations obtained were then developed into a computer code. This was used to determine the temperature, velocities, stream function and vorticity. The effect of wall heat conduction was also considered by assuming a one-dimensional heat flow through the wall. The computer code (MATLAB program) developed was used for the determination of the aforementioned variables. The results obtained showed that the transient temperature distribution assumed a uniform profile which became more chaotic with increasing time. The vertical velocity showed increasingly turbulent behavior with time, while the horizontal velocity assumed decreasingly laminar behavior with time. All of these behaviours have also been reported in the literature. The developed model has provided an understanding of the heat transfer process in an industrial furnace.
Keywords: heat source, modelling, enclosure, furnace
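A much-simplified sketch of the finite-difference idea behind this kind of model: an explicit scheme for 2-D transient heat conduction in an enclosure with a hot patch. It is not the ADI stream function-vorticity solver used in the study, and all parameter values are illustrative assumptions.

```python
# Simplified explicit finite-difference solution of 2-D transient heat
# conduction in an enclosure with a fire-like hot patch. Illustrative only;
# the study itself uses an ADI stream function-vorticity formulation.
import numpy as np

nx, ny = 50, 50
dx = dy = 0.02            # grid spacing (m)
alpha = 1e-4              # thermal diffusivity (m^2/s), illustrative value
dt = 0.2 * dx**2 / alpha  # time step satisfying the explicit stability limit

T = np.full((ny, nx), 300.0)    # initial temperature field (K)
T[0:5, 20:30] = 1200.0          # fire-generated heat source near the floor

for step in range(2000):
    Tn = T.copy()
    # central differences for the Laplacian on interior nodes
    T[1:-1, 1:-1] = Tn[1:-1, 1:-1] + alpha * dt * (
        (Tn[1:-1, 2:] - 2 * Tn[1:-1, 1:-1] + Tn[1:-1, :-2]) / dx**2
        + (Tn[2:, 1:-1] - 2 * Tn[1:-1, 1:-1] + Tn[:-2, 1:-1]) / dy**2
    )
    T[0:5, 20:30] = 1200.0      # keep the heat source fixed
    # fixed-temperature walls (a 1-D wall-conduction model could replace this)
    T[:, 0] = T[:, -1] = T[-1, :] = 300.0

print("peak temperature:", T.max(), "K")
```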
Procedia PDF Downloads 255
845 Two-Dimensional CFD Simulation of the Behaviors of Ferromagnetic Nanoparticles in Channel
Authors: Farhad Aalizadeh, Ali Moosavi
Abstract:
This paper presents a two-dimensional Computational Fluid Dynamics (CFD) simulation of steady particle tracking. The purpose of this paper is to study the effect of an applied magnetic field on the velocity distribution of magnetic nanoparticles. It is shown that the permeability of the particles determines the effect of the magnetic field on their deposition, and that the deposition of the particles is inversely proportional to the Reynolds number. Using MHD and its properties, it is possible to control the flow velocity, remove the fouling on the walls and return the system to its original form. We consider a two-dimensional channel geometry and solve for the resulting spatial distribution of particles. According to the obtained results, when magnetic fields are applied perpendicular to the flow only, the local particle velocity is decreased due to the direct effect of the magnetic field, returning the system to its original form. In the method, first, in order to avoid mixing with blood, the ferromagnetic particles are covered with a gel-like chemical composition and are injected into the blood vessels. Then, a magnetic field source at a specified distance from the vessel is used, and the particles are guided to the affected area. The paper also presents a two-dimensional CFD simulation of the steady, laminar flow of an incompressible magnetorheological (MR) fluid between two fixed parallel plates in the presence of a uniform magnetic field. The purpose of this part of the study is to develop a numerical tool that is able to simulate MR fluid flow in valve mode and determine the effect of B0, the applied magnetic field, on flow velocities and pressure distributions.
Keywords: MHD, channel clots, magnetic nanoparticles, simulations
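A heavily simplified sketch of Lagrangian particle tracking of the kind described above: each particle is advected by a parabolic channel flow and drifts toward one wall under a magnetic force. The drag, inertia and field models of the actual CFD study are reduced to two lumped parameters, and all values are hypothetical.

```python
# Illustrative particle-tracking sketch: ferromagnetic particles advected by a
# parabolic channel flow while a lumped magnetic force pulls them toward the
# lower wall. Parameters are hypothetical and only show the tracking procedure.
import numpy as np

n_particles = 200
h = 1e-3                  # channel half-height (m)
u_max = 0.05              # centerline velocity (m/s)
v_mag = -0.02             # magnetic drift velocity toward the lower wall (m/s)
dt, n_steps = 1e-4, 5000

rng = np.random.default_rng(0)
x = np.zeros(n_particles)
y = rng.uniform(-h, h, n_particles)

for _ in range(n_steps):
    u_fluid = u_max * (1.0 - (y / h) ** 2)   # parabolic velocity profile
    x += u_fluid * dt                        # streamwise advection (drag-dominated)
    y += v_mag * dt                          # cross-stream magnetic migration
    y = np.clip(y, -h, h)                    # particles reaching a wall are deposited

deposited = np.mean(np.isclose(y, -h))
print(f"fraction deposited on the magnet-side wall: {deposited:.2f}")
```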
Procedia PDF Downloads 368
844 Numerical Investigation of Material Behavior During Non-Equal Channel Multi Angular Extrusion
Authors: Mohamed S. El-Asfoury, Ahmed Abdel-Moneim, Mohamed N. A. Nasr
Abstract:
The current study uses finite element modeling to investigate and analyze the effect of a modified form of the conventional equal channel multi-angular pressing (ECMAP), using non-equal channels, on the plastic deformation of the workpiece material. The modified process, non-equal channel multi-angular extrusion (NECMAE), is modeled using a two-dimensional plane strain finite element model built using the commercial software ABAQUS. The workpiece material used is pure aluminum. The model was first validated by comparing its results to analytical solutions for single-pass equal channel angular extrusion (ECAP), as well as previously published data. After that, the model was used to examine the effects of different percentages of reduction of the area (for the second stage) on material plastic deformation, corner gap, and the required load. Three levels of reduction in the area were modeled: 10%, 30%, and 50%, and compared to single-pass and double-pass ECAP. Cases with a higher reduction in the area were found to have smaller corner gaps, higher and much more uniform plastic deformation, as well as higher required loads. The current results are mainly attributed to the back pressure effects exerted by the second stage, as well as strain hardening effects experienced during the first stage.
Keywords: non-equal channel angular extrusion, multi-pass, severe plastic deformation, back pressure, Finite Element Modelling (FEM)
Procedia PDF Downloads 422
843 Towards Creative Movie Title Generation Using Deep Neural Models
Authors: Simon Espigolé, Igor Shalyminov, Helen Hastie
Abstract:
Deep machine learning techniques including deep neural networks (DNN) have been used to model language and dialogue for conversational agents to perform tasks, such as giving technical support and also for general chit-chat. They have been shown to be capable of generating long, diverse and coherent sentences in end-to-end dialogue systems and natural language generation. However, these systems tend to imitate the training data and will only generate the concepts and language within the scope of what they have been trained on. This work explores how deep neural networks can be used in a task that would normally require human creativity, whereby the human would read the movie description and/or watch the movie and come up with a compelling, interesting movie title. This task differs from simple summarization in that the movie title may not necessarily be derivable from the content or semantics of the movie description. Here, we train a type of DNN called a sequence-to-sequence model (seq2seq) that takes as input a short textual movie description and some information on e.g. genre of the movie. It then learns to output a movie title. The idea is that the DNN will learn certain techniques and approaches that the human movie titler may deploy that may not be immediately obvious to the human-eye. To give an example of a generated movie title, for the movie synopsis: ‘A hitman concludes his legacy with one more job, only to discover he may be the one getting hit.’; the original, true title is ‘The Driver’ and the one generated by the model is ‘The Masquerade’. A human evaluation was conducted where the DNN output was compared to the true human-generated title, as well as a number of baselines, on three 5-point Likert scales: ‘creativity’, ‘naturalness’ and ‘suitability’. Subjects were also asked which of the two systems they preferred. The scores of the DNN model were comparable to the scores of the human-generated movie title, with means m=3.11, m=3.12, respectively. There is room for improvement in these models as they were rated significantly less ‘natural’ and ‘suitable’ when compared to the human title. In addition, the human-generated title was preferred overall 58% of the time when pitted against the DNN model. These results, however, are encouraging given the comparison with a highly-considered, well-crafted human-generated movie title. Movie titles go through a rigorous process of assessment by experts and focus groups, who have watched the movie. This process is in place due to the large amount of money at stake and the importance of creating an effective title that captures the audiences’ attention. Our work shows progress towards automating this process, which in turn may lead to a better understanding of creativity itself.Keywords: creativity, deep machine learning, natural language generation, movies
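A minimal skeleton of the kind of sequence-to-sequence (encoder-decoder) model described above: a GRU encoder reads the synopsis tokens and a GRU decoder emits title tokens. Vocabulary handling, real data loading and beam-search decoding are omitted; all sizes and the toy tensors are illustrative assumptions, not the paper's configuration.

```python
# Skeleton of a seq2seq model for synopsis -> title generation.
# Sizes and the random token ids are illustrative stand-ins for a real corpus.
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.decoder = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, synopsis_ids, title_ids):
        _, h = self.encoder(self.embed(synopsis_ids))        # encode description
        # decode the title with teacher forcing (shifted target as input)
        dec_out, _ = self.decoder(self.embed(title_ids[:, :-1]), h)
        return self.out(dec_out)                              # logits over vocabulary

vocab_size, batch = 5000, 8
model = Seq2Seq(vocab_size)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

synopsis = torch.randint(0, vocab_size, (batch, 60))   # tokenized descriptions
title = torch.randint(0, vocab_size, (batch, 8))       # tokenized titles

logits = model(synopsis, title)
loss = loss_fn(logits.reshape(-1, vocab_size), title[:, 1:].reshape(-1))
loss.backward()
optimizer.step()
print("toy training loss:", loss.item())
```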
Procedia PDF Downloads 326
842 Index t-SNE: Tracking Dynamics of High-Dimensional Datasets with Coherent Embeddings
Authors: Gaelle Candel, David Naccache
Abstract:
t-SNE is an embedding method that the data science community has widely used. It helps with two main tasks: displaying results by coloring items according to the item class or feature value, and, for forensics, giving a first overview of the dataset distribution. Two interesting characteristics of t-SNE are the structure preservation property and the answer to the crowding problem, where all neighbors in high dimensional space cannot be represented correctly in low dimensional space. t-SNE preserves the local neighborhood, and similar items are nicely spaced by adjusting to the local density. These two characteristics produce a meaningful representation, where the cluster area is proportional to its size in number, and relationships between clusters are materialized by closeness on the embedding. This algorithm is non-parametric. The transformation from a high to a low dimensional space is described but not learned. Two initializations of the algorithm would lead to two different embeddings. In a forensic approach, analysts would like to compare two or more datasets using their embeddings. A naive approach would be to embed all datasets together. However, this process is costly as the complexity of t-SNE is quadratic and would be infeasible for too many datasets. Another approach would be to learn a parametric model over an embedding built with a subset of data. While this approach is highly scalable, points could be mapped at the same exact position, making them indistinguishable. This type of model would be unable to adapt to new outliers or to concept drift. This paper presents a methodology to reuse an embedding to create a new one, where cluster positions are preserved. The optimization process minimizes two costs, one relative to the embedding shape and the second relative to the match with the support embedding. The embedding-with-support process can be repeated more than once, with the newly obtained embedding. The successive embeddings can be used to study the impact of one variable over the dataset distribution or to monitor changes over time. This method has the same complexity as t-SNE per embedding, and memory requirements are only doubled. For a dataset of n elements sorted and split into k subsets, the total embedding complexity would be reduced from O(n²) to O(n²/k), and the memory requirement from n² to 2(n/k)², which enables computation on recent laptops. The method showed promising results on a real-world dataset, allowing to observe the birth, evolution, and death of clusters. The proposed approach facilitates identifying significant trends and changes, which empowers the monitoring of high dimensional datasets' dynamics.
Keywords: concept drift, data visualization, dimension reduction, embedding, monitoring, reusability, t-SNE, unsupervised learning
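An illustrative sketch of the basic idea of keeping successive embeddings coherent. This is not the authors' algorithm: instead of their two-cost optimization, it simply seeds a new scikit-learn t-SNE run with coordinates taken from a previous "support" embedding via nearest neighbours, which already keeps cluster positions roughly stable across snapshots.

```python
# Seed a new t-SNE embedding from a previous one so that clusters keep their
# positions. Synthetic data; not the paper's exact two-cost optimization.
import numpy as np
from sklearn.manifold import TSNE
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
X_old = rng.normal(size=(500, 30))                          # earlier snapshot
X_new = X_old + rng.normal(scale=0.1, size=X_old.shape)     # drifted snapshot

# support embedding built once on the old data
Y_old = TSNE(n_components=2, init="pca", random_state=0).fit_transform(X_old)

# place each new point at the embedded position of its nearest old neighbour
nn = NearestNeighbors(n_neighbors=1).fit(X_old)
_, idx = nn.kneighbors(X_new)
Y_init = Y_old[idx[:, 0]].astype(np.float64)

# the new embedding starts from the old layout instead of a random one
Y_new = TSNE(n_components=2, init=Y_init, random_state=0).fit_transform(X_new)
print(Y_new.shape)
```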
Procedia PDF Downloads 144
841 Experimental Characterization of Anti-Icing System and Accretion of Re-Emitted Droplets on Turbojet Engine Blades
Authors: Guillaume Linassier, Morgan Balland, Hugo Pervier, Marie Pervier, David Hammond
Abstract:
Atmospheric icing for turbojet is caused by ingestion of super-cooled water droplets. To prevent operability risks, manufacturer can implement ice protection systems. Thermal systems are commonly used for this purpose, but their activation can cause the formation of a water liquid film, that can freeze downstream the heated surface or even on other components. In the framework of STORM, a European project dedicated to icing physics in turbojet engines, a cascade rig representative of engine inlet blades was built and tested in an icing wind tunnel. This mock-up integrates two rows of blades, the upstream one being anti-iced using an electro-thermal device the downstream one being unheated. Under icing conditions, the anti-icing system is activated and set at power level to observe a liquid film on the surface and droplet re-emission at the trailing edge. These re-emitted droplets will impinge on the downstream row and contribute to ice accretion. A complete experimental database was generated, including the characterization of ice accretion shapes, and the characterization of electro-thermal anti-icing system (power limit for apparition of the runback water or ice accretion). These data will be used for validation of numerical tools for modeling thermal anti-icing systems in the scope of engine application, as well as validation of re-emission droplets model for stator parts.Keywords: turbomachine, anti-icing, cascade rig, runback water
Procedia PDF Downloads 182
840 Uncertainty Assessment in Building Energy Performance
Authors: Fally Titikpina, Abderafi Charki, Antoine Caucheteux, David Bigaud
Abstract:
The building sector is one of the largest energy consumers, with about 40% of the final energy consumption in the European Union. Ensuring building energy performance is a matter of scientific, technological and sociological concern. To assess a building's energy performance, the consumption predicted or estimated during the design stage is compared with the measured consumption when the building is operational. When evaluating this performance, many buildings show significant differences between the calculated and measured consumption. In order to assess the performance accurately and ensure the thermal efficiency of the building, it is necessary to evaluate the uncertainties involved not only in measurement but also those induced by the propagation of dynamic and static input data in the model being used. The evaluation of measurement uncertainty is based on both the knowledge about the measurement process and the input quantities which influence the result of measurement. Measurement uncertainty can be evaluated within the framework of conventional statistics presented in the Guide to the Expression of Measurement Uncertainty (GUM) as well as by Bayesian Statistical Theory (BST). Another choice is the use of numerical methods like Monte Carlo Simulation (MCS). In this paper, we propose to evaluate the uncertainty associated with the use of a simplified model for the estimation of the energy consumption of a given building. A detailed review and discussion of these three approaches (GUM, MCS and BST) is given. To this end, an office building has been monitored and multiple sensors have been mounted on candidate locations to get the required data. The monitored zone is composed of six offices and has an overall surface area of 102 m². Temperature data, electrical and heating consumption, window opening and occupancy rate are the features for our research work.
Keywords: building energy performance, uncertainty evaluation, GUM, Bayesian approach, Monte Carlo method
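A minimal sketch of the Monte Carlo Simulation route mentioned above: input uncertainties are drawn from assumed distributions and propagated through a simplified, degree-day-style energy model. The model form and all numbers are illustrative assumptions, not the monitored office building's actual model.

```python
# Monte Carlo propagation of input uncertainties through a simplified
# building energy model. Model form and values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# uncertain inputs with assumed means and standard uncertainties
U = rng.normal(0.9, 0.05, n)          # mean heat-loss coefficient (W/m^2.K)
area = rng.normal(102.0, 1.0, n)      # heated envelope area (m^2)
dT = rng.normal(12.0, 0.8, n)         # mean indoor-outdoor temperature gap (K)
hours = rng.normal(4000.0, 100.0, n)  # heating hours over the season

# simplified model: seasonal heating energy in kWh
E = U * area * dT * hours / 1000.0

mean = E.mean()
u_std = E.std(ddof=1)                        # standard uncertainty of the output
lo, hi = np.percentile(E, [2.5, 97.5])       # 95 % coverage interval
print(f"E = {mean:.0f} kWh, u = {u_std:.0f} kWh, 95% interval = [{lo:.0f}, {hi:.0f}]")
```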
Procedia PDF Downloads 459
839 Integrated Watershed Management Practice in Chelchai Hyrcanian Forests in the North of Iran
Authors: Mashad Maramaei, Behrooz Chogan, Reza Ahmadi
Abstract:
Human health and the health of his watershed are inseparable. This is because a watershed is an interconnected system of "land", "water", "air" and "life". Nowadays, most of the world's watersheds show symptoms of unhealthiness and require a prompt solution. It is believed that suitable solution is a participatory and Integrated Watershed Management (IWM). In recent decades the Hyrcanian forests in the north of Iran, which belongs to the end of the third geological era, are suffering from many environmental challenges such as land degradation, increasing trends of flood, drought and accelerated soil erosion. These challenges in the main forested area of the country impose many tangible and intangible damages and human losses. This is despite the fact that in the past decades, forestry programs, watershed management and other activities in the region have been implemented in a parallel and uncoordinated manner. Therefore, recently; the Natural Resources and Watershed Management Organization has resorted to the concept of IWM planning the Hyrcanian watersheds. The Chelchai watershed as mostly degraded watershed in the eastern part of the Hyrcanian forests has been selected as a pilot watershed for implementation of the IWM. It has a drainage area of 25680 hectares and receives an average annual precipitation of 650 mm. In this mountainous region, the average temperature is 17.3 degrees Celsius. About 34% of the watershed is under cultivation, 64% under forest cover, 2% under built up areas and etc. In this research, the effectiveness or ineffectiveness of the IWM model implementation of the Natural Resources and Watershed Management Organization has been evaluated based on questionnaire method and field studies. The results indicated that IWM activities in the study area should be reconsidered and revived. Based on this research and the lessons learned during five years' experience in the Chelchai watershed; authors believe that seven important tasks are necessary for socially acceptable and successful implementation of IWM projects. These are: 1) Establishment of Local Coordination Committee (LCC) at the watershed level 2) working for development of a IWM law among government organizations to organize watershed management and eliminate parallel and contradictory activities 3) More investment on education of local communities, especially women and children 4) Development of trust builder and pattern projects that showing best agricultural and livestock management activities at each of 26 villages 5) Assigning forest protection to local communities. 6) Capacity building of government stakeholders. 7) Helping in the marketing of watershed products.Keywords: integrated watershed management, Chelchai, Hyrcanian forests, Iran
Procedia PDF Downloads 22
838 The Effect of a Saturated Kink on the Dynamics of Tungsten Impurities in the Plasma Core
Authors: H. E. Ferrari, R. Farengo, C. F. Clauser
Abstract:
Tungsten (W) will be used in ITER as one of the plasma facing components (PFCs). The W could migrate to the plasma center. This could have a potentially deleterious effect on plasma confinement. Electron cyclotron resonance heating (ECRH) can be used to prevent W accumulation. We simulated a series of H mode discharges in ASDEX U with PFC containing W, where central ECRH was used to prevent W accumulation in the plasma center. The experiments showed that the W density profiles were flat after a sawtooth crash, and become hollow in between sawtooth crashes when ECRH has been applied. It was also observed that a saturated kink mode was active in these conditions. We studied the effect of saturated kink like instabilities on the redistribution of W impurities. The kink was modeled as the sum of a simple analytical equilibrium (large aspect ratio, circular cross section) plus the perturbation produced by the kink. A numerical code that follows the exact trajectories of the impurity ions in the total fields and includes collisions was employed. The code is written in Cuda C and runs in Graphical Processing Units (GPUs), allowing simulations with a large number of particles with modest resources. Our simulations show that when the W ions have a thermal velocity distribution, the kink has no effect on the W density. When we consider the plasma rotation, the kink can affect the W density. When the average passing frequency of the W particles is similar to the frequency of the kink mode, the expulsion of W ions from the plasma core is maximum, and the W density shows a hollow structure. This could have implications for the mitigation of W accumulation.Keywords: impurity transport, kink instability, tungsten accumulation, tungsten dynamics
Procedia PDF Downloads 171
837 Evaluation of Residual Stresses in Human Face as a Function of Growth
Authors: M. A. Askari, M. A. Nazari, P. Perrier, Y. Payan
Abstract:
Growth and remodeling of biological structures have gained much attention over the past decades. Determining the response of living tissues to mechanical loads is necessary for a wide range of developing fields such as prosthetics design or computer-assisted surgical interventions. It is a well-known fact that biological structures are never stress-free, even when externally unloaded. The exact origin of these residual stresses is not clear, but theoretically, growth is one of the main sources. Extracting a body organ's shape from medical imaging does not produce any information regarding the existing residual stresses in that organ. The simplest cause of such stresses is gravity, since an organ grows under its influence from birth. Ignoring such residual stresses might cause erroneous results in numerical simulations. Accounting for residual stresses due to tissue growth can improve the accuracy of mechanical analysis results. This paper presents an original computational framework based on gradual growth to determine the residual stresses due to growth. To illustrate the method, we apply it to a finite element model of a healthy human face reconstructed from medical images. The distribution of residual stress in facial tissues is computed, which can overcome the effect of gravity and maintain tissue firmness. Our assumption is that tissue wrinkles caused by aging could be a consequence of decreasing residual stress that no longer counteracts gravity. Taking these stresses into account therefore seems extremely important in maxillofacial surgery. It would indeed help surgeons to estimate tissue changes after surgery.
Keywords: finite element method, growth, residual stress, soft tissue
Procedia PDF Downloads 270
836 Private Coded Computation of Matrix Multiplication
Authors: Malihe Aliasgari, Yousef Nejatbakhsh
Abstract:
The era of Big Data and the immensity of real-life datasets compel computation tasks to be performed in a distributed fashion, where the data is dispersed among many servers that operate in parallel. However, massive parallelization leads to computational bottlenecks due to faulty servers and stragglers. Stragglers refer to a few slow or delay-prone processors that can bottleneck the entire computation because one has to wait for all the parallel nodes to finish. The problem of straggling processors has been well studied in the context of distributed computing. Recently, it has been pointed out that, for the important case of linear functions, it is possible to improve over repetition strategies in terms of the tradeoff between performance and latency by carrying out linear precoding of the data prior to processing. The key idea is that, by employing suitable linear codes operating over fractions of the original data, a function may be completed as soon as a sufficient number of processors, depending on the minimum distance of the code, have completed their operations. Matrix-matrix multiplication on practically large datasets faces computational and memory-related difficulties, which makes it necessary to carry out such operations using distributed computing platforms. In this work, we study the problem of distributed matrix-matrix multiplication W = XY under storage constraints, i.e., when each server is allowed to store a fixed fraction of each of the matrices X and Y; this operation is a fundamental building block of many science and engineering fields such as machine learning, image and signal processing, wireless communication, and optimization. Non-secure and secure matrix multiplication are studied. We consider the setup in which the identity of the matrix of interest should be kept private from the workers, and we then obtain the recovery threshold of the colluding model, that is, the number of workers that need to complete their task before the master server can recover the product W. We also address the problem of secure and private distributed matrix multiplication W = XY in which the matrix X is confidential, while matrix Y is selected in a private manner from a library of public matrices. We present the best currently known trade-off between communication load and recovery threshold. In other words, we design an achievable PSGPD scheme for any arbitrary privacy level by trivially concatenating a robust PIR scheme for arbitrary colluding workers and private databases with the proposed SGPD code, which provides a smaller computational complexity at the workers.
Keywords: coded distributed computation, private information retrieval, secret sharing, stragglers
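An illustrative sketch of straggler-tolerant coded matrix multiplication, not the paper's PSGPD scheme: the row-blocks of X are encoded with a polynomial code, every worker multiplies one encoded block by Y, and the master recovers X·Y by polynomial interpolation as soon as any m of the p workers respond.

```python
# Polynomial-coded distributed matrix multiplication: recover X @ Y from any
# m of p workers (tolerating p - m stragglers). Toy sizes, single process.
import numpy as np

m, p = 4, 6                       # data blocks and workers
X = np.random.rand(8, 5)
Y = np.random.rand(5, 3)
blocks = np.split(X, m, axis=0)   # X_0..X_{m-1}

# encoding: worker i receives sum_j X_j * alpha_i**j (degree m-1 matrix polynomial)
alphas = np.arange(1, p + 1, dtype=float)
encoded = [sum(Xj * a**j for j, Xj in enumerate(blocks)) for a in alphas]

# each worker computes its encoded product; pretend the last two never return
worker_results = {i: encoded[i] @ Y for i in range(p)}
finished = dict(list(worker_results.items())[:m])          # any m suffice

# decoding: invert the Vandermonde system to recover the blocks X_j @ Y
V = np.vander(alphas[list(finished)], m, increasing=True)
rhs = np.stack(list(finished.values())).reshape(m, -1)
coeffs = np.linalg.solve(V, rhs)
W = np.vstack([c.reshape(-1, Y.shape[1]) for c in coeffs])

assert np.allclose(W, X @ Y)
print("recovered X @ Y from", m, "of", p, "workers")
```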
Procedia PDF Downloads 122
835 The Importance of Efficient and Sustainable Water Resources Management and the Role of Artificial Intelligence in Preventing Forced Migration
Authors: Fateme Aysin Anka, Farzad Kiani
Abstract:
Forced migration is a situation in which people are forced to leave their homes against their will due to political conflicts, wars and conflicts, natural disasters, climate change, economic crises, or other emergencies. This type of migration takes place under conditions where people cannot lead a sustainable life due to reasons such as security, shelter and meeting their basic needs. This type of migration may occur in connection with different factors that affect people's living conditions. In addition to these general and widespread reasons, water security and resources will be one that is starting now and will be encountered more and more in the future. Forced migration may occur due to insufficient or depleted water resources in the areas where people live. In this case, people's living conditions become unsustainable, and they may have to go elsewhere, as they cannot obtain their basic needs, such as drinking water, water used for agriculture and industry. To cope with these situations, it is important to minimize the causes, as international organizations and societies must provide assistance (for example, humanitarian aid, shelter, medical support and education) and protection to address (or mitigate) this problem. From the international perspective, plans such as the Green New Deal (GND) and the European Green Deal (EGD) draw attention to the need for people to live equally in a cleaner and greener world. Especially recently, with the advancement of technology, science and methods have become more efficient. In this regard, in this article, a multidisciplinary case model is presented by reinforcing the water problem with an engineering approach within the framework of the social dimension. It is worth emphasizing that this problem is largely linked to climate change and the lack of a sustainable water management perspective. As a matter of fact, the United Nations Development Agency (UNDA) draws attention to this problem in its universally accepted sustainable development goals. Therefore, an artificial intelligence-based approach has been applied to solve this problem by focusing on the water management problem. The most general but also important aspect in the management of water resources is its correct consumption. In this context, the artificial intelligence-based system undertakes tasks such as water demand forecasting and distribution management, emergency and crisis management, water pollution detection and prevention, and maintenance and repair control and forecasting.Keywords: water resource management, forced migration, multidisciplinary studies, artificial intelligence
Procedia PDF Downloads 86
834 Critical Core Skills Profiling in the Singaporean Workforce
Authors: Bi Xiao Fang, Tan Bao Zhen
Abstract:
Soft skills, core competencies, and generic competencies are exchangeable terminologies often used to represent a similar concept. In the Singapore context, such skills are currently being referred to as Critical Core Skills (CCS). In 2019, SkillsFuture Singapore (SSG) reviewed the Generic Skills and Competencies (GSC) framework that was first introduced in 2016, culminating in the development of the Critical Core Skills (CCS) framework comprising 16 soft skills classified into three clusters. The CCS framework is part of the Skills Framework, and whose stated purpose is to create a common skills language for individuals, employers and training providers. It is also developed with the objectives of building deep skills for a lean workforce, enhance business competitiveness and support employment and employability. This further helps to facilitate skills recognition and support the design of training programs for skills and career development. According to SSG, every job role requires a set of technical skills and a set of Critical Core Skills to perform well at work, whereby technical skills refer to skills required to perform key tasks of the job. There has been an increasing emphasis on soft skills for the future of work. A recent study involving approximately 80 organizations across 28 sectors in Singapore revealed that more enterprises are beginning to recognize that soft skills support their employees’ performance and business competitiveness. Though CCS is of high importance for the development of the workforce’s employability, there is little attention paid to the CCS use and profiling across occupations. A better understanding of how CCS is distributed across the economy will thus significantly enhance SSG’s career guidance services as well as training providers’ services to graduates and workers and guide organizations in their hiring for soft skills. This CCS profiling study sought to understand how CCS is demanded in different occupations. To achieve its research objectives, this study adopted a quantitative method to measure CCS use across different occupations in the Singaporean workforce. Based on the CCS framework developed by SSG, the research team adopted a formative approach to developing the CCS profiling tool to measure the importance of and self-efficacy in the use of CCS among the Singaporean workforce. Drawing on the survey results from 2500 participants, this study managed to profile them into seven occupation groups based on the different patterns of importance and confidence levels of the use of CCS. Each occupation group is labeled according to the most salient and demanded CCS. In the meantime, the CCS in each occupation group, which may need some further strengthening, were also identified. The profiling of CCS use has significant implications for different stakeholders, e.g., employers could leverage the profiling results to hire the staff with the soft skills demanded by the job.Keywords: employability, skills profiling, skills measurement, soft skills
Procedia PDF Downloads 95
833 Relay-Augmented Bottleneck Throughput Maximization for Correlated Data Routing: A Game Theoretic Perspective
Authors: Isra Elfatih Salih Edrees, Mehmet Serdar Ufuk Türeli
Abstract:
In this paper, an energy-aware method is presented, integrating energy-efficient relay-augmented techniques for correlated data routing with the goal of optimizing bottleneck throughput in wireless sensor networks. The system tackles the dual challenge of throughput optimization while considering sensor network energy consumption. A unique routing metric has been developed to enable throughput maximization while minimizing energy consumption by utilizing data correlation patterns. The paper introduces a game theoretic framework to address the NP-complete optimization problem inherent in throughput-maximizing correlation-aware routing with energy limitations. By creating an algorithm that blends energy-aware route selection strategies with the best reaction dynamics, this framework provides a local solution. The suggested technique considerably raises the bottleneck throughput for each source in the network while reducing energy consumption by choosing the best routes that strike a compromise between throughput enhancement and energy efficiency. Extensive numerical analyses verify the efficiency of the method. The outcomes demonstrate the significant decrease in energy consumption attained by the energy-efficient relay-augmented bottleneck throughput maximization technique, in addition to confirming the anticipated throughput benefits.Keywords: correlated data aggregation, energy efficiency, game theory, relay-augmented routing, throughput maximization, wireless sensor networks
Procedia PDF Downloads 82
832 TARF: Web Toolkit for Annotating RNA-Related Genomic Features
Abstract:
Genomic features, the genome-based coordinates, are commonly used for the representation of biological features such as genes, RNA transcripts and transcription factor binding sites. For the analysis of RNA-related genomic features, such as RNA modification sites, a common task is to correlate these features with transcript components (5'UTR, CDS, 3'UTR) to explore their distribution characteristics in terms of transcriptomic coordinates, e.g., to examine whether a specific type of biological feature is enriched near transcription start sites. Existing approaches for performing these tasks involve the manipulation of a gene database, conversion from genome-based coordinates to transcript-based coordinates, and visualization methods that are capable of showing RNA transcript components and the distribution of the features. These steps are complicated and time consuming, especially for researchers who are not familiar with the relevant tools. To overcome this obstacle, we developed a dedicated web app, TARF, a web toolkit for annotating RNA-related genomic features. The TARF web tool intends to provide an easy web-based way to annotate and visualize RNA-related genomic features. Once a user has uploaded the features in BED format and specified a built-in transcript database or uploaded a customized gene database in GTF format, the tool can fulfill its three main functions. First, it adds annotation on gene and RNA transcript components. For every feature provided by the user, the overlaps with RNA transcript components are identified, and the information is combined in one table which is available for copy and download. Summary statistics about ambiguous assignments are also provided. Second, the tool provides a convenient visualization of the features at the single gene/transcript level. For a selected gene, the tool shows the features together with the gene model in a genome-based view, and also maps the features to transcript-based coordinates and shows their distribution along a single spliced RNA transcript. Third, a global transcriptomic view of the genomic features is generated utilizing the Guitar R/Bioconductor package. The distribution of features on RNA transcripts is normalized with respect to RNA transcript landmarks, and the enrichment of the features on different RNA transcript components is demonstrated. We tested the newly developed TARF toolkit with three different types of genomic features related to chromatin H3K4me3, RNA N6-methyladenosine (m6A) and RNA 5-methylcytosine (m5C), which were obtained from ChIP-Seq, MeRIP-Seq and RNA BS-Seq data, respectively. TARF successfully revealed their respective distribution characteristics, i.e., H3K4me3, m6A and m5C are enriched near transcription start sites, stop codons and 5'UTRs, respectively. Overall, TARF is a useful web toolkit for annotation and visualization of RNA-related genomic features, and should help simplify the analysis of various RNA-related genomic features, especially those related to RNA modifications.
Keywords: RNA-related genomic features, annotation, visualization, web server
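A toy sketch of the core annotation step described above: assigning genome-coordinate features (BED-like intervals) to the 5'UTR, CDS or 3'UTR component of a transcript by interval overlap. Real transcripts are spliced and would come from a GTF gene database; the coordinates below are invented for illustration only.

```python
# Assign BED-like features to transcript components by interval overlap.
# Coordinates are invented; a real pipeline would use a spliced GTF model.
features = [("chr1", 120, 121), ("chr1", 480, 481), ("chr1", 950, 951)]

# one hypothetical unspliced transcript with its component boundaries
transcript = {
    "chrom": "chr1",
    "5UTR": (100, 200),
    "CDS": (200, 800),
    "3UTR": (800, 1000),
}

def annotate(feature, tx):
    chrom, start, end = feature
    if chrom != tx["chrom"]:
        return None
    for component in ("5UTR", "CDS", "3UTR"):
        c_start, c_end = tx[component]
        if start < c_end and end > c_start:   # half-open interval overlap
            return component
    return None

for f in features:
    print(f, "->", annotate(f, transcript))
# expected: 5UTR, CDS, 3UTR for the three features above
```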
Procedia PDF Downloads 208
831 Numerical Investigation of a New Two-Fluid Model for Semi-Dilute Polymer Solutions
Authors: Soroush Hooshyar, Mohamadali Masoudian, Natalie Germann
Abstract:
Many soft materials such as polymer solutions can develop localized bands with different shear rates, which are known as shear bands. Using the generalized bracket approach of nonequilibrium thermodynamics, we recently developed a new two-fluid model to study shear banding for semi-dilute polymer solutions. The two-fluid approach is an appropriate means for describing diffusion processes such as Fickian diffusion and stress-induced migration. In this approach, it is assumed that the local gradients in concentration and, if accounted for, also stress generate a nontrivial velocity difference between the components. Since the differential velocity is treated as a state variable in our model, the implementation of the boundary conditions arising from the derivative diffusive terms is straightforward. Our model is a good candidate for benchmark simulations because of its simplicity. We analyzed its behavior in cylindrical Couette flow, a rectilinear channel flow, and a 4:1 planar contraction flow. The latter problem was solved using the OpenFOAM finite volume package and the impact of shear banding on the lip and salient vortices was investigated. For the other smooth geometries, we employed a standard Chebyshev pseudospectral collocation method. The results showed that the steady-state solution is unique with respect to initial conditions, deformation history, and the value of the diffusivity constant. However, smaller the value of the diffusivity constant is, the more time it takes to reach the steady state.Keywords: nonequilibrium thermodynamics, planar contraction, polymer solutions, shear banding, two-fluid approach
Procedia PDF Downloads 333
830 Accelerating Quantum Chemistry Calculations: Machine Learning for Efficient Evaluation of Electron-Repulsion Integrals
Authors: Nishant Rodrigues, Nicole Spanedda, Chilukuri K. Mohan, Arindam Chakraborty
Abstract:
A crucial objective in quantum chemistry is the computation of the energy levels of chemical systems. This task requires electron-repulsion integrals as inputs, and the steep computational cost of evaluating these integrals poses a major numerical challenge in the efficient implementation of quantum chemical software. This work presents a moment-based machine-learning approach for the efficient evaluation of electron-repulsion integrals. These integrals were approximated using linear combinations of a small number of moments. Machine learning algorithms were applied to estimate the coefficients in the linear combination. A random forest approach was used to identify promising features using a recursive feature elimination approach, which performed best for learning the sign of each coefficient but not the magnitude. A neural network with two hidden layers was then used to learn the coefficient magnitudes, along with an iterative feature masking approach to perform input vector compression, identifying a small subset of orbitals whose coefficients are sufficient for the quantum state energy computation. Finally, a small ensemble of neural networks (with a median rule for decision fusion) was shown to improve results when compared to a single network.
Keywords: quantum energy calculations, atomic orbitals, electron-repulsion integrals, ensemble machine learning, random forests, neural networks, feature extraction
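A minimal sketch of the two-stage idea described above: recursive feature elimination driven by a random forest to pick informative inputs, followed by a small two-hidden-layer network that regresses the coefficient magnitudes. The data here are synthetic stand-ins for the moment-based features, not quantum-chemistry data.

```python
# Stage 1: random-forest-driven recursive feature elimination.
# Stage 2: two-hidden-layer neural network on the retained features.
# Synthetic data stand in for the moment features of each integral.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import RFE
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 40))                                   # moment features
y = X[:, :5] @ rng.normal(size=5) + 0.1 * rng.normal(size=2000)   # coefficient magnitude

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

selector = RFE(RandomForestRegressor(n_estimators=100, random_state=0),
               n_features_to_select=8).fit(X_train, y_train)

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(selector.transform(X_train), y_train)
print("held-out R^2:", net.score(selector.transform(X_test), y_test))
```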
Procedia PDF Downloads 114
829 Imputing the Minimum Social Value of Public Healthcare: A General Equilibrium Model of Israel
Authors: Erez Yerushalmi, Sani Ziv
Abstract:
The rising demand for healthcare services, without a corresponding rise in public supply, led to a debate on whether to increase private healthcare provision - especially in hospital services and second-tier healthcare. Proponents for increasing private healthcare highlight gains in efficiency, while opponents its risk to social welfare. None, however, provide a measure of the social value and its impact on the economy in terms of a monetary value. In this paper, we impute a minimum social value of public healthcare that corresponds to indifference between gains in efficiency, with losses to social welfare. Our approach resembles contingent valuation methods that introduce a hypothetical market for non-commodities, but is different from them because we use numerical simulation techniques to exploit certain market failure conditions. In this paper, we develop a general equilibrium model that distinguishes between public-private healthcare services and public-private financing. Furthermore, the social value is modelled as a by product of healthcare services. The model is then calibrated to our unique health focused Social Accounting Matrix of Israel, and simulates the introduction of a hypothetical health-labour market - given that it is heavily regulated in the baseline (i.e., the true situation in Israel today). For baseline parameters, we estimate the minimum social value at around 18% public healthcare financing. The intuition is that the gain in economic welfare from improved efficiency, is offset by the loss in social welfare due to a reduction in available social value. We furthermore simulate a deregulated healthcare scenario that internalizes the imputed value of social value and searches for the optimal weight of public and private healthcare provision.Keywords: contingent valuation method (CVM), general equilibrium model, hypothetical market, private-public healthcare, social value of public healthcare
Procedia PDF Downloads 146
828 Tumor Size and Lymph Node Metastasis Detection in Colon Cancer Patients Using MR Images
Authors: Mohammadreza Hedyehzadeh, Mahdi Yousefi
Abstract:
Colon cancer is one of the most common cancers, and its prevalence is predicted to increase due to poor eating habits. Nowadays, due to the busyness of people, the use of fast foods is increasing, and therefore, diagnosis of this disease and its treatment are of particular importance. To determine the best treatment approach for each specific colon cancer patient, the oncologist should know the stage of the tumor. The most common method to determine the tumor stage is the TNM staging system. In this system, M indicates the presence of metastasis, N indicates the extent of spread to the lymph nodes, and T indicates the size of the tumor. It is clear that in order to determine all three of these parameters, an imaging method must be used, and the gold standard imaging protocols for this purpose are CT and PET/CT. In CT imaging, due to the use of X-rays, the risk of cancer and the absorbed dose of the patient are high, while the PET/CT method suffers from a lack of access to the device due to its high cost. Therefore, in this study, we aimed to estimate the tumor size and the extent of its spread to the lymph nodes using MR images. More than 1300 MR images were collected from the TCIA portal, and in the first step (pre-processing), histogram equalization to improve image quality and resizing to a common image size were performed. Two expert radiologists, who have worked on colon cancer cases for more than 21 years, segmented the images and extracted the tumor regions from the images. The next step is feature extraction from the segmented images and then classification of the data into three classes: T0N0, T3N1 and T3N2. In this article, the VGG-16 convolutional neural network has been used to perform both of the above-mentioned tasks, i.e., feature extraction and classification. This network has 13 convolution layers for feature extraction and three fully connected layers with the softmax activation function for classification. In order to validate the proposed method, the 10-fold cross-validation method was used, in which the data were randomly divided into three parts: training (70% of the data), validation (10% of the data) and the rest for testing. This is repeated 10 times; each time, the accuracy, sensitivity and specificity of the model are calculated, and the average over the ten repetitions is reported as the result. The accuracy, specificity and sensitivity of the proposed method on the test dataset were 89.09%, 95.8% and 96.4%, respectively. Compared to previous studies, the use of a safe imaging technique (MRI) and the avoidance of predefined hand-crafted imaging features to determine the stage of colon cancer patients are some of the advantages of this study.
Keywords: colon cancer, VGG-16, magnetic resonance imaging, tumor size, lymph node metastasis
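A sketch of the classification stage described above: a VGG-16 convolutional base for feature extraction topped by fully connected layers ending in a softmax over the three classes. The image directory, image size and layer widths are placeholders, not the study's exact training setup.

```python
# VGG-16 base for feature extraction plus fully connected layers with softmax
# over three classes (T0N0 / T3N1 / T3N2). Paths and sizes are placeholders.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                      # keep the 13 convolutional layers fixed

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dense(128, activation="relu"),
    layers.Dense(3, activation="softmax"),  # T0N0 / T3N1 / T3N2
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# hypothetical directory of preprocessed (equalized, resized) MR image slices
train_ds = tf.keras.utils.image_dataset_from_directory(
    "mri_slices/train", image_size=(224, 224), batch_size=16)
model.fit(train_ds, epochs=10)
```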
Procedia PDF Downloads 59
827 Analysis of a Differential System to Get Insights on the Potential Establishment of Microsporidia MB in the Mosquito Population for Malaria Control
Authors: Charlene N. T. Mfangnia, Henri E. Z. Tonnang, Berge Tsanou, Jeremy Herren
Abstract:
Microsporidia MB is a recently discovered symbiont capable of blocking the transmission of Plasmodium from mosquitoes to humans. The symbiont can spread both horizontally and vertically among the mosquito population. This dual transmission gives the symbiont the ability to invade the mosquito population. The replacement of the mosquito population by the population of symbiont-infected mosquitoes then appears as a promising strategy for malaria control. In this context, the present study uses differential equations to model the transmission dynamics of Microsporidia MB in the population of female Anopheles mosquitoes. Long-term propagation scenarios of the symbiont, such as extinction, persistence or total infection, are obtained through the determination of the target and basic reproduction numbers, the equilibria, and the study of their stability. The stability is illustrated numerically, and the contributions of vertical and horizontal transmission to the spread of the symbiont are assessed. Data obtained from laboratory experiments are then used to explain the low prevalence observed in nature. The study also shows that the male death rate, the mating rate and the attractiveness of MB-positive mosquitoes are the factors that most influence the transmission of the symbiont. In addition, the introduction of temperature and the study of bifurcations show the significant influence of environmental conditions on the propagation of Microsporidia MB. This finding proves the necessity of taking environmental variables into account for the potential establishment of the symbiont in a new area.
Keywords: differential equations, stability analysis, malaria, microsporidia MB, horizontal transmission, vertical transmission, numerical illustration
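A toy compartmental sketch of the kind of differential system described above, not the paper's exact model: uninfected (S) and MB-infected (I) female mosquitoes, with horizontal transmission at mating and imperfect vertical transmission, integrated with SciPy. All parameter values are invented for illustration.

```python
# Toy S/I model of Microsporidia MB spread with vertical and horizontal
# transmission, integrated numerically. Not the paper's system; parameters invented.
import numpy as np
from scipy.integrate import solve_ivp

b, mu, K = 0.5, 0.1, 1000.0   # birth rate, death rate, carrying capacity
beta = 0.3                    # horizontal (mating) transmission rate
v = 0.8                       # vertical transmission probability

def rhs(t, y):
    S, I = y
    N = S + I
    logistic = 1.0 - N / K
    dS = b * (S + (1.0 - v) * I) * logistic - beta * S * I / N - mu * S
    dI = b * v * I * logistic + beta * S * I / N - mu * I
    return [dS, dI]

sol = solve_ivp(rhs, (0.0, 300.0), [990.0, 10.0], dense_output=True)
S_end, I_end = sol.y[:, -1]
print(f"MB prevalence after 300 days: {I_end / (S_end + I_end):.2%}")
```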
Procedia PDF Downloads 113826 Psychological Consultation of Married Couples at Various Stages of Formation of the Young Family
Authors: Gulden Aykinbaeva, Assem Umirzakova, Assel Makhadiyeva
Abstract:
Studying young married couples in the context of changes to the social institution of family and marriage is highly relevant for family counselling, given the role of the family in the development of modern society. Numerous studies indicate that one of the most difficult periods in the formation and stabilization of a marriage is the young-family period. This period is characterized by processes of integration, adaptation, and emotional compatibility between spouses. During this period, the young family experiences its first normative crisis, which leaves an imprint on the further development of the family scenario. The emergence of new, previously non-existent value systems has a strong influence on the formation of the young family and on each spouse individually. The family's tasks can be resolved through the development of a unified system of family relations in which socially mature persons, capable of treating the family as a joint creative endeavour, act as its subjects. In line with the research objective, the following techniques were used: V. V. Stolin's marriage satisfaction questionnaire, A. N. Volkova's technique for detecting the coherence of family values and role expectations in a married couple, and content analysis. Developing the internal foundation of a family through the mutual clarification of values is important when working with married couples. A 'mature view' of the partner in the marriage union ensures consistency between the partner's expected and actual behaviour, which is important for achieving adaptation within the family. To examine the relationships among the data obtained with Volkova's and Stolin's techniques and the content analysis, correlation analysis using Spearman's criterion was applied. Analysis of the results allowed us to identify a number of consistent patterns: 1. The way spouses' satisfaction with marriage changes indicates that matrimonial relations undergo qualitative changes at different stages of the formation of a young family. 2. In the course of their development, formation, and functioning in a young marriage, matrimonial relations undergo considerable changes at the psychological and socio-psychological levels and only insignificant changes at the psychophysiological and sociocultural levels. The material obtained allows us to outline directions for further detailed research on the development of matrimonial relations, not only in young marriage but also at later stages of matrimony. We believe the results of this research can be applied practically in creating algorithms for the selection of marriage partners, in diagnosing the character and content of matrimonial disharmonies, and in forecasting the stability of marriage and the family.Keywords: married couples, formation of the young family, psychological consultation, matrimony
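A minimal illustration of the Spearman rank-correlation analysis named in the abstract, relating marriage-satisfaction scores (Stolin questionnaire) to family-value coherence scores (Volkova technique). The arrays below are placeholders, not the study's data.

```python
# Placeholder data only: illustrates the Spearman correlation step, not the results.
from scipy.stats import spearmanr

satisfaction = [34, 41, 28, 45, 39, 30, 44, 37]                      # placeholder scores
value_coherence = [0.62, 0.81, 0.50, 0.77, 0.70, 0.55, 0.85, 0.66]   # placeholder scores

rho, p_value = spearmanr(satisfaction, value_coherence)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```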
Procedia PDF Downloads 395825 Molecular Dynamics Simulation for Vibration Analysis at Nanocomposite Plates
Authors: Babak Safaei, A. M. Fattahi
Abstract:
Polymer/carbon nanotube nanocomposites have a wide range of promising applications due to their enhanced properties. In this work, a free vibration analysis of single-walled carbon nanotube-reinforced composite plates is conducted in which the carbon nanotubes are embedded in amorphous polyethylene. The rule of mixture, combined with several plate models, namely the classical plate theory (CLPT), the first-order shear deformation theory (FSDT), and the higher-order shear deformation theory (HSDT), was employed to obtain the fundamental frequencies of the nanocomposite plates. The generalized differential quadrature (GDQ) method was used to discretize the governing differential equations under simply supported and clamped boundary conditions. The material properties of the nanocomposite plates were evaluated using molecular dynamics (MD) simulation for both short-(10,10) SWCNT and long-(10,10) SWCNT composites. The results obtained directly from the MD simulations were then fitted with those calculated by the rule of mixture to extract appropriate values of the carbon nanotube efficiency parameters, which account for the scale-dependent material properties. Selected numerical results are presented to address the influences of nanotube volume fraction and edge supports on the fundamental frequency of carbon nanotube-reinforced composite plates for both long- and short-nanotube composites.Keywords: nanocomposites, molecular dynamics simulation, free vibration, generalized differential quadrature (GDQ) method
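A sketch of the extended rule of mixture commonly used for CNT-reinforced composites, assuming the usual form in which efficiency parameters (eta1, eta2) are tuned so that the mixture estimates match moduli obtained from MD simulation. All numerical values below are placeholders, not the study's MD results.

```python
# Extended rule of mixture (sketch): effective in-plane moduli of the nanocomposite.
# eta1, eta2 are the CNT efficiency parameters extracted by fitting to MD data.
def extended_rule_of_mixture(V_cnt, E11_cnt, E22_cnt, E_m, eta1, eta2):
    """Return effective longitudinal (E11) and transverse (E22) moduli."""
    V_m = 1.0 - V_cnt
    E11 = eta1 * V_cnt * E11_cnt + V_m * E_m
    E22 = eta2 / (V_cnt / E22_cnt + V_m / E_m)
    return E11, E22

# Example usage with placeholder values (GPa):
E11, E22 = extended_rule_of_mixture(V_cnt=0.12, E11_cnt=600.0, E22_cnt=10.0,
                                    E_m=0.85, eta1=0.14, eta2=0.93)
print(f"E11 = {E11:.2f} GPa, E22 = {E22:.2f} GPa")
```

Once the efficiency parameters are calibrated against the MD results, these effective moduli feed the CLPT/FSDT/HSDT plate models whose governing equations are then discretized with the GDQ method.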
Procedia PDF Downloads 329824 Theoretical-Experimental Investigations on Free Vibration of Glass Fiber/Polyester Composite Conical Shells Containing Fluid
Authors: Tran Ich Thinh, Nguyen Manh Cuong
Abstract:
Free vibrations of partially fluid-filled composite truncated conical shells are investigated using the Dynamic Stiffness Method (DSM), or Continuous Element Method (CEM), based on the First-Order Shear Deformation Theory (FSDT) and non-viscous incompressible fluid equations. Numerical examples are given for analyzing the natural frequencies and harmonic responses of clamped-free conical shells partially and completely filled with fluid. To compare with the theoretical results, detailed experimental results were obtained on the free vibration of clamped-free conical shells partially filled with water, using a multi-vibration measuring machine (DEWEBOOK-DASYLab 5.61.10). Three glass fiber/polyester composite truncated cones were used, each with a larger-end radius of 285 mm and a thickness of 2 mm; the cone lengths along the generators were 285 mm, 427.5 mm, and 570 mm, with semi-vertex angles of 27, 14, and 9 degrees, respectively, and the filling ratio of the contained water was 0, 0.25, 0.50, 0.75, and 1.0. The results calculated by the proposed computational model for the studied composite conical shells are in good agreement with the experiments. The obtained results indicate that fluid filling can significantly reduce the natural frequencies of composite conical shells. Parametric studies covering the circumferential wave number, fluid depth, and cone angles are carried out.Keywords: dynamic stiffness method, experimental study, free vibration, fluid-shell interaction, glass fiber/polyester composite conical shell
Procedia PDF Downloads 498823 Functional Neurocognitive Imaging (fNCI): A Diagnostic Tool for Assessing Concussion Neuromarker Abnormalities and Treating Post-Concussion Syndrome in Mild Traumatic Brain Injury Patients
Authors: Parker Murray, Marci Johnson, Tyson S. Burnham, Alina K. Fong, Mark D. Allen, Bruce McIff
Abstract:
Purpose: Pathological dysregulation of Neurovascular Coupling (NVC) caused by mild traumatic brain injury (mTBI) is the predominant source of chronic post-concussion syndrome (PCS) symptomology. fNCI has the ability to localize dysregulation in NVC by measuring blood-oxygen-level-dependent (BOLD) signaling during the performance of fMRI-adapted neuropsychological evaluations. With fNCI, 57 brain areas consistently affected by concussion were identified as PCS neural markers, which were validated on large samples of concussion patients and healthy controls. These neuromarkers provide the basis for computing PCS severity, referred to as the Severity Index Score (SIS). The SIS has proven valuable in making pre-treatment decisions, monitoring treatment efficiency, and assessing long-term stability of outcomes. Methods and Materials: After being scanned while performing various cognitive tasks, 476 concussed patients received an SIS score based on the neural dysregulation of the 57 previously identified brain regions. These scans provide an objective measurement of attentional, subcortical, visual processing, language processing, and executive functioning abilities, which were used as biomarkers for post-concussive neural dysregulation. Initial SIS scores were used to develop individualized therapy incorporating cognitive, occupational, and neuromuscular modalities. These scores were also used to establish pre-treatment benchmarks and measure post-treatment improvement. Results: Changes in SIS were calculated as the percent change from pre- to post-treatment. Patients showed a mean improvement of 76.5 percent (σ = 23.3), and 75.7 percent of patients showed at least 60 percent improvement. Longitudinal reassessment of 24 of the patients, measured on average 7.6 months post-treatment, shows that the SIS improvement is maintained or further improved, with an average improvement of 90.6 percent relative to their original scan. Conclusions: fNCI provides a reliable measurement of NVC, allowing for identification of concussion pathology. Additionally, fNCI-derived SIS scores direct tailored therapy to restore NVC, subsequently resolving chronic PCS resulting from mTBI.Keywords: concussion, functional magnetic resonance imaging (fMRI), neurovascular coupling (NVC), post-concussion syndrome (PCS)
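A minimal sketch of the pre-/post-treatment comparison reported in the abstract: percent change in the Severity Index Score per patient, the mean improvement, and the share of patients improving by at least 60 percent. The example scores are placeholders, not patient data.

```python
# Placeholder SIS values only; illustrates the percent-change summary, not the findings.
import numpy as np

sis_pre = np.array([2.4, 1.9, 3.1, 2.7])    # placeholder pre-treatment SIS
sis_post = np.array([0.5, 0.6, 0.4, 1.1])   # placeholder post-treatment SIS

pct_improvement = 100.0 * (sis_pre - sis_post) / sis_pre
print(f"Mean improvement: {pct_improvement.mean():.1f}%")
print(f"Share improving >= 60%: {np.mean(pct_improvement >= 60):.0%}")
```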
Procedia PDF Downloads 357822 Modeling and Implementation of a Hierarchical Safety Controller for Human Machine Collaboration
Authors: Damtew Samson Zerihun
Abstract:
This paper primarily describes the concept of a hierarchical safety control (HSC) in discrete manufacturing to uphold productivity under human intervention and machine failures using a systematic approach, by increasing system availability and using additional knowledge about the machines to improve human-machine collaboration (HMC). It also highlights the implemented PLC safety algorithm that applies this generic concept to a concrete production line using a lab demonstrator called FATIE (Factory Automation Test and Integration Environment). Furthermore, the paper describes a model and provides a systematic representation of human-machine collaboration in discrete manufacturing; to this end, the Hierarchical Safety Control concept is proposed. This offers a generic description of human-machine collaboration based on Finite State Machines (FSM) that can be applied to various discrete manufacturing lines instead of using ad-hoc solutions for each line. With its reusability, flexibility, and extendibility, the Hierarchical Safety Control scheme allows upholding productivity while maintaining safety with reduced engineering effort compared to existing solutions. The approach begins with partitioning the area around the Integrated Manufacturing System (IMS) into different zones, defined by the operator tasks and the risk assessment; these zones describe the location of the human operator and are thus used to identify the related potential hazards and to trigger the corresponding safety functions to mitigate them. This includes selective reduced-speed zones and stop zones; in addition, within the hierarchical safety control scheme, advanced safety functions such as safe standstill and safe reduced speed are used to achieve the main goals of improving safe human-machine collaboration and increasing productivity. In a sample scenario, it is shown that a productivity increase in the order of 2.5% is already possible with a hierarchical safety control and that, under the given assumptions, a total of 213 € could be saved for each intervention compared to a protective stop reaction. The loss is thereby reduced by 22.8% if an occasional hazard can be refined in a hierarchical way. Furthermore, production downtime due to temporary unavailability of safety devices can be avoided with a safety failover, which can save millions per year. Moreover, the paper highlights the development, implementation, and application of the concept on the lab demonstrator (FATIE), where it is realized on the new safety PLCs, drive units, HMI, and safety devices, in addition to the main components of the IMS.Keywords: discrete automation, hierarchical safety controller, human machine collaboration, programmable logical controller
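A minimal sketch, assuming a simple zone-to-safety-function hierarchy, of the kind of logic the FSM-based controller described above would trigger: the operator's zone selects a safety function (normal operation, safe reduced speed, safe standstill), and the most restrictive occupied zone wins. The zone names and the mapping are illustrative assumptions, not the FATIE implementation.

```python
# Illustrative zone-based safety selection; not the actual PLC algorithm.
from enum import Enum, auto

class Zone(Enum):
    OUTSIDE = auto()          # operator outside all monitored zones
    REDUCED_SPEED = auto()    # operator near the machine
    STOP = auto()             # operator inside the hazardous area

class SafetyFunction(Enum):
    NORMAL = auto()
    SAFE_REDUCED_SPEED = auto()
    SAFE_STANDSTILL = auto()

ZONE_TO_FUNCTION = {
    Zone.OUTSIDE: SafetyFunction.NORMAL,
    Zone.REDUCED_SPEED: SafetyFunction.SAFE_REDUCED_SPEED,
    Zone.STOP: SafetyFunction.SAFE_STANDSTILL,
}

SEVERITY = {SafetyFunction.NORMAL: 0,
            SafetyFunction.SAFE_REDUCED_SPEED: 1,
            SafetyFunction.SAFE_STANDSTILL: 2}

def required_safety_function(occupied_zones):
    """Return the most restrictive safety function for the occupied zones."""
    functions = [ZONE_TO_FUNCTION[z] for z in occupied_zones] or [SafetyFunction.NORMAL]
    return max(functions, key=lambda f: SEVERITY[f])

# Example: one operator in the reduced-speed zone, one outside.
print(required_safety_function([Zone.REDUCED_SPEED, Zone.OUTSIDE]))
```

The productivity gain comes from the middle tier: instead of a protective stop for every intrusion, the machine continues at a safely limited speed whenever the risk assessment allows it.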
Procedia PDF Downloads 369821 Soft Computing Employment to Optimize Safety Stock Levels in Supply Chain Dairy Product under Supply and Demand Uncertainty
Authors: Riyadh Jamegh, Alla Eldin Kassam, Sawsan Sabih
Abstract:
To cope with uncertainty and the resulting inability to meet customers' requests, organizations tend to reserve a certain safety stock level (SSL). This level must be chosen carefully in order to avoid an increase in holding cost due to an excessive SSL or in shortage cost due to a too-low SSL. This paper uses soft-computing fuzzy logic to identify the optimal SSL; the fuzzy model relies on a dynamic concept to cope with highly complex environments. The proposed model takes three input variables, i.e., the demand stability level, the raw material availability level, and the on-hand inventory level, and applies dynamic fuzzy logic to obtain the best SSL as an output. In this model, demand stability, raw material availability, and on-hand inventory levels are described linguistically and then treated by the inference rules of the fuzzy model to extract the best safety stock level. The aim of this research is to provide a dynamic approach for identifying the safety stock level that can be implemented in different industries. A numerical case study in the dairy industry, with a 200 g yogurt cup product, is presented to validate the proposed model. The obtained results are compared with the current safety stock level, which is calculated using the traditional approach. The importance of the proposed model is demonstrated by the significant reduction in the safety stock level.Keywords: inventory optimization, soft computing, safety stock optimization, dairy industries inventory optimization
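A hand-rolled, deliberately small Mamdani-style sketch of the idea described above: three inputs (demand stability, raw-material availability, on-hand inventory, each scaled to 0..1) are mapped through a few linguistic rules to a crisp safety stock level. The membership functions, rule base, and output range are illustrative assumptions, not the authors' model.

```python
# Illustrative fuzzy inference for a safety stock level; all shapes and rules are assumed.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def low(x):  return tri(x, -0.5, 0.0, 0.5)
def med(x):  return tri(x, 0.0, 0.5, 1.0)
def high(x): return tri(x, 0.5, 1.0, 1.5)

def safety_stock_level(demand_stability, material_availability, on_hand, max_ssl=500):
    """Return a crisp SSL (units) via Mamdani inference and centroid defuzzification."""
    ssl = np.linspace(0, max_ssl, 501)   # output universe
    # Rule 1: unstable demand OR scarce material OR low inventory -> high SSL
    w_high = max(low(demand_stability), low(material_availability), low(on_hand))
    # Rule 2: all inputs at a medium level -> medium SSL
    w_med = min(med(demand_stability), med(material_availability), med(on_hand))
    # Rule 3: stable demand AND plentiful material AND high inventory -> low SSL
    w_low = min(high(demand_stability), high(material_availability), high(on_hand))
    # Clip each output set by its rule strength and aggregate by max.
    aggregated = np.maximum.reduce([
        np.minimum(w_high, high(ssl / max_ssl)),
        np.minimum(w_med, med(ssl / max_ssl)),
        np.minimum(w_low, low(ssl / max_ssl)),
    ])
    if aggregated.sum() == 0:
        return 0.0
    return float((ssl * aggregated).sum() / aggregated.sum())  # centroid

print(f"Suggested SSL: {safety_stock_level(0.3, 0.6, 0.2):.0f} units")
```

Because the rule strengths are re-evaluated whenever the linguistic inputs change, the suggested SSL adjusts dynamically rather than staying at a fixed, traditionally computed level.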
Procedia PDF Downloads 125