Search results for: time to surgery
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 18580

12700 A Dissipative Particle Dynamics Study of a Capsule in Microfluidic Intracellular Delivery System

Authors: Nishanthi N. S., Srikanth Vedantam

Abstract:

Intracellular delivery of materials has always proved to be a challenge in research and therapeutic applications. Usually, vector-based methods, such as liposomes and polymeric materials, and physical methods, such as electroporation and sonoporation, have been used for introducing nucleic acids or proteins. Reliance on exogenous materials, toxicity, and off-target effects were the shortcomings of these methods. Microinjection is an alternative process that addresses these drawbacks; however, its low throughput has hindered its wide adoption. Mechanical deformation of cells by squeezing them through a constriction channel can cause the temporary formation of pores that facilitate non-targeted diffusion of materials. Advantages of this method include high efficiency of intracellular delivery, a wide choice of materials, improved viability, and high throughput. This cell-squeezing process can be studied in greater depth by employing simple models and efficient computational procedures. In our current work, we present a finite-sized dissipative particle dynamics (FDPD) model to simulate the dynamics of a cell flowing through a constricted channel. The cell is modeled as a capsule with FDPD particles connected through a spring network to represent the membrane. The total energy of the capsule is associated with linear and radial springs, in addition to a fixed-area constraint. By performing detailed simulations, we studied the strain on the membrane of the capsule for channels with varying constriction heights. The strain on the capsule membrane was found to be similar even though the constriction heights vary. When strain on the membrane was correlated to the development of pores, we found higher porosity in the capsule flowing in the wider channel. This is due to the localization of strain to a smaller region in the narrower constriction channel. However, the residence time of the capsule increased as the channel constriction narrowed, indicating that strain sustained over a longer time will reduce cell viability.
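The spring-network membrane energy described above can be sketched in a few lines; the snippet below is a minimal 2D illustration in which the spring and area-penalty constants and the ring-of-particles geometry are invented for the example and are not the paper's FDPD parameters:

```python
import math

def membrane_energy(nodes, k_spring=1.0, k_area=10.0, l0=None, a0=None):
    """Energy of a 2D capsule membrane modeled as a closed ring of
    spring-connected particles: linear-spring stretching terms plus a
    quadratic penalty enforcing the fixed enclosed area (illustrative)."""
    n = len(nodes)
    # Spring (stretching) energy over consecutive node pairs.
    lengths = []
    for i in range(n):
        (x1, y1), (x2, y2) = nodes[i], nodes[(i + 1) % n]
        lengths.append(math.hypot(x2 - x1, y2 - y1))
    if l0 is None:
        l0 = sum(lengths) / n          # rest length = mean edge length
    e_spring = sum(0.5 * k_spring * (l - l0) ** 2 for l in lengths)
    # Enclosed area via the shoelace formula.
    area = 0.5 * abs(sum(nodes[i][0] * nodes[(i + 1) % n][1]
                         - nodes[(i + 1) % n][0] * nodes[i][1]
                         for i in range(n)))
    if a0 is None:
        a0 = area
    e_area = 0.5 * k_area * (area - a0) ** 2
    return e_spring + e_area

# A regular polygon evaluated at its own rest configuration has zero energy.
circle = [(math.cos(2 * math.pi * i / 20), math.sin(2 * math.pi * i / 20))
          for i in range(20)]
print(round(membrane_energy(circle), 6))  # → 0.0
```

Squeezing through a constriction would displace nodes away from this rest shape, and the same energy function then reports the stored membrane strain.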

Keywords: capsule, cell squeezing, dissipative particle dynamics, intracellular delivery, microfluidics, numerical simulations

Procedia PDF Downloads 130
12699 Interstellar Mission to Wolf 359: Possibilities for the Future

Authors: Rajasekar Anand Thiyagarajan

Abstract:

One of the driving forces of mankind is the "rêve d'étoiles", or "dream of stars", which has been the dynamo of our civilization. Since the dawn of civilization, mankind has looked upon the heavens with wonder and has tried to understand the meaning of those twinkling lights. As human history has progressed, so has our understanding of them, and we now know a great deal about stars. However, the dream of reaching those stars remains among the aspirations of mankind. In fact, the needs of civilization constantly drive the search for better knowledge, and the capability of reaching those stars is one way such knowledge and exultation can be achieved. This paper takes a futuristic case study of an interstellar mission to Wolf 359, which is approximately 8.3 light years away from us. In terms of galactic distances, 8.3 light years is not much, but as far as present space technology is concerned, it is next to impossible for us to cross such distances. Several studies have been conducted on missions to Alpha Centauri and other nearby stars such as Barnard's Star and Wolf 359. However, choosing a more distant star such as Wolf 359 will test mankind's drive for interstellar exploration, as exotic means of travel are needed. Various possibilities of space travel are discussed in detail. Comprehensive tables and graphs are given, depicting the amount of time that passes for each mode of travel, and, more importantly, some idea of the cost in terms of energy as well as money is discussed within today's context. In addition, the prerequisites for an interstellar mission to Wolf 359 are given in detail, together with a sample mission to that destination. Even though the possibility of such a mission is probably nonexistent for the 21st century, it is essential to do these exercises so that mankind's understanding of the universe will be increased. In addition, this paper hopes to establish some general guidelines for such an interstellar mission.
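For the travel-time tables the abstract mentions, the special-relativistic bookkeeping is straightforward; the sketch below computes the Earth-frame and shipboard times for the 8.3-light-year crossing at an assumed constant cruise speed (the 0.5 c value is illustrative, not one of the paper's scenarios):

```python
import math

def travel_times(distance_ly, speed_frac_c):
    """Earth-frame travel time and the (shorter) shipboard time for a
    constant-velocity crossing, from special-relativistic time dilation:
    t_ship = t_earth * sqrt(1 - v^2/c^2)."""
    t_earth = distance_ly / speed_frac_c             # years, Earth frame
    gamma = 1.0 / math.sqrt(1.0 - speed_frac_c ** 2)
    t_ship = t_earth / gamma                         # years, ship frame
    return t_earth, t_ship

# Wolf 359 at 8.3 light years, cruising at an assumed 0.5 c:
t_e, t_s = travel_times(8.3, 0.5)
print(f"Earth frame: {t_e:.1f} yr, ship frame: {t_s:.1f} yr")
```

Repeating this over a grid of speeds yields exactly the kind of time-per-mode-of-travel table the paper describes.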

Keywords: wolf 359, interstellar mission, alpha centauri, core diameter, core length, reflector thickness enrichment, gas temperature, reflector temperature, power density, mass of the space craft, acceleration of the space craft, time expansion

Procedia PDF Downloads 410
12698 Hand Gesture Detection via EmguCV Canny Pruning

Authors: N. N. Mosola, S. J. Molete, L. S. Masoebe, M. Letsae

Abstract:

Hand gesture recognition is a technique used to locate, detect, and recognize a hand gesture. Detection and recognition are concepts of Artificial Intelligence (AI). AI concepts are applicable to Human-Computer Interaction (HCI), Expert Systems (ES), etc. Hand gesture recognition can be used in sign language interpretation. Sign language is a visual communication tool, used mostly by deaf communities and people with speech disorders. Communication barriers exist when people with speech disorders interact with others. This research aims to build a hand recognition system for interpretation between Lesotho's Sesotho and English. The system will help to bridge the communication problems encountered by these communities. The system has various processing modules: a hand detection engine, an image processing engine, feature extraction, and sign recognition. Detection is the process of identifying an object. The proposed system uses Haar cascade detection with Canny pruning, as implemented in EmguCV. Canny pruning applies Canny edge detection, an optimal edge detection algorithm, to detect the edges of an object. The system also employs a skin detection algorithm, which performs background subtraction and computes the convex hull and centroid to assist in the detection process. Recognition is the process of gesture classification; template matching classifies each hand gesture in real time. The system was tested in various experiments. The results obtained show that time, distance, and light are factors that affect the rate of detection and, ultimately, recognition. The detection rate is directly proportional to the distance of the hand from the camera. Different lighting conditions were considered: the higher the light intensity, the faster the detection rate. Based on the results obtained from this research, the applied methodologies are efficient and provide a plausible route towards a lightweight, inexpensive system for sign language interpretation.
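The convex hull and centroid steps of the skin detection engine can be illustrated without any imaging library; the sketch below runs Andrew's monotone chain on a toy set of "skin pixels" (the pixel coordinates are invented for the example, and a real pipeline would extract them from a binary skin mask):

```python
def convex_hull(points):
    """Andrew's monotone chain convex hull, returned in counter-clockwise
    order starting from the lowest-leftmost point."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def centroid(points):
    """Mean position of the foreground pixels, used as the hand center."""
    n = len(points)
    return (sum(x for x, _ in points) / n, sum(y for _, y in points) / n)

mask = [(0, 0), (4, 0), (4, 4), (0, 4), (2, 2), (1, 3)]  # toy skin mask
print(convex_hull(mask))   # the four square corners
print(centroid(mask))
```

In the real system these two quantities anchor the detected hand region before template matching classifies the gesture.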

Keywords: canny pruning, hand recognition, machine learning, skin tracking

Procedia PDF Downloads 166
12697 Application of Data Driven Based Models as Early Warning Tools of High Stream Flow Events and Floods

Authors: Mohammed Seyam, Faridah Othman, Ahmed El-Shafie

Abstract:

The early warning of high stream flow events (HSF) and floods is an important aspect of the management of surface water and river systems. This process can be performed using either process-based models or data-driven models such as artificial intelligence (AI) techniques. The main goal of this study is to develop an efficient AI-based model for predicting the real-time hourly stream flow (Q) and to apply it as an early warning tool for HSF and floods in the downstream area of the Selangor River basin, taken here as a paradigm of humid tropical rivers in Southeast Asia. The performance of the AI-based models has been improved through the integration of lag time (Lt) estimation in the modelling process. A total of 8753 patterns of Q, water level, and rainfall hourly records representing a one-year period (2011) were utilized in the modelling process. Six hydrological scenarios were arranged through hypothetical cases of input variables to investigate how changes in rainfall (RF) intensity at upstream stations can lead to the formation of floods. The initial stream flow was changed for each scenario in order to include a wide range of hydrological situations in this study. The performance evaluation of the developed AI-based model shows that a high correlation coefficient (R) between the observed and predicted Q is achieved. The AI-based model has been successfully employed for early warning through the advance detection of the hydrological conditions that could lead to the formation of floods and HSF, represented by three levels of severity (i.e., alert, warning, and danger). Based on the results of the scenarios, reaching the danger level in the downstream area required high RF intensity in at least two upstream areas. From these applications, it can be concluded that AI-based models are beneficial tools for local authorities for flood control and awareness.
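The three severity levels can be sketched as a simple threshold mapping on the predicted flow; the threshold values below are illustrative placeholders, not the Selangor basin's calibrated levels:

```python
def warning_level(q_pred, alert=250.0, warning=400.0, danger=550.0):
    """Map a predicted hourly stream flow (m^3/s) to a severity level.
    The three-tier scheme mirrors the alert/warning/danger levels of the
    study; the numeric thresholds here are invented for illustration."""
    if q_pred >= danger:
        return "danger"
    if q_pred >= warning:
        return "warning"
    if q_pred >= alert:
        return "alert"
    return "normal"

for q in (120.0, 300.0, 480.0, 600.0):
    print(q, "->", warning_level(q))
```

In the deployed tool, the AI model's lag-time-aware Q forecast would feed this mapping so that a danger classification arrives ahead of the flood peak.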

Keywords: floods, stream flow, hydrological modelling, hydrology, artificial intelligence

Procedia PDF Downloads 233
12696 Bioethanol Production from Marine Algae Ulva Lactuca and Sargassum Swartzii: Saccharification and Process Optimization

Authors: M. Jerold, V. Sivasubramanian, A. George, B.S. Ashik, S. S. Kumar

Abstract:

Bioethanol is a sustainable biofuel that can be used as an alternative to fossil fuels. Today, third-generation (3G) biofuels are gaining more attention than first- and second-generation biofuels. The high lignin content of lignocellulosic biomass is the major drawback of second-generation biofuels. Algae are the renewable feedstock used in third-generation biofuel production. Algae contain large amounts of carbohydrates and can therefore be fermented after hydrolysis. There are two groups of algae, microalgae and macroalgae. In the present investigation, macroalgae were chosen as the raw material for the production of bioethanol. Two marine algae, Ulva lactuca and Sargassum swartzii, were used for the experimental studies. The algal biomass was characterized using various analytical techniques, such as elemental analysis, scanning electron microscopy, and Fourier transform infrared spectroscopy, to understand its physicochemical characteristics. A batch experiment was done to study the hydrolysis and operational parameters such as pH, agitation, fermentation time, and inoculum size. The saccharification was done with acid and alkali treatment. The experimental results showed that NaOH treatment enhanced the bioethanol yield. From the hydrolysis study, it was found that 0.5 M alkali treatment serves as the optimum concentration for the saccharification of polysaccharide sugars to monomeric sugars. The maximum yield of bioethanol was attained at a fermentation time of 9 days. An inoculum volume of 1 mL was found to be the lowest suitable for the ethanol fermentation. The agitation studies show that fermentation was higher under agitation. The percentage yields of bioethanol were found to be 22.752% and 14.23%. The elemental analysis showed that S. swartzii contains a higher carbon content. The results confirmed that hydrolysis was not complete in recovering the sugar from the biomass. The specific gravity of the ethanol was found to be 0.8047 and 0.808 for Ulva lactuca and Sargassum swartzii, respectively. The purity of the bioethanol was also studied and found to be 92.55%. Therefore, marine algae can be used as a promising renewable feedstock for the production of bioethanol.

Keywords: algae, biomass, bioethanol, biofuel, pretreatment

Procedia PDF Downloads 144
12695 Framework Proposal on How to Use Game-Based Learning, Collaboration and Design Challenges to Teach Mechatronics

Authors: Michael Wendland

Abstract:

This paper presents a framework for teaching a methodical design approach through a mixture of game-based learning, design challenges, and competitions as forms of direct assessment. In today's world, developing products is more complex than ever. Conflicting goals of product cost and quality under limited time, as well as post-pandemic part shortages, increase the difficulty. Common design approaches for mechatronic products mitigate some of these effects by supporting users with a methodical framework. Due to the inherent complexity of these products, the number of involved resources, and the comprehensive design processes, students very rarely have enough time or motivation to experience a complete approach in a one-semester course. Yet for students to be successful in the industrial world, it is crucial to know these methodical frameworks and to gain first-hand experience. It is therefore necessary to teach these design approaches in a real-world setting, keep motivation high, and teach students to manage the problems that arise. This is achieved by using a game-based approach and a set of design challenges given to the students. To mimic industrial collaboration, they work in teams of up to six participants and are given the main development target of designing a remote-controlled robot that can manipulate a specified object. By setting this clear goal without a given solution path, within a constricted time frame and a limited maximum cost, the students are subjected to boundary conditions similar to those of the real world. They must follow the steps of the methodical approach by specifying requirements, conceptualizing their ideas, drafting, designing, manufacturing, and building a prototype using rapid prototyping. At the end of the course, the prototypes are entered into a contest against the other teams. The complete design process is accompanied by theoretical input via lectures, which the students immediately transfer to their own design problem in practical sessions. To increase motivation in these sessions, a playful learning approach has been chosen, i.e., designing the first concepts is supported by the use of Lego construction kits. After each challenge, mandatory online quizzes help to deepen the students' acquired knowledge, and badges are awarded to those who complete a quiz, resulting in higher motivation and a level-up on a fictional leaderboard. The final contest is held in person and involves all teams with their functional prototypes, which now compete against each other. Prizes are awarded for the best mechanical design, the most innovative approach, and the winner of the robotic contest. Each robot design is evaluated with regard to the specified requirements, and partial grades are derived from the results. This paper concludes with a critical review of the proposed framework, the game-based approach for the designed prototypes, the realism of the boundary conditions, the problems that occurred during the design and manufacturing process, the experiences and feedback of the students, and the effectiveness of their collaboration, as well as a discussion of the potential transfer to other educational areas.

Keywords: design challenges, game-based learning, playful learning, methodical framework, mechatronics, student assessment, constructive alignment

Procedia PDF Downloads 57
12694 Radio Frequency Energy Harvesting Friendly Self-Clocked Digital Low Drop-Out for System-On-Chip Internet of Things

Authors: Christos Konstantopoulos, Thomas Ussmueller

Abstract:

Digital low-drop-out (D-LDO) regulators, in contrast to their analog counterparts, provide an architecture for sub-1 V regulation with low power consumption, high power efficiency, and easy system integration. For an optimized integration into an ultra-low-power system-on-chip (SoC) Internet of Things architecture operated through a radio frequency (RF) energy harvesting scheme, the D-LDO should constitute the main regulator that supplies the master clock and the remaining loads of the SoC. In this context, we present a D-LDO with linear-search coarse regulation and asynchronous fine regulation, which incorporates an in-regulator clock generation unit and thereby provides an autonomous, self-starting, and power-efficient D-LDO design. In contrast to contemporary D-LDO designs that employ a ring-oscillator architecture, whose start-up time depends on the oscillator frequency, this work presents a fast start-up burst oscillator based on a high-gain stage with a wake-up time independent of the coarse regulation frequency. The design is implemented in a 55-nm GlobalFoundries CMOS process. To validate the self-start-up capability of the presented D-LDO in the presence of ultra-low input power, an on-chip test bench with an RF rectifier is implemented as well, which performs the RF-to-DC conversion and feeds the D-LDO. Power efficiency and load regulation curves of the D-LDO are presented as extracted from the RF-to-regulated-DC operation. The D-LDO presents 83.6% power efficiency during this operation with a 3.65 uA load current and a voltage-regulator-referred input power of -27 dBm. It achieves a maximum quiescent current of 486 nA with a load capacitance CL of 75 pF, a maximum current efficiency of 99.2%, and a 1.16x power efficiency improvement compared to an analog voltage regulator counterpart oriented to SoC IoT loads. Complementarily, the transient performance of the D-LDO is evaluated under a transient droop test, and the achieved figure of merit is compared with state-of-the-art implementations.
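The power-budget arithmetic behind such figures is simple to reproduce; the sketch below converts the quoted -27 dBm input to watts and computes an end-to-end efficiency under an assumed sub-1 V output level (the abstract does not state V_out, so the 0.45 V used here is purely illustrative and the resulting number is not the paper's measurement):

```python
def dbm_to_watts(p_dbm):
    """Convert an RF power level in dBm to watts: P = 10^(dBm/10) mW."""
    return 10 ** (p_dbm / 10.0) * 1e-3

def power_efficiency(v_out, i_load, p_in_dbm):
    """DC output power over RF-referred input power. v_out is an assumed
    regulation target, chosen only for illustration."""
    return (v_out * i_load) / dbm_to_watts(p_in_dbm)

p_in = dbm_to_watts(-27)              # input power at the regulator, W
eff = power_efficiency(0.45, 3.65e-6, -27)
print(f"P_in = {p_in * 1e6:.2f} uW, efficiency = {eff * 100:.1f} %")
```

This kind of conversion is what relates the dBm-denominated harvesting numbers to the microwatt-scale load budget of the SoC.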

Keywords: D-LDO, Internet of Things, RF energy harvesting, voltage regulators

Procedia PDF Downloads 126
12693 Neural Network Mechanisms Underlying the Combination Sensitivity Property in the HVC of Songbirds

Authors: Zeina Merabi, Arij Dao

Abstract:

The temporal order of information processing in the brain is an important code in many acoustic signals, including speech, music, and animal vocalizations. Despite its significance, surprisingly little is known about its underlying cellular mechanisms and network manifestations. In the songbird telencephalic nucleus HVC, a subset of neurons shows temporal combination sensitivity (TCS). These neurons show a high temporal specificity, responding differently to distinct patterns of spectral elements and their combinations. HVC neuron types include basal-ganglia-projecting HVCX, forebrain-projecting HVCRA, and interneurons (HVCINT), each exhibiting distinct cellular, electrophysiological, and functional properties. In this work, we develop conductance-based neural network models connecting the different classes of HVC neurons via different wiring scenarios, aiming to explore possible neural mechanisms that orchestrate the combination sensitivity property exhibited by HVCX neurons, as well as to replicate in vivo firing patterns observed when TCS neurons are presented with various auditory stimuli. The ionic and synaptic currents of each class of neurons represented in our networks are based on pharmacological studies, rendering our networks biologically plausible. We present for the first time several realistic scenarios in which the different types of HVC neurons can interact to produce this behavior. The different networks highlight neural mechanisms that could potentially explain some aspects of combination sensitivity, including 1) the interplay between inhibitory interneuron activity and the post-inhibitory firing of HVCX neurons enabled by T-type Ca2+ and H currents, 2) temporal summation, at the TCS site, of opposing synaptic inputs that are time- and frequency-dependent, and 3) reciprocal inhibitory and excitatory loops as a potent mechanism to encode information over many milliseconds. The result is a plausible network model characterizing auditory processing in HVC. Our next step is to test the predictions of the model.
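The temporal-summation mechanism (point 2) can be illustrated with a far simpler neuron than the paper's conductance-based HVC models; the sketch below uses a leaky integrate-and-fire cell driven by exponential synaptic pulses, with all constants invented for the example. Two inputs arriving close together summate above threshold, while the same two inputs spread apart do not:

```python
def lif_response(input_times, dt=0.1, t_end=100.0, tau_m=10.0,
                 v_rest=-70.0, v_thresh=-54.0, v_reset=-70.0,
                 w=40.0, tau_syn=5.0):
    """Spike count of a leaky integrate-and-fire neuron receiving
    exponential synaptic pulses (Euler integration). A minimal stand-in,
    not the conductance-based HVC model, used to show how firing depends
    on the relative timing of inputs."""
    v, g, spikes, t = v_rest, 0.0, 0, 0.0
    pending = sorted(input_times)
    for _ in range(int(t_end / dt)):
        while pending and pending[0] <= t:
            g += w                                 # synaptic pulse arrives
            pending.pop(0)
        g += dt * (-g / tau_syn)                   # synaptic decay
        v += dt * ((v_rest - v) + g) / tau_m       # membrane update
        if v >= v_thresh:
            spikes += 1
            v = v_reset
        t += dt
    return spikes

print(lif_response([20.0, 22.0]))   # inputs 2 ms apart: summation fires
print(lif_response([20.0, 80.0]))   # inputs 60 ms apart: no spike
```

The same timing dependence, implemented with T-type Ca2+, H, and synaptic conductances, is one ingredient the full networks use to produce combination sensitivity.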

Keywords: combination sensitivity, songbirds, neural networks, spatiotemporal integration

Procedia PDF Downloads 48
12692 Road Accident Blackspot Analysis: Development of Decision Criteria for Accident Blackspot Safety Strategies

Authors: Tania Viju, Bimal P., Naseer M. A.

Abstract:

This study aims to develop a conceptual framework for a decision support system (DSS) that helps decision-makers dynamically choose appropriate safety measures for each identified accident blackspot. An accident blackspot is a segment of road where the frequency of accidents is disproportionately greater than on other sections of the roadway. According to a report by the World Bank, India accounts for the highest share of global road accident deaths, eleven percent, with just one percent of the world's vehicles. Hence, in 2015, the Ministry of Road Transport and Highways of India gave prime importance to the rectification of accident blackspots. To enhance road traffic safety and reduce the traffic accident rate, effectively identifying and rectifying accident blackspots is of great importance. This study helps to understand and evaluate the existing methods of accident blackspot identification and prediction used around the world and their application to Indian roadways. The decision support system, with the help of IoT, ICT, and smart systems, acts as a management and planning tool enabling the government to employ efficient and cost-effective rectification strategies. To develop the decision criteria, several factors, in terms of quantitative as well as qualitative data, that influence the safety conditions of the road are analyzed. These factors include past accident severity data, occurrence time, light, weather and road conditions, visibility, driver condition, junction type, land use, road markings and signs, road geometry, etc. The framework conceptualizes decision-making by classifying blackspot stretches based on factors such as accident occurrence time and different climatic and road conditions, and by suggesting mitigation measures based on these identified factors. The decision support system will help the public administration dynamically manage and plan the safety interventions required to enhance the safety of the road network.
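The frequency-based flagging of blackspot segments can be sketched as a severity-weighted count per road segment; the weights, threshold, and segment names below are invented for illustration and are not the framework's calibrated decision criteria:

```python
from collections import Counter

def find_blackspots(accidents, min_count=5, severity_weight=None):
    """Flag road segments whose severity-weighted accident count is
    disproportionately high. Weights and threshold are illustrative;
    agencies typically calibrate them per road class and exposure."""
    severity_weight = severity_weight or {"fatal": 3, "injury": 2, "damage": 1}
    scores = Counter()
    for segment, severity in accidents:
        scores[segment] += severity_weight.get(severity, 1)
    return sorted(s for s, score in scores.items() if score >= min_count)

records = [("NH-47/km12", "fatal"), ("NH-47/km12", "injury"),
           ("NH-47/km12", "damage"), ("NH-66/km3", "damage"),
           ("NH-66/km3", "injury")]
print(find_blackspots(records, min_count=5))  # → ['NH-47/km12']
```

A DSS would then attach occurrence-time, weather, and road-condition attributes to each flagged segment before suggesting mitigation measures.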

Keywords: decision support system, dynamic management, road accident blackspots, road safety

Procedia PDF Downloads 125
12691 Wetting Features of Butterflies Morpho Peleides and Anti-icing Behavior

Authors: Burdin Louise, Brulez Anne-Catherine, Mazurcyk Radoslaw, Leclercq Jean-louis, Benayoun Stéphane

Abstract:

Using a biomimetic approach, an investigation was conducted to determine the connections between morphology and wetting. The interest is focused on the Morpho peleides butterfly, already well known among researchers for the brilliant iridescent color produced by the intricate structure of its wings, which has inspired numerous innovations. This multiscale structure, however, exhibits a multitude of other features, such as hydrophobicity. Given the limited research on the wetting properties of the Morpho butterfly, a detailed analysis of its wetting behavior is proposed. The multiscale surface topographies of the Morpho peleides wing were analyzed using scanning electron microscopy and atomic force microscopy. To understand the relationship between morphology and wettability, a goniometer was employed to measure static and dynamic contact angles. Since several studies have consistently demonstrated that superhydrophobic surfaces can effectively delay freezing, the icing delay time of the Morpho's wings was also measured. The results revealed contact angles close to 136°, indicating a high degree of hydrophobicity. Moreover, sliding angles (SA) were measured in different directions, both along and against the rolling-outward direction. The findings suggest anisotropic wetting: when the wing was tilted along the rolling-outward direction (i.e., away from the insect's body), the SA was about 7°, whereas when the wing was tilted against the rolling-outward direction, the SA was about 29°. This phenomenon is directly linked to the butterfly's survival strategy. To investigate the purely morphological impact on anti-icing properties, PDMS replicas of the Morpho wing were fabricated. Compared to flat PDMS and microscale-textured PDMS, the Morpho replicas exhibited a longer freezing time. This could therefore be a source of inspiration for designing superhydrophobic surfaces for anti-icing applications, or functional surfaces with controlled wettability.
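The link between surface texture and the measured hydrophobicity is commonly rationalized with the Cassie-Baxter relation; the sketch below evaluates it for assumed values of the flat-material contact angle and the wetted solid fraction (both numbers are chosen for illustration, not measured on the wing):

```python
import math

def cassie_baxter_angle(theta_flat_deg, solid_fraction):
    """Apparent contact angle on a composite (air-trapping) rough surface,
    cos(theta*) = f (cos(theta) + 1) - 1, where f is the wetted solid
    fraction and theta the contact angle on the flat material."""
    cos_star = solid_fraction * (math.cos(math.radians(theta_flat_deg)) + 1) - 1
    return math.degrees(math.acos(cos_star))

# An assumed flat surface at ~95 deg becomes far more hydrophobic once
# trapped air pockets reduce the wetted fraction to ~30 %:
print(round(cassie_baxter_angle(95.0, 0.3), 1))  # → 136.6
```

Shrinking the wetted fraction in this model is exactly what the wing's multiscale roughness accomplishes physically.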

Keywords: biomimetic, anisotropic wetting, anti-icing, multiscale roughness

Procedia PDF Downloads 43
12690 An Experimental Investigation on Explosive Phase Change of Liquefied Propane During a Bleve Event

Authors: Frederic Heymes, Michael Albrecht Birk, Roland Eyssette

Abstract:

The Boiling Liquid Expanding Vapor Explosion (BLEVE) has been a well-known industrial accident for over six decades, and yet it is still poorly predicted and avoided. A BLEVE occurs when a vessel containing a pressure-liquefied gas (PLG) is engulfed in a fire until the tank ruptures. At that moment, the pressure drops suddenly, leaving the liquid in a superheated state. The vapor expansion and the violent boiling of the liquid produce several shock waves. This work aimed at understanding the contributions of the vapor and liquid phases to the overpressure generated in the near field. An experimental study was undertaken at small scale to reproduce realistic BLEVE explosions. Key parameters were controlled through the experiments, such as failure pressure, fluid mass in the vessel, and the weakened length of the vessel. Thirty-four propane BLEVEs were then performed to collect data on scenarios similar to common industrial cases. The aerial overpressure was recorded all around the vessel, together with the internal pressure change during the explosion and the ground loading under the vessel. Several high-speed cameras were used to capture the vessel explosion and the blast formation by shadowgraphy. The results show that the pressure field is anisotropic around the cylindrical vessel and reveal a strong dependency between the vapor content and the maximum overpressure of the lead shock. The time chronology of events reveals that the vapor phase is the main contributor to the aerial overpressure peak, and a prediction model is built upon this assumption. Secondary flow patterns are observed after the lead shock; a theory of how the second shock observed in the experiments forms is presented through an analogy with numerical simulation. The phase change dynamics are also discussed thanks to a window in the vessel. Ground loading measurements are finally presented and discussed to give insight into the order of magnitude of the force.
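A common first-order estimate of the vapor phase's contribution to the blast is Brode's expression for the stored expansion energy; the sketch below evaluates it for an invented small-scale vessel (the burst pressure and vapor volume are illustrative, not the actual test conditions of this study):

```python
def brode_energy(p_burst, p_ambient, v_vapor, gamma=1.13):
    """Brode's approximation for the energy available from the expanding
    vapor space at vessel failure, E = (P1 - P0) V / (gamma - 1).
    gamma = 1.13 is a typical value for propane vapor; inputs in Pa, m^3."""
    return (p_burst - p_ambient) * v_vapor / (gamma - 1.0)

# Assumed small-scale vessel: 1.8 MPa burst pressure, 40 L of vapor space.
e = brode_energy(1.8e6, 101325.0, 0.040)
print(f"{e / 1000:.1f} kJ")   # vapor-expansion contribution to the blast
```

Comparing such an estimate with the measured lead-shock overpressures is one way to test the conclusion that the vapor phase dominates the aerial peak.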

Keywords: phase change, superheated state, explosion, vapor expansion, blast, shock wave, pressure liquefied gas

Procedia PDF Downloads 64
12689 Departures from Anatolian Seljuk Building Complex with Iwan/Eyvan: The Tradition of Iwan Tombs

Authors: Mehmet Uysal, Yavuz Arat, Uğur Tuztaşı

Abstract:

As man constructed the spaces he lived in, he also designed, according to his belief system, the spaces where his dead would rest. These spaces were sometimes monumentalized by means of a stone on the top of a mountain, sometimes marked by totems, and sometimes became structures built to protect graves and to symbolize the deceased or render him unforgettable. Various grave monuments have been constructed from the earliest primitive societies to developed ones, and every belief system built structures for itself: pyramids for pharaohs, grave monuments for kings and emperors, temples and tombs for important men of religion. These spaces are architectural works, like a school or a dwelling, and have importance in the history of architecture. After the Turks embraced Islam, examples of very beautiful tombs were built in Central Asia during the Seljuk period. When the Seljuks came to Anatolia, they first built important tombs of peerless architectural character around Ahlat. After the Anatolian Seljuks made Konya the capital, and Konya became an administrative, cultural, and scientific center, very important tombs were built there. Distinct from the local tomb architecture, the architecture of tombs with a half-open iwan ("eyvan") is significant. Although iwan buildings are widely used in Anatolian civil architecture and monumental buildings, their best examples are observed in 13th-century medrese buildings. The iwan tomb tradition, which took shape in the same period as this building typology and departed from the residential tradition, is only rarely represented, although similar tombs were built in resemblance to it. This study provides information on examples of iwan tombs (the Gömeç Hatun Tomb, Emir Yavaştagel Tomb, and Beşparmak Tomb) and evaluates the departures from iwan building complexes in terms of architectural language. The paper also situates iwan tombs among the tombs of importance in the Islamic architectural heritage.

Keywords: Seljuk Building Complex, Eyvan/Iwan, Anatolia, Islamic Architectural Heritage, tomb

Procedia PDF Downloads 394
12688 Adaptive Routing in NoC-Based Heterogeneous MPSoCs

Authors: M. K. Benhaoua, A. E. H. Benyamina, T. Djeradi, P. Boulet

Abstract:

In this paper, we propose an adaptive routing scheme that considers the current state of the network when routing communications between tasks, in order to optimize overall performance. The routing technique uses a newly proposed algorithm to route the communications, and the proposed routing leads to a better optimization of several performance metrics (execution time and energy consumption). Experimental results show that the proposed routing approach provides significant performance improvements compared to static routing.
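The general idea of adaptive routing on a 2D-mesh NoC can be sketched as follows; this is a generic minimal-adaptive policy written for illustration, not the specific algorithm proposed in the paper, and the link-load numbers are invented:

```python
def adaptive_route(src, dst, link_load):
    """Minimal adaptive routing on a 2D-mesh NoC: at each hop, move one
    step closer to the destination along whichever productive direction
    (X or Y) currently carries the lighter link load. `link_load` maps a
    (node, direction) pair to a congestion estimate."""
    path = [src]
    x, y = src
    dx, dy = dst
    while (x, y) != (dx, dy):
        options = []
        if x != dx:
            options.append(("x", (x + (1 if dx > x else -1), y)))
        if y != dy:
            options.append(("y", (x, y + (1 if dy > y else -1))))
        # Prefer the productive direction with the lighter load.
        _, (x, y) = min(options,
                        key=lambda o: link_load.get((path[-1], o[0]), 0))
        path.append((x, y))
    return path

load = {((0, 0), "x"): 5, ((0, 0), "y"): 1}   # congested X link at (0, 0)
print(adaptive_route((0, 0), (2, 1), load))
```

Because only minimal (productive) directions are considered, every route stays shortest-path while steering communications around congested links.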

Keywords: multi-processor systems-on-chip (mpsocs), network-on-chip (noc), heterogeneous architectures, adaptive routing

Procedia PDF Downloads 363
12687 Normal and Peaberry Coffee Beans Classification from Green Coffee Bean Images Using Convolutional Neural Networks and Support Vector Machine

Authors: Hira Lal Gope, Hidekazu Fukai

Abstract:

The aim of this study is to develop a low-cost system that can identify and sort peaberries automatically for coffee producers in developing countries. This paper focuses on the classification of peaberries and normal coffee beans using image processing and machine learning techniques. A peaberry is not a defective bean, nor a normal one: it develops as a single, relatively round seed in a coffee cherry instead of the usual flat-sided pair of beans, and it has a distinct value and flavor. To improve the taste of the coffee, peaberries and normal beans must be separated before the green beans are roasted; otherwise the flavors of the mixed beans degrade the whole batch. During roasting, the beans should be uniform in shape, size, and weight, since larger beans take longer to roast through. Peaberries differ from normal beans in size and shape even when they have the same weight, and they roast more slowly, so neither size- nor weight-based sorting provides a good option for selecting them. Defective beans, e.g., sour, broken, black, and faded beans, are easy to check and pick out by hand. Picking out peaberries, on the other hand, is very difficult even for trained specialists, because the shape and color of the peaberry are similar to those of normal beans. In this study, we use image processing and machine learning techniques to discriminate normal beans from peaberries as part of a sorting system. As a first step, we applied deep Convolutional Neural Networks (CNN) and a Support Vector Machine (SVM) to discriminate peaberries from normal beans. Better performance was obtained with the CNN than with the SVM. The artificial neural network, trained on a high-performance CPU and GPU, will then be installed on an inexpensive, computationally limited Raspberry Pi system, since we assume the system will be used in developing countries. The study evaluates and compares the feasibility of the methods in terms of classification accuracy and processing speed.
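The SVM baseline described above can be sketched as follows. This is a minimal illustration only: the roundness and aspect-ratio features, and the synthetic Gaussian data standing in for real bean photographs, are assumptions for demonstration, not the study's actual pipeline.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(42)
n = 400

# Hypothetical shape features per bean image: (roundness, aspect ratio).
# Peaberries are rounder; normal beans are flat-sided and elongated.
peaberry = np.column_stack([rng.normal(0.90, 0.04, n), rng.normal(1.05, 0.08, n)])
flat_bean = np.column_stack([rng.normal(0.60, 0.05, n), rng.normal(1.60, 0.12, n)])

X = np.vstack([peaberry, flat_bean])
y = np.array([1] * n + [0] * n)  # 1 = peaberry, 0 = normal bean
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
print(f"held-out accuracy: {accuracy:.2f}")
```

In the real system the feature extraction (or the CNN's learned features) would replace the two hand-picked descriptors used here.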

Keywords: convolutional neural networks, coffee bean, peaberry, sorting, support vector machine

Procedia PDF Downloads 132
12686 Pushover Analysis of a Typical Bridge Built in Central Zone of Mexico

Authors: Arturo Galvan, Jatziri Y. Moreno-Martinez, Daniel Arroyo-Montoya, Jose M. Gutierrez-Villalobos

Abstract:

Bridges are among the most seismically vulnerable structures in highway transportation systems. Assessing the seismic vulnerability of a bridge generally involves evaluating its overall capacity and demand. One of the most common procedures for obtaining this capacity is a pushover analysis of the structure. Typically, bridge capacity is assessed using nonlinear static methods or nonlinear dynamic analyses; the nonlinear dynamic approaches rely on step-by-step numerical solutions, with the inconvenience of considerable computing time. In this study, a nonlinear static ('pushover') analysis was performed to predict the collapse mechanism of a typical bridge built in the central zone of Mexico (Celaya, Guanajuato). The superstructure consists of three simply supported spans with a total length of 76 m: two 22 m end spans and a 32 m central span. The deck is 14 m wide and the concrete slab is 18 cm deep. The substructure comprises frames of five piers with hollow box-shaped sections, each pier 7.05 m high and 1.20 m in diameter. The numerical model was created in commercial software using linear and nonlinear elements. The piers were represented by frame-type elements with geometric properties taken from the structural project and construction drawings of the bridge, while the deck was modeled with a mesh of rectangular thin-shell (plate bending and stretching) finite elements. A moment-curvature analysis was performed for the pier sections, accounting for confined concrete and reinforcing steel, and plastic hinges were defined at the base of the piers for the pushover analysis. In addition, time-history analyses were performed using 19 accelerograms of real earthquakes registered in Guanajuato, from which the displacements of the bridge were determined.
Finally, a displacement-controlled pushover analysis of the piers was carried out to obtain the overall capacity of the bridge up to failure. It was concluded that the lateral deformations of the piers under a critical earthquake in this zone are almost imperceptible compared with their displacement capacity, which is excessive owing to the geometry and reinforcement demanded by current design standards. The analysis also showed that the five-pier frames increase the stiffness of the bridge in the transverse direction. It is therefore proposed to reduce the frames from five piers to three, maintaining the same geometric characteristics, the same reinforcement in each pier, and the same mechanical properties of the materials (concrete and reinforcing steel). A pushover analysis of this configuration indicated that the bridge would retain adequate seismic behavior, at least for the 19 accelerograms considered in this study, while reducing material, construction, time, and labor costs for this case.
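The displacement-controlled pushover of a single pier can be illustrated with a toy model. The stiffness and yield strength below are assumed round numbers, not the Celaya bridge properties; the pier is idealized as elastic-perfectly-plastic, which yields the familiar bilinear capacity curve.

```python
# Displacement-controlled pushover of one pier idealized as an
# elastic-perfectly-plastic SDOF system. K and V_y are illustrative
# assumptions, not values from the structural project.
K = 250.0e3     # lateral stiffness, kN/m (assumed)
V_y = 1200.0    # base shear at hinge yielding, kN (assumed)
d_y = V_y / K   # yield displacement, m

def base_shear(d):
    """Base shear resisted at an imposed lateral displacement d (m)."""
    return K * d if d < d_y else V_y

# Push the pier in small displacement increments up to 10x yield
# and record the (displacement, base shear) capacity curve.
curve = [(d, base_shear(d)) for d in (i * d_y / 10 for i in range(101))]
peak = max(v for _, v in curve)
print(f"yield displacement: {d_y * 1000:.1f} mm, capacity: {peak:.0f} kN")
```

Comparing the displacement demand from the time-history analyses against such a capacity curve is what reveals the margin the abstract describes.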

Keywords: collapse mechanism, moment-curvature analysis, overall capacity, push-over analysis

Procedia PDF Downloads 143
12685 Programmable Shields in Space

Authors: Tapas Kumar Sinha, Joseph Mathew

Abstract:

At the moment the earth is in grave danger due to the threat of global warming. The temperature of the earth has risen by almost 2°C. Glaciers in the Arctic have started to melt. It would be foolhardy to think that this is a small effect that will go away in time. Global warming is caused by a number of factors. However, one sure and simple way to eliminate this problem entirely is to place programmable shields in space. Just as an umbrella blocks sunlight, a programmable shield in space would block the sun's rays from reaching the earth, as in a solar eclipse, causing cooling in the penumbral region just as happens during an eclipse.

Keywords: glaciers, greenhouse, global warming, space satellites

Procedia PDF Downloads 573
12684 Slowness in Architecture: The Pace of Human Engagement with the Built Environment

Authors: Jaidev Tripathy

Abstract:

A human generation’s lifestyle, behaviors, habits, and actions are heavily governed by shared mindsets. The current scenario, however, is witnessing a rapid break in this homogeneity as a result of the intervention, or rather the dominance, of the digital revolution in human lifestyles. The prevailing mindset of mass production, employment, multi-tasking, rapid engagement, and stiff competition to stay ahead of the rest has led to a major shift in human consciousness. Architecture, as an entity, is being perceived differently: screens are replacing skies, and the pace of operation and evolution has increased. Paradoxically, time seems to move faster despite the intention to save it. In parallel, there is an evident shift in architectural typologies across generations. The architecture of today seems heavily influenced from here and there: mass production of buildings and over-exploitation of resources give shape to uninspiring, algorithmic designs that cater ambiguously to multiple user groups. Borrow-and-steal replaces influence, and the diminishing depth of today’s designs reflects a lack of understanding and connection. The digitally dominated world, conceived as an aid to connection and networking, is making humans less capable of real-life interaction and understanding. It is not wrong, but it does not seem right either. The level of engagement between human beings and the built environment thus surfaces as a concern, leading to a question: does human engagement drive architecture, or does architecture drive human engagement? This paper attempts to re-examine architecture's capacity, and its relation to pace, to influence the conscious decisions of a human being. Secondary research, supported by case examples, helps in understanding the translation of human engagement with the built environment through the physicality of architecture.
The underlying theme is pace and the role of slowness in the context of human behavior, with the aim of bridging the widening gap between the human race and the architecture it gives shape to, and of avoiding a possible future dystopian world.

Keywords: junkspace, pace, perception, slowness

Procedia PDF Downloads 97
12683 Simulation of Scaled Model of Tall Multistory Structure: Raft Foundation for Experimental and Numerical Dynamic Studies

Authors: Omar Qaftan

Abstract:

Earthquakes can cause tremendous loss of human life and severe damage to civil engineering structures, especially tall buildings. The response of a multistory structure subjected to earthquake loading is complex and must be studied through physical and numerical modelling. In many circumstances, scale models on a shaking table are a more economical option than equivalent full-scale tests. A shaking table apparatus is a powerful tool that offers the possibility of understanding the actual behaviour of structural systems under earthquake loading; however, a set of scaling relations is required to predict the behaviour of the full-scale structure from the model. Selecting the scale factors is the most important step in simulating the prototype with a scaled model. In this paper, the principles of the scale-modelling procedure are explained in detail, and the simulation of a scaled multi-storey concrete structure for dynamic studies is investigated. A complete dynamic simulation analysis is carried out experimentally and numerically with a scale factor of 1/50. The frequency response and lateral displacements of both the numerical and experimental scaled models are determined. The procedure accounts for the actual dynamic behaviour of the full-scale prototype structure and the scaled model, and is adapted to determine the effects of a tall multi-storey structure on a raft foundation. Four generated accelerograms complying with EC8 were used as inputs for the time-history motions. The experimental results, expressed in terms of displacements and accelerations, were compared with those obtained from a conventional fixed-base numerical model. The four time histories were applied to both the experimental and numerical models, and the experimental model showed acceptable accuracy compared with the numerical output.
This modelling methodology is therefore valid and qualified for different shaking table experiments.
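The scaling relations mentioned above can be sketched numerically for a 1/50 model. The factors below assume the common case of the model built from the same material as the prototype and tested in Earth gravity, which fixes the acceleration scale at 1 and forces the time scale to the square root of the length scale; other similitude choices would give different factors.

```python
import math

# Scale factors S_x = prototype value / model value for a 1/50 model,
# assuming the same material as the prototype and an acceleration
# scale of 1 (gravity cannot be scaled on an ordinary shaking table).
S_L = 50.0                  # length
S_a = 1.0                   # acceleration
S_t = math.sqrt(S_L / S_a)  # time, from a ~ L / t^2
S_v = S_L / S_t             # velocity
S_f = 1.0 / S_t             # frequency (the model vibrates ~7x faster)

print(f"time factor {S_t:.2f}, velocity factor {S_v:.2f}, "
      f"frequency factor {S_f:.3f}")
```

This is why the input accelerograms must be time-compressed by about 7.07 before being applied to the 1/50 table model.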

Keywords: structure, raft, soil, interaction

Procedia PDF Downloads 123
12682 Insight into the Physical Ageing of Poly(Butylene Succinate)

Authors: I. Georgousopoulou, S. Vouyiouka, C. Papaspyrides

Abstract:

The hydrolytic degradation of poly(butylene succinate) (PBS) was investigated under exposure to different humidity-temperature environments. To this end, different PBS grades were submitted to hydrolysis runs. Results indicated that increasing the hydrolysis temperature and relative humidity induced a significant decrease in the molecular weight and thermal properties of the bioplastic. The derived data can be used to construct degradation kinetics based on the variation of carboxyl content versus time.
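One common way to construct such kinetics (an assumed model for illustration, not necessarily the one used by the authors) is the autocatalytic first-order law d[COOH]/dt = k[COOH], whose log-linear form lets the rate constant k be estimated by regression on carboxyl-content measurements over time:

```python
import math

# Synthetic carboxyl-content data following [COOH](t) = C0 * exp(k * t);
# in practice these points would come from end-group titration of
# hydrolysed PBS samples. k_true and C0 are illustrative values.
k_true, C0 = 0.012, 30.0          # 1/day, mmol/kg
times = [0, 5, 10, 20, 40, 60]    # days
cooh = [C0 * math.exp(k_true * t) for t in times]

# Least-squares slope of ln[COOH] versus t recovers the rate constant.
n = len(times)
x_mean = sum(times) / n
y = [math.log(c) for c in cooh]
y_mean = sum(y) / n
k_fit = sum((t - x_mean) * (yi - y_mean) for t, yi in zip(times, y)) \
        / sum((t - x_mean) ** 2 for t in times)
print(f"fitted rate constant: {k_fit:.4f} per day")
```

Repeating the fit at each humidity-temperature condition would then yield the temperature and humidity dependence of k.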

Keywords: hydrolytic degradation, physical ageing, poly(butylene succinate), polyester

Procedia PDF Downloads 276
12681 The Theology of a Muslim Artist: Tawfiq al-Hakim

Authors: Abdul Rahman Chamseddine

Abstract:

Tawfiq al-Hakim remains one of the most prominent playwrights in his native Egypt and in the broader Arab world. His works drew international attention and acclaim at the time of their release; his first masterpiece, Ahl al-Kahf (The People of the Cave, 1933), especially garnered fame and recognition in both Europe and the Arab world. Borrowing its title from the Qur’anic sura, al-Hakim’s play relays the untold story of the lives of those 'three saints' after they wake from their prolonged sleep. The playwright’s selection of topics upon which to base his works displays a deep appreciation of Arabic and Islamic heritage. Al-Hakim was clearly influenced by Islam, to such a degree that he wrote a biography of the Prophet Muhammad in 1936, very early in his career. He was preceded in this by many poets and creative writers, notably Al-Barudi, Ahmad Shawqi, Haykal, Al-‘Aqqad, and Taha Husayn, each of whom had his own way of expressing his view of the Prophet Muhammad. An attempt to understand the concern of all these renaissance men, and others, with the person of the Prophet is indispensable to this study. This project will examine the reasons behind al-Hakim’s choice to draw upon these particular texts, embedded as they are in the context of Arabic and Islamic heritage, and how the use of traditional texts serves his contemporary goals. The project will also analyze the image of Islam in al-Hakim’s imagination. Elsewhere, he envisions letters or conversations between God and himself, which offer a window into the powerful impact of the Divine on Tawfiq al-Hakim, one that informs his literature and merits further scholarly attention. His works, which occupy a major rank in Arabic literature, reveal not only Al-Hakim himself but the unquestioned assumptions operative in the life of his community, its mental make-up, and its attitudes.
Furthermore, studying the reception of works that touch on sensitive issues, such as writing a letter to God, in Al-Hakim’s historical context is of great significance for comprehending the mentality of the Muslim community at that time.

Keywords: Arabic language, Arabic literature, Arabic theology, modern Arabic literature

Procedia PDF Downloads 339
12680 Material Concepts and Processing Methods for Electrical Insulation

Authors: R. Sekula

Abstract:

Epoxy composites are broadly used as electrical insulation in high-voltage applications, since only such materials can fulfil the particular mechanical, thermal, and dielectric requirements. However, the properties of the final product depend strongly on a proper manufacturing process that minimizes material failures such as excessive shrinkage, voids, and cracks. The application of proper materials (epoxy, hardener, and filler) and process parameters (mold temperature, filling time, filling velocity, initial temperature of internal parts, gelation time), as well as design and geometric parameters, is therefore essential to the final quality of the produced components. In this paper, an approach for three-dimensional modeling of all molding stages, namely filling, curing, and post-curing, is presented. The reactive molding simulation tool is based on a commercial CFD package and includes dedicated models describing viscosity and reaction kinetics, which have been successfully implemented to simulate the reactive nature of the system with its exothermic effect. A dedicated simulation procedure for stress and shrinkage calculations is also presented, together with simulation results. The second part of the paper is dedicated to recent developments in formulations of functional composites for electrical insulation applications, focusing on thermally conductive materials. Concepts based on filler modifications for epoxy electrical composites are presented, including the resulting properties. Finally, in view of tough environmental regulations, and in addition to the current process and design aspects, an approach to product re-design is presented that focuses on replacing the epoxy material with a thermoplastic one. Such a "design-for-recycling" method is one of the new directions in the development of material and processing concepts for electrical products and brings many additional research challenges.
A successful product is presented to illustrate the methodology.
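The abstract does not specify the reaction-kinetics model; a widely used choice for epoxy cure, shown here with purely illustrative parameters, is the autocatalytic Kamal model dα/dt = (k1 + k2·α^m)(1 − α)^n, integrated over the curing stage:

```python
# Explicit-Euler integration of the Kamal autocatalytic cure model.
# Rate constants and exponents are illustrative, not fitted to a real
# resin system (in practice they come from DSC experiments).
k1, k2, m, n = 1e-3, 5e-2, 0.8, 1.5   # 1/s and dimensionless exponents
dt, t_end = 0.5, 3600.0               # time step and cure time, s

alpha, t = 0.0, 0.0                    # degree of cure in [0, 1]
while t < t_end:
    rate = (k1 + k2 * alpha ** m) * (1.0 - alpha) ** n
    alpha = min(alpha + rate * dt, 1.0)  # clamp to full conversion
    t += dt

print(f"degree of cure after {t_end / 60:.0f} min: {alpha:.3f}")
```

In the full simulation the same rate law is evaluated per cell, coupled to the energy equation so that the exothermic heat release feeds back into temperature-dependent rate constants.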

Keywords: curing, epoxy insulation, numerical simulations, recycling

Procedia PDF Downloads 262
12679 Lifetime Attachment: Adult Daughters' Attachment to Their Old Mothers

Authors: Meltem Anafarta Şendağ, Funda Kutlu

Abstract:

Attachment theory rests on several major postulates that have drawn the attention of psychologists from many different domains. First, the theory suggests that attachment is a lifetime process: every human being, from cradle to grave, needs someone stronger to depend on in times of stress. Second, attachment is a dynamic process: as one goes through developmental stages, it is transferred from one figure to another (friends, romantic partners). Third, the quality of later attachment relationships is directly affected by the earliest attachment relationship, that established between mother and infant. Following these postulates, the attachment literature focuses mostly on mother-child attachment during childhood and on romantic relationships during adulthood. However, although romantic partners are important attachment figures in adults' lives, parents do not drop out of the attachment hierarchy but remain important attachment figures. Despite this, adult-parent attachment has been overlooked in the literature. Accordingly, this study focuses on adult daughters' current attachment to their old mothers in relation to early parental bonding and current attachment to their husbands. Participants were 383 adult women (average age = 40, ranging between 23 and 70) whose mothers were still alive and who were married at the time of the study. Participants completed the Adult Attachment Scale, the Parental Bonding Instrument, and the Experiences in Close Relationships-II, together with a demographic questionnaire. Results revealed that daughters' attachment to their mothers weakens as they get older, have more children, and have longer marriages.
Stronger attachment to the mother was positively correlated with current satisfaction with the relationship and with the perception of maternal care before the age of 12, and negatively correlated with the perception of controlling behavior before the age of 12. Considering the relationship between current parental attachment and romantic attachment, it was found that as current attachment to the mother strengthens, attachment avoidance towards the husband decreases. Thus, although the attachment between adult daughters and old mothers weakens, the relationship remains critical in daughters' lives: the strength of current attachment to the mother is related both to the early relationship with the mother and to current attachment to the husband. The current study contributes to attachment theory by emphasizing attachment as a lifetime construct.

Keywords: adult daughter, attachment, old mothers, parental bonding

Procedia PDF Downloads 311
12678 Performance Evaluation of Packet Scheduling with Channel Conditioning Aware Based on Wimax Networks

Authors: Elmabruk Laias, Abdalla M. Hanashi, Mohammed Alnas

Abstract:

Scheduling is one of the most challenging issues in Worldwide Interoperability for Microwave Access (WiMAX) networks, since it is responsible for distributing the available network resources among all users. This has led to a demand for highly efficient scheduling algorithms that improve network utilization, increase network throughput, and minimize end-to-end delay. In this study, the proposed algorithm focuses on an efficient mechanism to serve non-real-time traffic in congested networks by considering channel status.
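A minimal sketch of a channel-aware choice for non-real-time flows follows. The abstract does not give the algorithm's details, so the backlog-times-rate (max-weight) rule below is an assumption used only to illustrate the idea: in each frame, the scheduler grants the connection whose current channel rate and queue backlog give the largest weight.

```python
# Max-weight channel-aware scheduling sketch. Each nrtPS connection has
# a queue backlog (bytes) and a currently achievable rate (bytes/frame)
# reflecting its channel condition. Names and numbers are illustrative.
connections = {
    "SS1": {"backlog": 9000, "rate": 300},   # poor channel, much data
    "SS2": {"backlog": 4000, "rate": 1200},  # good channel
    "SS3": {"backlog": 1000, "rate": 1500},  # excellent channel, little data
}

def pick_next(conns):
    """Return the connection id maximizing backlog x achievable rate."""
    return max(conns, key=lambda c: conns[c]["backlog"] * conns[c]["rate"])

chosen = pick_next(connections)
print(f"frame granted to {chosen}")
```

The weighting naturally balances throughput (favouring good channels) against fairness (favouring long queues), which is the trade-off a channel-conditioning-aware scheduler must manage.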

Keywords: WiMAX, Quality of Service (QoS), OPNET, Diff-Serv (DS)

Procedia PDF Downloads 269
12677 Healthcare Utilization and Costs of Specific Obesity Related Health Conditions in Alberta, Canada

Authors: Sonia Butalia, Huong Luu, Alexis Guigue, Karen J. B. Martins, Khanh Vu, Scott W. Klarenbach

Abstract:

Obesity-related health conditions impose a substantial economic burden on payers due to increased healthcare use. Estimates of healthcare resource use and costs associated with obesity-related comorbidities are needed to inform policies and interventions targeting these conditions. Methods: Adults living with obesity were identified (a procedure-related body mass index code for class 2/3 obesity between 2012 and 2019 in Alberta, Canada; excluding those with bariatric surgery), and outcomes were compared over 1-year (2019/2020) between those who had and did not have specific obesity-related comorbidities. The probability of using a healthcare service (based on the odds ratio of a zero [OR-zero] cost) was compared; 95% confidence intervals (CI) were reported. Logistic regression and a generalized linear model with log link and gamma distribution were used for total healthcare cost comparisons ($CDN); cost ratios and estimated cost differences (95% CI) were reported. Potential socio-demographic and clinical confounders were adjusted for, and incremental cost differences were representative of a referent case. Results: A total of 220,190 adults living with obesity were included; 44% had hypertension, 25% had osteoarthritis, 24% had type-2 diabetes, 17% had cardiovascular disease, 12% had insulin resistance, 9% had chronic back pain, and 4% of females had polycystic ovarian syndrome (PCOS). 
The probability of hospitalization, ED visit, and ambulatory care was higher in those with each of the following obesity-related comorbidities than in those without: chronic back pain (hospitalization: 1.8-times [OR-zero: 0.57 [0.55/0.59]] / ED visit: 1.9-times [OR-zero: 0.54 [0.53/0.56]] / ambulatory care visit: 2.4-times [OR-zero: 0.41 [0.40/0.43]]), cardiovascular disease (2.7-times [OR-zero: 0.37 [0.36/0.38]] / 1.9-times [OR-zero: 0.52 [0.51/0.53]] / 2.8-times [OR-zero: 0.36 [0.35/0.36]]), osteoarthritis (2.0-times [OR-zero: 0.51 [0.50/0.53]] / 1.4-times [OR-zero: 0.74 [0.73/0.76]] / 2.5-times [OR-zero: 0.40 [0.40/0.41]]), type-2 diabetes (1.9-times [OR-zero: 0.54 [0.52/0.55]] / 1.4-times [OR-zero: 0.72 [0.70/0.73]] / 2.1-times [OR-zero: 0.47 [0.46/0.47]]), hypertension (1.8-times [OR-zero: 0.56 [0.54/0.57]] / 1.3-times [OR-zero: 0.79 [0.77/0.80]] / 2.2-times [OR-zero: 0.46 [0.45/0.47]]), PCOS (not significant / 1.2-times [OR-zero: 0.83 [0.79/0.88]] / not significant), and insulin resistance (1.1-times [OR-zero: 0.88 [0.84/0.91]] / 1.1-times [OR-zero: 0.92 [0.89/0.94]] / 1.8-times [OR-zero: 0.56 [0.54/0.57]]). After fully adjusting for potential confounders, the total healthcare cost ratio was higher in those with each of the following obesity-related comorbidities than in those without: chronic back pain (1.54-times [1.51/1.56]), cardiovascular disease (1.45-times [1.43/1.47]), osteoarthritis (1.36-times [1.35/1.38]), type-2 diabetes (1.30-times [1.28/1.31]), hypertension (1.27-times [1.26/1.28]), PCOS (1.08-times [1.05/1.11]), and insulin resistance (1.03-times [1.01/1.04]). Conclusions: Adults with obesity who have specific disease-related health conditions have a higher probability of healthcare use and incur greater costs than those without specific comorbidities; incremental costs are larger when other obesity-related health conditions are not adjusted for.
For the specific referent case, hypertension was the costliest condition: 44% of the cohort had it, with an additional annual cost of $715 [$678/$753] per person. If these findings hold for the Canadian population, hypertension among persons with obesity represents an estimated additional annual healthcare cost of $2.5 billion (based on an adult obesity rate of 26%). Results of this study can inform decision-making on investment in interventions that are effective in treating obesity and its complications.
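The two effect measures reported above can be illustrated on toy data. The counts and cost sums below are fabricated for demonstration only (the study itself used logistic regression and a gamma GLM with log link, with confounder adjustment); this sketch just shows what an unadjusted odds ratio of zero cost and a crude cost ratio are.

```python
# Toy illustration (fabricated counts, not study data) of the measures
# reported above: the odds ratio of incurring zero cost (OR-zero) and
# the ratio of mean total costs. Tuples: (n_zero, n_nonzero, cost_sum).
with_comorbidity = (120, 880, 2_400_000.0)
without_comorbidity = (250, 750, 1_500_000.0)

def or_zero(exposed, unexposed):
    """Odds ratio of incurring zero healthcare cost, exposed vs unexposed."""
    a, b, _ = exposed
    c, d, _ = unexposed
    return (a / b) / (c / d)

def cost_ratio(exposed, unexposed):
    """Ratio of mean total cost per person, exposed vs unexposed."""
    mean_e = exposed[2] / (exposed[0] + exposed[1])
    mean_u = unexposed[2] / (unexposed[0] + unexposed[1])
    return mean_e / mean_u

print(f"OR-zero: {or_zero(with_comorbidity, without_comorbidity):.2f}")
print(f"cost ratio: {cost_ratio(with_comorbidity, without_comorbidity):.2f}")
```

An OR-zero below 1, as in all the comorbidities reported above, means the comorbid group is less likely to incur zero cost, i.e. more likely to use each healthcare service.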

Keywords: administrative data, healthcare cost, obesity-related comorbidities, real world evidence

Procedia PDF Downloads 132
12676 Understanding the Notion between Resiliency and Recovery through a Spatial-Temporal Analysis of Section 404 Wetland Alteration Permits before and after Hurricane Ike

Authors: Md Y. Reja, Samuel D. Brody, Wesley E. Highfield, Galen D. Newman

Abstract:

Historically, wetlands in the United States have been lost to agriculture, other anthropogenic activities, and rapid urbanization along the coast. These losses have raised the flooding risk faced by coastal communities over time. In addition, alteration of wetlands under Section 404 Clean Water Act permits can increase flooding risk in future hurricane events, as the cumulative impact of this program is poorly understood and under-accounted for. Further, recovery after hurricane events encourages new development and reconstruction activities that convert wetlands under the wetland alteration permitting program. This study investigates the degree to which hurricane recovery activities in coastal communities undermine the ability of these places to absorb the impacts of future storm events. Specifically, this work explores how and to what extent wetlands were affected by the federal permitting program after Hurricane Ike in 2008. Wetland alteration patterns are examined across three counties (Harris, Galveston, and Chambers) along the Texas Gulf Coast over a 10-year period, 2004-2013 (five years before and after Hurricane Ike), using descriptive spatial analyses. Results indicate that after Hurricane Ike, the number of permits increased substantially in Harris and Chambers Counties. The vast majority of individual and nationwide permits were issued within the 100-year floodplain, storm surge zones, and areas damaged by Ike flooding, suggesting that post-hurricane recovery is compromising the ecological resiliency on which coastal communities depend. The authors expect these findings to raise awareness among policy makers and hazard mitigation planners of how to manage wetlands during a long-term recovery process so as to maintain their natural functions for future flood mitigation.

Keywords: ecological resiliency, Hurricane Ike, recovery, Section 404 Permitting, wetland alteration

Procedia PDF Downloads 235
12675 The Molecular Analysis of Effect of Phytohormones and Spermidine on Tomato Growth under Biotic Stress

Authors: Rumana Keyani, Haleema Sadia, Asia Nosheen, Rabia Naz, Humaira Yasmin, Sidra Zahoor

Abstract:

Tomato is a significant crop worldwide and one of the staple foods of Pakistan. A vast number of plant pathogens, from simple viruses to complex parasites, cause diseases in tomato, and the rate of fungal infection in our country is quite high; sometimes the symptoms are harsh enough to destroy the crop altogether. Countries like ours, with a continuously increasing population and limited resources, cannot afford such economic losses. An array of morphological, genetic, biochemical, and molecular processes is involved in plant resistance to biotic stress. The study of metabolic pathways such as the jasmonic acid (JA) pathway and, most importantly, of signaling molecules such as ROS/RNS and their redoxin enzymes (TRX and NRX) is crucial to disease management and contributes to healthy plant growth. Improving tolerance in crop plants against biotic stresses is thus a dire need of our country and of the world as a whole. In the current study, the fungal pathogens Alternaria solani and Rhizoctonia solani were used to inoculate tomatoes, and the defense responses of the tomato plant to these pathogens were examined at the molecular as well as the phenotypic level after pretreatment with jasmonic acid and spermidine. The growth parameters (root and shoot length, fresh and dry root and shoot weight), measured 7 days post-inoculation, showed that infection drastically reduced plant growth, whereas jasmonic acid and spermidine helped the plants cope with the infection and maintain comparatively better growth. Antioxidant assays and expression analysis by real-time quantitative PCR in a time-course experiment at 24, 48, and 72 hours also showed that activation of JA defense genes and the polyamine spermidine each help mediate tomato responses to fungal infection when used alone, but the two treatments combined mask each other's effect.

Keywords: fungal infection, jasmonic acid defence, tomato, spermidine

Procedia PDF Downloads 109
12674 Designing Agile Product Development Processes by Transferring Mechanisms of Action Used in Agile Software Development

Authors: Guenther Schuh, Michael Riesener, Jan Kantelberg

Abstract:

Due to the volatility of markets and the shortening of product lifecycles, manufacturing companies in high-wage countries now face the challenge of bringing more innovative products to market within ever shorter development times. At the same time, volatile customer requirements have to be satisfied in order to differentiate successfully from market competitors. One potential approach to these challenges is provided by agile values and principles, which have already proved their success in software development projects in the form of management frameworks like Scrum and concrete procedure models such as Extreme Programming or Crystal Clear. These models lead to significant improvements in quality, cost, and development time and are therefore used in most software development projects. Motivated by this success, manufacturing companies have tried to transfer agile mechanisms of action to the development of hardware products. Although first empirical studies show similar effects in the agile development of hardware products, no comprehensive procedure model for the design of development iterations has yet been devised for hardware development, owing to the differing constraints of the two domains. For this reason, this paper focuses on the design of agile product development processes by transferring mechanisms of action used in agile software development to product development. This is done by decomposing the individual systems 'product development' and 'agile software development' into their relevant elements and then symbiotically composing the elements of both systems for the design of agile product development processes. In a first step, existing product development processes are described following existing approaches of systems theory.
By analyzing case studies from industrial companies as well as academic approaches, characteristic objectives, activities, and artefacts are identified within a target, action, and object system. In partial model two, mechanisms of action are derived from existing procedure models of agile software development. These mechanisms are classified into a superior strategy level; a system level comprising characteristic, domain-independent activities and their cause-effect relationships; and an activity-based element level. In partial model three, the influence of the identified agile mechanisms of action on the characteristic system elements of product development processes is analyzed. To this end, the target, action, and object systems of product development are compared with the strategy, system, and element levels of the agile mechanisms of action using graph theory, and the necessity of each activity within an iteration is determined by defining activity-specific degrees of freedom. Based on this analysis, agile product development processes are designed in a final step in the form of different types of iterations. By defining iteration-differentiating characteristics and their interdependencies, a logic is developed for configuring the activities, their form of execution, and the relevant artefacts for a specific iteration; characteristic types of iteration for agile product development are also identified.

Keywords: activity-based process model, agile mechanisms of action, agile product development, degrees of freedom

Procedia PDF Downloads 189
12673 Cyber-Med: Practical Detection Methodology of Cyber-Attacks Aimed at Medical Devices Eco-Systems

Authors: Nir Nissim, Erez Shalom, Tomer Lancewiki, Yuval Elovici, Yuval Shahar

Abstract:

Background: A Medical Device (MD) is an instrument, machine, implant, or similar device that includes a component intended for the purpose of the diagnosis, cure, treatment, or prevention of disease in humans or animals. Medical devices play increasingly important roles in health services eco-systems, including: (1) Patient Diagnostics and Monitoring; Medical Treatment and Surgery; and Patient Life Support Devices and Stabilizers. MDs are part of the medical device eco-system and are connected to the network, sending vital information to the internal medical information systems of medical centers that manage this data. Wireless components (e.g. Wi-Fi) are often embedded within medical devices, enabling doctors and technicians to control and configure them remotely. All these functionalities, roles, and uses of MDs make them attractive targets of cyber-attacks launched for many malicious goals; this trend is likely to significantly increase over the next several years, with increased awareness regarding MD vulnerabilities, the enhancement of potential attackers’ skills, and expanded use of medical devices. Significance: We propose to develop and implement Cyber-Med, a unique collaborative project of Ben-Gurion University of the Negev and the Clalit Health Services Health Maintenance Organization. Cyber-Med focuses on the development of a comprehensive detection framework that relies on a critical attack repository that we aim to create. Cyber-Med will allow researchers and companies to better understand the vulnerabilities and attacks associated with medical devices as well as providing a comprehensive platform for developing detection solutions. Methodology: The Cyber-Med detection framework will consist of two independent, but complementary detection approaches: one for known attacks, and the other for unknown attacks. 
These modules incorporate novel ideas and algorithms inspired by our team's domains of expertise, including cyber security, biomedical informatics, advanced machine learning, and temporal data mining. The establishment and maintenance of Cyber-Med's up-to-date attack repository will strengthen the capabilities of its detection framework. Major Findings: Based on our initial survey, we have already found more than 15 types of vulnerabilities and possible attacks aimed at MDs and their eco-system. Many of these attacks target individual patients who use devices such as pacemakers and insulin pumps. In addition, such attacks are also aimed at MDs widely used by medical centers, such as MRI scanners, CT scanners, and dialysis machines; the information systems that store patient information; protocols such as DICOM; standards such as HL7; and medical information systems such as PACS. However, current detection tools, techniques, and solutions generally fail to detect both the known and unknown attacks launched against MDs. Very little research has been conducted to protect these devices from cyber-attacks, since most development and engineering efforts are aimed at the devices' core medical functionality, their contribution to patients' healthcare, and the business aspects associated with the medical device.
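The two complementary approaches could be sketched, in highly simplified form, as a signature module backed by the attack repository plus an anomaly module for unknown attacks. All names and example values below (SignatureModule, AnomalyModule, the sample signature and traffic feature) are hypothetical illustrations of the general pattern, not Cyber-Med's actual design:

```python
# Hypothetical sketch of a two-module detection pipeline: known attacks are
# matched against a signature repository; unknown attacks are flagged as
# deviations from a benign traffic baseline (simple z-score anomaly test).
from dataclasses import dataclass, field
from statistics import mean, stdev


@dataclass
class SignatureModule:
    # Repository of known attack signatures (e.g. malicious payload patterns).
    repository: set = field(default_factory=set)

    def detect(self, message: str) -> bool:
        return any(sig in message for sig in self.repository)


@dataclass
class AnomalyModule:
    # Baseline of benign traffic features (e.g. message sizes) learned offline.
    baseline: list = field(default_factory=list)
    threshold: float = 3.0  # z-score cutoff

    def detect(self, feature: float) -> bool:
        mu, sigma = mean(self.baseline), stdev(self.baseline)
        return sigma > 0 and abs(feature - mu) / sigma > self.threshold


def classify(message: str, size: float,
             known: SignatureModule, unknown: AnomalyModule) -> str:
    if known.detect(message):
        return "known attack"
    if unknown.detect(size):
        return "suspected unknown attack"
    return "benign"


known = SignatureModule({"MODIFY_DOSE"})
unknown = AnomalyModule(baseline=[100.0, 102.0, 98.0, 101.0, 99.0])
print(classify("MODIFY_DOSE rate=99", 100.0, known, unknown))  # known attack
```

A real deployment would replace the z-score test with the machine learning and temporal data mining techniques the abstract mentions; the sketch only shows how the two decisions compose.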

Keywords: medical device, cyber security, attack, detection, machine learning

Procedia PDF Downloads 346
12672 Impedimetric Phage-Based Sensor for the Rapid Detection of Staphylococcus aureus from Nasal Swab

Authors: Z. Yousefniayejahr, S. Bolognini, A. Bonini, C. Campobasso, N. Poma, F. Vivaldi, M. Di Luca, A. Tavanti, F. Di Francesco

Abstract:

Pathogenic bacteria represent a threat to healthcare systems and the food industry, and their rapid detection remains challenging. Electrochemical biosensors are gaining prominence as a novel technology for the detection of pathogens due to intrinsic features such as low cost, rapid response time, and portability, which make them a valuable alternative to traditional methodologies. These sensors use biorecognition elements that are crucial for the identification of specific bacteria. In this context, bacteriophages are promising tools due to their inherent high selectivity towards bacterial hosts, which is of fundamental importance when detecting bacterial pathogens in complex biological samples. In this study, we present the development of a low-cost and portable sensor based on the Zeno phage for the rapid detection of Staphylococcus aureus. Screen-printed gold electrodes functionalized with the Zeno phage were used, and electrochemical impedance spectroscopy was applied to evaluate the change of the charge transfer resistance (Rct) resulting from the interaction with S. aureus MRSA ATCC 43300. The phage-based biosensor showed a linear range from 10¹ to 10⁴ CFU/mL with a 20-minute response time and a limit of detection (LOD) of 1.2 CFU/mL under physiological conditions. The biosensor's ability to recognize various strains of staphylococci was also successfully demonstrated in the presence of clinical isolates collected from different geographic areas. Assays using S. epidermidis were also carried out to verify the species-specificity of the phage sensor. We observed a remarkable change of the Rct only in the presence of the target S. aureus bacteria, while no substantial binding to S. epidermidis occurred. This confirmed that the Zeno phage sensor targets only the S. aureus species within the genus Staphylococcus.
In addition, the biosensor's specificity with respect to other bacterial species, including the gram-positive bacterium Enterococcus faecium and the gram-negative bacterium Pseudomonas aeruginosa, was evaluated, and no significant impedimetric signal was observed. Notably, the biosensor successfully identified S. aureus cells in a complex matrix such as a nasal swab, opening the possibility of its use in a real-case scenario. We diluted different concentrations of S. aureus from 10⁸ to 10⁰ CFU/mL at a ratio of 1:10 in nasal swab matrices collected from healthy donors, and three different sensors were used to measure the various concentrations of bacteria. Our sensor showed high selectivity for S. aureus in biological matrices compared to time-consuming traditional methods such as enzyme-linked immunosorbent assay (ELISA), polymerase chain reaction (PCR), and radioimmunoassay (RIA). With the aim of using this biosensor to address the challenges associated with pathogen detection, ongoing research is focused on assessing the biosensor's analytical performance in different biological samples and on discovering new phage bioreceptors.
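The reported figures of merit (linear range, LOD) come from a calibration of the impedimetric response against log concentration. A minimal sketch of that processing step, using made-up ΔRct values and the common 3σ/slope convention for the LOD (the abstract does not state which estimator the authors used):

```python
# Illustrative calibration of an impedimetric biosensor: fit the change in
# charge-transfer resistance (ΔRct) against log10 concentration, then
# estimate the LOD from the blank noise via the common 3*sigma/slope rule.
# All numbers are hypothetical, not the authors' data.
import numpy as np

log_conc = np.array([1.0, 2.0, 3.0, 4.0])           # log10(CFU/mL), 10^1..10^4
delta_rct = np.array([120.0, 235.0, 355.0, 470.0])  # hypothetical ΔRct (ohm)

# Linear calibration: ΔRct = slope * log10(c) + intercept
slope, intercept = np.polyfit(log_conc, delta_rct, 1)

blank_sd = 5.0                    # hypothetical std. dev. of blank ΔRct
lod_log = 3.0 * blank_sd / slope  # LOD in log10(CFU/mL) units
lod_cfu = 10.0 ** lod_log         # back-converted to CFU/mL
```

An LOD near 1 CFU/mL, as reported here, corresponds to a blank noise that is small relative to the calibration slope; the sketch makes that relationship explicit.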

Keywords: electrochemical impedance spectroscopy, bacteriophage, biosensor, Staphylococcus aureus

Procedia PDF Downloads 54
12671 Methods for Material and Process Monitoring by Characterization of (Second and Third Order) Elastic Properties with Lamb Waves

Authors: R. Meier, M. Pander

Abstract:

In accordance with the industry 4.0 concept, manufacturing process steps as well as the materials themselves will become more and more digitalized within the next years. The "digital twin", representing the simulated and measured dataset of the (semi-finished) product, can be used to control and optimize the individual processing steps and helps to reduce costs and development time in product development, manufacturing, and recycling. In the present work, two material characterization methods based on Lamb waves were evaluated and compared. Both methods were demonstrated on a standard industrial product, copper ribbons, which are often used in photovoltaic modules as well as in high-current microelectronic devices. By numerically fitting the Rayleigh-Lamb dispersion model to measured phase velocities, second order elastic constants (Young's modulus, Poisson's ratio) were determined. Furthermore, the effective third order elastic constants were evaluated by applying elastic, "non-destructive" mechanical stress to the samples. In this way, small microstructural variations due to mechanical preconditioning could be detected for the first time. Both methods were compared with respect to precision and inline application capabilities. The microstructure of the samples was systematically varied by mechanical loading and annealing, and changes in the elastic ultrasound transport properties were correlated with results from microstructural analysis and mechanical testing. In summary, monitoring the elastic material properties of plate-like structures using Lamb waves is valuable for inline, non-destructive material characterization and manufacturing process control. Second order elastic constants analysis is robust over a wide range of environmental and sample conditions, whereas the effective third order elastic constants greatly increase the sensitivity to small microstructural changes. Both Lamb-wave-based characterization methods thus fit well into the industry 4.0 concept.
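The second-order inversion can be illustrated with the simplest limiting case: at low frequency-thickness, the S0 Lamb mode travels at the plate velocity c_p = sqrt(E / (rho * (1 - nu^2))), so a measured phase velocity can be inverted for Young's modulus. The authors fit the full Rayleigh-Lamb dispersion model; the sketch below, with illustrative copper values for density and an assumed Poisson's ratio, only shows the low-frequency approximation:

```python
# Sketch of the second-order inversion in the thin-plate (low fd) limit:
# the S0 Lamb mode phase velocity equals the plate velocity
#   c_p = sqrt(E / (rho * (1 - nu^2))),
# which can be solved for Young's modulus E. Illustrative values only;
# the full Rayleigh-Lamb fit also recovers Poisson's ratio.
import math

def young_modulus_from_s0(c_phase: float, rho: float, nu: float) -> float:
    """Invert the low-frequency S0 plate-velocity relation for E (Pa)."""
    return c_phase ** 2 * rho * (1.0 - nu ** 2)

# Copper ribbon, illustrative values: rho ~ 8960 kg/m^3, nu ~ 0.34,
# measured S0 phase velocity ~ 3810 m/s.
E = young_modulus_from_s0(c_phase=3810.0, rho=8960.0, nu=0.34)
print(f"E = {E / 1e9:.1f} GPa")
```

The result lands near the tabulated Young's modulus of copper (roughly 110-128 GPa depending on temper), which is the consistency check one would apply before trusting the inline measurement.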

Keywords: lamb waves, industry 4.0, process control, elasticity, acoustoelasticity, microstructure

Procedia PDF Downloads 214