Search results for: non-convex cost function
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 10426

7816 Outcome of Bowel Management Program in Patient with Spinal Cord Injury

Authors: Roongtiwa Chobchuen, Angkana Srikhan, Pattra Wattanapan

Abstract:

Background: Neurogenic bowel is a common condition after spinal cord injury. Most spinal cord injured patients have motor weakness and mobility impairment, which lead to constipation. Moreover, the neural pathway involved in bowel function is interrupted. Therefore, a bowel management program should be implemented in nursing care as early as possible after disease onset to prevent morbidity and mortality. Objective: To study the outcome of a bowel management program for patients with spinal cord injury admitted for a rehabilitation program. Study design: Descriptive study. Setting: Rehabilitation ward in Srinagarind Hospital. Population: patients with subacute to chronic spinal cord injury admitted to the rehabilitation ward of Srinagarind Hospital, aged over 18 years. Instrument: The neurogenic bowel dysfunction score (NBDS) was used to determine the severity of neurogenic bowel. Procedure and statistical analysis: All participants were asked to complete the demographic data: age, gender, duration of disease, and diagnosis. Individual bowel function was assessed using the NBDS at admission. The patients and caregivers were trained by nurses in the bowel management program, which consisted of diet modification, abdominal massage, digital stimulation, and stool evacuation, including medication and physical activity. The outcome of the bowel management program was assessed by NBDS at discharge. The chi-square test was used to detect the difference in severity of neurogenic bowel between admission and discharge. Results: Sixteen spinal cord injured patients were enrolled in the study (age 45 ± 17 years; 69% were male). Half of them (50%) had tetraplegia. On admission, 12.5%, 12.5%, 43.75% and 31.25% were categorized as very minor (NBDS 0-6), minor (NBDS 7-9), moderate (NBDS 10-13) and severe (NBDS 14+), respectively. The severity of neurogenic bowel decreased significantly at discharge (56.25%, 18.75%, 18.75% and 6.25% for the very minor, minor, moderate and severe groups, respectively; p < 0.001) compared with the NBDS at admission. Conclusions: Implementation of an effective bowel program decreases the severity of neurogenic bowel in patients with spinal cord injury.
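As a rough illustration of the chi-square comparison of admission and discharge severity, the reported percentages can be converted back to counts for the 16 patients (an assumed reconstruction) and a Pearson chi-square statistic computed directly:

```python
import numpy as np

# Hypothetical 2 x 4 contingency table reconstructed from the reported
# percentages of n = 16: admission 12.5/12.5/43.75/31.25 % and discharge
# 56.25/18.75/18.75/6.25 % for very minor / minor / moderate / severe.
table = np.array([[2, 2, 7, 5],    # admission
                  [9, 3, 3, 1]])   # discharge

# Pearson chi-square statistic for an r x c contingency table
row = table.sum(axis=1, keepdims=True)
col = table.sum(axis=0, keepdims=True)
expected = row * col / table.sum()
chi2 = ((table - expected) ** 2 / expected).sum()
dof = (table.shape[0] - 1) * (table.shape[1] - 1)
print(round(float(chi2), 2), dof)
```

With real repeated-measures data, a paired test of marginal homogeneity would be more appropriate than treating admission and discharge as independent samples; the snippet only shows the mechanics of the statistic.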

Keywords: neurogenic bowel, NBDS, spinal cord injury, bowel program

Procedia PDF Downloads 232
7815 Leveraging Information for Building Supply Chain Competitiveness

Authors: Deepika Joshi

Abstract:

Operations in the automotive industry rely greatly on information shared between supply chain (SC) partners, which enables efficient and effective management of SC activity. The automotive sector in India is growing at 14.2 percent per annum and has huge economic importance. We find that no study has been carried out on the role of information sharing in the SC management of Indian automotive manufacturers. Considering this research gap, the present study establishes the significance of information sharing in Indian auto-component supply chain activity. An empirical study was conducted among large-scale auto-component manufacturers in India. Twenty-four supply chain performance indicators (SCPIs) were collected from the existing literature. These elements belong to eight diverse but internally related areas of SC management, viz., demand management, cost, technology, delivery, quality, flexibility, buyer-supplier relationship, and operational factors. A pair-wise comparison and an open-ended questionnaire were designed using these twenty-four SCPIs. The questionnaire was then administered to managerial-level employees of twenty-five auto-component manufacturing firms. The Analytic Network Process (ANP) technique was used to analyze the responses to the pair-wise questionnaire. Finally, twenty-five priority indexes were developed, one for each respondent, and averaged to generate an industry-specific priority index. The open-ended questions captured strategies related to information sharing between buyers and suppliers and their influence on supply chain performance. Results show that the impact of information sharing on certain performance indicators is relatively greater than on others; for example, flexibility, delivery, demand and cost related elements are strongly affected by information sharing. Technology is relatively less influenced by information sharing, but it immensely influences the quality of the information shared. Responses obtained from managers reveal that timely and accurate information sharing lowers cost, increases flexibility and improves on-time delivery of auto parts, thereby enhancing the competitiveness of the Indian automotive industry. Any flaw in the dissemination of information can disturb the cycle time of both parties and thus increase the opportunity cost. Owing to suppliers' involvement in decisions related to the design of auto parts, quality conformance is found to improve, leading to a reduction in rejection rate. Similarly, a mutual commitment to share the right information at the right time across all levels of the SC enhances trust. SC partners share information to perform comprehensive quality planning and ingrain total quality management. This study contributes to the operations management literature, which lacks empirical examination of this subject. It views information sharing as a building block that firms can promote and evolve to leverage the operational capability of all SC members. It will provide insights for Indian managers and researchers, as every market is unique and suppliers and buyers are driven by local laws, industry status and future vision. While the major emphasis in this paper is on SC operations between domestic partners, placing more focus on international SCs could yield distinctive results.
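The pair-wise comparison step can be illustrated with the principal-eigenvector priority calculation that underlies ANP (ANP proper embeds such matrices in a supermatrix of interdependencies). The judgment matrix below is hypothetical, for three indicators only:

```python
import numpy as np

# Minimal sketch: deriving a priority vector from one reciprocal
# pairwise-comparison matrix via its principal eigenvector (the AHP
# building block that ANP extends). The judgments are hypothetical.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])   # reciprocal judgments for 3 indicators

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                      # normalized priority vector
print(np.round(w, 3))
```

Averaging such priority vectors across the twenty-five respondents would give an industry-level priority index, as described above.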

Keywords: Indian auto component industry, information sharing, operations management, supply chain performance indicators

Procedia PDF Downloads 542
7814 Response of Grower Turkeys to Diets Containing Moringa oleifera Leaf Meal in a Tropical Environment

Authors: Augustine O. Ani, Ifeyinwa E. Ezemagu, Eunice A. Akuru

Abstract:

A seven-week study was conducted to evaluate the response of grower turkeys to varying dietary levels of Moringa oleifera leaf meal (MOLM) in a humid tropical environment. A total of 90 twelve-week-old male and female grower turkeys were randomly divided into five groups of 18 birds each in a completely randomized design (CRD) and assigned to five isocaloric (2.57-2.60 Mcal/kg ME) and isonitrogenous (19.95% crude protein) diets containing five levels (0, 15, 20, 25 and 30%) of MOLM, respectively. Each treatment was replicated three times with 6 birds per replicate, housed in a deep litter pen of fresh wood shavings measuring 1.50 m x 1.50 m. Feed and water were provided to the birds ad libitum. Parameters measured were: final live weight (FLW), daily weight gain (DWG), daily feed intake (DFI), feed conversion ratio (FCR), protein efficiency ratio (PER), packed cell volume (PCV), haemoglobin concentration (Hb), red blood cell (RBC) count, white blood cell (WBC) count, mean cell volume (MCV), mean cell haemoglobin (MCH), mean cell haemoglobin concentration (MCHC), feed cost per kg weight gain, and apparent nutrient retention. Results showed that grower turkeys fed the 20% MOLM diet had significantly (p < 0.05) higher FLW and DWG values (4410.30 g and 34.49 g, respectively) and higher DM and NFE retention values (67.28% and 58.12%, respectively) than turkeys fed the other MOLM diets. Feed cost per kg gain decreased significantly (p < 0.05) with increasing levels of MOLM in the diets. The PCV, Hb, WBC, MCV, MCH and MCHC values of grower turkeys fed the 20% MOLM diet were significantly (p < 0.05) higher than those of grower turkeys fed the other diets. It was concluded that a diet containing 20% MOLM is adequate for the normal growth of grower turkeys in the tropics.

Keywords: diets, grower turkeys, Moringa oleifera leaf meal, response, tropical environment

Procedia PDF Downloads 135
7813 A Markov Model for the Elderly Disability Transition and Related Factors in China

Authors: Huimin Liu, Li Xiang, Yue Liu, Jing Wang

Abstract:

Background: As a typical case among developing countries entering the global aging era, China faces a growing number of older people who cannot maintain normal life due to functional disability. While the government takes efforts to build a long-term care system and carry out related policies, there is still a lack of strong evidence for evaluating the profile of disability states in the elderly population and their transition rates. It has been shown that disability is a dynamic condition rather than an irreversible one, so it is possible to intervene in a timely manner on those who might be at risk of severe disability. Objective: The aim of this study was to depict the disability transition status of older people in China, and then identify individual characteristics that change the state of disability, to provide a theoretical basis for disability prevention and early intervention among elderly people. Methods: Data for this study came from the 2011 baseline survey and the 2013 follow-up survey of the China Health and Retirement Longitudinal Study (CHARLS). Normal ADL function, 1-2 ADLs disability, 3 or more ADLs disability, and death were defined as states 1 to 4. A multi-state Markov model was applied, and a four-state homogeneous model with discrete states and discrete times from the two-visit follow-up data was constructed to explore factors for the various progressive stages. We modeled the effect of explanatory variables, such as gender, on the transition rates using a proportional intensities model with covariates. Result: In the total sample, the state 2 constituent ratio is about 17.0%, while the state 3 proportion is below that, accounting for 8.5%. Moreover, the difference in ADL disability statistics between the two years is not obvious. About half of those in state 2 in 2011 had improved to normal by 2013, even though they had grown older. However, the proportion of state 3 transitioning to death increased markedly, close to the proportion returning to state 2 or normal function. From the estimated intensities, we see that older people are eleven times as likely to develop 1-2 ADLs disability as to die. After disability onset (state 2), progression to state 3 is 30% more likely than recovery. Once in state 3, a mean of 0.76 years is spent before death or recovery. In this model, a typical person in state 2 has a probability of 0.5 of being disability-free one year from now, while a person with moderate or greater disability has a probability of 0.14 of being dead. Conclusion: On long-term care cost considerations, preventive programs to delay disability progression in the elderly could be adopted based on the current disability state and the main factors at each stage. In general, such programs should first focus on elderly individuals who are moderately or more severely disabled.
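The quantities reported above (transition intensities, embedded jump probabilities, and the 0.76-year mean sojourn in state 3) can be sketched from a transition intensity matrix. The matrix below is illustrative, chosen only to echo the abstract's ratios; it is not fitted to CHARLS:

```python
import numpy as np

# Hypothetical intensity matrix Q for the four states
# (1 normal ADL, 2 mild 1-2 ADL disability, 3 moderate+ disability, 4 death).
# Rows sum to zero and death is absorbing. Entries are illustrative only.
Q = np.array([[-0.60, 0.55, 0.00, 0.05],   # disabling 11x as likely as dying
              [ 0.40, -1.16, 0.52, 0.24],  # progression 30% more likely than recovery
              [ 0.00, 0.50, -1.32, 0.82],
              [ 0.00, 0.00, 0.00, 0.00]])

sojourn = -1.0 / np.diag(Q)[:3]        # mean years spent per visit to each state
jump = Q[2, :].copy()
jump[2] = 0.0
jump /= jump.sum()                     # where state 3 goes when it leaves
print(np.round(sojourn, 2), np.round(jump, 2))
```

With these numbers the mean sojourn in state 3 comes out to about 0.76 years, matching the figure quoted in the abstract; fitting the real intensities would require the two-wave CHARLS data and a package such as msm.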

Keywords: Markov model, elderly people, disability, transition intensity

Procedia PDF Downloads 281
7812 Multiple-Material Flow Control in Construction Supply Chain with External Storage Site

Authors: Fatmah Almathkour

Abstract:

Managing and controlling the construction supply chain (CSC) are very important components of effective construction project execution. The goals of managing the CSC are to reduce uncertainty and optimize the performance of a construction project by improving efficiency and reducing project costs. The heart of much SC activity is addressing risk, and the CSC is no different. The delivery and consumption of construction materials are highly variable due to the complexity of construction operations, rapidly changing demand for certain components, lead time variability from suppliers, transportation time variability, and disruptions at the job site. Current practice in managing and controlling the CSC involves focusing on one project at a time, with a push-based material ordering system driven by the initial construction schedule, and then holding a tremendous amount of inventory. A two-stage methodology is proposed that coordinates feed-forward control of advance order placement with a supplier with feedback local control in the form of the ability to transship materials between projects, to improve efficiency and reduce costs. It focuses on the single-supplier integrated production and transshipment problem with multiple products. The methodology is used as a design tool for the CSC because it includes an external storage site not associated with any one of the projects. The idea is to add this feature to a highly constrained environment to explore its effectiveness in buffering the impact of variability and maintaining the project schedule at low cost. The methodology uses deterministic optimization models with objectives that minimize the total cost of the CSC. To illustrate how this methodology can be used in practice and the types of information that can be gleaned, it is tested on a number of cases based on a real example of multiple construction projects in Kuwait.
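The deterministic cost-minimization idea can be sketched with a toy instance: one supplier, two projects, an external storage site, and a project-to-project transshipment link. All demands and unit costs are hypothetical, and the tiny model is solved by brute-force enumeration rather than a formal optimization formulation:

```python
from itertools import product

# Toy flow model: supplier S ships to projects P1, P2 and storage ST;
# storage forwards to either project; P1 can transship to P2.
# Demands and unit costs are invented for illustration.
demand = {"P1": 3, "P2": 2}
cost = {("S", "P1"): 4, ("S", "P2"): 7, ("S", "ST"): 2,
        ("ST", "P1"): 1, ("ST", "P2"): 2, ("P1", "P2"): 1}

best = None
for s1, s2, st, t1, t2, x12 in product(range(6), repeat=6):
    if t1 + t2 > st:                       # storage cannot ship more than it received
        continue
    if s1 + t1 - x12 != demand["P1"]:      # P1 keeps what it gets minus transshipment
        continue
    if s2 + t2 + x12 != demand["P2"]:
        continue
    c = (cost[("S", "P1")] * s1 + cost[("S", "P2")] * s2 + cost[("S", "ST")] * st
         + cost[("ST", "P1")] * t1 + cost[("ST", "P2")] * t2 + cost[("P1", "P2")] * x12)
    if best is None or c < best[0]:
        best = (c, (s1, s2, st, t1, t2, x12))

print(best)   # minimum total cost and one optimal flow plan
```

At realistic scale this would be formulated as a linear or mixed-integer program and handed to a solver; enumeration is only viable for a toy like this one.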

Keywords: construction supply chain, inventory control, supply chain, transshipment

Procedia PDF Downloads 115
7811 Saving Energy through Scalable Architecture

Authors: John Lamb, Robert Epstein, Vasundhara L. Bhupathi, Sanjeev Kumar Marimekala

Abstract:

In this paper, we focus on the importance of scalable architecture for data centers and buildings in general to help an enterprise achieve environmental sustainability. A scalable architecture helps in many ways: it adapts to business and user requirements and promotes high-availability and disaster recovery solutions that are cost-effective and low-maintenance. Scalable architecture also plays a vital role in the three core areas of sustainability: economic, environmental, and social, also known as the three pillars of a sustainability model. If the architecture is scalable, it has many advantages. For example, scalable architecture helps businesses and industries adapt to changing technology, drive innovation, promote platform independence, and build resilience against natural disasters. Most importantly, a scalable architecture helps industries bring in cost-effective measures for energy consumption, reduce wastage, increase productivity, and enable a robust environment. It also helps reduce carbon emissions through advanced monitoring and metering capabilities. Scalable architectures help reduce waste by optimizing designs to use materials efficiently, minimize resource use, and decrease carbon footprints by using environmentally friendly, low-impact materials. In this paper, we also emphasize the importance of a cultural shift towards the reuse and recycling of natural resources for a balanced ecosystem and a circular economy. Also, since all of us are involved in the use of computers, much of the scalable architecture we have studied relates to data centers.

Keywords: scalable architectures, sustainability, application design, disruptive technology, machine learning and natural language processing, AI, social media platform, cloud computing, advanced networking and storage devices, advanced monitoring and metering infrastructure, climate change

Procedia PDF Downloads 77
7810 Reliability Analysis of Glass Epoxy Composite Plate under Low Velocity Impact

Authors: Shivdayal Patel, Suhail Ahmad

Abstract:

Safety assurance and failure prediction of a composite component of an offshore structure under low velocity impact are essential for the associated risk assessment. It is important to incorporate the uncertainties associated with material properties and impact loading. The likelihood of this hazard causing a chain of failure events plays an important role in risk assessment. The material properties of composites mostly exhibit scatter due to their inhomogeneity and anisotropic characteristics, the brittleness of the matrix and fiber, and manufacturing defects. In fact, the probability of occurrence of such a scenario is driven by the large uncertainties arising in the system. Probabilistic finite element analysis of composite plates under low-velocity impact is carried out considering uncertainties in material properties and initial impact velocity. Impact-induced damage of a composite plate is a probabilistic phenomenon due to the wide range of uncertainties in material and loading behavior. A typical failure crack initiates and propagates into the interface, causing delamination between dissimilar plies. Since individual cracks in a ply are difficult to track, a progressive damage model is implemented in the FE code through a user-defined material subroutine (VUMAT). The limit state function g(x) is established from the lamina stresses, with g(x) > 0 corresponding to a safe state. The Gaussian process response surface method is adopted to determine the probability of failure. A comparative study is also carried out for different combinations of impactor masses and velocities. A sensitivity-based probabilistic design optimization procedure is investigated to achieve greater strength and lighter weight of composite structures. The chain of failure events due to different failure modes is considered to estimate the consequences of a failure scenario. The frequencies of occurrence of specific impact hazards yield the expected risk in terms of economic loss.
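The limit-state idea, failure when g(x) <= 0, can be sketched with a crude Monte Carlo estimate. The normal distributions below are hypothetical stand-ins, not the paper's laminate model or its Gaussian-process response surface:

```python
import numpy as np

# Crude Monte Carlo sketch of a probability-of-failure estimate for a
# limit state g(x) = R - S (resistance minus impact-induced stress),
# with failure when g(x) <= 0. All distribution parameters are assumed.
rng = np.random.default_rng(0)
n = 200_000
R = rng.normal(450.0, 45.0, n)   # laminate strength, MPa (assumed)
S = rng.normal(300.0, 60.0, n)   # impact-induced stress, MPa (assumed)
pf = np.mean(R - S <= 0.0)       # fraction of samples violating g(x) > 0
print(round(float(pf), 4))
```

For rare failure events, plain Monte Carlo becomes expensive, which is one motivation for surrogate methods such as the Gaussian process response surface used in the study.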

Keywords: composites, damage propagation, low velocity impact, probability of failure, uncertainty modeling

Procedia PDF Downloads 267
7809 Optimum Dimensions of Hydraulic Structures Foundation and Protections Using Coupled Genetic Algorithm with Artificial Neural Network Model

Authors: Dheyaa W. Abbood, Rafa H. AL-Suhaili, May S. Saleh

Abstract:

A model using artificial neural networks and the genetic algorithm technique is developed for obtaining the optimum dimensions of the foundation length and protections of small hydraulic structures. The procedure involves optimizing an objective function comprising a weighted summation of the state variables. The decision variables considered in the optimization are the upstream and downstream cutoff lengths and their angles of inclination, the foundation length, and the length of the downstream soil protection. These were obtained for a given maximum difference in head, depth of impervious layer and degree of anisotropy. The optimization was carried out subject to constraints that ensure a structure safe against the uplift pressure force and a sufficient protection length at the downstream side of the structure to overcome an excessive exit gradient. The Geo-Studio software was used to analyze 1200 different cases. For each case, the length of protection and the volume of structure required to satisfy the safety factors mentioned previously were estimated. An ANN model was developed and verified using these cases' input-output sets as its database. A MATLAB code was written to perform genetic algorithm optimization coupled with this ANN model using a formulated optimization model. A sensitivity analysis was done for selecting the crossover probability, the mutation probability and level, the population size, the position of the crossover, and the weight distribution for all the terms of the objective function. Results indicate that the factor that most affects the optimum solution is the population size required. The minimum value of this parameter that gives a stable global optimum solution is 30,000, while the other variables have little effect on the optimum solution.
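The coupling of a genetic algorithm to a fast surrogate can be sketched as follows. An analytic bowl stands in for the trained ANN, and the population size, rates, and bounds are illustrative (far smaller than the 30,000 population reported above):

```python
import random

# Toy sketch of GA-over-surrogate: the GA minimizes a stand-in objective
# that, in the paper, would be the trained ANN's prediction of protection
# length / structure volume. All hyperparameters here are illustrative.
random.seed(1)

def surrogate(x):                      # placeholder for the ANN model
    return (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2

def ga(pop_size=60, gens=80, pc=0.8, pm=0.1, lo=-5.0, hi=5.0):
    pop = [[random.uniform(lo, hi) for _ in range(2)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=surrogate)
        nxt = pop[:2]                  # elitism: keep the two best
        while len(nxt) < pop_size:
            a, b = random.sample(pop[:20], 2)   # parents from the fitter half
            cut = random.random() if random.random() < pc else 0.0
            child = [cut * ai + (1 - cut) * bi for ai, bi in zip(a, b)]
            if random.random() < pm:
                child[random.randrange(2)] += random.gauss(0.0, 0.3)
            nxt.append([min(hi, max(lo, c)) for c in child])
        pop = nxt
    return min(pop, key=surrogate)

best = ga()
print([round(v, 2) for v in best])
```

Because evaluating a trained ANN is cheap, the GA can afford many thousands of objective evaluations per run, which is what makes the surrogate coupling practical.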

Keywords: inclined cutoff, optimization, genetic algorithm, artificial neural networks, geo-studio, uplift pressure, exit gradient, factor of safety

Procedia PDF Downloads 313
7808 Modeling Socioeconomic and Political Dynamics of Terrorism in Pakistan

Authors: Syed Toqueer, Omer Younus

Abstract:

Terrorism, today, has emerged as a global menace, with Pakistan being among the most adversely affected states. Therefore, the motive behind this study is to empirically establish the linkage of terrorism with socio-economic factors (uneven income distribution, poverty and unemployment) and political factors, so that a policy recommendation can be put forth to better approach this issue in Pakistan. For this purpose, the study employs two competing models, namely a distributed lag model and OLS, so that the findings can be consolidated comprehensively over the reference period 1984-2012. The findings of both models indicate that the uneven income distribution of Pakistan is a contributing factor towards terrorism when measured through GDP per capita. This supports the hypothesis that the immiserizing modernization theory applies to Pakistan, where the underprivileged are marginalized. Results also suggest that other socio-economic variables (poverty, unemployment and consumer confidence) can reduce the severity of terrorism once these conditions are catered to and improved. The rationale of opportunity cost is at the base of this argument: poor employment conditions and poverty reduce the opportunity cost for individuals of being recruited by terrorist organizations, as economic returns are considerably low, thus increasing the supply of volunteers and subsequently the intensity of terrorism. The argument that political freedom lowers terrorism holds true: the more people are politically repressed, the more alternative and illegal means they will find to make their voices heard. Also, the argument that a politically transitioning economy faces more terrorism is found applicable to Pakistan. Finally, the study contributes to an ongoing debate on which of the two sets of factors is more significant in relation to terrorism by suggesting that socio-economic factors are the primary causes of terrorism in Pakistan.
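A finite distributed lag regression of the kind employed here can be sketched on synthetic data with known lag weights. The series and coefficients below are invented, not the Pakistan data:

```python
import numpy as np

# Sketch of a finite distributed lag model: y_t regressed on x_t, x_{t-1},
# x_{t-2} by OLS. The series are synthetic with known lag weights
# (1.0, 0.5, 0.25) plus noise, so the estimate can be checked.
rng = np.random.default_rng(3)
x = rng.normal(size=300)
y = 1.0 * x[2:] + 0.5 * x[1:-1] + 0.25 * x[:-2] + 0.1 * rng.normal(size=298)

# Design matrix: intercept, current value, and two lags
X = np.column_stack([np.ones(298), x[2:], x[1:-1], x[:-2]])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(beta, 2))   # recovers approximately [0, 1, 0.5, 0.25]
```

The distributed lag specification lets the effect of a socio-economic shock play out over several periods rather than forcing it all into the current year, which is the advantage over plain OLS on contemporaneous values.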

Keywords: terrorism, socioeconomic conditions, political freedom, distributed lag model, ordinary least square

Procedia PDF Downloads 314
7807 Wall Shear Stress Under an Impinging Planar Jet Using the Razor Blade Technique

Authors: A. Ritcey, J. R. Mcdermid, S. Ziada

Abstract:

Wall shear stress was experimentally measured under a planar impinging air jet as a function of jet Reynolds number (Rejet = 5000, 8000, 11000) and different normalized impingement distances (H/D = 4, 6, 8, 10, 12) using the razor blade technique to complete a parametric study. The wall pressure, wall pressure gradient, and wall shear stress information were obtained.

Keywords: experimental fluid mechanics, impinging planar jets, skin friction factor, wall shear stress

Procedia PDF Downloads 314
7806 An Investigation of the Effects of Gripping Systems in Geosynthetic Shear Testing

Authors: Charles Sikwanda

Abstract:

The use of geosynthetic materials in geotechnical engineering projects has increased rapidly over the past several years. These materials have resulted in improved performance and reduced cost of geotechnical structures compared to conventional materials. However, working with geosynthetics requires knowledge of interface parameters for design. These parameters are typically determined with a large direct shear device in accordance with the ASTM D5321 and ASTM D6243 standards. Although these laboratory tests are standardized, the quality of the results can be largely affected by several factors, including the shearing rate, the applied normal stress, the gripping mechanism, and the type of geosynthetic specimen tested. Among these factors, poor surface gripping of a specimen is the major source of discrepancy. If the specimen is inadequately secured to the shearing blocks, it experiences progressive failure and exhibits a shear strength that deviates from the true field performance of the tested material. This leads to inaccurate, unsafe, and cost-ineffective designs. Currently, the ASTM D5321 and ASTM D6243 standards do not prescribe a standardized gripping system for geosynthetic shear strength testing. Over the years, researchers have devised different gripping systems, such as glue, metal textured surfaces, sandblasting, and sandpaper. However, these gripping systems are often not adequate to secure the tested specimens sufficiently to the shearing device. This has led to large variability in test results and difficulties in interpreting them. Therefore, this study aimed to determine the effects of gripping systems in geosynthetic interface shear strength testing using a 300 x 300 mm direct shear box. The results of the research will contribute to easier data interpretation and increased result accuracy and reproducibility.

Keywords: geosynthetics, shear strength parameters, gripping systems, gripping

Procedia PDF Downloads 197
7805 Hydrogen Production at the Forecourt from Off-Peak Electricity and Its Role in Balancing the Grid

Authors: Abdulla Rahil, Rupert Gammon, Neil Brown

Abstract:

The rapid growth of renewable energy sources and their integration into the grid have been motivated by the depletion of fossil fuels and environmental issues. Unfortunately, the grid is unable to cope with the predicted growth of renewable energy, which would lead to its instability. To solve this problem, energy storage devices could be used. Electrolytic hydrogen production is considered a promising option since it is a clean energy source (zero emissions). Flexible operation of an electrolyser (producing hydrogen during off-peak electricity periods and stopping at other times) could bring many benefits, such as reducing the cost of hydrogen and helping to balance the electric system. This paper investigates the price of hydrogen under flexible operation compared with continuous operation, while serving the customer (a hydrogen filling station) without interruption. An optimization algorithm is applied to investigate the hydrogen station in both cases (flexible and continuous operation). Three different scenarios are tested to see whether the off-peak electricity price could enhance the reduction of the hydrogen cost: a standard one-tier tariff throughout the day (assumed 12 p/kWh) while still satisfying the demand for hydrogen; using off-peak electricity at a lower price (assumed 5 p/kWh) and shutting down the electrolyser at other times; and using lower-price electricity at off-peak times and higher-price electricity at other times. This study looks at Derna city, which is located on the coast of the Mediterranean Sea (32° 46′ 0 N, 22° 38′ 0 E) and has high potential for wind resources. Hourly wind speed data collected over 24½ years (1990 to 2014) were used, in addition to hourly radiation data and hourly electricity demand data collected over a one-year period, together with the petrol station data.
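A back-of-envelope version of the tariff comparison, using the abstract's 12 p/kWh and 5 p/kWh prices with assumed values for daily hydrogen demand, specific consumption, and the share of production that can be shifted off-peak:

```python
# Scenario comparison with the abstract's prices (12 p/kWh flat,
# 5 p/kWh off-peak); demand, specific consumption, and the shiftable
# share are hypothetical round numbers, not the paper's inputs.
kg_per_day = 100.0          # hydrogen demand (assumed)
kwh_per_kg = 55.0           # electrolyser specific consumption (assumed)
off_peak_share = 0.6        # fraction of production shiftable off-peak (assumed)

energy = kg_per_day * kwh_per_kg                    # kWh/day
flat = energy * 0.12                                # scenario 1: flat tariff
off_peak_only = energy * 0.05                       # scenario 2: off-peak only
mixed = energy * (off_peak_share * 0.05 + (1 - off_peak_share) * 0.12)
print(round(flat), round(off_peak_only), round(mixed))   # pence per day
```

The missing piece in this arithmetic is exactly what the paper's optimization supplies: whether the electrolyser and storage can be sized so that off-peak-only operation still meets the filling station's demand without interruption.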

Keywords: hydrogen filling station, renewable energy, off-peak electricity, electrolytic hydrogen

Procedia PDF Downloads 221
7804 Wicking Bed Cultivation System as a Strategic Proposal for the Cultivation of Milpa and Mexican Medicinal Plants in Urban Spaces

Authors: David Lynch Steinicke, Citlali Aguilera Lira, Andrea León García

Abstract:

The proposal posed in this work comes from a research-action approach. In Mexico, a dialogue of knowledge may function as a link between traditional, local, pragmatic knowledge and technological, scientific knowledge. The advantage of generating this nexus lies in its positive impact on the environment, society and the economy. This work attempts to combine, on the one hand, traditional Mexican knowledge such as the use of medicinal herbs and the milpa agroecosystem, and on the other hand, a newly created agricultural ecotechnology whose main functions are to take advantage of urban space and to save water. This ecotechnology is the wicking bed. In a globalized world, it is relevant to have a proposal whose most important aspect is to revalorize the culture through the acquisition of traditional knowledge while adapting it to new social and urbanized structures without threatening the environment. The methodology used in this work comes from a research-action approach combined with a practical dimension in which an experimental model made of three wicking beds was implemented. In this model, medicinal herbs and milpa components were cultivated. Water efficiency and social acceptance were compared with a traditional ground crop, and the whole practice was carried out in an urban social context. The implemented agricultural ecotechnology has had great social acceptance, as its irrigation involves minimal effort and it is economically feasible for low-income people. The wicking bed system raised in this project can be implemented in schools, urban and peri-urban environments, home gardens and public areas. The proposal managed to carry out an innovative and sustainable knowledge-based traditional Mexican agricultural technology, allowing the milpa agroecosystem to regain ground in urban environments and strengthening food security in favour of nutritional and protein benefits for the Mexican diet.

Keywords: milpa, traditional medicine, urban agriculture, wicking bed

Procedia PDF Downloads 375
7803 A Tutorial on Model Predictive Control for Spacecraft Maneuvering Problem with Theory, Experimentation and Applications

Authors: O. B. Iskender, K. V. Ling, V. Dubanchet, L. Simonini

Abstract:

This paper discusses the recent advances and future prospects of spacecraft position and attitude control using Model Predictive Control (MPC). First, the challenges of space missions are summarized, in particular the errors, uncertainties, and constraints imposed by the mission, the spacecraft and the onboard processing capabilities. Space mission errors and uncertainties are summarized in categories: initial condition errors, unmodeled disturbances, and sensor and actuator errors. The constraints are classified into two categories: physical and geometric constraints. Last, real-time implementation capability is discussed with regard to the required computation time and the impact of sensor and actuator errors, based on Hardware-In-The-Loop (HIL) experiments. The rationales behind the scenarios are also presented in the scope of space applications such as formation flying, attitude control, rendezvous and docking, rover steering, and precision landing. The objectives of these missions are explained, and the generic constrained MPC problem formulations are summarized. Three key design elements of MPC are discussed: the prediction model, the constraint formulation and the objective cost function. The prediction models can be linear time-invariant or time-varying depending on the geometry of the orbit, whether circular or elliptic. The constraints can be given as linear inequalities on inputs or outputs, which can be written in the same form. Moreover, recent convexification techniques for non-convex geometric constraints (e.g., plume impingement, Field-of-View (FOV)) are presented in detail. Next, different objectives are provided in a mathematical framework and explained accordingly. Thirdly, because MPC implementation relies on solving constrained optimization problems in real time, computational aspects are also examined. In particular, high-speed implementation capabilities and HIL challenges are presented for representative space avionics. This covers an analysis of future space processors as well as the requirements that sensors and actuators place on the HIL experiment outputs. The HIL tests are investigated for kinematic and dynamic tests, where robotic arms and floating robots are used, respectively. Eventually, the proposed algorithms and experimental setups are introduced and compared with the authors' previous work and future plans. The paper concludes with the conjecture that the MPC paradigm is a promising framework at the crossroads of space applications that could be further advanced based on the challenges mentioned throughout the paper and the remaining gaps.
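The three design elements (prediction model, constraints, cost) come together in the receding-horizon loop, which can be sketched for an unconstrained 1-D double integrator. Real rendezvous MPC adds the input and geometric constraints discussed above, and all weights and horizons here are illustrative:

```python
import numpy as np

# Minimal unconstrained MPC sketch: 1-D relative position/velocity
# (double integrator), condensed finite-horizon least squares solved at
# each step, with only the first input applied (receding horizon).
dt = 0.5
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
N, qw, rw = 20, 10.0, 1.0                 # horizon, state weight, input weight

# Prediction matrices: x_k = A^k x0 + sum_j A^(k-1-j) B u_j
Phi = np.vstack([np.linalg.matrix_power(A, k) for k in range(1, N + 1)])
G = np.zeros((2 * N, N))
for k in range(1, N + 1):
    for j in range(k):
        G[2 * (k - 1):2 * k, j] = (np.linalg.matrix_power(A, k - 1 - j) @ B).ravel()

W = np.kron(np.eye(N), np.diag([qw, qw]))     # penalize position and velocity
x = np.array([5.0, 0.0])                      # start 5 m away, at rest
for _ in range(40):                           # closed receding-horizon loop
    # minimize ||W^(1/2)(Phi x + G u)||^2 + rw ||u||^2 over the horizon
    H = G.T @ W @ G + rw * np.eye(N)
    u = np.linalg.solve(H, -G.T @ W @ Phi @ x)
    x = A @ x + B.ravel() * u[0]              # apply only the first input
print(np.round(x, 3))
```

Adding the input bounds or convexified plume/FOV constraints discussed above would turn each step into a quadratic program rather than a linear solve, which is where the real-time computation concerns arise.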

Keywords: convex optimization, model predictive control, rendezvous and docking, spacecraft autonomy

Procedia PDF Downloads 102
7802 Seismic Reinforcement of Existing Japanese Wooden Houses Using Folded Exterior Thin Steel Plates

Authors: Jiro Takagi

Abstract:

Approximately 90 percent of the casualties in the near-fault-type Kobe earthquake in 1995 resulted from the collapse of wooden houses, although a limited number of collapses of this building type were reported in the more recent offshore-type Tohoku earthquake in 2011 (excluding direct damage by the tsunami). The Kumamoto earthquake in 2016 also revealed the vulnerability of old wooden houses in Japan. There are approximately 24.5 million wooden houses in Japan, and roughly 40 percent of them are considered to have inadequate seismic-resisting capacity. Therefore, seismic strengthening of these wooden houses is an urgent task. However, it has not proceeded quickly, for various reasons including cost and inconvenience during the reinforcing work. Residents typically spend their money on improvements that more directly affect their daily housing environment (such as interior renovation, equipment renewal, and installation of thermal insulation) rather than on strengthening against extremely rare events such as large earthquakes. Considering this tendency, a new approach to seismic strengthening of wooden houses is needed. The seismic reinforcement method developed in this research uses folded galvanized thin steel plates as both shear walls and the new exterior architectural finish. The existing finish is not removed. Because galvanized steel plates are aesthetic and durable, they are commonly used on the roofs and walls of modern Japanese buildings. Residents can perceive a tangible improvement through the reinforcement, as the existing exterior walls are covered with steel plates. Moreover, this exterior reinforcement can be installed with outdoor work only, reducing inconvenience for residents, who are not required to move out temporarily during construction. The durability of the exterior is enhanced, and the reinforcing work can be done efficiently since perfect water protection is not required for the new finish. 
In this method, the entire exterior surface functions as shear walls, and thus the pull-out force induced by seismic lateral load is significantly reduced compared with a typical reinforcement scheme of adding braces to selected frames. Consequently, the reinforcing details of the anchors to the foundations are less difficult. In order to attach the exterior galvanized thin steel plates to the houses, new wooden beams are placed next to the existing beams. In this research, steel connections between the existing and new beams are developed, which leave a gap for the existing finish between the two beams. The thin steel plates are screwed to the new beams and the connecting vertical members. The seismic-resisting performance of the shear walls with thin steel plates is experimentally verified for both the frames and the connections. It is confirmed that the performance is sufficient for bracing typical wooden houses.

Keywords: experiment, seismic reinforcement, thin steel plates, wooden houses

Procedia PDF Downloads 219
7801 Phi Thickening Induction as a Response to Abiotic Stress in the Orchid Miltoniopsis

Authors: Nurul Aliaa Idris, David A. Collings

Abstract:

Phi thickenings are specialized secondary cell wall thickenings that are found in the root cortex of a wide range of plant species, including orchids. The role of phi thickenings in the root is still under debate, though research has linked environmental conditions, particularly abiotic stresses such as water stress, heavy metal stress and salinity, to their induction in the roots. It has also been suggested that phi thickenings may act as a barrier regulating solute uptake, act as a physical barrier against fungal hyphal penetration due to their resemblance to the Casparian strip, or play a mechanical role supporting cortical cells. We have investigated phi thickening function in epiphytic orchids of the genus Miltoniopsis through induction experiments with factors such as soil compaction and water stress. The permeability of the phi thickenings in Miltoniopsis was tested through uptake experiments using the fluorescent tracer dyes Calcofluor white, Lucifer yellow and propidium iodide, viewed with wide-field or confocal microscopy. To test whether phi thickenings may prevent fungal colonization of root cells, a fungal re-infection experiment was conducted by inoculating the isolated symbiotic fungus onto sterile in vitro Miltoniopsis explants. As the movement of fluorescent tracers through the apoplast was not blocked by phi thickenings, and as phi thickenings developed in the roots of sterile cultures in the absence of fungus and did not prevent fungal colonization of cortical cells, the phi thickenings in Miltoniopsis do not function as a barrier. Phi thickenings were absent in roots grown on agar and remained absent when plants were transplanted to moist soil. However, phi thickenings were induced when plants were transplanted to well-drained media, and by the application of water stress in all soils tested. It is likely that phi thickenings stabilize the root cortex during dehydration. 
Nevertheless, the varied induction responses present in different plant species suggest that the phi thickenings may play several adaptive roles, instead of just one, depending on species.

Keywords: abiotic stress, Miltoniopsis, orchid, phi thickening

Procedia PDF Downloads 138
7800 CdS Quantum Dots as Fluorescent Probes for Detection of Naphthalene

Authors: Zhengyu Yan, Yan Yu, Jianqiu Chen

Abstract:

A novel sensing system has been designed for naphthalene detection based on the quenched fluorescence signal of CdS quantum dots. The fluorescence intensity of the system decreased significantly in the presence of naphthalene in the water pollution model because of a static quenching mechanism. Herein, we demonstrate that this facile methodology offers convenient, low-cost analysis with recovery rates of 97.43%-103.2%, indicating promising application prospects.
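Static quenching of this kind is commonly summarized by the Stern-Volmer relation F0/F = 1 + K_SV[Q]; the sketch below recovers an analyte concentration from measured intensities. The intensities and the quenching constant K_SV are illustrative assumptions, not values from this work:

```python
def stern_volmer_conc(f0: float, f: float, k_sv: float) -> float:
    """Quencher concentration (mol/L) from F0/F = 1 + K_SV*[Q] (static quenching)."""
    return (f0 / f - 1.0) / k_sv

# Illustrative intensities and an assumed quenching constant K_SV = 5e4 L/mol.
c = stern_volmer_conc(f0=1000.0, f=800.0, k_sv=5.0e4)
print(f"{c * 1e6:.1f} umol/L")
```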

Keywords: CdS quantum dots, modification, detection, naphthalene

Procedia PDF Downloads 480
7799 Static Test Pad for Solid Rocket Motors

Authors: Svanik Garg

Abstract:

Static Test Pads (STPs) are stationary mechanisms that hold a solid rocket motor and measure the different parameters of its operation, including thrust and temperature, to better calibrate it for launch. This paper outlines a specific STP designed to test high-powered rocket motors with a thrust upwards of 4000 N and limited to 6500 N. The design is a portable mechanism, with cost an integral part of the design process, to make it accessible to small-scale rocket developers with limited resources. Using curved surfaces and an ergonomic design, the STP has a carefully engineered façade/case with a focus on stability and axial calibration of thrust. This paper describes the design, operation and working of the STP and its wide-scale uses given the growing market of aviation enthusiasts. Simulations on the CAD model in Fusion 360 provided promising results, establishing a safety factor of 2 and keeping stress within limits along with the load coefficient. A PCB was also designed as part of the test pad design process to help obtain results, with visual output and various virtual terminals to collect data on different parameters. The circuitry was simulated using Proteus, and a special virtual interface with auditory commands was also created for accessibility and wide-scale implementation. Along with this description of the design, the paper also emphasizes the design principles behind the STP, including its vertical orientation to maximize thrust accuracy and a stable base to prevent micromovements. Given the rise of students and professionals alike building high-powered rockets, the STP described in this paper is an appropriate option, offering limited cost, portability, accuracy, and versatility. There are two types of STPs, vertical and horizontal; the one discussed in this paper is vertical, to utilize the axial component of thrust.

Keywords: static test pad, rocket motor, thrust, load, circuit, avionics, drag

Procedia PDF Downloads 358
7798 Observation of Critical Sliding Velocity

Authors: Visar Baxhuku, Halil Demolli, Alishukri Shkodra

Abstract:

This paper presents the monitoring of vehicle movement, namely the speeds vehicles develop while traveling through a given curve. The basic geometric data of the curve were measured for the purpose of calculating the critical sliding velocity. During the research, the speeds developed by passenger vehicles were measured under real road-surface conditions: a dry road with average damage. After establishing these values, the results were analyzed with respect to the safety of movement through the curve.
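For a flat, unbanked curve, the critical sliding velocity follows from balancing friction against the centripetal demand, v_crit = sqrt(mu*g*R). This is the textbook relation; the friction coefficient and curve radius below are illustrative, not the paper's measured values:

```python
import math

def critical_sliding_velocity(mu: float, radius_m: float, g: float = 9.81) -> float:
    """Speed above which a vehicle slides outward on a flat curve: v = sqrt(mu*g*R)."""
    return math.sqrt(mu * g * radius_m)

# Illustrative values: dry asphalt with average damage (mu ~ 0.6), 50 m curve radius.
v = critical_sliding_velocity(0.6, 50.0)
print(f"{v:.2f} m/s = {v * 3.6:.1f} km/h")
```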

Keywords: critical sliding velocity, moving velocity, curve, passenger vehicles

Procedia PDF Downloads 400
7797 Electromagnetic Simulation Based on Drift and Diffusion Currents for Real-Time Systems

Authors: Alexander Norbach

Abstract:

This paper describes the use of an advanced simulation environment for electronic systems (microcontrollers, operational amplifiers, and FPGAs). The simulation can be applied to dynamic systems that also exhibit diffusion and ionisation behaviour. With an additionally required observer structure, the system runs a parallel real-time simulation based on a diffusion model, with a state-space representation for the remaining dynamics. The proposed model can be used for electrodynamic effects, including ionising effects and eddy current distributions. With the proposed method, it is possible to calculate the spatial distribution of the electromagnetic fields in real time; the spatial temperature distribution can also be obtained. With this system, uncertainties, unknown initial states and disturbances can be determined. This yields more precise estimates of the system states and, additionally, estimates of the ionising disturbances that occur due to radiation effects. The results have shown that a system can also be developed and adapted specifically for space systems, with real-time calculation of the radiation effects alone. Electronic systems can be damaged by charged-particle flux in space or other radiation environments. In order to react to these processes, the presence of ionising radiation and the accumulated dose must be calculated within a short time. All available sensors are used to observe the spatial distributions; from the measured values and the known locations of the sensors, the entire distribution can be reconstructed retroactively or more accurately. From this, the type of ionisation and its direct effect on the system can be determined, and preventive measures, up to and including shutdown, can be activated. 
The results show that faster, higher-quality simulations can be performed independently of the kind of system, including space systems and radiation environments. The paper additionally gives an overview of the diffusion effects and their mechanisms. For the modelling and derivation of equations, the extended current equation is used; the quantity K represents the proposed charge-density drift vector. The extended diffusion equation was derived; it shows a quantising character and obeys a law similar to the Klein-Gordon equation. These kinds of PDEs (partial differential equations) are analytically solvable given an initial distribution (Cauchy problem) and boundary conditions (Dirichlet boundary conditions). For a simpler structure, transfer functions for the B- and E-fields were calculated analytically. With known discretised responses g₁(k·Ts) and g₂(k·Ts), the electric current or voltage may be calculated using a convolution, where g₁ is the direct function and g₂ is a recursive function. The analytical results are accurate enough for calculating fields with diffusion effects. Within the scope of this work, a model for the electromagnetic diffusion effects of arbitrary current waveforms has been developed. The advantage of the proposed diffusion calculation is its real-time capability, which is not feasible with the FEM programs available today. It makes sense in the further course of research to apply these methods and investigate them thoroughly.
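The direct/recursive response structure mentioned above (g₁ as a direct FIR kernel, g₂ as a recursive part) can be sketched as follows; the response shapes and coefficients here are illustrative assumptions, not the paper's identified responses:

```python
import numpy as np

# Illustrative discretised responses: g1 acts as the direct (FIR) kernel and
# g2 as the recursive part of the convolution scheme.
Ts = 1e-3
k = np.arange(64)
g1 = np.exp(-k * Ts / 5e-3)   # assumed diffusion-like direct response
g2 = np.array([0.5])          # assumed recursive coefficient

u = np.zeros(64)
u[0] = 1.0                    # unit impulse excitation

# Direct part: y[n] = sum_m g1[m] * u[n - m]
y = np.convolve(u, g1)[:64]

# Recursive part: y[n] += g2[0] * y[n - 1]
for n in range(1, 64):
    y[n] += g2[0] * y[n - 1]

print(float(y[0]), float(y[1]))
```

Because both parts are fixed-cost per sample, this structure lends itself to the real-time evaluation the abstract emphasizes, in contrast to a full FEM solve.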

Keywords: advanced observer, electrodynamics, systems, diffusion, partial differential equations, solver

Procedia PDF Downloads 122
7796 Field Performance of Cement Treated Bases as a Reflective Crack Mitigation Technique for Flexible Pavements

Authors: Mohammad R. Bhuyan, Mohammad J. Khattak

Abstract:

Deterioration of flexible pavements due to crack reflection from the soil-cement base layer is a major concern around the globe. The service life of flexible pavements diminishes significantly because of these reflective cracks, and highway agencies have struggled for decades to prevent or mitigate them in order to extend pavement service lives. The root cause of reflective cracks is the shrinkage cracking that occurs in soil-cement bases during the cement hydration process. The primary factor causing the shrinkage is the cement content of the soil-cement mixture. With increasing cement content, the soil-cement base gains the strength and durability necessary to withstand traffic loads, but at the same time, higher cement content creates more shrinkage, resulting in more reflective cracks in pavements. Historically, various US states have used soil-cement bases for constructing flexible pavements. The state of Louisiana (USA) has used 8 to 10 percent cement content to manufacture soil-cement bases. Such traditional soil-cement bases yield a 2.0 MPa (300 psi) 7-day compressive strength and are termed the cement stabilized design (CSD). As these CSD bases generate significant reflective cracking, another soil-cement base design with 4 to 6 percent cement content, called the cement treated design (CTD), has been utilized, yielding a 1.0 MPa (150 psi) 7-day compressive strength. The reduced cement content in the CTD base is expected to minimize shrinkage cracking, thus increasing pavement service lives. Hence, this research study evaluates the long-term field performance of CTD bases with respect to CSD bases used in flexible pavements. The Pavement Management System of the state of Louisiana was utilized to select flexible pavement projects with CSD and CTD bases that had good historical records and time-series distress performance data. 
It should be noted that the state collects roughness and distress data for each 1/10th-mile section every two years. In total, 120 CSD and CTD projects were analyzed in this research, in which more than 145 miles (CTD) and 175 miles (CSD) of roadway data were accepted for performance evaluation and benefit-cost analyses. Here, the service-life extension and the area under the distress performance curve were considered as benefits. It was found that CTD bases increased pavement service lives by 1 to 5 years based on transverse cracking as compared to CSD bases, while service lives based on longitudinal and alligator cracking, rutting and roughness index remained the same. Hence, CTD bases provide some service-life extension (2.6 years, on average) for the controlling distress, transverse cracking, while being inexpensive due to their lower cement content. Consequently, CTD bases are 20% more cost-effective than the traditional CSD bases when both are compared by the net benefit-cost ratio obtained from all distress types.
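The net benefit-cost comparison works along these lines; the costs, annual benefit, and service lives below are hypothetical placeholders chosen only to mirror the reported 2.6-year average extension, not the study's data:

```python
def net_bcr(service_life_yr: float, annual_benefit: float, base_cost: float) -> float:
    """Net benefit-cost ratio: total benefit over the service life divided by cost."""
    return service_life_yr * annual_benefit / base_cost

# Hypothetical figures: CTD adds the reported 2.6-year average extension and
# costs slightly less than CSD due to its lower cement content.
bcr_csd = net_bcr(service_life_yr=15.0, annual_benefit=10_000.0, base_cost=100_000.0)
bcr_ctd = net_bcr(service_life_yr=17.6, annual_benefit=10_000.0, base_cost=97_500.0)
print(f"CTD/CSD cost-effectiveness ratio: {bcr_ctd / bcr_csd:.2f}")
```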

Keywords: cement treated base, cement stabilized base, reflective cracking, service life, flexible pavement

Procedia PDF Downloads 161
7795 An Evaluation on the Effectiveness of a 3D Printed Composite Compression Mold

Authors: Peng Hao Wang, Garam Kim, Ronald Sterkenburg

Abstract:

The applications of composite materials within the aviation industry have been increasing at a rapid pace. However, the growing applications of composite materials have also led to growing demand for tooling to support their manufacturing processes. Tooling and tooling maintenance represent a large portion of the composite manufacturing process and cost. Therefore, the industry's adaptability to new techniques for fabricating high-quality tools quickly and inexpensively will play a crucial role in composite materials' growing popularity in the aviation industry. One popular tool fabrication technique currently being developed involves additive manufacturing, such as 3D printing. Although additive manufacturing and 3D printing are not entirely new concepts, the technique has been gaining popularity due to its ability to fabricate components quickly, with low material waste and low cost. In this study, a team of Purdue University School of Aviation and Transportation Technology (SATT) faculty and students investigated the effectiveness of a 3D printed composite compression mold, fabricated by 3D scanning a steel valve cover of an aircraft reciprocating engine. The mold was then used to fabricate carbon fiber versions of the valve cover. The 3D printed composite compression mold was evaluated for its performance, durability, and dimensional stability, while the fabricated carbon fiber valve covers were evaluated for their accuracy and quality. The results and data gathered from this study will determine the effectiveness of the 3D printed composite compression mold in a mass production environment and provide valuable information for future understanding, improvement, and design considerations of 3D printed composite molds.

Keywords: additive manufacturing, carbon fiber, composite tooling, molds

Procedia PDF Downloads 191
7794 Promoting Early Learning of Children under Five Years in an Economically Disadvantaged Community in Sri Lanka through Health Promotion Approach

Authors: Najith Duminda Galmangoda Guruge, Nadeeka Rathnayake, Vinodani Wimalasena, Dinesha Wijesooriya

Abstract:

Investing in early learning can improve children's interest in education and make them ready for school, whereas children in economically disadvantaged communities may have reduced school readiness. A health promotion approach enables communities, including disadvantaged ones, to take control of their health. Mothers of children under the age of five in the 'Alapathwewa' community (n=40) were selected as the sample, with the aim of promoting children's early learning to improve their school readiness. Mothers in the 'Morakeewa' community (n=40) served as the control. Interventions ran for a period of two years, and the children of these mothers were followed up to school entry. Facilitators discussed with mothers in the experimental setting the importance of early learning and the possibility of providing quality learning environments for children at low cost. Mothers were enabled to make age-appropriate baby rooms that provide learning opportunities. Collective community playhouses and play areas were developed by mothers to provide opportunities for children to interact with and learn from each other. Mothers began discussing with each other and sharing experiences, and they monitored progress at regular intervals. Data regarding the school competencies of children were obtained from school teachers, who rated thirteen competencies on a scale of 'very good, good, moderate and weak'. All children in the experimental group were at the 'very good' level in two competencies, 'communicate friendly with others' and 'express ideas well'. Children in the experimental group showed significantly higher achievement in all thirteen competencies (p < .05) than children in the control group. Providing quality early learning environments for children, even in economically disadvantaged settings, makes them ready for school. Through a health promotion approach, early learning experiences for children can be provided at low cost.

Keywords: disadvantaged, early learning, economically, health promotion

Procedia PDF Downloads 247
7793 Realization and Characterizations of Conducting Ceramics Based on ZnO Doped by TiO₂, Al₂O₃ and MgO

Authors: Qianying Sun, Abdelhadi Kassiba, Guorong Li

Abstract:

ZnO with the wurtzite structure is a well-known semiconducting oxide (SCO), applied in thermoelectric devices, varistors, gas sensors, transparent electrodes, solar cells, liquid crystal displays, and piezoelectric and electro-optical devices. Intrinsically, ZnO is a weakly n-type SCO due to native defects (Znᵢ, Vₒ). However, substitutional doping with metallic elements such as Al and Ti gives rise to a high n-type conductivity ensured by donor centers. Under a CO+N₂ sintering atmosphere, the Schottky barriers of ZnO ceramics are suppressed by lowering the concentration of acceptors at grain boundaries, inducing a large increase in the Hall mobility and thereby increasing the conductivity. The presented work concerns ZnO-based ceramics fabricated with TiO₂ (0.50 mol%), Al₂O₃ (0.25 mol%) and MgO (1.00 mol%) doping and sintered in different atmospheres: air (A), N₂ (N), and CO+N₂ (C). We obtained uniform, dense ceramics with ZnO as the main phase and Zn₂TiO₄ spinel as a minor secondary phase. An important increase of the conductivity was shown for samples A, N and C sintered under the different atmospheres, with the highest conductivity (σ = 1.52×10⁵ S·m⁻¹) obtained under the reducing (CO) atmosphere. The role of the doping was investigated with the aim of identifying the local environments and valence states of the doping elements. Electron paramagnetic resonance (EPR) spectroscopy was used to determine the concentration of defects and the effects of charge carriers in the ZnO ceramics as a function of the sintering atmosphere. The relation between conductivity and defect concentration shows opposite behavior between these parameters, suggesting that defects act as traps for charge carriers. For Al ions, the nuclear magnetic resonance (NMR) technique was used to identify the local coordination of these ions. 
Beyond six- and four-coordinated Al, an additional NMR signature of the ZnO-based TCO requires an analysis taking into account the grain boundaries and the conductivity through Knight shift effects. From the thermal evolution of the conductivity as a function of the sintering atmosphere, we succeeded in defining the conditions to realize ZnO-based TCO ceramics with a large temperature coefficient of resistance (TCR), which is promising for the electrical safety of devices.

Keywords: ceramics, conductivity, defects, TCO, ZnO

Procedia PDF Downloads 182
7792 Reliability Levels of Reinforced Concrete Bridges Obtained by Mixing Approaches

Authors: Adrián D. García-Soto, Alejandro Hernández-Martínez, Jesús G. Valdés-Vázquez, Reyna A. Vizguerra-Alvarez

Abstract:

Reinforced concrete bridges designed by code are intended to achieve target reliability levels adequate for the geographical environment where the code is applicable. Several methods can be used to estimate such reliability levels. Many of them require the establishment of an explicit limit state function (LSF). When such an LSF is not available as a closed-form expression, simulation techniques are often employed; these are computationally intensive and time-consuming. Note that if the reliability of real bridges designed by code is of interest, numerical schemes, the finite element method (FEM) or computational mechanics may be required. In these cases, it can be quite difficult (or impossible) to establish a closed form of the LSF, and simulation techniques may be necessary to compute reliability levels. To overcome the need for a large number of simulations when no explicit LSF is available, the point estimate method (PEM) can be considered as an alternative. It has the advantage that only the probabilistic moments of the random variables are required. However, in the PEM, the resulting moments of the LSF must be fitted to a probability density function (PDF). In the present study, a very simple alternative is employed that allows the assessment of reliability levels when no explicit LSF is available and without the need for extensive simulations. The alternative includes the use of the PEM, and its applicability is shown by assessing reliability levels of reinforced concrete bridges in Mexico when a numerical scheme is required. Comparisons with results obtained using the Monte Carlo simulation (MCS) technique are included. To overcome the problem of fitting the probabilistic moments from the PEM to a PDF, a well-known distribution is employed. The approach mixes the PEM with another classic reliability method (the first-order reliability method, FORM). 
The results in the present study are in good agreement with those computed with the MCS. Therefore, the alternative of mixing the reliability methods is a very valuable option for determining reliability levels when no closed form of the LSF is available, or when numerical schemes, the FEM or computational mechanics are employed.
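The mixed PEM/FORM idea can be sketched as follows: Rosenblueth's two-point estimates supply the moments of the limit state function, and fitting a normal distribution to those moments yields a FORM-style reliability index. The limit state and its input moments below are hypothetical, not the bridge models of the study:

```python
import math
from itertools import product

def two_point_pem(g, means, stds):
    """Rosenblueth two-point estimate: evaluate g at mu +/- sigma for each of
    the n independent inputs (2^n points, equal weights) and return the
    estimated mean and standard deviation of g."""
    vals = []
    for signs in product((-1.0, 1.0), repeat=len(means)):
        x = [m + s * sd for m, s, sd in zip(means, signs, stds)]
        vals.append(g(x))
    mean = sum(vals) / len(vals)
    var = sum((v - mean) ** 2 for v in vals) / len(vals)
    return mean, math.sqrt(var)

# Hypothetical linear limit state G = R - S (resistance minus load effect).
g = lambda x: x[0] - x[1]
mu_g, sigma_g = two_point_pem(g, means=[40.0, 25.0], stds=[4.0, 3.0])

# FORM-style step: fit a normal distribution to the PEM moments, then
# beta = mu_G / sigma_G and Pf = Phi(-beta).
beta = mu_g / sigma_g
pf = 0.5 * math.erfc(beta / math.sqrt(2.0))
print(round(beta, 2), f"{pf:.2e}")
```

The appeal is that `g` may equally be a black-box FEM evaluation: only 2^n model runs are needed instead of the many thousands required by Monte Carlo simulation.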

Keywords: structural reliability, reinforced concrete bridges, combined approach, point estimate method, Monte Carlo simulation

Procedia PDF Downloads 344
7791 Evaluation of Solid-Gas Separation Efficiency in Natural Gas Cyclones

Authors: W. I. Mazyan, A. Ahmadi, M. Hoorfar

Abstract:

Objectives/Scope: This paper proposes a mathematical model for calculating the solid-gas separation efficiency in cyclones, providing better agreement with experimental results than existing mathematical models. Methods: The separation ratio efficiency, ϵsp, is evaluated by calculating the outlet-to-inlet count ratio. Similar to mathematical derivations in the literature, the inlet and outlet particle counts were evaluated based on an Eulerian approach. The model also includes the external forces acting on the particle (i.e., centrifugal and drag forces). In addition, the proposed model evaluates the exact length that the particle travels inside the cyclone in order to evaluate the number of turns inside it. The derivation of the separation efficiency model using Stokes' law considers the effect of the inlet tangential velocity on the separation performance; in cyclones, the inlet velocity is a very important factor in determining the separation performance. Therefore, the proposed model provides an accurate estimate of the actual cyclone separation efficiency. Results/Observations/Conclusion: The separation ratio efficiency, ϵsp, is studied to evaluate the performance of the cyclone for particles ranging from 1 micron to 10 microns, and the proposed model is compared with results in the literature. It is shown that the proposed mathematical model indicates an error of 7% between its efficiency and the efficiency obtained from the experimental results for 1-micron particles. At the same time, the proposed model gives the user the flexibility to analyze the separation efficiency at different inlet velocities. Additional Information: The proposed model determines the separation efficiency accurately and can also be used to optimize the separation efficiency of cyclones at low cost: through trial-and-error testing, through dimensional changes that enhance separation, and through increasing the particle centrifugal forces. 
Ultimately, the proposed model provides a powerful tool to optimize and enhance existing cyclones at low cost.
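A Stokes-regime cut-diameter model of this kind can be sketched with the classic Lapple formulation, a standard textbook stand-in rather than the authors' exact derivation; all parameter values below are illustrative:

```python
import math

def cut_diameter(mu_gas: float, inlet_width: float, n_turns: float,
                 v_inlet: float, rho_p: float, rho_g: float) -> float:
    """Lapple cut diameter d50 (m): the particle size collected with 50%
    efficiency in the Stokes regime."""
    return math.sqrt(9.0 * mu_gas * inlet_width /
                     (2.0 * math.pi * n_turns * v_inlet * (rho_p - rho_g)))

def efficiency(d_particle: float, d50: float) -> float:
    """Fractional collection efficiency for a particle of diameter d_particle."""
    return 1.0 / (1.0 + (d50 / d_particle) ** 2)

# Illustrative operating point: air at ~1.8e-5 Pa*s, 5 cm inlet width,
# 5 effective turns, 15 m/s inlet velocity, 2000 kg/m3 solids.
d50 = cut_diameter(mu_gas=1.8e-5, inlet_width=0.05, n_turns=5.0,
                   v_inlet=15.0, rho_p=2000.0, rho_g=1.2)
print(f"d50 = {d50 * 1e6:.2f} um, eta(5 um) = {efficiency(5e-6, d50):.2f}")
```

Raising `v_inlet` or `n_turns` shrinks d50 and raises the efficiency at a given particle size, which is the sensitivity to inlet velocity and travel length that the proposed model emphasizes.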

Keywords: cyclone efficiency, solid-gas separation, mathematical model, model error comparison

Procedia PDF Downloads 383
7790 Practices of Waterwise Circular Economy in Water Protection: A Case Study on Pyhäjärvi, SW Finland

Authors: Jari Koskiaho, Teija Kirkkala, Jani Salminen, Sarianne Tikkanen, Sirkka Tattari

Abstract:

Here, phosphorus (P) loading to lake Pyhäjärvi (SW Finland) was reviewed, load reduction targets were determined, and different measures of the waterwise circular economy to reach the targets were evaluated. In addition to the P loading from the lake's catchment, a significant amount of internal P loading occurs in the lake. There are no point-source emissions into the lake; thus, the most important source of external nutrient loading is agriculture. According to simulations made with the LLR model, the chemical state of the lake is at the border of the classes 'Satisfactory' and 'Good'. The LLR simulations suggest that a reduction of some hundreds of kilograms in the annual P load would be needed to reach an unquestionably 'Good' state. Evaluation of the measures of the waterwise circular economy suggested that they possess great potential for reaching the target P load reduction: if applied extensively and in a versatile, targeted manner across the catchment, their combined effect would achieve the target reduction. In terms of cost-effectiveness, the waterwise measures were ranked as follows: best, fishing; 2nd, recycling of vegetation from reed beds, wetlands and buffer zones; 3rd, recycling field drainage waters stored in wetlands and ponds for irrigation; 4th, controlled drainage and irrigation; and 5th, recycling of the sediments of wetlands and ponds for soil enrichment. We also identified various waterwise nutrient recycling measures to decrease the P content of arable land; the cost-effectiveness of such measures may be very good. Solutions are needed for Finnish water protection in general, and particularly for regions like the lake Pyhäjärvi catchment with intensive domestic animal production, where 'P hotspots' are a crucial issue.
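The mapping from a target in-lake concentration to a target external load can be illustrated with a simple Vollenweider-type steady-state balance, a generic stand-in for the LLR model; the concentrations, outflow, and residence time below are hypothetical, not Pyhäjärvi's calibrated values:

```python
import math

def allowable_load(p_target_mg_m3: float, q_outflow_m3_yr: float, tau_yr: float) -> float:
    """External P load (kg/yr) giving steady-state concentration p_target in a
    Vollenweider-type retention model: P = L / (Q * (1 + sqrt(tau)))."""
    return p_target_mg_m3 * q_outflow_m3_yr * (1.0 + math.sqrt(tau_yr)) * 1e-6

# Hypothetical lake parameters: outflow 1e8 m3/yr, residence time 4 years.
current = allowable_load(22.0, 1.0e8, 4.0)  # load matching the present state
target = allowable_load(20.0, 1.0e8, 4.0)   # load matching the 'Good' class
print(f"required reduction: {current - target:.0f} kg P/yr")
```

With these placeholder numbers the required reduction lands in the "some hundreds of kilograms" range the LLR simulations indicate; the actual target would come from the calibrated model.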

Keywords: circular economy, lake protection, mitigation measures, phosphorus

Procedia PDF Downloads 101
7789 Use Cloud-Based Watson Deep Learning Platform to Train Models Faster and More Accurate

Authors: Susan Diamond

Abstract:

Machine learning workloads have traditionally been run in high-performance computing (HPC) environments, where users log in to dedicated machines and utilize the attached GPUs to run training jobs on huge datasets. Training large neural network models is very resource-intensive, and even after exploiting parallelism and accelerators such as GPUs, a single training job can still take days; consequently, the cost of hardware is a barrier to entry. Even when upfront cost is not a concern, the lead time to set up such an HPC environment takes months, from acquiring the hardware to configuring it with the right firmware and software. Furthermore, scalability is hard to achieve in a rigid traditional lab environment, which is therefore slow to react to dynamic change in the artificial intelligence industry. Watson Deep Learning as a Service is a cloud-based deep learning platform that mitigates the long lead time and high upfront investment in hardware. It enables robust and scalable sharing of resources among the teams in an organization and is designed for on-demand cloud environments. Providing a similar user experience in a multi-tenant cloud environment comes with its own unique challenges regarding fault tolerance, performance, and security. Watson Deep Learning as a Service tackles these challenges and presents a deep learning stack for cloud environments in a secure, scalable and fault-tolerant manner. It supports a wide range of deep learning frameworks such as TensorFlow, PyTorch, Caffe, Torch, Theano, and MXNet. These frameworks reduce the effort and skill set required to design, train, and use deep learning models. Deep Learning as a Service is used at IBM by AI researchers in areas including machine translation, computer vision, and healthcare. 

Keywords: deep learning, machine learning, cognitive computing, model training

Procedia PDF Downloads 199
7788 Temporal Progression of Episodic Memory as a Function of Encoding Condition and Age: Further Investigation of Action Memory in School-Aged Children

Authors: Farzaneh Badinlou, Reza Kormi-Nouri, Monika Knopf

Abstract:

Studies of adults' episodic memory have found that enacted encoding not only improves recall performance but also speeds retrieval during the recall period. The current study explored the temporal progression of different encoding conditions in younger and older school children. A total of 204 students from two age groups, 8 and 14 years, participated. During the study phase, action encoding was examined in two forms: participants either performed the phrases themselves (subject-performed task, SPT) or observed the experimenter perform them (experimenter-performed task, EPT); both were compared with verbal encoding, in which participants listened to verbal action phrases (verbal task, VT). At the test phase, immediate and delayed free recall tests were used. Memory performance differed significantly as a function of age group and encoding condition in both immediate and delayed free recall. Moreover, the temporal progression of recall was faster in older children than in younger ones. The interaction of age group and encoding condition was significant only in delayed recall, showing that younger children performed better in EPT whereas older children outperformed in SPT. It is proposed that the enactment effect in the form of SPT enhances item-specific processing, whereas EPT improves relational information processing, and that these differential processes are responsible for the results obtained in younger and older children. The role of memory strategies and information-processing methods in younger and older children was also considered. Moreover, the temporal progression of recall was faster under action encoding (SPT and EPT) than under verbal encoding in both immediate and delayed free recall, and the size of the enactment effect increased steadily throughout the recall period. The present results provide further evidence that action memory is best explained with an emphasis on information processing and strategic views. They also reveal the temporal progression of recall as a new dimension of episodic memory in children.

Keywords: action memory, enactment effect, episodic memory, school-aged children, temporal progression

Procedia PDF Downloads 263
7787 Requirement Engineering for Intrusion Detection Systems in Wireless Sensor Networks

Authors: Afnan Al-Romi, Iman Al-Momani

Abstract:

Applying Software Engineering (SE) processes is of vital importance and a key feature in critical, complex, large-scale systems, for example, safety systems, security service systems, and network systems. Inevitably, such systems carry risks, such as system vulnerabilities and security threats. The probability of these risks increases in unsecured environments, such as wireless networks in general and Wireless Sensor Networks (WSNs) in particular. A WSN is a self-organizing network of sensor nodes connected by wireless links. WSNs consist of hundreds to thousands of low-power, low-cost, multi-function sensor nodes that are small in size and communicate over short ranges. The distribution of sensor nodes in an open, potentially unattended environment, together with resource constraints in processing, storage, and power, places stringent limitations on such networks, such as lifetime (i.e., period of operation) and security. The importance of WSN applications in many military and civilian domains has drawn the attention of many researchers to WSN security. To address this issue and overcome one of the main challenges of WSNs, researchers have developed security solutions in the form of software-based network Intrusion Detection Systems (IDSs). However, these IDSs have proven neither secure enough nor accurate enough to detect all malicious attack behaviours. The problem is thus a lack of coverage of all malicious behaviours in proposed IDSs, leading to unpleasant results such as delays in the detection process, low detection accuracy, or, even worse, detection failure, as illustrated in previous studies. Another problem is the energy consumption an IDS imposes on a WSN. In other words, not all requirements are implemented and then traced. Moreover, not all requirements are identified or satisfied; some requirements have been compromised. These drawbacks in current IDSs stem from researchers and developers not following structured software development processes when developing an IDS, resulting in inadequate requirement management and inadequate validation and verification of requirements quality. Unfortunately, the WSN and SE research communities have been mostly impermeable to each other. Integrating SE and WSNs is a real subject that will expand as technology evolves and spreads in industrial applications. Therefore, this paper studies the importance of Requirement Engineering when developing IDSs, examines a set of existing IDSs, and illustrates the absence of Requirement Engineering and its effect. Conclusions are then drawn regarding applying requirement engineering so that systems deliver the required functionalities, with respect to operational constraints, within an acceptable level of performance, accuracy, and reliability.
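The gap the abstract describes, requirements that are never traced to a verification artifact, can be illustrated with a minimal traceability check. The requirement IDs and test names below are hypothetical examples for an IDS in a WSN, not taken from any real system in the paper.

```python
# Hypothetical requirements for a WSN intrusion detection system.
requirements = {
    "REQ-1": "Detect selective-forwarding attacks",
    "REQ-2": "Keep per-node energy overhead below a set budget",
    "REQ-3": "Report intrusions within a bounded delay",
}

# Which verification artifacts (tests, analyses) cover each requirement.
# An empty list marks a requirement that was stated but never validated,
# the "compromised" case the paper argues is common in existing IDSs.
coverage = {
    "REQ-1": ["test_forwarding_attack_detected"],
    "REQ-2": [],
    "REQ-3": ["test_alert_latency_bound"],
}

untraced = sorted(r for r in requirements if not coverage.get(r))
print("Untraced requirements:", untraced)
```

Even a check this small makes the failure mode visible early: here the energy-budget requirement would surface as untraced before the IDS ships, rather than as unexpected energy consumption in the field.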

Keywords: software engineering, requirement engineering, intrusion detection system (IDS), wireless sensor networks (WSN)

Procedia PDF Downloads 313