Search results for: artificial intelligence and genetic algorithms
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5678

158 Microalgae Technology for Nutraceuticals

Authors: Weixing Tan

Abstract:

Production of nutraceuticals from microalgae—a virtually untapped natural phyto-based source comprising an estimated 200,000 to 1,000,000 species—offers a sustainable and healthy alternative to conventionally sourced nutraceuticals for the market. Microalgae can be grown organically using only natural sunlight, water and nutrients at an extremely fast rate, e.g. 10-100 times more efficiently than crops or trees. However, the commercial success of microalgae products at scale remains limited, largely due to the lack of economically viable technologies. There are two major microalgae production systems or technologies currently available: 1) the open system, as represented by open pond technology, and 2) the closed system, such as photobioreactors (PBR). Each carries its own unique features and challenges. Although an open system requires a lower initial capital investment relative to a PBR, it carries many unavoidable drawbacks: for example, much lower productivity, difficulty in contamination control/cleaning, inconsistent product quality, inconvenience in automation, restriction in location selection, and unsuitability for cold areas – all directly linked to the system's openness and flat, in-ground design. On the other hand, a PBR system has characteristics almost entirely opposite to the open system, such as higher initial capital investment, better productivity, better contamination and environmental control, wider suitability in different climates, ease of automation, higher and more consistent product quality, higher energy demand (particularly if using artificial lights), and variable operational expenses if not automated. Although closed systems like PBRs are not yet highly competitive in the current nutraceutical supply market, technological advances can be made, in particular for PBR technology, to narrow the gap significantly. One example is the readily scalable P2P Microalgae PBR Technology at Grande Prairie Regional College, Canada, developed over 11 years with return on investment (ROI) considered for key production processes. The P2P PBR system is approaching economic viability at a pre-commercial stage due to five ROI-integrated major components: (1) optimum use of free sunlight through attenuation (patented); (2) simple, economical, and chemical-free harvesting (patent ready to file); (3) optimum pH- and nutrient-balanced culture medium (published); (4) reliable water and nutrient recycling system (trade secret); and (5) low-cost automated system design (trade secret). These innovations have allowed P2P Microalgae Technology to increase daily yield to 106 g/m²/day of Chlorella vulgaris, which contains 50% proteins and 2-3% omega-3. Based on current market prices and scale-up factors, this P2P PBR system presents a promising microalgae technology for a market-competitive nutraceutical supply.

Keywords: microalgae technology, nutraceuticals, open pond, photobioreactor PBR, return on investment ROI, technological advances

Procedia PDF Downloads 157
157 Effects of Prescribed Surface Perturbation on NACA 0012 at Low Reynolds Number

Authors: Diego F. Camacho, Cristian J. Mejia, Carlos Duque-Daza

Abstract:

The recent widespread use of Unmanned Aerial Vehicles (UAVs) has fueled a renewed interest in the efficiency and performance of airfoils, particularly for applications at low and moderate Reynolds numbers typical of this kind of vehicle. Most previous efforts in the aeronautical industry regarding aerodynamic efficiency have focused on high Reynolds number applications, typical of commercial airliners and large aircraft. However, in order to increase efficiency and boost the performance of these UAVs, it is necessary to explore new alternatives in terms of airfoil design and the application of drag reduction techniques. The objective of the present work is to analyze and compare the performance of a standard NACA 0012 profile against one featuring a wall protuberance or surface perturbation. A computational model, based on the finite volume method, is employed to evaluate the effect of the presence of geometrical distortions on the wall. The performance evaluation is achieved in terms of variations of drag and lift coefficients for the given profile. In particular, the aerodynamic performance of the new design, i.e. the airfoil with a surface perturbation, is examined under conditions of incompressible and subsonic flow in a transient state. The perturbation considered is a shaped protrusion prescribed as a small surface deformation on the top wall of the aerodynamic profile. The ultimate goal of including such a controlled, smooth artificial roughness is to alter the turbulent boundary layer. It is shown in the present work that such a modification has a dramatic impact on the aerodynamic characteristics of the airfoil and, if properly adjusted, in a positive way. The computational model was implemented using the unstructured, FVM-based open source C++ platform OpenFOAM. A number of numerical experiments were carried out at a Reynolds number of 5x10^4, based on the chord length and the free-stream velocity, and at angles of attack of 6° and 12°. A Large Eddy Simulation (LES) approach was used, together with the dynamic Smagorinsky approach as the subgrid scale (SGS) model, in order to account for the effect of the small turbulent scales. The impact of the surface perturbation on the performance of the airfoil is judged in terms of changes in the drag and lift coefficients, as well as in terms of alterations of the main characteristics of the turbulent boundary layer on the upper wall. A dramatic change in the whole performance can be appreciated, including a substantial increase in the lift-to-drag coefficient ratio for both angles and a size reduction of the laminar separation bubble (LSB) for the 12° angle of attack.
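
To make the reported metrics concrete, the following is a minimal sketch of how the drag and lift coefficients and the chord-based Reynolds number quoted above are normalized; the numerical values are illustrative placeholders, not results from the paper.

```python
def aero_coefficients(drag_force, lift_force, rho, u_inf, chord, span=1.0):
    """Normalize integrated aerodynamic forces: C = F / (0.5 * rho * U^2 * A)."""
    q_inf = 0.5 * rho * u_inf ** 2      # free-stream dynamic pressure
    area = chord * span                 # reference area (chord x span)
    return drag_force / (q_inf * area), lift_force / (q_inf * area)

def reynolds_number(u_inf, chord, nu):
    """Reynolds number based on chord length and free-stream velocity."""
    return u_inf * chord / nu

# Illustrative values only: air properties with U chosen so that Re = 5x10^4
cd, cl = aero_coefficients(drag_force=0.02, lift_force=0.35,
                           rho=1.225, u_inf=0.73, chord=1.0)
print(f"Cl/Cd = {cl / cd:.1f}, Re = {reynolds_number(0.73, 1.0, 1.46e-5):.0f}")
```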

Keywords: CFD, LES, lift-to-drag ratio, LSB, NACA 0012 airfoil

Procedia PDF Downloads 386
156 A Finite Element Analysis of Hexagonal Double-Arrowhead Auxetic Structure with Enhanced Energy Absorption Characteristics and Stiffness

Authors: Keda Li, Hong Hu

Abstract:

Auxetic materials, an emerging class of artificially designed metamaterials, have attracted growing attention due to their promising negative Poisson’s ratio behavior and tunable properties. Conventional auxetic lattice structures, whose deformation process is governed by a bending-dominated mechanism, face the limitation of poor mechanical performance for many potential engineering applications. Recently, both load-bearing and energy absorption capabilities have become crucial considerations in auxetic structure design. This study reports the finite element analysis of a class of hexagonal double-arrowhead auxetic structures with enhanced stiffness and energy absorption performance. The structural design was developed by extending the traditional double-arrowhead honeycomb to a hexagonal frame; the stretching-dominated deformation mechanism was determined according to Maxwell’s stability criterion. Finite element (FE) models of the 2D lattice structures, assigned stainless steel material properties, were analyzed in ABAQUS/Standard to predict the in-plane structural deformation mechanism, failure process, and compressive elastic properties. Based on the computational simulation, a parametric analysis was conducted to investigate the effect of the structural parameters on Poisson’s ratio and mechanical properties. Geometrical optimization was then implemented to achieve the optimal Poisson’s ratio for maximum specific energy absorption. In addition, the optimized 2D lattice structure was converted into a corresponding 3D geometric configuration using the orthogonal splicing method. The numerical results for the 2D and 3D structures under quasi-static compressive loading were compared separately with the traditional double-arrowhead re-entrant honeycomb in terms of specific Young's moduli, Poisson's ratios, and specific energy absorption. As a result, the energy absorption capability and stiffness are significantly reinforced over a wide range of Poisson’s ratios compared to the traditional double-arrowhead re-entrant honeycomb. The auxetic behavior, energy absorption capability, and yield strength of the proposed structure are adjustable through different combinations of joint angle, strut thickness, and the length-width ratio of the representative unit cell. The numerical predictions in this study suggest the proposed hexagonal double-arrowhead structure could be a suitable candidate for energy absorption applications with a concurrent requirement for load-bearing capacity. For future research, experimental analysis is required to validate the numerical simulation.
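
As a complement to the design rationale above, here is a small sketch of Maxwell's stability criterion in its common 2D pin-jointed form, M = b - 2j + 3, used to flag whether a unit cell is a candidate for stretching-dominated behavior; the bar and joint counts below are hypothetical, not those of the proposed structure.

```python
def maxwell_number_2d(bars: int, joints: int) -> int:
    """Maxwell's criterion for a 2D pin-jointed frame: M = b - 2j + 3.
    M < 0 indicates a mechanism (bending-dominated lattice behavior);
    M >= 0 is a necessary condition for stretching-dominated behavior."""
    return bars - 2 * joints + 3

# Hypothetical unit-cell counts for illustration
for name, b, j in [("fully triangulated cell", 11, 7),
                   ("re-entrant hexagon cell", 6, 6)]:
    m = maxwell_number_2d(b, j)
    regime = "stretching-dominated candidate" if m >= 0 else "bending-dominated"
    print(f"{name}: M = {m} -> {regime}")
```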

Keywords: auxetic, energy absorption capacity, finite element analysis, negative Poisson's ratio, re-entrant hexagonal honeycomb

Procedia PDF Downloads 87
155 The Incidental Linguistic Information Processing and Its Relation to General Intellectual Abilities

Authors: Evgeniya V. Gavrilova, Sofya S. Belova

Abstract:

The present study aimed to clarify the relationship between general intellectual abilities and efficiency in free recall and rhymed word generation tasks after incidental exposure to linguistic stimuli. Theoretical frameworks stress that general intellectual abilities are based on intentional mental strategies. In this context, it seems crucial to examine the efficiency of processing incidentally presented information in cognitive tasks and its relation to general intellectual abilities. The sample consisted of 32 Russian students. Participants were exposed to pairs of words. Each pair consisted of two common nouns or two city names. Participants had to decide whether a city name was presented in each pair; thus, word semantics was processed intentionally. The city names were considered focal stimuli, whereas the common nouns were considered peripheral stimuli. In addition, each pair of words either rhymed or did not rhyme, but this phonemic characteristic of the stimuli was processed incidentally. Participants were then asked to produce as many rhymes as they could to new words; the stimuli presented earlier could be used as well. After that, participants had to retrieve all the words presented earlier. Finally, verbal and non-verbal abilities were measured with a number of psychometric tests. In the free recall task, the intentionally processed focal stimuli had an advantage in recall over the peripheral stimuli. In addition, all rhymed stimuli were recalled more effectively than non-rhymed ones. The inverse effect was found in the word generation task, where participants tended to use mainly peripheral stimuli rather than focal ones. Furthermore, peripheral rhymed stimuli were the most frequently used category of stimuli in this task. Thus, information that was processed incidentally had a supplemental influence on the efficiency of stimulus processing in both the free recall and the word generation tasks. Different patterns of correlations between intellectual abilities and efficiency of processing the different stimuli were revealed in the two tasks. Non-verbal reasoning ability correlated positively with free recall of peripheral rhymed stimuli but was not related to performance on the rhymed word generation task. Verbal reasoning ability correlated positively with free recall of focal stimuli. As for the rhymed word generation task, verbal intelligence correlated negatively with generation of focal stimuli and positively with generation of all peripheral stimuli. The present findings lead to two key conclusions. First, incidentally processed stimuli had an advantage in the free recall and word generation tasks; thus, incidental information processing appeared to be crucial for subsequent cognitive performance. Secondly, incidentally processed stimuli were recalled more frequently by participants with high non-verbal reasoning ability and were used more effectively by participants with high verbal reasoning ability in subsequent cognitive tasks. This implies that general intellectual abilities could benefit from operating on different levels of information processing during cognitive problem solving. This research was supported by the “Grant of the President of RF for young PhD scientists” (contract № 14.Z56.17.2980-MK) and Grant № 15-36-01348a2 of the Russian Foundation for Humanities.

Keywords: focal and peripheral stimuli, general intellectual abilities, incidental information processing

Procedia PDF Downloads 231
154 Metabolic Changes during Reprogramming of Wheat and Triticale Microspores

Authors: Natalia Hordynska, Magdalena Szechynska-Hebda, Miroslaw Sobczak, Elzbieta Rozanska, Joanna Troczynska, Zofia Banaszak, Maria Wedzony

Abstract:

Albinism is a common problem encountered in wheat and triticale breeding programs that require in vitro culture steps, e.g. the generation of doubled haploids via the androgenesis process. Genetic factors are a major determinant of albinism; however, environmental conditions such as temperature and media composition influence the frequency of albino plant formation. Cold incubation of wheat and triticale spikes induced a switch from gametophytic to sporophytic development. Further, androgenic structures formed from anthers of the genotypes susceptible to androgenesis, or those treated with cold stress, had a pool of structurally primitive plastids with small starch granules or swollen thylakoids. High temperature was a factor inducing androgenesis of wheat and triticale, but at the same time it favored the formation of albino plants. In genotypes susceptible to albinism, or under heat stress conditions, cells formed from anthers were vacuolated and plastids were eliminated. Partial or complete loss of chlorophyll pigments and incomplete differentiation of chloroplast membranes result in the formation of tissues or whole plants unable to perform photosynthesis. Indeed, susceptibility to the androgenesis process was associated with an increase in the total concentration of photosynthetic pigments in anthers, spikes and regenerated plants. The proper balance of the synthesis of the various pigments was the starting point for their proper incorporation into photosynthetic membranes. In contrast, genotypes resistant to the androgenesis process, and those treated with heat, contained a 100-fold lower content of photosynthetic pigments; in particular, the synthesis of violaxanthin, zeaxanthin, lutein and chlorophyll b was limited. Furthermore, deregulation of starch and lipid synthesis, which led to the formation of very complex starch granules and an increased number of oleosomes, respectively, correlated with reduced efficiency of androgenesis. The content of other sugars varied depending on the genotype and the type of stress: the highest content of various sugars was found in genotypes susceptible to androgenesis, and a highly reduced content in genotypes resistant to androgenesis. The most important sugars seem to be glucose and fructose, which are involved in sugar sensing and signaling pathways that affect the expression of various genes and regulate plant development. Sucrose, on the other hand, seems to have a minor effect at each stage of androgenesis. The sugar metabolism was related to the metabolic activity of the microspores. Genotypes susceptible to the androgenesis process had much faster mitochondrion- and chloroplast-dependent energy conversion and higher heat production by tissues. Thus, the effectiveness of metabolic processes, their balance and their flexibility under stress were factors determining the direction of microspore development and, in the later stages of the androgenesis process, factors supporting the induction of androgenic structures, chloroplast formation and the regeneration of green plants. The work was financed by the Ministry of Agriculture and Rural Development within the program ‘Biological Progress in Plant Production’, project no. HOR.hn.802.15.2018.

Keywords: androgenesis, chloroplast, metabolism, temperature stress

Procedia PDF Downloads 260
153 Application of Deep Learning Algorithms in Agriculture: Early Detection of Crop Diseases

Authors: Manaranjan Pradhan, Shailaja Grover, U. Dinesh Kumar

Abstract:

The farming community in India, as in other parts of the world, is one of the most highly stressed communities, due to reasons such as increasing input costs (seeds, fertilizers, pesticides), droughts, and reduced revenue leading to farmer suicides. The lack of an integrated farm advisory system in India adds to the farmers' problems. Farmers need the right information during the early stages of a crop's lifecycle to prevent damage and loss of revenue. In this paper, we use deep learning techniques to develop an early warning system for the detection of crop diseases using images taken by farmers with their smartphones. The research work leads to building a smart assistant using analytics and big data that could help farmers with early diagnosis of crop diseases and corrective actions. The classical approach to crop disease management has been to identify diseases at the crop level. Recently, ImageNet classification using convolutional neural networks (CNNs) has been successfully used to identify diseases at the individual plant level. Our model uses convolution filters, max pooling, dense layers and dropouts (to avoid overfitting). The models are built for binary classification (healthy or not healthy) and multi-class classification (identifying which disease). Transfer learning is used to adapt the weights learned on the ImageNet dataset and apply them to crop diseases, which reduces the number of epochs needed for training. One-shot learning is used to learn from very few images, while data augmentation techniques such as rotation, zoom, shift and blurring are used to improve accuracy on images taken from farms. Models built using a combination of these techniques are more robust for deployment in the real world. Our model is validated using the tomato crop. In India, tomato is affected by 10 different diseases, and our model achieves an accuracy of more than 95% in correctly classifying them. The main contribution of our research is the creation of a personal assistant for farmers for managing plant disease; although the model was validated on the tomato crop, it can easily be extended to other crops. The advancement of computing technology and the availability of large data have made possible the success of deep learning applications in computer vision, natural language processing, image recognition, etc. With these robust models and high smartphone penetration, the feasibility of implementing these models is high, resulting in timely advice to farmers, thus increasing farmers' income and reducing input costs.
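
As an illustration of the ingredients named above (convolution filters, max pooling, dense layers, dropout, and augmentation by rotation, zoom and shift), here is a hedged Keras sketch; the layer sizes, input resolution and class count are assumptions, not the authors' published architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_disease_classifier(n_classes: int, input_shape=(224, 224, 3)):
    """Small CNN for multi-class crop disease classification (sizes assumed)."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Rescaling(1.0 / 255),                    # normalize pixel values
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),                            # dropout against overfitting
        layers.Dense(n_classes, activation="softmax"),  # which disease
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Data augmentation as described: rotation, zoom and shift
augmentation = tf.keras.Sequential([
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
    layers.RandomTranslation(0.1, 0.1),
])

model = build_disease_classifier(n_classes=10)  # e.g. the 10 tomato diseases
model.summary()
```

For the transfer learning variant, the convolutional stack above would be swapped for an ImageNet-pretrained backbone (e.g. tf.keras.applications.MobileNetV2 with frozen weights) topped with the same dense head, which is one common way to realize the weight reuse the abstract describes.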

Keywords: analytics in agriculture, CNN, crop disease detection, data augmentation, image recognition, one shot learning, transfer learning

Procedia PDF Downloads 119
152 The Challenges of Citizen Engagement in Urban Transformation: Key Learnings from Three European Cities

Authors: Idoia Landa Oregi, Itsaso Gonzalez Ochoantesana, Olatz Nicolas Buxens, Carlo Ferretti

Abstract:

The impact of citizens on urban transformations has become increasingly important in the pursuit of creating citizen-centered cities. Citizens at the forefront of the urban transformation process are key to establishing resilient, sustainable, and inclusive cities that cater to the needs of all residents. Therefore, collecting data and information directly from citizens is crucial for the sustainable development of cities. Within this context, public participation becomes a pillar for acquiring the necessary information from citizens. Public participation in urban transformation processes establishes a more responsive, equitable, and resilient urban environment. This approach cultivates a sense of shared responsibility and collective progress in building cities that truly serve the well-being of all residents. However, the implementation of public participation practices often overlooks strategies to effectively engage citizens in the processes, resulting in unsuccessful participatory outcomes. Therefore, this research focuses on identifying and analyzing the critical aspects of citizen engagement during the same participatory urban transformation process in different European contexts: Ermua (Spain), Elva (Estonia) and Matera (Italy). The participatory neighborhood regeneration process is divided into three main stages to turn social districts into inclusive and smart neighborhoods: (i) the strategic level, (ii) the design level, and (iii) the implementation level. In the initial stage, the focus is on diagnosing the neighborhood and creating a shared vision with the community. The second stage centers around collaboratively designing various action plans to foster inclusivity and intelligence while pushing local economic development within the district. Finally, the third stage ensures the proper co-implementation of the designed actions in the neighborhood. To date, the presented results critically analyze the key aspects of engagement in the first stage of the methodology, the strategic plan, in the three above-mentioned contexts. It is a multifaceted study that incorporates three case studies to shed light on the various perspectives and strategies adopted by each city. The results indicate that, despite the various cultural contexts, all cities face similar barriers when seeking to enhance engagement. Accordingly, the study identifies specific challenges within the participatory approach across the three cities, such as the existence of discontented citizens, communication gaps, inconsistent participation, and administration resistance. Consequently, the key learnings of the process indicate that a collaborative sphere needs to be cultivated, educating both citizens and administrations in the aspects of co-governance and giving these practices the appropriate space and their own communication channels. This study is part of the DROP project, funded by the European Union, which aims to develop a citizen-centered urban renewal methodology to transform social districts into smart and inclusive neighborhoods.

Keywords: citizen-centred cities, engagement, public participation, urban transformation

Procedia PDF Downloads 67
151 Hybrid Living: Emerging Out of the Crises and Divisions

Authors: Yiorgos Hadjichristou

Abstract:

The paper will focus on the hybrid living typologies brought about by the global crisis. Mixing generations and groups of people, mingling the functions of living with working and socializing, and merging the act of living in synergy with the urban realm and its constituent elements will be the springboard for proposing an essential sustainable housing approach and the respective urban development. The thematic will be based on methodologies developed both in the academic, educational environment, including the participation of students’ research, and on the practical side of architecture, including case studies executed by the author on the island of Cyprus. Both paths of the research will deal with an explorative understanding of hybrid ways of living, testing the limits of their autonomy. The evolution of living typologies into substantial hybrid entities will deal with the understanding of new ways of living, which include among others: the re-introduction of natural phenomena, the accommodation of work and services in the living realm, the interchange of public and private, and injections of communal events into individual living territories. The issues and binary questions raised by what is natural and what artificial, what is private and what public, what is ephemeral and what permanent, and all the in-between conditions are eloquently traced in everyday life on the island. Additionally, given the situation of Cyprus, with the eminent scar of the dividing ‘Green Line’ and the ‘ghost city’ of Famagusta waiting to be resurrected, the conventional way of understanding the limits and definitions of properties is irreversibly shaken. The situation is further aggravated by the unprecedented phenomenon of the crisis on the island. All these observations set the premises for re-examining urban development and the respective sustainable housing in a synergy where their characteristics start exchanging positions, merge into each other, contemporarily emerge and vanish, and change from permanent to ephemeral. This fluidity of conditions will attempt to render a future of the built and unbuilt realm where the main focus is redirected to the human and the social. Weather and social ritual scenographies, together with ‘spontaneous urban landscapes’ of ‘momentary relationships’, will suggest a recipe for emerging urban environments and sustainable living. Thus, the paper will aim at opening a discourse on the future of sustainable living, merged in a sustainable urban development, in relation to the imminent solution of the division of the island, where the issue of property became the main obstacle to be overcome. At the same time, it will attempt to link this approach to the global need for a sustainable evolution of the urban and living realms.

Keywords: social ritual scenographies, spontaneous urban landscapes, substantial hybrid entities, re-introduction of natural phenomena

Procedia PDF Downloads 263
150 Designing Sustainable and Energy-Efficient Urban Network: A Passive Architectural Approach with Solar Integration and Urban Building Energy Modeling (UBEM) Tools

Authors: A. Maghoul, A. Rostampouryasouri, MR. Maghami

Abstract:

The joint development of urban design and power network planning has been gaining momentum in recent years. The integration of renewable energy with urban design is widely regarded as an increasingly important response to climate change and energy security. Through the use of passive strategies and solar integration with Urban Building Energy Modeling (UBEM) tools, architects and designers can create high-quality designs that meet the needs of clients and stakeholders. To determine the most effective ways of combining renewable energy with urban development, we analyze the relationship between urban form and renewable energy production. The procedures involved in this practice include passive solar gain (in building design and urban design), solar integration, location strategy, and 3D modeling, with a case study conducted in Tehran, Iran. The study emphasizes the importance of spatial and temporal considerations in the development of sector coupling strategies for solar power establishment in arid and semi-arid regions. The substation considered in the research consists of two parallel transformers, 13 lines, and 38 connection points. Each urban load connection point is equipped with 500 kW of solar PV capacity and 1 kWh of battery energy storage (BES) to store excess power generated from solar and inject it into the urban network during peak periods. The simulations and analyses were performed in the EnergyPlus software. Passive solar gain involves maximizing the amount of sunlight that enters a building to reduce the need for artificial lighting and heating. Solar integration involves integrating solar photovoltaic (PV) power into smart grids to reduce emissions and increase energy efficiency. Location strategy is crucial to maximizing the utilization of solar PV in an urban distribution feeder. Additionally, 3D models are built in Revit; they are key components of decision-making in areas including climate change mitigation, urban planning, and infrastructure. We applied these strategies in this research, and the results show that it is possible to create sustainable and energy-efficient urban environments. Furthermore, demand response programs can be used in conjunction with solar integration to optimize energy usage and reduce the strain on the power grid. This study highlights the influence of ancient Persian architecture on Iran's urban planning system, as well as the potential for reducing pollutants in building construction. Additionally, the paper explores advances in eco-city planning and development and the emerging practices and strategies for integrating sustainability goals.
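
As a toy illustration of the dispatch rule described above (charge the per-connection-point battery with excess PV output and inject it into the network during peak periods), consider the following sketch; the load and PV profiles, the hourly time step and the lossless battery model are all simplifying assumptions.

```python
def dispatch(pv_kw, load_kw, peak_hours, battery_kwh=1.0, dt_h=1.0):
    """Greedy dispatch for one connection point: store PV surplus in the
    battery, discharge it during peak hours (lossless toy model)."""
    soc = 0.0                      # battery state of charge (kWh)
    grid = []                      # +ve = import from network, -ve = export
    for hour, (pv, load) in enumerate(zip(pv_kw, load_kw)):
        net = load - pv            # residual demand (kW)
        if net < 0:                # PV surplus: charge the battery first
            charge = min(-net * dt_h, battery_kwh - soc)
            soc += charge
            net += charge / dt_h
        elif hour in peak_hours and soc > 0:   # peak period: discharge
            discharge = min(net * dt_h, soc)
            soc -= discharge
            net -= discharge / dt_h
        grid.append(round(net, 3))
    return grid

# Hypothetical 6-hour profile (kW), with a peak period in hours 4-5
print(dispatch(pv_kw=[0, 2, 5, 4, 1, 0], load_kw=[1, 1, 2, 2, 4, 3],
               peak_hours={4, 5}))
```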

Keywords: energy-efficient urban planning, sustainable architecture, solar energy, sustainable urban design

Procedia PDF Downloads 76
149 Experimental Measurement of Equatorial Ring Current Generated by Magnetoplasma Sail in Three-Dimensional Spatial Coordinate

Authors: Masato Koizumi, Yuya Oshio, Ikkoh Funaki

Abstract:

Magnetoplasma Sail (MPS) is a future spacecraft propulsion concept that generates high levels of thrust by inducing an artificial magnetosphere to capture and deflect solar wind charged particles in order to transfer momentum to the spacecraft. When plasma is injected into the spacecraft’s magnetic field region, a ring current drifts azimuthally on the equatorial plane about the dipole magnetic field generated by the current flowing through the solenoid on board the spacecraft. This ring current results in magnetosphere inflation, which improves the thrust performance of the MPS spacecraft. In the present study, the ring current was experimentally measured using three Rogowski current probes positioned in a circular array about a laboratory model of the MPS spacecraft. The investigation aims to determine the detailed structure of the ring current through physical experimentation performed under two different magnetic field strengths, obtained by applying 300 V and 600 V to the solenoid. The expected outcome was that the three current probes would detect the same current, since all three were positioned at an equal radial distance of 63 mm from the center of the solenoid. Although the experimental results were numerically implausible due to probable procedural error, their trends revealed three notable aspects of ring current behavior. The first is that the drift direction of the ring current depended on the strength of the applied magnetic field. The second is that, in the presence of the solar wind, the diamagnetic current developed at a radial distance not occupied by the three current probes. The third is that the ring current distribution varied along the circumferential path about the spacecraft’s magnetic field. Although this study yielded experimental evidence that differed from the original hypothesis, the three key findings have informed two critical MPS design solutions that may improve thrust performance. The first design solution is the positioning of the plasma injection point. Based on the first of the three aspects of ring current behavior, the plasma injection point must be located at a distance from, rather than in close proximity to, the MPS solenoid for the ring current to drift in the direction that results in magnetosphere inflation. The second design solution, following from the third aspect of ring current behavior, is a symmetrical configuration of plasma injection points. In this study, an asymmetrical configuration using one plasma source resulted in a non-uniform distribution of ring current along the azimuthal path. This distorts the geometry of the inflated magnetosphere, which reduces the deflection area for the solar wind. Therefore, to realize a ring current that provides the maximum possible inflated magnetosphere, multiple plasma sources must be spaced evenly apart so that plasma is injected evenly along the azimuthal path.
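
For orientation, the field the probes sit in can be estimated from the equatorial form of a dipole field, B = μ0·m/(4π·r³), at the 63 mm probe radius quoted above; the solenoid magnetic moment values below are purely hypothetical, not measured values from the experiment.

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability (T·m/A)

def equatorial_dipole_field(moment, r):
    """Dipole field magnitude on the equatorial plane: B = mu0*m/(4*pi*r^3)."""
    return MU0 * moment / (4 * math.pi * r ** 3)

R_PROBE = 0.063  # probe radius from the experiment (m)
for m in (0.05, 0.10):  # hypothetical solenoid moments (A·m^2) for the two voltages
    b_mt = equatorial_dipole_field(m, R_PROBE) * 1e3
    print(f"m = {m:.2f} A·m^2 -> B = {b_mt:.3f} mT at r = {R_PROBE} m")
```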

Keywords: Magnetoplasma Sail, magnetosphere inflation, ring current, spacecraft propulsion

Procedia PDF Downloads 310
148 Trophic Variations in Uptake and Assimilation of Cadmium, Manganese and Zinc: An Estuarine Food-Chain Radiotracer Experiment

Authors: K. O’Mara, T. Cresswell

Abstract:

Nearly half of the world’s population lives near the coast, and as a result, estuaries and coastal bays in populated or industrialized areas often receive metal pollution. Heavy metals have a chemical affinity for sediment particles and can be stored in estuarine sediments and become biologically available under changing conditions. Organisms inhabiting estuaries can be exposed to metals from a variety of sources, including metals dissolved in water, bound to sediment, or within contaminated prey. Metal uptake and assimilation responses can vary even between species that are biologically similar, making pollution effects difficult to predict. A multi-trophic-level experiment representing a common eastern Australian estuarine food chain was used to study the sources of Cd, Mn and Zn uptake and assimilation in organisms occupying several trophic levels. Sand cockles (Katelysia scalarina), school prawns (Metapenaeus macleayi) and sand whiting (Sillago ciliata) were exposed to radiolabelled seawater, suspended sediment and food. Three pulse-chase trials on filter-feeding sand cockles were performed using radiolabelled phytoplankton (Tetraselmis sp.), benthic microalgae (Entomoneis sp.) and suspended sediment. Benthic microalgae had lower metal uptake than phytoplankton during labelling but higher cockle assimilation efficiencies (Cd = 51%, Mn = 42%, Zn = 63%) than both phytoplankton (Cd = 21%, Mn = 32%, Zn = 33%) and, except for Mn, suspended sediment (Cd = 38%, Mn = 42%, Zn = 53%). Sand cockles were also sensitive to uptake of Cd, Mn and Zn dissolved in seawater. Uptake of these metals from the dissolved phase was negligible in prawns and fish, with prawns only accumulating metals during moulting; these were then lost with subsequent moulting in the depuration phase. Diet appears to be the main source of metal assimilation in school prawns, with assimilation efficiencies of 65%, 54% and 58% for Cd, Mn and Zn, respectively. Whiting fed contaminated prawns were able to exclude the majority of the metal activity through egestion, with assimilation efficiencies of only 10%, 23% and 11% for Cd, Mn and Zn, respectively. The findings of this study support previous studies that find diet to be the dominant accumulation source for organisms at higher trophic levels. These results show that assimilation efficiencies can vary depending on the source of exposure: sand cockles assimilated more Cd, Mn and Zn from the benthic diatom than from phytoplankton, and assimilation was higher in sand whiting fed prawns than in those fed artificial pellets. The sensitivity of sand cockles to metal uptake and assimilation from a variety of sources poses concerns about metal availability to predators ingesting the clam tissue, including humans. The high tolerance of sand whiting to these metals is reflected in their widespread presence in eastern Australian estuaries, including contaminated estuaries such as Botany Bay and Port Jackson.

Keywords: cadmium, food chain, metal, manganese, trophic, zinc

Procedia PDF Downloads 202
147 Correlation Studies and Heritability Estimates among Onion (Allium Cepa L.) Cultivars of North Western Nigeria

Authors: L. Abubakar, B. M. Sokoto, I. U. Mohammed, M. S. Na’allah, A. Mohammad, A. N. Garba, T. S. Bubuche

Abstract:

Onion (Allium cepa var. cepa L.) is the most important species of the Allium group, belonging to the family Alliaceae and genus Allium. It can be regarded as the single most important vegetable species in the world after tomato. Despite the similarities that bring the species together, the genus is a strikingly diverse one, with more than five hundred species, which are perennial and mostly bulbous plants. Of these, only seven species are in cultivation, and five are the most important species of the cultivated Allium. However, Allium cepa (onion) and Allium sativum (garlic) are the two major cultivated species grown all over the world, of which the onion crop is the more important. Heritability, defined as the proportion of the observed total variability that is genetic, and its estimates from variance components give more useful information on genotypic variation, separated from total phenotypic differences and environmental effects on individuals or families. Heritability estimates therefore guide the breeder with respect to the ease with which selection of traits can be carried out, while correlations explain the relationships between characters and suggest how selection among characters can be practiced in breeding programmes. Highly significant correlations have been reported between yield, maturity, rings per bulb and storage loss in onions. Similarly, significant positive correlations exist between total bulb yield and plant height, leaf number per plant, bulb diameter and bulb yield per plant. Moderate positive correlations have been observed between maturity date and yield; dry matter content was highly correlated with soluble solids, and high correlations were also observed between storage loss and soluble solids. The objective of the study is to determine heritability estimates and correlations for characters among onion cultivars of north-western Nigeria, which it is envisaged will assist in the breeding of superior onion cultivars within the zone. Thirteen onion cultivars were collected during a 2013 expedition covering north-western Nigeria and the southern part of Niger Republic, areas noted for onion production. The cultivars were evaluated at two locations, Sokoto in Sokoto State and Jega in Kebbi State, both in Nigeria, during the 2013/14 onion (dry) season under irrigation. Combined analysis of the results revealed that fresh bulb yield is highly significantly and positively correlated with bulb height and cured bulb yield, and significantly positively correlated with plant height and bulb diameter. It also recorded a significant negative correlation with mean number of leaves per plant and a non-significant negative correlation with bolting percentage. Cured bulb yield (marketable yield) had highly significant positive correlations with mean bulb weight and fresh bulb yield per hectare, and a significant positive correlation with bulb height. It also recorded a highly significant negative correlation with number of leaves per plant, a significant negative correlation with bolting percentage, a non-significant positive correlation with plant height, and a non-significant negative correlation with bulb diameter. High broad-sense heritability estimates were recorded for plant height, fresh bulb yield, number of leaves per plant, bolting percentage and cured bulb yield. Medium to low broad-sense heritabilities were also observed for mean bulb weight, plant height and bulb diameter.
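
Since the abstract defines heritability as the genetic share of the observed total variability, a minimal sketch of the broad-sense estimate H² = V_G/V_P follows; the variance components used are hypothetical, not the study's estimates.

```python
def broad_sense_heritability(genotypic_var, environmental_var):
    """Broad-sense heritability H^2 = V_G / V_P, with V_P = V_G + V_E
    (genotype-by-environment terms folded into V_E in this toy version)."""
    return genotypic_var / (genotypic_var + environmental_var)

# Hypothetical variance components for two traits
for trait, vg, ve in [("plant height", 42.0, 8.0), ("bulb diameter", 5.0, 12.0)]:
    h2 = broad_sense_heritability(vg, ve)
    level = "high" if h2 >= 0.6 else "medium" if h2 >= 0.3 else "low"
    print(f"{trait}: H^2 = {h2:.2f} ({level})")
```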

Keywords: correlation, heritability, onions, North Western Nigeria

Procedia PDF Downloads 402
146 Integrated Approach Towards Safe Wastewater Reuse in Moroccan Agriculture

Authors: Zakia Hbellaq

Abstract:

The Mediterranean region is considered a hotspot for climate change. Morocco is a semi-arid Mediterranean country facing water shortages and poor water quality, and its limited water resources constrain the activities of various economic sectors. Most of Morocco's territory lies in arid and desert areas. The potential water resources are estimated at 22 billion m³, equivalent to about 700 m³/inhabitant/year, placing Morocco in a state of structural water stress. Strictly speaking, the Kingdom of Morocco is one of the “very riskiest” countries according to the World Resources Institute (WRI), which oversees the calculation of water stress risk in 167 countries. The WRI results rank Morocco among the riskiest countries in terms of water scarcity, with a score of 3.89 out of 5, placing it 23rd out of 167 countries and indicating that the demand for water exceeds the available resources. Agriculture, with a score of 3.89, is most affected by water stress from irrigation and places a heavy burden on the water table. Irrigation is an unavoidable technical need with undeniable economic and social benefits, given the available resources and climatic conditions. Irrigation, and therefore the agricultural sector, currently uses 86% of the country's water resources, while industry uses 5.5%. Although its development has undeniable economic and social benefits, it also contributes to the overexploitation of most groundwater resources and to a startling decline in levels and deterioration of water quality in some aquifers. In this context, REUSE (reuse of treated wastewater) is one of the proposed solutions to reduce the water footprint of the agricultural sector and alleviate the shortage of water resources. Indeed, wastewater reuse is a step forward not only for the circular economy but also for the future, especially in the context of climate change. In particular, water reuse provides an alternative to existing water supplies and can be used to improve water security, sustainability, and resilience. However, given the introduction of organic trace pollutants (organic micro-pollutants), the uptake of emerging contaminants, and the need to reduce salinity, innovative capabilities must be harnessed to overcome these problems and ensure food and health safety. To this end, attention will be paid to the adoption of an integrated and attractive approach based on the reinforcement and optimization of the treatments proposed for the elimination of the organic load, with particular attention to the elimination of emerging pollutants. Since membrane bioreactors (MBR) as stand-alone technologies are not able to meet the requirements of WHO guidelines, they will be combined with heterogeneous Fenton processes using persulfate or hydrogen peroxide oxidants; similarly, adsorption and filtration are applied as tertiary treatments. In addition, crop performance will be evaluated in terms of yield, productivity, quality, and safety, through the optimization of Trichoderma sp. strains used to increase crop resistance to abiotic stresses, as well as through modern omics tools such as transcriptomic analysis using RNA sequencing and methylation analysis to identify adaptive traits and the associated genetic diversity that is tolerant/resistant/resilient to biotic and abiotic stresses. Hence, this approach will undoubtedly help alleviate water scarcity and, likewise, reduce the negative and harmful impact of wastewater irrigation on the condition of crops and the health of their consumers.

Keywords: water scarcity, food security, irrigation, agricultural water footprint, reuse, emerging contaminants

Procedia PDF Downloads 160
145 Sensor and Sensor System Design, Selection and Data Fusion Using Non-Deterministic Multi-Attribute Tradespace Exploration

Authors: Matthew Yeager, Christopher Willy, John Bischoff

Abstract:

The conceptualization and design phases of a system lifecycle consume a significant amount of the lifecycle budget in the form of direct tasking and capital, as well as the implicit costs associated with unforeseeable design errors that are only realized during downstream phases. Ad hoc or iterative approaches to generating system requirements often fail to consider the full array of feasible systems or product designs for a variety of reasons, including, but not limited to: initial conceptualization that incorporates a priori or legacy features; the inability to capture, communicate and accommodate stakeholder preferences; inadequate technical designs and/or feasibility studies; and locally, but not globally, optimized subsystems and components. These design pitfalls can beget unanticipated developmental or system alterations with added costs, risks and support activities, heightening the risk of suboptimal system performance, premature obsolescence or forgone development. Supported by rapid advances in learning algorithms and hardware technology, sensors and sensor systems have become commonplace in both commercial and industrial products. The evolving array of hardware components (e.g. sensors, CPUs, modular/auxiliary access) as well as recognition, data fusion and communication protocols have all become increasingly complex and critical for design engineers during both conceptualization and implementation. This work seeks to develop and utilize a non-deterministic approach to sensor system design within the multi-attribute tradespace exploration (MATE) paradigm, a technique that incorporates decision theory into model-based techniques in order to explore complex design environments and discover better system designs. Developed to address the inherent design constraints in complex aerospace systems, MATE techniques enable project engineers to examine all viable system designs, assess attribute utility and system performance, and better align with stakeholder requirements. Whereas previous work has been focused on aerospace systems and conducted in a deterministic fashion, this study addresses a wider array of system design elements by incorporating both traditional tradespace elements (e.g. hardware components) and popular multi-sensor data fusion models and techniques. Furthermore, adding statistical performance features to this model-based MATE approach will enable non-deterministic techniques for various commercial systems that range in application, complexity and system behavior, demonstrating significant utility within the realm of formal systems decision-making.
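
To make the MATE idea concrete, below is a toy sketch that enumerates a small sensor-system tradespace and scores each alternative with a weighted additive multi-attribute utility under a cost constraint; the design variables, utilities, weights and budget are all invented for illustration, and the paper's non-deterministic extension would replace the fixed utilities with sampled distributions.

```python
from itertools import product

# Hypothetical design variables: (name, utility in [0, 1], cost)
DESIGN_SPACE = {
    "sensor": [("lidar", 0.9, 120), ("radar", 0.7, 60), ("camera", 0.5, 20)],
    "cpu":    [("edge", 0.6, 30), ("server", 0.9, 90)],
    "fusion": [("kalman", 0.7, 10), ("bayesian", 0.8, 25)],
}
WEIGHTS = {"sensor": 0.5, "cpu": 0.3, "fusion": 0.2}  # stakeholder preferences
BUDGET = 180                                          # cost constraint

def explore_tradespace():
    """Enumerate every alternative, drop infeasible ones, score the rest."""
    for combo in product(*DESIGN_SPACE.values()):
        cost = sum(c for _, _, c in combo)
        if cost > BUDGET:
            continue                                  # infeasible design
        utility = sum(w * u for w, (_, u, _) in zip(WEIGHTS.values(), combo))
        yield utility, cost, tuple(name for name, _, _ in combo)

for utility, cost, names in sorted(explore_tradespace(), reverse=True)[:3]:
    print(f"U = {utility:.2f}, cost = {cost}: {names}")
```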

Keywords: multi-attribute tradespace exploration, data fusion, sensors, systems engineering, system design

Procedia PDF Downloads 183
144 Seismotectonics and Seismology the North of Algeria

Authors: Djeddi Mabrouk

Abstract:

The slow convergence of the African and Eurasian plates appears to be the main cause of the active deformation across the whole of North Africa. In Algeria, this is expressed as a large deformation zone within a fairly broad band, bounded to the south by the Saharan Atlas and to the north by the Tell Atlas. The Maghrebin and Atlas chains along North Africa are consequences of this convergence. In the junction zone, a NW-SE compressive regime is observed, with a fold-and-fault structure and organized overthrusts. From a geological point of view, the northern part of Algeria is younger than the Saharan platform; it is unstable and constantly in motion, characterized by overturned folds, overthrusts and reverse faults, and perpetually undergoes complex vertical and horizontal movements. At the structural level, northern Algeria is part of the peri-Mediterranean Alpine orogen, essentially of Tertiary age, which extends from the east to the west of Algeria over 1200 km in a band about 100 km wide. The Alpine chain is shaped by three domains: the Tell Atlas in the north, the High Plateaus in the middle and the Saharan Atlas in the south. In the extreme south lies the Saharan platform, which consists of Precambrian bedrock covered by practically undeformed Paleozoic strata. Northern Algeria and the Saharan platform are separated by a major accident (fault) running some 2000 km from Agadir (Morocco) to Gabès (Tunisia). Seismic activity is localized essentially in a coastal band in the north of Algeria shaped by the Tell Atlas, the High Plateaus and the Saharan Atlas. Earthquakes are limited to the first 20 km of the Earth's crust; they are caused by movements along reverse faults of NE-SW orientation or by the sliding of tectonic plates. The central region is characterized by strong earthquake activity, located mainly in the Mitidja basin (Neogene age). Its southern periphery (the Blidean Atlas) constitutes one of the most important seismogenic sources for the city of Algiers and the east (Boumerdes region). The north-east region is also part of the Tellian area, but it is characterized by a deformation different from that in other parts of northern Algeria: the deformation is slow, with low to moderate seismic activity. Seismic activity there is related to strike-slip tectonics; the most pronounced earthquake is that of 27 October 1985 (Constantine), of moment magnitude Mw = 5.9. The north-west region is quite active as well, with seismic hypocenters that do not exceed 20 km in depth. Seismicity is concentrated mainly in a narrow strip along the edges of the Quaternary and Neogene intramontane basins along the coast. The most violent earthquakes in this region are the Oran earthquake of 1790 and the Orléansville (El Asnam) earthquakes of 1954 and 1980.

Keywords: alpine chain, seismicity north Algeria, earthquakes in Algeria, geophysics, Earth

Procedia PDF Downloads 407
143 Measuring Firms’ Patent Management: Conceptualization, Validation, and Interpretation

Authors: Mehari Teshome, Lara Agostini, Anna Nosella

Abstract:

The current knowledge-based economy extends intellectual property rights (IPRs) legal research themes into more strategic and organizational perspectives. Of the diverse types of IPRs, patents are the strongest and best-known form of legal protection that influences commercial success and market value. Indeed, from our pilot survey we understood that firms are less likely to manage their patents and actively use them as a tool for achieving competitive advantage; rather, they invest resources and effort in patent applications. In this regard, the literature also confirms that insights into how firms manage their patents from a holistic, strategic perspective, and how the portfolio value of patents can be optimized, are scarce. Although patent management is an important business tool and a few scales exist to measure some of its dimensions, to the best of our knowledge no systematic attempt has been made to develop a valid and comprehensive measure of it. Considering this theoretical and practical point of view, the aim of this article is twofold: to develop a framework for patent management encompassing all relevant dimensions with their respective constructs and measurement items, and to validate the measurement using survey data from practitioners. Methodology: We used a six-step methodological approach (i.e., specify the domain of the construct, item generation, scale purification, internal consistency assessment, scale validation, and replication). Accordingly, we carried out a systematic review of 182 articles on patent management from ISI Web of Science. For each article, we mapped the relevant constructs, their definitions and associated features, as well as the items used to measure these constructs, when provided. This theoretical analysis was complemented by interviews with experts in patent management to obtain more practical feedback on how patent management is carried out in firms. Afterwards, we carried out a questionnaire survey for scale purification and statistical validation. Findings: The analysis allowed us to design a framework for patent management, identifying its core dimensions (i.e., generation, portfolio management, exploitation and enforcement, intelligence) and support dimensions (i.e., strategy and organization). Moreover, we identified the relevant activities for each dimension, as well as the most suitable items to measure them. For example, the core dimension generation includes constructs such as state-of-the-art analysis, freedom-to-operate analysis, patent watching, securing freedom-to-operate, patent potential and patent geographical scope. Originality and Study Contribution: This study represents a first step towards the development of sound scales to measure patent management with an overarching approach, thus laying the basis for a recognized landmark within the research area of patent management. Practical Implications: The new scale can be used to assess the level of sophistication of a company's patent management and compare it with other firms in the industry to evaluate their ability to manage the different activities involved in patent management. In addition, the framework resulting from this analysis can be used as a guide that supports managers in improving patent management in their firms.
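
As a sketch of the internal consistency assessment step in the six-step approach above, Cronbach's alpha is the standard statistic; the Likert responses below are hypothetical, and whether the authors used alpha specifically is an assumption.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha: k/(k-1) * (1 - sum(item variances) / total variance).
    item_scores: 2D array with rows = respondents, columns = scale items."""
    x = np.asarray(item_scores, dtype=float)
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1).sum()
    total_var = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical 5-point Likert responses for one 4-item construct
responses = [[4, 5, 4, 4],
             [3, 3, 2, 3],
             [5, 5, 4, 5],
             [2, 2, 3, 2],
             [4, 4, 4, 5]]
print(f"alpha = {cronbach_alpha(responses):.2f}")  # >= 0.7 is commonly acceptable
```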

Keywords: patent, management, scale, development, intellectual property rights (IPRs)

Procedia PDF Downloads 147
142 Evaluation of Modern Natural Language Processing Techniques via Measuring a Company's Public Perception

Authors: Burak Oksuzoglu, Savas Yildirim, Ferhat Kutlu

Abstract:

Opinion mining (OM) is one of the natural language processing (NLP) problems: determining the polarity of opinions, mostly represented on a positive-neutral-negative axis. The data for OM are usually collected from various social media platforms. In an era where social media has considerable control over companies’ futures, it is worth understanding social media and taking action accordingly. OM comes to the fore here: as the scale of the discussion about companies increases, it becomes unfeasible to gauge opinion at the individual level, so companies opt to automate this process by applying machine learning (ML) approaches to their data. For the last two decades, OM, or sentiment analysis (SA), has mainly been performed by applying ML classification algorithms such as support vector machines (SVM) and Naïve Bayes to bag-of-n-gram representations of textual data. With the advent of deep learning and its apparent success in NLP, traditional methods have become obsolete. The transfer learning paradigm, commonly used in computer vision (CV) problems, has lately begun to shape NLP approaches and language models (LMs). This gave a sudden rise to the use of pretrained language models (PTMs), which contain language representations obtained by training on large datasets using self-supervised learning objectives. The PTMs are further fine-tuned on a specialized downstream task dataset to produce efficient models for various NLP tasks such as OM, named-entity recognition (NER), question answering (QA), and so forth. In this study, traditional and modern NLP approaches have been evaluated for OM using a sizable corpus belonging to a large private company, containing about 76,000 comments in Turkish: SVM with a bag of n-grams, and two chosen pretrained models, the multilingual universal sentence encoder (MUSE) and bidirectional encoder representations from transformers (BERT). The MUSE model is a multilingual model that supports 16 languages, including Turkish, and is based on convolutional neural networks. The BERT model is monolingual in our case and based on transformer neural networks; it uses masked language modeling and next-sentence prediction tasks that allow bidirectional training of the transformers. During the training phase, pre-processing operations such as morphological parsing, stemming, and spelling correction were not used, since experiments showed that their contribution to model performance was insignificant even though Turkish is a highly agglutinative and inflective language. The results show that using deep learning methods with pretrained models and fine-tuning achieves about an 11-point improvement over SVM for OM. The BERT model achieved around 94% prediction accuracy, while the MUSE model achieved around 88% and SVM around 83%. The MUSE multilingual model shows better results than SVM, but it still performs worse than the monolingual BERT model.
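
For reference, the traditional baseline described above can be sketched in a few lines with scikit-learn; the tiny corpus is invented, and the tf-idf weighting is an assumption layered on the bag-of-n-grams representation named in the abstract.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical Turkish comments with polarity labels
texts = ["harika bir ürün", "fena değil", "çok kötü bir deneyim",
         "mükemmel hizmet", "berbat bir tecrübe"]
labels = ["positive", "neutral", "negative", "positive", "negative"]

# Bag of word n-grams (unigrams + bigrams) feeding a linear SVM
baseline = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
baseline.fit(texts, labels)
print(baseline.predict(["çok iyi bir ürün"]))
```

The modern counterpart would replace this pipeline with a fine-tuned monolingual BERT checkpoint, which is where the roughly 11-point accuracy gain reported above comes from.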

Keywords: BERT, MUSE, opinion mining, pretrained language model, SVM, Turkish

Procedia PDF Downloads 146
141 Multicenter Evaluation of the ACCESS HBsAg and ACCESS HBsAg Confirmatory Assays on the DxI 9000 ACCESS Immunoassay Analyzer, for the Detection of Hepatitis B Surface Antigen

Authors: Vanessa Roulet, Marc Turini, Juliane Hey, Stéphanie Bord-Romeu, Emilie Bonzom, Mahmoud Badawi, Mohammed-Amine Chakir, Valérie Simon, Vanessa Viotti, Jérémie Gautier, Françoise Le Boulaire, Catherine Coignard, Claire Vincent, Sandrine Greaume, Isabelle Voisin

Abstract:

Background: Beckman Coulter, Inc. has recently developed fully automated assays for the detection of HBsAg on a new immunoassay platform. The objective of this European multicenter study was to evaluate the performance of the ACCESS HBsAg and ACCESS HBsAg Confirmatory assays† on the recently CE-marked DxI 9000 ACCESS Immunoassay Analyzer. Methods: The clinical specificity of the ACCESS HBsAg and HBsAg Confirmatory assays was determined using HBsAg-negative samples from blood donors and hospitalized patients. The clinical sensitivity was determined using presumed HBsAg-positive samples. Sample HBsAg status was determined using a CE-marked HBsAg assay (Abbott ARCHITECT HBsAg Qualitative II, Roche Elecsys HBsAg II, or Abbott PRISM HBsAg assay) and a CE-marked HBsAg confirmatory assay (Abbott ARCHITECT HBsAg Qualitative II Confirmatory or Abbott PRISM HBsAg Confirmatory assay) according to the manufacturer package inserts and pre-determined testing algorithms. The false initial reactive rate was determined on fresh hospitalized patient samples. The sensitivity for the early detection of HBV infection was assessed internally on thirty (30) seroconversion panels. Results: Clinical specificity was 99.95% (95% CI, 99.86 – 99.99%) on 6047 blood donors and 99.71% (95% CI, 99.15 – 99.94%) on 1023 hospitalized patient samples. A total of six (6) samples were found false positive with the ACCESS HBsAg assay; none were confirmed for the presence of HBsAg with the ACCESS HBsAg Confirmatory assay. Clinical sensitivity on 455 HBsAg-positive samples was 100.00% (95% CI, 99.19 – 100.00%) for the ACCESS HBsAg assay alone and for the ACCESS HBsAg Confirmatory assay. The false initial reactive rate on 821 fresh hospitalized patient samples was 0.24% (95% CI, 0.03 – 0.87%). Results obtained on 30 seroconversion panels demonstrated that the ACCESS HBsAg assay had sensitivity equivalent to that of the Abbott ARCHITECT HBsAg Qualitative II assay, with an average difference of 0.13 bleeds since the first reactive bleed. All bleeds found reactive in the ACCESS HBsAg assay were confirmed in the ACCESS HBsAg Confirmatory assay. Conclusion: The newly developed ACCESS HBsAg and ACCESS HBsAg Confirmatory assays from Beckman Coulter have demonstrated high clinical sensitivity and specificity, equivalent to currently marketed HBsAg assays, as well as a low false initial reactive rate. †Pending achievement of CE compliance; not yet available for in vitro diagnostic use. 2023-11317 Beckman Coulter and the Beckman Coulter product and service marks mentioned herein are trademarks or registered trademarks of Beckman Coulter, Inc. in the United States and other countries. All other trademarks are the property of their respective owners.
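
As a reproducibility note, the quoted specificity and its 95% CI can be recomputed from the counts; the sketch below assumes an exact (Clopper-Pearson) interval and infers 3 false positives in the blood-donor cohort from the quoted 99.95%, both of which are assumptions rather than statements from the study.

```python
from scipy.stats import beta

def specificity_with_ci(true_neg, total_neg, alpha=0.05):
    """Point estimate plus exact (Clopper-Pearson) confidence interval."""
    x, n = true_neg, total_neg
    lo = beta.ppf(alpha / 2, x, n - x + 1) if x > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, x + 1, n - x) if x < n else 1.0
    return x / n, lo, hi

# Blood-donor cohort: 6047 samples, 3 false positives inferred from 99.95%
spec, lo, hi = specificity_with_ci(6047 - 3, 6047)
print(f"specificity = {spec:.2%} (95% CI {lo:.2%} - {hi:.2%})")
```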

Keywords: DxI 9000 ACCESS Immunoassay Analyzer, HBsAg, HBV, hepatitis B surface antigen, hepatitis B virus, immunoassay

Procedia PDF Downloads 90
140 3D Classification Optimization of Low-Density Airborne Light Detection and Ranging Point Cloud by Parameters Selection

Authors: Baha Eddine Aissou, Aichouche Belhadj Aissa

Abstract:

Light detection and ranging (LiDAR) is an active remote sensing technology used for several applications. Airborne LiDAR is becoming an important technology for the acquisition of highly accurate, dense point clouds. Classification of an airborne laser scanning (ALS) point cloud is a very important task that still remains a real challenge for many scientists. Support vector machine (SVM) is one of the most used statistical learning algorithms based on kernels. SVM is a non-parametric method, and it is recommended in cases where the data distribution cannot be well modeled by a standard parametric probability density function. Using a kernel, it performs robust non-linear classification of samples. Often, the data are rarely linearly separable. SVMs are able to map the data into a higher-dimensional space where they become linearly separable, while the kernel trick allows all computations to be performed in the original space. This is one of the main reasons that SVMs are well suited for high-dimensional classification problems. Only a few training samples, called support vectors, are required. SVM has also shown its potential to cope with uncertainty in data caused by noise and fluctuation, and it is computationally efficient compared to several other methods. Such properties are particularly suited to remote sensing classification problems and explain their recent adoption. In this poster, SVM classification of ALS LiDAR data is proposed. Firstly, connected component analysis is applied to cluster the point cloud. Secondly, the resulting clusters are fed into the SVM classifier. The radial basis function (RBF) kernel is used due to the small number of parameters (C and γ) that need to be chosen, which decreases the computation time. In order to optimize the classification rates, parameter selection is explored: it consists of finding the parameters (C and γ) leading to the best overall accuracy using grid search and 5-fold cross-validation, as sketched below. The exploited LiDAR point cloud is provided by the German Society for Photogrammetry, Remote Sensing, and Geoinformation. The ALS data used are characterized by a low density (4-6 points/m²) and cover an urban area located in residential parts of the city of Vaihingen in southern Germany. The ground class and three other classes belonging to roof superstructures are considered, i.e., a total of 4 classes. The training and test sets were selected randomly several times. The obtained results demonstrate that parameter selection can orient the search toward a restricted interval of (C, γ) worth further exploration, but does not systematically lead to the optimal rates. The SVM classifier with tuned hyper-parameters is compared with the classifiers most used in the literature for LiDAR data: random forest, AdaBoost, and decision tree. The comparison shows the superiority of the SVM classifier with parameter selection for LiDAR data over the other classifiers.
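A minimal sketch of the (C, γ) grid search with 5-fold cross-validation follows; the feature matrix is a random stand-in for the per-cluster features derived from the Vaihingen data, and the grid bounds are illustrative assumptions.

```python
# Sketch: (C, gamma) selection for an RBF-kernel SVM via grid search + 5-fold CV.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))         # stand-in for per-cluster features
y = rng.integers(0, 4, size=200)      # 4 classes: ground + 3 roof superstructure classes

param_grid = {"C": 10.0 ** np.arange(-2, 4),       # 0.01 ... 1000
              "gamma": 10.0 ** np.arange(-4, 2)}   # 1e-4 ... 10
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5, scoring="accuracy")
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```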

Keywords: classification, airborne LiDAR, parameters selection, support vector machine

Procedia PDF Downloads 147
139 Optimization of Structures with Mixed Integer Non-linear Programming (MINLP)

Authors: Stojan Kravanja, Andrej Ivanič, Tomaž Žula

Abstract:

This contribution focuses on structural optimization in civil engineering using mixed-integer non-linear programming (MINLP). MINLP is a versatile method that can handle both continuous and discrete optimization variables simultaneously. Continuous variables are used to optimize parameters such as dimensions, stresses, masses, or costs, while discrete variables represent binary decisions that determine the presence or absence of structural elements within a structure and also select discrete materials and standard sections. The optimization process is divided into three main steps. First, a mechanical superstructure is generated with a variety of different topology, material, and dimension alternatives. Next, a MINLP model is formulated to encapsulate the optimization problem. Finally, an optimal solution is sought in the direction of the defined objective function while respecting the structural constraints. The economic or mass objective function, covering the material and labor costs of a structure, is subjected to the constraints known from structural analysis. These constraints include equations for the calculation of internal forces and deflections, as well as equations for the dimensioning of structural components (in accordance with the Eurocode standards). Given the complex, non-convex and highly non-linear nature of optimization problems in civil engineering, the Modified Outer-Approximation/Equality-Relaxation (OA/ER) algorithm is applied. This algorithm alternately solves subproblems of non-linear programming (NLP) and main problems of mixed-integer linear programming (MILP), gradually refining the solution space up to the optimal solution. The NLP corresponds to the continuous optimization of parameters (with fixed topology, discrete materials, and standard dimensions, all determined in the previous MILP), while the MILP involves a global approximation to the superstructure of alternatives, in which a new topology, materials, and standard dimensions are determined. The optimization of a convex problem is stopped when the MILP solution becomes better than the best NLP solution; otherwise, it is terminated when the NLP solution can no longer be improved. While the OA/ER algorithm, like all other algorithms, does not guarantee global optimality in the presence of non-convex functions, various modifications, including convexity tests, are implemented in OA/ER to mitigate these difficulties. The effectiveness of the proposed MINLP approach is demonstrated by its application to various structural optimization tasks, such as mass optimization of steel buildings, cost optimization of timber halls, composite floor systems, etc. Special optimization models have been developed for the optimization of these structures. The MINLP optimizations, facilitated by the user-friendly software package MIPSYN, provide insights into mass- or cost-optimal solutions, optimal structural topologies, and optimal material and standard cross-section choices, confirming MINLP as a valuable method for the optimization of structures in civil engineering.
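As a hedged illustration of the NLP/MILP alternation, the toy Pyomo model below solves a small MINLP with MindtPy's outer-approximation strategy; the cost function, the binary "element present" decision, and the availability of the ipopt and glpk solvers are all assumptions, and this is not one of the MIPSYN models from the paper.

```python
# Toy MINLP: minimise a nonlinear cost with a binary element-presence decision.
# MindtPy's OA strategy mirrors the NLP/MILP alternation described above.
import pyomo.environ as pyo

m = pyo.ConcreteModel()
m.d = pyo.Var(bounds=(0.1, 2.0))     # continuous dimension, e.g. depth [m]
m.y = pyo.Var(domain=pyo.Binary)     # 1 if a stiffening element is present
m.cost = pyo.Objective(expr=100 * m.d ** 2 + 50 * m.y, sense=pyo.minimize)
# capacity-style constraint: the extra element relaxes the required dimension
m.capacity = pyo.Constraint(expr=m.d + 0.5 * m.y >= 1.0)

pyo.SolverFactory("mindtpy").solve(m, strategy="OA",
                                   mip_solver="glpk", nlp_solver="ipopt")
print(pyo.value(m.d), pyo.value(m.y), pyo.value(m.cost))  # optimum: d=0.5, y=1
```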

Keywords: MINLP, mixed-integer non-linear programming, optimization, structures

Procedia PDF Downloads 46
138 An Evidence-Based Laboratory Medicine (EBLM) Test to Help Doctors in the Assessment of the Pancreatic Endocrine Function

Authors: Sergio J. Calleja, Adria Roca, José D. Santotoribio

Abstract:

Pancreatic endocrine diseases include pathologies like insulin resistance (IR), prediabetes, and type 2 diabetes mellitus (DM2). Some of them are highly prevalent in the U.S.: 40% of U.S. adults have IR, 38% have prediabetes, and 12% have DM2, as reported by the National Center for Biotechnology Information (NCBI). Building upon this imperative, the objective of the present study was to develop a non-invasive test for the assessment of the patient’s pancreatic endocrine function and to evaluate its accuracy in detecting various pancreatic endocrine diseases, such as IR, prediabetes, and DM2. This approach to a routine blood and urine test is based on serum and urine biomarkers and combines several independent public algorithms, such as the Adult Treatment Panel III (ATP-III) criteria, the triglycerides and glucose (TyG) index, the homeostasis model assessment of insulin resistance (HOMA-IR), HOMA-2, and the quantitative insulin-sensitivity check index (QUICKI). Additionally, it incorporates essential measurements such as creatinine clearance, estimated glomerular filtration rate (eGFR), urine albumin-to-creatinine ratio (ACR), and urinalysis, which help to build a full picture of the patient’s pancreatic endocrine disease. To evaluate the estimated accuracy of this test, an iterative process was performed by a machine learning (ML) algorithm with a training set of 9,391 patients. The sensitivity achieved was 97.98% and the specificity was 99.13%; the area under the receiver operating characteristic (AUROC) curve, the positive predictive value (PPV), and the negative predictive value (NPV) were 92.48%, 99.12%, and 98.00%, respectively. The algorithm was validated with a randomized controlled trial (RCT) with a target sample size (n) of 314 patients. However, 50 patients were initially excluded from the study because they had ongoing clinically diagnosed pathologies, symptoms, or signs, so n dropped to 264 patients. Then, 110 patients were excluded because they did not attend any of the follow-up visits at the clinical facility (a critical point to improve for the upcoming RCT, since the cost per patient is very high and almost a third of the patients already tested were lost), so the new n consisted of 154 patients. After that, 2 patients were excluded because some of their laboratory parameters and/or clinical information were erroneous, leaving a final n of 152 patients. In this validation set, the results were: 100.00% sensitivity, 100.00% specificity, 100.00% AUROC, 100.00% PPV, and 100.00% NPV. These results suggest that this approach to a routine blood and urine test holds promise in providing timely and accurate diagnoses of pancreatic endocrine diseases, particularly among individuals aged 40 and above. Given the current epidemiological state of these diseases, the findings underscore the significance of early detection. Furthermore, they advocate for further exploration, prompting the intention to conduct a clinical trial involving 26,000 participants (from March 2025 to December 2026).
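The public insulin-resistance indices named above follow from their published formulas; the sketch below reproduces them. The cut-offs and the way the authors' test fuses the indices are not disclosed in the abstract, so none of that is reproduced here.

```python
# Sketch: the public insulin-resistance indices combined by the test.
import math

def homa_ir(glucose_mg_dl: float, insulin_uU_ml: float) -> float:
    # HOMA-IR: fasting glucose [mg/dL] x fasting insulin [uU/mL] / 405
    return glucose_mg_dl * insulin_uU_ml / 405.0

def quicki(glucose_mg_dl: float, insulin_uU_ml: float) -> float:
    # QUICKI: 1 / (log10 insulin + log10 glucose)
    return 1.0 / (math.log10(insulin_uU_ml) + math.log10(glucose_mg_dl))

def tyg(glucose_mg_dl: float, triglycerides_mg_dl: float) -> float:
    # TyG index: ln(triglycerides x glucose / 2)
    return math.log(triglycerides_mg_dl * glucose_mg_dl / 2.0)

# illustrative fasting values: glucose 100 mg/dL, insulin 10 uU/mL, TG 150 mg/dL
print(homa_ir(100, 10), quicki(100, 10), tyg(100, 150))
```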

Keywords: algorithm, diabetes, laboratory medicine, non-invasive

Procedia PDF Downloads 32
137 Assessing the Efficiency of Pre-Hospital Scoring System with Conventional Coagulation Tests Based Definition of Acute Traumatic Coagulopathy

Authors: Venencia Albert, Arulselvi Subramanian, Hara Prasad Pati, Asok K. Mukhophadhyay

Abstract:

Acute traumatic coagulopathy is an endogenous dysregulation of the intrinsic coagulation system in response to injury, associated with a three-fold risk of poor outcome, and is more amenable to corrective interventions subsequent to early identification and management. Multiple definitions for stratification of patients' risk of early acute coagulopathy have been proposed, with considerable variation in the defining criteria, including several trauma-scoring systems based on prehospital data. We aimed to develop a clinically relevant definition of acute coagulopathy of trauma based on conventional coagulation assays and to assess its efficacy in comparison to recently established prehospital prediction models. Methodology: Retrospective data of all trauma patients (n = 490) presented to our level I trauma center in 2014 were extracted. Receiver operating characteristic curve analysis was done to establish cut-offs for the conventional coagulation assays used to identify patients with acute traumatic coagulopathy. Prospectively, data of 100 adult trauma patients were collected; the cohort was stratified by the established definition, classified as "coagulopathic" or "non-coagulopathic", and correlated with the prediction of acute coagulopathy of trauma score and the trauma-induced coagulopathy clinical score for identifying trauma coagulopathy and the subsequent risk of mortality. Results: Data of 490 trauma patients (average age 31.85±9.04; 86.7% males) were extracted. 53.3% had head injury, 26.6% had fractures, and 7.5% had chest and abdominal injury. Acute traumatic coagulopathy was defined as international normalized ratio ≥ 1.19; prothrombin time ≥ 15.5 s; activated partial thromboplastin time ≥ 29 s. Of the 100 adult trauma patients (average age 36.5±14.2; 94% males), 63% had early coagulopathy based on our conventional coagulation assay definition. The overall prediction of acute coagulopathy of trauma score was 118.7±58.5, and the trauma-induced coagulopathy clinical score was 3 (0-8). Both scores were higher in coagulopathic than non-coagulopathic patients (prediction of acute coagulopathy of trauma score 123.2±8.3 vs. 110.9±6.8, p-value = 0.31; trauma-induced coagulopathy clinical score 4 (3-8) vs. 3 (0-8), p-value = 0.89), but not statistically significantly so. Overall mortality was 41%. The mortality rate was significantly higher in coagulopathic than non-coagulopathic patients (75.5% vs. 54.2%, p-value = 0.04). A high prediction of acute coagulopathy of trauma score was also significantly associated with mortality (134.2±9.95 vs. 107.8±6.82, p-value = 0.02), whereas the trauma-induced coagulopathy clinical score did not vary between survivors and non-survivors. Conclusion: Early coagulopathy was seen in 63% of trauma patients and was significantly associated with mortality. Acute traumatic coagulopathy defined by conventional coagulation assays (international normalized ratio ≥ 1.19; prothrombin time ≥ 15.5 s; activated partial thromboplastin time ≥ 29 s) demonstrated good ability to identify coagulopathy and subsequent mortality in comparison to the prehospital parameter-based scoring systems. The prediction of acute coagulopathy of trauma score may be better suited to predicting mortality rather than early coagulopathy. In emergency trauma situations, where immediate corrective measures need to be taken, complex multivariable scoring algorithms may cause delay, whereas coagulation parameters and conventional coagulation tests give highly specific results.
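A hedged sketch of how an assay cut-off such as INR ≥ 1.19 can be derived from ROC analysis follows, using Youden's J statistic; the INR values and outcome labels are synthetic, not the study's retrospective data, and the abstract does not state which optimality criterion the authors used.

```python
# Sketch: deriving an assay cut-off from a ROC curve via Youden's J.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(1)
coagulopathic = rng.normal(1.4, 0.2, 60)    # synthetic INR, affected patients
normal = rng.normal(1.05, 0.1, 120)         # synthetic INR, unaffected patients
inr = np.concatenate([coagulopathic, normal])
truth = np.concatenate([np.ones(60), np.zeros(120)])

fpr, tpr, thresholds = roc_curve(truth, inr)
best = np.argmax(tpr - fpr)                 # Youden's J = sensitivity + specificity - 1
print(f"optimal INR cut-off ~ {thresholds[best]:.2f}")
```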

Keywords: trauma, coagulopathy, prediction, model

Procedia PDF Downloads 176
136 Development of an Interface between BIM-model and an AI-based Control System for Building Facades with Integrated PV Technology

Authors: Moser Stephan, Lukasser Gerald, Weitlaner Robert

Abstract:

Urban structures will be used more intensively in the future through redensification or newly planned districts with high building densities. In particular, to achieve positive energy balances such as those required for Positive Energy Districts (PED), the use of roofs alone is not sufficient in dense urban areas, while the increasing share of window area significantly reduces the facade area available for PV generation. Through the use of PV technology on other building components, such as external venetian blinds, on-site generation can be maximized and the standard functionalities of this product can be usefully extended. While offering advantages in terms of infrastructure, sustainable use of resources, and efficiency, these systems require increased optimization in the planning and control strategies of buildings. External venetian blinds with PV technology require an intelligent control concept to meet demands such as maximum power generation, glare prevention, high daylight autonomy, and avoidance of summer overheating, but also the use of passive solar gains in wintertime. Today, three-dimensional geometric information about outdoor spaces and at the building level is available for planning with Building Information Modeling (BIM). In a research project, a web application called HELLA DECART was developed to extract the data required for simulation from BIM models and make it usable for calculations and coupled simulations. The investigated object is uploaded to this web application as an IFC file, which includes the object as well as the neighboring buildings and possible remote shading. The tool uses a ray-tracing method to determine possible glare from solar reflections off a neighboring building, as well as near and far shadows per window on the object (the sketch below illustrates the basic shadow test). Subsequently, an annual estimate of the sunlight per window is calculated, taking weather data into account. This optimized per-window daylight assessment makes it possible to estimate the potential power generation of the PV integrated in the venetian blind, as well as the daylight and solar entry. As a next step, these calculation results and all parameters necessary for the thermal simulation can be provided. The overall aim of this workflow is to advance the coordination between the BIM model and the coupled building simulation, combining the resulting shading and daylighting system with the artificial lighting system and maximum power generation in a control system. In the research project Powershade, an AI-based control concept for PV-integrated facade elements with coupled simulation results is investigated. The automated workflow concept developed in this paper is tested using an office living lab at the HELLA company.
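The shadow test at the core of such a ray-tracing step can be illustrated with a minimal slab test against an axis-aligned box; this is an invented sketch, not the HELLA DECART implementation, and real building footprints are of course not simple boxes.

```python
# Sketch: is the ray from a window point toward the sun blocked by a
# neighbouring building modelled as an axis-aligned bounding box (AABB)?
import numpy as np

def ray_hits_aabb(origin, direction, box_min, box_max):
    """Slab test: does the ray origin + t*direction (t > 0) enter the box?"""
    direction = np.where(direction == 0, 1e-12, direction)  # avoid division by zero
    t1 = (box_min - origin) / direction
    t2 = (box_max - origin) / direction
    t_near = np.max(np.minimum(t1, t2))
    t_far = np.min(np.maximum(t1, t2))
    return t_far >= max(t_near, 0.0)

window_point = np.array([0.0, 0.0, 5.0])              # point on the facade [m]
sun_dir = np.array([0.5, 0.5, 0.7])                   # direction toward the sun
neighbour = (np.array([3.0, 3.0, 0.0]), np.array([6.0, 6.0, 20.0]))
print("shaded" if ray_hits_aabb(window_point, sun_dir, *neighbour) else "sunlit")
```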

Keywords: BIPV, building simulation, optimized control strategy, planning tool

Procedia PDF Downloads 110
135 Prevention of Preterm Birth and Management of Uterine Contractions with Traditional Korean Medicine: Integrative Approach

Authors: Eun-Seop Kim, Eun-Ha Jang, Rana R. Kim, Sae-Byul Jang

Abstract:

Objective: Preterm labor is the most common antecedent of preterm birth (PTB) and is characterized by regular uterine contractions before 37 weeks of pregnancy and cervical change. In acute preterm labor, tocolytics are administered as the first-line medication to suppress uterine contractions but rarely delay pregnancy to 37 weeks of gestation. On the other hand, according to Traditional Korean Medicine, PTB is caused by a deficiency of Qi and unnecessary energy in the body of the mother. The aim of this study was to demonstrate the benefit of Traditional Korean Medicine (TKM) as an adjuvant therapy in the management of early uterine contractions and the prevention of PTB. Methods: This is a case report of a 38-year-old woman (0-0-6-0) hospitalized for irregular uterine contractions and cervical change at 33+3/7 weeks of gestation. Her past history includes chemical pregnancies achieved by assisted reproductive technology (ART), one stillbirth (at 7 weeks), and a laparoscopic surgery for endometriosis. After seven trials of IVF and artificial insemination, she had succeeded in conception via in-vitro fertilization (IVF) with the help of TKM treatments. Due to irregular uterine contractions and cervical changes, two TKM preparations were prescribed: Gami-Dangguisan and Antae-eum, known to nourish blood and clear away heat. 120 ml of Gami-Dangguisan was given twice a day, morning and evening, along with the same amount of Antae-eum once a day from 31 August 2013 to 28 November 2013. A tocolytic (ritodrine) was administered as first aid for maintenance of the pregnancy. Information regarding progress until delivery was collected during the patient’s visits. Results: On admission, a cervix of 15 mm in length and a cervical os dilated by 0.5 cm were observed via ultrasonography. 50% cervical effacement was also detected on physical examination. Tocolysis was temporarily maintained, and the TKM herbal preparations (Gami-Dangguisan and Antae-eum) were concomitantly given as supportive therapy. As of 34+2/7 weeks of gestation, however, intermittent uterine contractions appeared (5-12 min) on cardiotocography, and vaginal bleeding was also smeared at 34+3/7 weeks. However, enhanced tocolytics and continuous administration of the herbal medicine sustained the pregnancy to term. At 37+2/7 weeks, no sign of labor and a restored cervical length were confirmed. The woman gave birth at term to a healthy infant via vaginal delivery at 39+3/7 gestational weeks. Conclusions: This is the first successful case report of a preterm labor patient administered conventional tocolytic agents as well as TKM herbal decoctions, delaying delivery to term. This case deserves attention considering that it is rare to maintain gestation to term with tocolytic intervention alone. Our report implies the potential of herbal medicine as an adjuvant therapy in preterm labor treatment. Further studies are needed to assess the safety and efficacy of TKM herbal medicine as a therapeutic alternative for preventing preterm birth.

Keywords: preterm labor, traditional Korean medicine, herbal medicine, integrative treatment, complementary and alternative medicine

Procedia PDF Downloads 371
134 Enhancing the Performance of Automatic Logistic Centers by Optimizing the Assignment of Material Flows to Workstations and Flow Racks

Authors: Sharon Hovav, Ilya Levner, Oren Nahum, Istvan Szabo

Abstract:

In modern large-scale logistic centers (e.g., big automated warehouses), complex logistic operations performed by human staff (pickers) need to be coordinated with the operations of automated facilities (robots, conveyors, cranes, lifts, flow racks, etc.). The efficiency of advanced logistic centers strongly depends on optimizing picking technologies in sync with the facility/product layout, as well as on the optimal distribution of material flows (products) in the system. The challenge is to develop a mathematical operations research (OR) tool that will optimize system cost-effectiveness. In this work, we propose a model that describes an automatic logistic center consisting of a set of workstations located on several galleries (floors), with each station containing a known number of flow racks. The requirements of each product and the working capacity of stations served by a given set of workers (pickers) are assumed to be predetermined. The goal of the model is to maximize system efficiency. The proposed model includes two echelons. The first is the setting of the (optimal) number of workstations needed to create the total processing/logistic system, subject to picker capacities. The second echelon deals with the assignment of products to workstations and flow racks, aimed at achieving maximal throughput of picked products over the entire system given picker capacities and budget constraints. The solutions to the problems at the two echelons interact to balance the overall load on the flow racks and maximize overall efficiency. We have developed an operations research model within each echelon. In the first echelon, the problem of calculating the optimal number of workstations is formulated as a non-standard bin-packing problem with capacity constraints for each bin (a simple packing heuristic of this flavor is sketched below). The problem arising in the second echelon is presented as a constrained product-workstation-flow rack assignment problem with non-standard mini-max criteria, in which the workload maximum is calculated across all workstations in the center and the exterior minimum is calculated across all possible product-workstation-flow rack assignments. The OR problems arising in each echelon are proved to be NP-hard. Consequently, we develop heuristic and approximation solution algorithms based on exploiting and improving local optima. The logistic center (LC) model considered in this work is highly dynamic and is recalculated periodically based on updated demand forecasts that reflect market trends, technological changes, seasonality, and the introduction of new items. The suggested two-echelon approach and the min-max balancing scheme are shown to work effectively on illustrative examples and real-life logistic data.
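The first-echelon packing idea can be conveyed with a plain first-fit-decreasing heuristic; the paper's actual algorithms address a non-standard bin-packing variant with additional constraints, so the sketch below (with invented loads and capacity) only illustrates sizing the number of workstations from picker capacity.

```python
# Sketch: first-fit-decreasing bin packing as a stand-in for echelon one.
def first_fit_decreasing(loads, capacity):
    """Assign product loads to the fewest workstations of equal picker capacity."""
    stations = []    # remaining capacity per opened workstation
    assignment = []  # (load, station index) pairs
    for load in sorted(loads, reverse=True):
        for i, remaining in enumerate(stations):
            if load <= remaining:
                stations[i] -= load
                assignment.append((load, i))
                break
        else:  # no open station fits: open a new one
            stations.append(capacity - load)
            assignment.append((load, len(stations) - 1))
    return len(stations), assignment

n_stations, plan = first_fit_decreasing([7, 5, 4, 3, 2, 2, 1], capacity=10)
print(n_stations, plan)  # 3 workstations suffice for these toy loads
```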

Keywords: logistics center, product-workstation, assignment, maximum performance, load balancing, fast algorithm

Procedia PDF Downloads 228
133 Exploring the Dose-Response Association of Lifestyle Behaviors and Mental Health among High School Students in the US: A Secondary Analysis of 2021 Adolescent Behaviors and Experiences Survey Data

Authors: Layla Haidar, Shari Esquenazi-Karonika

Abstract:

Introduction: Mental health includes one’s emotional, psychological, and interpersonal well-being; it ranges from “good” to “poor” on a continuum. At the individual level, it affects how a person thinks, feels, and acts. Moreover, it determines how they cope with stress, relate to others, and interface with their surroundings. Research has shown that mental health is directly related to short- and long-term physical health (including chronic disease), health risk behaviors, education level, employment, and social relationships. As is the case with physical conditions like diabetes, heart disease, and cancer, mitigating the behavioral and genetic risks of debilitating mental health conditions like anxiety and depression can nurture a healthier quality of mental health throughout one’s life. In order to maximize the benefits of prevention, it is important to identify modifiable risks and develop protective habits earlier in life. Methods: The Adolescent Behaviors and Experiences Survey (ABES) dataset was used for this study. The ABES survey was administered to high school students (9th-12th grade) during January 2021 - June 2021 by the Centers for Disease Control and Prevention (CDC). The data were analyzed to identify any associations of feelings of sadness, hopelessness, or increased suicidality among high school students with their participation on one or more sports teams and their average daily screen time. Data were analyzed using descriptive and multivariable analytic techniques. A multinomial logistic regression of each variable was conducted to examine whether there was an association, while controlling for grade level, sex, and race (a sketch of this kind of model appears below). Results: The findings from this study are insightful for administrators and policymakers who wish to address mounting concerns related to student mental health. The study revealed that, compared to students who participated on zero sports teams, students who participated on one or more sports teams showed a significantly increased risk of depression (p<0.05). Conversely, the rate of depression was significantly lower in students who consumed 5 or more hours of screen time per day, compared to those who consumed less than 1 hour per day (p<0.05). Conclusion: These findings are informative and highlight the importance of understanding the nuances of student participation on sports teams (e.g., physical exertion, the social dynamics of the team, and the level of competitiveness within the sport). Likewise, the context of an individual’s screen time (e.g., social media, engaging in team-based video games, or watching television) can inform parental or school-based policies about screen time activity. Although physical activity has been proven to be important for the emotional and physical well-being of youth, playing on multiple teams could have negative consequences on the emotional state of high school students, potentially due to fatigue, overtraining, and injuries. Existing literature has highlighted the negative effects of screen time; however, further research needs to consider the type of screen-based consumption to better understand its effects on mental health.
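A hedged sketch of the kind of adjusted multinomial model described above, using statsmodels; the data frame is simulated, the variable names are invented, and the real ABES analysis additionally involves the survey's weighting and coding schemes, which are omitted here.

```python
# Sketch: multinomial logit of a 3-level mental health outcome on sports-team
# participation and screen time, adjusting for grade, sex, and race.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 500
df = pd.DataFrame({
    "mh_code": rng.integers(0, 3, n),        # 0=none, 1=sad/hopeless, 2=suicidality
    "sports_teams": rng.integers(0, 4, n),   # number of teams (0-3)
    "screen_hours": rng.choice([0.5, 2.0, 4.0, 6.0], n),
    "grade": rng.integers(9, 13, n),
    "sex": rng.choice(["F", "M"], n),
    "race": rng.choice(["A", "B", "C"], n),
})

model = smf.mnlogit("mh_code ~ C(sports_teams) + screen_hours"
                    " + C(grade) + C(sex) + C(race)", data=df).fit(disp=False)
print(model.summary())
```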

Keywords: behavioral science, mental health, adolescents, prevention

Procedia PDF Downloads 105
132 Sheep Pox Virus Recombinant Proteins To Develop Subunit Vaccines

Authors: Olga V. Chervyakova, Elmira T. Tailakova, Vitaliy M. Strochkov, Kulyaisan T. Sultankulova, Nurlan T. Sandybayev, Lev G. Nemchinov, Rosemarie W. Hammond

Abstract:

Sheep pox is a highly contagious infection that the OIE regards as one of the most dangerous animal diseases. It causes enormous economic losses because of the death and slaughter of infected animals, lower productivity, and the cost of veterinary, sanitary, and quarantine measures. To control the spread of sheep pox infection, attenuated vaccines are widely used in the Republic of Kazakhstan and other Former Soviet Union countries. In spite of the high efficiency of live vaccines, the possible presence of residual virulence and potential genetic instability restrict their use in disease-free areas, which makes it necessary to exploit new approaches to vaccine development involving recombinant DNA technology. Vaccines on the basis of recombinant proteins are the newest generation of prophylactic preparations. The main advantage of these vaccines is their low reactogenicity, which makes them widely used in medical and veterinary practice for the vaccination of humans and farm animals. The objective of the study is to produce recombinant immunogenic proteins for the development of high-performance means of sheep pox prophylaxis. The SPV proteins were chosen for their homology with known immunogenic vaccinia virus proteins. The nucleotide and amino acid sequences of the target SPV protein genes were assayed. It has been shown that four proteins, SPPV060 (ortholog L1), SPPV074 (ortholog H3), SPPV122 (ortholog A33) and SPPV141 (ortholog B5), possess transmembrane domains at the N- or C-terminus, while in the amino acid sequences of the SPPV095 (ortholog A4) and SPPV117 (ortholog A27) proteins these domains were absent (a standard hydropathy-based screen for such domains is sketched below). On the basis of these findings, primers were constructed. Target genes were amplified and subsequently cloned into the expression vector pET26b(+) or pET28b(+). Six constructs (pSPPV060ΔTM, pSPPV074ΔTM, pSPPV095, pSPPV117, pSPPV122ΔTM and pSPPV141ΔTM) were obtained for expression of the SPV genes under the control of the T7 promoter in Escherichia coli. To purify and detect the recombinant proteins, the amino acid sequences were modified by adding six histidine residues at the C-terminus. Induction of gene expression by IPTG resulted in the production of proteins with molecular weights corresponding to the estimated values for SPPV060, SPPV074, SPPV095, SPPV117, SPPV122 and SPPV141, i.e., 22, 30, 20, 19, 17 and 22 kDa, respectively. An optimal expression protocol ensuring a high yield of the recombinant protein was identified for each gene. Assay of cellular lysates by western blotting confirmed the expression of the target proteins. The recombinant proteins bind specifically to antibodies against polyhistidine; moreover, all the produced proteins are specifically recognized by serum from experimentally SPV-infected sheep. The recombinant proteins SPPV060, SPPV074, SPPV117, SPPV122 and SPPV141 were also shown to induce the formation of antibodies with virus-neutralizing activity. The results of the research will help to develop new-generation, high-performance means for specific sheep pox prophylaxis, which is one of the key elements of animal health protection. The research was conducted under the International project ISTC # K-1704 “Development of methods to construct recombinant prophylactic means for sheep pox with use of transgenic plants” and under the Grant Project RK MES G.2015/0115RK01983 “Recombinant vaccine for sheep pox prophylaxis”.
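One standard way to flag candidate transmembrane segments like those reported for SPPV060, SPPV074, SPPV122 and SPPV141 is a Kyte-Doolittle hydropathy scan; the abstract does not say which method the authors used, so the window length, cutoff, and toy sequence below are assumptions.

```python
# Sketch: Kyte-Doolittle hydropathy scan for candidate transmembrane segments.
KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5,
      "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
      "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
      "Y": -1.3, "V": 4.2}   # Kyte-Doolittle hydropathy values per residue

def hydropathy_windows(seq: str, window: int = 19, cutoff: float = 1.6):
    """Yield (start, mean hydropathy) for windows above the TM-like cutoff."""
    for i in range(len(seq) - window + 1):
        score = sum(KD[a] for a in seq[i:i + window]) / window
        if score >= cutoff:
            yield i, score

toy = "MKT" + "LIVFA" * 5 + "RKDE" * 6   # invented sequence, not an SPV protein
for start, score in hydropathy_windows(toy):
    print(f"possible TM segment starting at residue {start}: {score:.2f}")
```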

Keywords: prophylactic preparation, recombinant protein, sheep pox virus, subunit vaccine

Procedia PDF Downloads 242
131 A Smart Sensor Network Approach Using Affordable River Water Level Sensors

Authors: Dian Zhang, Brendan Heery, Maria O’Neill, Ciprian Briciu-Burghina, Noel E. O’Connor, Fiona Regan

Abstract:

Recent developments in sensors, wireless data communication, and cloud computing have brought the sensor web to a whole new generation. The introduction of the concept of the ‘Internet of Things (IoT)’ has taken sensor research to a new level, which involves the development of long-lasting, low-cost, environmentally friendly smart sensors; new wireless data communication technologies; and big data analytics algorithms and cloud-based solutions tailored to large-scale smart sensor networks. The next generation of smart sensor networks consists of several layers: the physical layer, where all the smart sensors reside and data pre-processing occurs, either on the sensor itself or on the field gateway; the data transmission layer, where data and instruction exchanges happen; and the data processing layer, where meaningful information is extracted and organized from the pre-processed data stream. There are many definitions of a smart sensor; to summarize them, a smart sensor must be intelligent and adaptable. In future large-scale sensor networks, the collected data are far too large for traditional applications to send, store, or process, so the sensor unit must be intelligent enough to pre-process collected data locally on board (this process may occur on the field gateway, depending on the sensor network structure). In this case study, three smart sensing methods, corresponding to simple thresholding, a statistical model, and the machine learning based MoPBAS method, are introduced, and their strengths and weaknesses are discussed as an introduction to the smart sensing concept (a toy thresholding node is sketched below). Data fusion, the integration of data and knowledge from multiple sources, is a key component of the next generation smart sensor network. For example, in the water level monitoring system, a weather forecast can be obtained from external sources, and if heavy rainfall is expected, the server can send instructions to the sensor nodes to, for instance, increase the sampling rate or, conversely, switch on sleeping mode. In this paper, we describe the deployment of 11 affordable water level sensors in the Dublin catchment. The objective of this paper is to use the deployed river level sensor network in the Dodder catchment in Dublin, Ireland as a case study to give a vision of the next generation of smart sensor networks for flood monitoring, to assist agencies in making decisions about deploying resources in the case of a severe flood event. Some of the deployed sensors are located alongside traditional water level sensors for validation purposes. Each key component of the smart sensor network is discussed, which will hopefully inspire researchers working in the sensor research domain.
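The "simple thresholding" method can be conveyed with a toy node that pre-processes readings on board and only uses the radio when the level departs from a rolling baseline; the window size, alert threshold, and sampling intervals below are invented for illustration.

```python
# Sketch: an on-board thresholding node with adaptive sampling rate.
from collections import deque
from statistics import mean

class WaterLevelNode:
    def __init__(self, window=12, alert_delta=0.30):
        self.history = deque(maxlen=window)   # recent levels [m]
        self.alert_delta = alert_delta        # rise that triggers an alert [m]
        self.sampling_s = 900                 # default: one sample / 15 min

    def ingest(self, level_m: float):
        baseline = mean(self.history) if self.history else level_m
        self.history.append(level_m)
        if level_m - baseline > self.alert_delta:
            self.sampling_s = 60              # flood risk: sample every minute
            return ("TRANSMIT", level_m)      # only anomalies use the radio
        self.sampling_s = 900
        return ("LOCAL", level_m)             # routine readings stay on board

node = WaterLevelNode()
for reading in [0.50, 0.52, 0.51, 0.95]:      # last reading simulates a surge
    print(node.ingest(reading))
```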

Keywords: smart sensing, internet of things, water level sensor, flooding

Procedia PDF Downloads 381
130 Contribution to the Study of Automatic Epileptiform Pattern Recognition in Long Term EEG Signals

Authors: Christine F. Boos, Fernando M. Azevedo

Abstract:

Electroencephalogram (EEG) is a record of the electrical activity of the brain that has many applications, such as monitoring alertness, coma, and brain death; locating damaged areas of the brain after head injury, stroke, and tumor; monitoring anesthesia depth; researching physiology and sleep disorders; and researching epilepsy and localizing the seizure focus. Epilepsy is a chronic condition, or a group of diseases of high prevalence, still poorly explained by science and whose diagnosis is still predominantly clinical. The EEG recording is considered an important test for epilepsy investigation, and its visual analysis is very often applied for clinical confirmation of the epilepsy diagnosis. Moreover, this EEG analysis can also be used to help define the type of epileptic syndrome, determine the epileptiform zone, assist in the planning of drug treatment, and provide additional information about the feasibility of surgical intervention. In the context of diagnosis confirmation, the analysis is made using long-term EEG recordings, at least 24 hours long and acquired with a minimum of 24 electrodes, in which neurophysiologists perform a thorough visual evaluation of EEG screens in search of specific electrographic patterns called epileptiform discharges. Considering that an EEG screen usually displays 10 seconds of the recording, the neurophysiologist has to evaluate 360 screens per hour of EEG, or a minimum of 8,640 screens per long-term EEG recording. Analyzing thousands of EEG screens in search of patterns that have a maximum duration of 200 ms is a very time-consuming, complex, and exhausting task. Because of this, over the years several studies have proposed automated methodologies that could facilitate the neurophysiologists’ task of identifying epileptiform discharges, and a large number of these methodologies used neural networks for the pattern classification. One of the differences between all of these methodologies is the type of input stimuli presented to the networks, i.e., how the EEG signal is introduced to the network. Five types of input stimuli are commonly found in the literature: the raw EEG signal, morphological descriptors (i.e., parameters related to the signal’s morphology), the Fast Fourier Transform (FFT) spectrum, Short-Time Fourier Transform (STFT) spectrograms, and Wavelet Transform features. This study evaluates the application of these five types of input stimuli and compares the classification results of neural networks implemented using each of these inputs. The performance using the raw signal varied between 43 and 84% efficiency. The results for the FFT spectrum and STFT spectrograms were quite similar, with average efficiencies of 73 and 77%, respectively. The efficiency of Wavelet Transform features varied between 57 and 81%, while the descriptors presented efficiency values between 62 and 93%. After the simulations, we observed that the best results were achieved when either morphological descriptors or Wavelet features were used as input stimuli.
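Two of the five input-stimulus types compared above, the FFT spectrum and Wavelet Transform features, can be sketched for a synthetic one-second epoch as follows; PyWavelets, the db4 mother wavelet, and sub-band energies are assumed choices, since the study does not specify them here.

```python
# Sketch: FFT-spectrum and wavelet-feature extraction from a synthetic epoch.
import numpy as np
import pywt

fs = 256                                   # assumed sampling rate [Hz]
t = np.arange(fs) / fs
epoch = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(fs)  # toy 10 Hz "EEG"

fft_features = np.abs(np.fft.rfft(epoch))          # magnitude spectrum
coeffs = pywt.wavedec(epoch, "db4", level=4)       # approximation + detail bands
wavelet_features = np.array([np.sum(c ** 2) for c in coeffs])  # sub-band energies

print(fft_features.shape, wavelet_features)
```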

Keywords: artificial neural network, electroencephalogram signal, pattern recognition, signal processing

Procedia PDF Downloads 528
129 The Effect of Soil-Structure Interaction on the Post-Earthquake Fire Performance of Structures

Authors: A. T. Al-Isawi, P. E. F. Collins

Abstract:

The behaviour of structures exposed to fire after an earthquake is not a new area of engineering research, but there remain a number of areas where further work is required. Such areas relate to the way in which seismic excitation is applied to a structure, taking into account the effect of soil-structure interaction (SSI) and the method of analysis, in addition to identifying the excitation load properties. The selection of earthquake data input for use in nonlinear analysis and the method of analysis are still challenging issues. Thus, realistic artificial ground motion input data must be developed to certify that the site property parameters adequately describe the effects of the nonlinear inelastic behaviour of the system and that the characteristics of these parameters are coherent with the characteristics of the target parameters. Conversely, ignoring the significance of some attributes, such as frequency content, soil site properties, and earthquake parameters, may lead to misleading results, due to the misinterpretation of the required input data and the incorrect synthesis of the analysis hypotheses. This paper presents a study of the post-earthquake fire (PEF) performance of a multi-storey steel-framed building resting on soft clay, taking into account the effects of the nonlinear inelastic behaviour of the structure and soil, and the soil-structure interaction (SSI). Structures subjected to an earthquake may experience various levels of damage: geometrical damage, which indicates the change in the initial structure’s geometry due to residual deformation as a result of plastic behaviour, and mechanical damage, which identifies the degradation of the mechanical properties of the structural elements involved in the plastic range of deformation. Consequently, the structure presumably experiences partial structural damage but is then exposed to fire under its new residual material properties, which may result in building failure caused by a decrease in fire resistance. This scenario becomes more complicated if SSI is also considered. Indeed, most earthquake design codes ignore the probability of PEF as well as the effect that SSI has on the behaviour of structures, in order to simplify the analysis procedure. Therefore, the design of structures based on existing codes that neglect the importance of PEF and SSI carries a significant risk of structural failure. In order to examine the criteria for the behaviour of a structure under PEF conditions, a two-dimensional nonlinear elasto-plastic model is developed using ABAQUS software; the effects of SSI are included. Both geometrical and mechanical damage are taken into account after the earthquake analysis step. For comparison, an identical model is also created which does not include the effects of soil-structure interaction. It is shown that damage to structural elements is underestimated if SSI is not included in the analysis, and the maximum percentage reduction in fire resistance is detected in the case where SSI is included in the scenario. The results are validated against the literature.
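For background on why residual damage plus fire is critical for steel frames, the sketch below interpolates the effective yield strength reduction factors for carbon steel at elevated temperature from EN 1993-1-2 (Table 3.1); this is illustrative background material, not part of the paper's ABAQUS model.

```python
# Sketch: Eurocode 3 (EN 1993-1-2, Table 3.1) effective yield strength
# reduction factors k_y for carbon steel, with linear interpolation.
import numpy as np

temps_C = np.array([20, 100, 200, 300, 400, 500, 600, 700, 800, 900])
k_y = np.array([1.00, 1.00, 1.00, 1.00, 1.00, 0.78, 0.47, 0.23, 0.11, 0.06])

def yield_fraction(theta_C: float) -> float:
    """Fraction of ambient yield strength retained at steel temperature theta."""
    return float(np.interp(theta_C, temps_C, k_y))

for theta in (450, 550, 650):
    print(f"{theta} C: f_y reduced to {yield_fraction(theta):.0%}")
```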

Keywords: ABAQUS software, finite element analysis, post-earthquake fire, seismic analysis, soil-structure interaction

Procedia PDF Downloads 121