Search results for: Energy Management System (EMS)
966 Low-Cost, Portable Optical Sensor with Regression Algorithm Models for Accurate Monitoring of Nitrites in Environments
Authors: David X. Dong, Qingming Zhang, Meng Lu
Abstract:
Nitrites enter waterways as runoff from croplands and are discharged from many industrial sites. Excessive nitrite inputs to water bodies lead to eutrophication. On-site rapid detection of nitrite is of increasing interest for managing fertilizer application and monitoring water source quality. Existing methods for detecting nitrites use spectrophotometry, ion chromatography, electrochemical sensors, ion-selective electrodes, chemiluminescence, and colorimetric methods. However, these methods either suffer from high cost or provide low measurement accuracy due to their poor selectivity to nitrites. An accurate and economical method for monitoring nitrites in the environment is therefore needed. We report a low-cost optical sensor, in conjunction with a machine learning (ML) approach, to enable high-accuracy detection of nitrites in water sources. The sensor works on the principle of measuring the molecular absorption of nitrites at three narrowband wavelengths (295 nm, 310 nm, and 357 nm) in the ultraviolet (UV) region. These wavelengths are chosen because they have relatively high sensitivity to nitrites, and low-cost light-emitting diodes (LEDs) and photodetectors are available at these wavelengths. A regression model is built, trained, and utilized to minimize the cross-sensitivities of these wavelengths, thus achieving precise and reliable measurements in the presence of various interference ions. The measured absorbance data are input to the trained model, which predicts the nitrite concentration of the sample. The sensor is built with i) a miniature quartz cuvette as the test cell that contains the liquid sample under test, ii) three low-cost UV LEDs placed on one side of the cell as light sources, each providing narrowband light, and iii) a photodetector with a built-in amplifier and an analog-to-digital converter placed on the other side of the test cell to measure the power of the transmitted light. 
This simple optical design allows measuring the absorbance of the sample at the three wavelengths. To train the regression model, absorbances of nitrite ions and their combinations with various interference ions are first obtained at the three UV wavelengths using a conventional spectrophotometer. The spectrophotometric data are then input to different regression algorithm models for training and evaluation of high-accuracy nitrite concentration prediction. Our experimental results show that the proposed approach enables instantaneous nitrite detection within several seconds. The sensor hardware costs about one hundred dollars, which is much cheaper than a commercial spectrophotometer. The ML algorithm helps to reduce the average relative error to below 3.5% over a concentration range from 0.1 ppm to 100 ppm of nitrites. The sensor has been validated by measuring nitrites at three sites in Ames, Iowa, USA. This work demonstrates an economical and effective approach to the rapid, reagent-free determination of nitrites with high accuracy. The integration of the low-cost optical sensor and ML data processing can find a wide range of applications in environmental monitoring and management.
Keywords: optical sensor, regression model, nitrites, water quality
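A minimal sketch of the calibration-and-prediction idea described above: a linear regression maps the three absorbance readings to a concentration. The absorbance/concentration pairs and the simple least-squares model below are illustrative assumptions, not the paper's actual training data or algorithm.

```python
import numpy as np

# Hypothetical calibration set: absorbance at [295, 310, 357] nm per sample,
# with the known nitrite concentration (ppm) of each standard. Illustrative only.
A_train = np.array([
    [0.05, 0.02, 0.01],
    [0.40, 0.15, 0.06],
    [0.78, 0.52, 0.18],
    [0.95, 0.80, 0.55],
])
c_train = np.array([1.0, 10.0, 25.0, 60.0])

# Fit c ~ w . A + b by ordinary least squares (intercept via a column of ones).
X = np.hstack([A_train, np.ones((len(A_train), 1))])
coef, *_ = np.linalg.lstsq(X, c_train, rcond=None)

def predict_ppm(absorbances):
    """Predict nitrite concentration (ppm) from the three absorbance readings."""
    return float(np.append(absorbances, 1.0) @ coef)
```

In practice the paper evaluates several regression algorithms against spectrophotometer data; the linear fit here only shows the shape of the pipeline (calibrate once, then predict from three cheap LED/photodetector readings).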
Procedia PDF Downloads 72
965 Generation of Roof Design Spectra Directly from Uniform Hazard Spectra
Authors: Amin Asgarian, Ghyslaine McClure
Abstract:
Proper seismic evaluation of Non-Structural Components (NSCs) mandates an accurate estimation of floor seismic demands (i.e., acceleration and displacement demands). Most current international codes incorporate empirical equations to calculate the equivalent static seismic force for which NSCs and their anchorage systems must be designed. These equations are, in general, functions of the component mass and the peak seismic acceleration to which NSCs are subjected during the earthquake. However, recent studies have shown that these recommendations suffer from several shortcomings, such as neglecting the higher-mode effect, the tuning effect, and the NSC damping effect, which cause underestimation of the component seismic acceleration demand. This work aims to circumvent these shortcomings of code provisions, and to improve on them, by proposing a simplified, practical, and yet accurate approach to generate acceleration Floor Design Spectra (FDS) directly from the corresponding Uniform Hazard Spectra (UHS) (i.e., design spectra for structural components). A database of 27 Reinforced Concrete (RC) buildings in which Ambient Vibration Measurements (AVM) had been conducted is used. The database comprises 12 low-rise, 10 medium-rise, and 5 high-rise buildings, all located in Montréal, Canada, and designated as post-disaster buildings or emergency shelters. The buildings are subjected to a set of 20 compatible seismic records, and Floor Response Spectra (FRS) in terms of pseudo-acceleration are derived using the proposed approach for every floor of each building in both horizontal directions, considering 4 different damping ratios of NSCs (i.e., 2, 5, 10, and 20% viscous damping). Several parameters affecting NSC response are evaluated statistically. These parameters comprise the NSC damping ratio, the tuning of the NSC natural period with one of the natural periods of the supporting structure, the higher modes of the supporting structure, and the location of the NSC. 
The entire spectral region is divided into three distinct segments, namely the short-period, fundamental-period, and long-period regions. The derived roof floor response spectra for NSCs with 5% damping are compared with the 5%-damped UHS, and a procedure is proposed to generate roof FDS for NSCs with 5% damping directly from the 5%-damped UHS in each spectral region. The generated FDS is a powerful, practical, and accurate tool for the seismic design and assessment of acceleration-sensitive NSCs, particularly in existing post-disaster buildings which have to remain functional even after the earthquake and cannot tolerate any damage to NSCs.
Keywords: earthquake engineering, operational and functional components (OFCs), operational modal analysis (OMA), seismic assessment and design
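A response spectrum of the kind discussed above is conventionally computed by integrating a family of damped single-degree-of-freedom (SDOF) oscillators over an acceleration history and recording each oscillator's peak response. The sketch below uses the textbook Newmark average-acceleration method on a unit-mass oscillator; it illustrates the generic procedure, not the paper's actual workflow or data.

```python
import math

def pseudo_acceleration_spectrum(acc, dt, periods, zeta=0.05):
    """Pseudo-acceleration response spectrum of an acceleration history `acc`
    (m/s^2, sampled every dt seconds) for unit-mass linear SDOF oscillators
    with damping ratio zeta, integrated step by step with the Newmark
    average-acceleration method (beta = 1/4, gamma = 1/2)."""
    beta, gamma = 0.25, 0.5
    spectrum = []
    for T in periods:
        wn = 2.0 * math.pi / T
        k, c = wn * wn, 2.0 * zeta * wn              # stiffness, damping (m = 1)
        keff = k + gamma * c / (beta * dt) + 1.0 / (beta * dt * dt)
        u, v, a = 0.0, 0.0, -acc[0]                  # initial equilibrium at rest
        umax = 0.0
        for ag in acc[1:]:
            p = -ag                                  # effective force per unit mass
            peff = (p
                    + u / (beta * dt * dt) + v / (beta * dt) + (0.5 / beta - 1.0) * a
                    + c * (gamma * u / (beta * dt) + (gamma / beta - 1.0) * v
                           + dt * (0.5 * gamma / beta - 1.0) * a))
            un = peff / keff
            vn = (gamma / (beta * dt)) * (un - u) + (1.0 - gamma / beta) * v \
                 + dt * (1.0 - 0.5 * gamma / beta) * a
            an = (un - u) / (beta * dt * dt) - v / (beta * dt) - (0.5 / beta - 1.0) * a
            u, v, a = un, vn, an
            umax = max(umax, abs(u))
        spectrum.append(wn * wn * umax)              # Sa = wn^2 * max|u|
    return spectrum
```

Repeating this for each floor's acceleration history and each NSC damping ratio (2, 5, 10, 20%) yields the floor response spectra that the proposed FDS procedure then envelopes. A physical sanity check: for a very stiff oscillator (T near zero), Sa converges to the peak acceleration of the input history.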
Procedia PDF Downloads 236
964 A World Map of Seabed Sediment Based on 50 Years of Knowledge
Authors: T. Garlan, I. Gabelotaud, S. Lucas, E. Marchès
Abstract:
Production of a global sedimentological seabed map was initiated in 1995 to provide the necessary tool for searches of aircraft and boats lost at sea, to give sedimentary information for nautical charts, and to provide input data for acoustic propagation modelling. This approach had already been initiated a century ago, when the French hydrographic service and the University of Nancy produced maps of the distribution of marine sediments along the French coasts and then sediment maps of the continental shelves of Europe and North America. The current map of ocean sediments was initiated from UNESCO's general map of the deep ocean floor. This map was adapted using a unique sediment classification to present all types of sediments: from beaches to the deep seabed and from glacial deposits to tropical sediments. In order to allow good visualization and to suit the different applications, only the granularity of sediments is represented. Published seabed maps are studied; if they are of interest, the nature of the seabed is extracted from them, the sediment classification is transcribed, and the resulting map is integrated into the world map. Data also come from interpretations of Multibeam Echo Sounder (MES) imagery from large hydrographic surveys of the deep ocean. These allow very high-quality mapping of areas that until then were represented as homogeneous. The third and principal source of data comes from the integration of regional maps produced specifically for this project. These regional maps are produced using all the bathymetric and sedimentary data of a region. This step makes it possible to produce a regional synthesis map, with generalizations applied in the case of over-precise data. 86 regional maps of the Atlantic Ocean, the Mediterranean Sea, and the Indian Ocean have been produced and integrated into the world sedimentary map. 
This work is ongoing and yields a new digital version every two years, with the integration of new maps. This article describes the choices made in terms of sediment classification, the scale of the source data, and the zonation of data quality. This map is the final step in a system comprising the Shom Sedimentary Database, enriched by more than one million point and areal data items, and four series of coastal seabed maps at 1:10,000, 1:50,000, 1:200,000 and 1:1,000,000. This step-by-step approach makes it possible to take into account the progress in knowledge of seabed characterization made during the last decades. Thus, the arrival of new seafloor classification systems has improved the recent seabed maps, and the compilation of these new maps with those previously published allows a gradual enrichment of the world sedimentary map. But there is still a lot of work to do to enhance some regions, which are still based on data acquired more than half a century ago.
Keywords: marine sedimentology, seabed map, sediment classification, world ocean
Procedia PDF Downloads 232
963 Influence of the Use of Fruits Byproducts on the Lipid Profile of Hermetia illucens, Tenebrio molitor and Zophoba morio Larvae
Authors: Rebeca P Ramos-Bueno, Maria Jose Gonzalez-Fernandez, Rosa M. Moreno-Zamora, Antonia Barros Heras, Yolanda Serrano Alonso, Carolina Sanchez Barranco
Abstract:
Insects are a new source of fatty acids (FA), so they are considered a sustainable and environmentally friendly alternative for both animal feed and the human diet; furthermore, their harvesting/rearing requires low-tech facilities and low capital investment. For that reason, lipids obtained by insect breeding open interesting possibilities for alimentary and industrial purposes, e.g., the production of biodiesel. In particular, certain insect species, especially during the larval stage, contain high proportions of fat, which is highly dependent on their feed and stage of development. Among them, Hermetia illucens larvae can be bred on food wastes to produce fat- and protein-rich raw materials for food by-product management. So, insects can act as excellent bioconverters of organic waste into nutrient-rich materials. In this regard, the aim of the study was to evaluate the effects of fruit byproducts on the FA compositions of Tenebrio molitor, Zophoba morio, and H. illucens larvae. Firstly, oil was extracted with the green solvent ethyl acetate, and FA methyl esters were obtained and analyzed by GC to establish the FA profile. In addition, the triacylglycerol (TAG) profile was obtained by HPLC. Dehydrated watermelon, tomato, and papaya by-products, as well as wheat-based control feed, were assayed. High FA content was reached by Z. morio larvae fed with all fruits; however, the lipid profile showed no differences among diets. It is worth highlighting that both Z. morio and H. illucens could be selected as the best candidates for biodiesel production due to their high content of saturated FA. On the other hand, T. molitor larvae showed a higher content of monounsaturated FA than control larvae, whereas the n-6 polyunsaturated FA content decreased in larvae fed with fruits. This result indicates that the improvement of the FA profile of Tenebrio can depend on both the type of feeding and the intended use. The lipid profile of H. 
illucens larvae fed with papaya and tomato showed a slight increase in the content of α-linolenic acid (ALA, 18:3n-3). This FA is the precursor of docosahexaenoic acid (DHA, 22:6n-3), which plays an important role as a component of structural lipids in cell membranes as well as in the synthesis of eicosanoids, protectins, and resolvins. The TAG profile of Z. morio larvae was also evaluated, given their highest oil content. The results showed a high oleic acid (OA, 18:1n-9) content; OA displays modulatory effects in a wide range of physiological functions and has anti-inflammatory and anti-atherogenic properties. In conclusion, this study clearly shows that Z. morio and H. illucens larvae constitute an alternative source of OA- and ALA-rich oils, respectively, which can be devoted to food use as well as to applications in the food and pharmaceutical industries, with agronomic implications. Finally, although the FA profile of Z. morio was not improved by fruit feeding, this kind of feeding could still be used due to its low environmental impact.
Keywords: fatty acids, fruit byproducts, Hermetia illucens, Zophoba morio, Tenebrio molitor, insect rearing
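Reducing a GC chromatogram to a fatty acid profile like the one discussed above is, at its simplest, an area-normalization step: each identified FAME peak area is expressed as a percentage of the total. The peak areas below are illustrative placeholders, not measured values from the study.

```python
# Hypothetical GC-FAME peak areas (arbitrary detector units) for one larval
# oil sample; the names follow the shorthand used in the abstract.
peak_areas = {
    "16:0 (palmitic)":     152.0,
    "18:1n-9 (oleic, OA)": 431.0,
    "18:2n-6 (linoleic)":  118.0,
    "18:3n-3 (ALA)":        22.0,
}

total = sum(peak_areas.values())
# Normalize to percent of total identified fatty acids.
profile = {fa: 100.0 * area / total for fa, area in peak_areas.items()}
```

A profile computed this way is what lets the authors compare, e.g., the OA share in Z. morio oil across the watermelon, tomato, papaya, and control diets.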
Procedia PDF Downloads 147
962 Artificial Intelligence and Robotics in the Eye of Private Law with Special Regards to Intellectual Property and Liability Issues
Authors: Barna Arnold Keserű
Abstract:
In the last few years (what many scholars call the big data era), artificial intelligence (hereinafter AI) has received more and more attention from the public and from the different branches of science as well. What was previously mere science fiction is now starting to become reality. AI and robotics often walk hand in hand, which changes not only business and industrial life but also has a serious impact on the legal system. The main research of the author focuses on these impacts in the field of private law, with special regard to liability and intellectual property issues. Many questions arise in these areas in connection with AI and robotics, where the boundaries are not sufficiently clear and different needs are articulated by the different stakeholders. Recognizing the urgent need for reflection, the Committee on Legal Affairs of the European Parliament adopted a Motion for a European Parliament Resolution, A8-0005/2017 (of January 27th, 2017), in order to make recommendations to the Commission on civil law rules on robotics and AI. This document identifies some crucial uses of AI and/or robotics, e.g., the field of autonomous vehicles, the replacement of human jobs in industry, and smart applications and machines. It aims to give recommendations for the safe and beneficial use of AI and robotics. However, as the document says, there are no legal provisions that specifically apply to robotics or AI in IP law; existing legal regimes and doctrines can be readily applied to robotics, although some aspects appear to call for specific consideration, and it calls on the Commission to support a horizontal and technologically neutral approach to intellectual property applicable to the various sectors in which robotics could be employed. AI can generate content that may be worth copyright protection, but the question arises: who is the author, and who owns the copyright? 
The AI itself cannot be deemed the author, because that would mean it is legally equal to human persons. But there is the programmer who created the basic code of the AI, or the undertaking that sells the AI as a product, or the user who gives the inputs to the AI in order to create something new. Or are AI-generated contents so far removed from humans that there is no human author, so that these contents belong to the public domain? The same questions can be asked regarding patents. The research aims to answer these questions within the current legal framework and tries to illuminate future possibilities for adapting these frameworks to socio-economic needs. In this respect, proper license agreements in the multilevel chain from the programmer to the end-user become very important, because AI is intellectual property in itself that creates further intellectual property. This could collide with data-protection and property rules as well. The problems are similar in the field of liability. We can use different existing forms of liability in cases where AI or AI-led robotics cause damage, but it is unclear whether the result complies with economic and developmental interests.
Keywords: artificial intelligence, intellectual property, liability, robotics
Procedia PDF Downloads 203
961 Bioinformatic Strategies for the Production of Glycoproteins in Algae
Authors: Fadi Saleh, Çığdem Sezer Zhmurov
Abstract:
Biopharmaceuticals represent one of the fastest-developing fields within biotechnology, and the biological macromolecules produced inside cells have a variety of therapeutic applications. In the past, mammalian cells, especially CHO cells, have been employed in the production of biopharmaceuticals, because these cells can achieve human-like post-translational modifications (PTMs). These systems, however, carry apparent disadvantages, such as high production costs, vulnerability to contamination, and limitations in scalability. This research is focused on the utilization of microalgae as a bioreactor system for the synthesis of biopharmaceutical glycoproteins in relation to PTMs, particularly N-glycosylation. The research points to a growing interest in microalgae as a potential substitute for more conventional expression systems. A number of advantages exist in the use of microalgae, including rapid growth rates, the lack of common human pathogens, controlled scalability in bioreactors, and the capacity to perform some PTMs. Thus, the potential of microalgae to produce recombinant proteins with favorable characteristics makes this a promising platform for producing biopharmaceuticals. The study focuses on the examination of the N-glycosylation pathways across different species of microalgae. This investigation is important because N-glycosylation, the process by which carbohydrate groups are linked to proteins, profoundly influences the stability, activity, and general performance of glycoproteins. Additionally, bioinformatics methodologies are employed to elucidate the genetic pathways implicated in N-glycosylation within microalgae, with the intention of modifying these organisms to produce glycoproteins suitable for therapeutic use in humans. 
In this way, the present comparative analysis of the N-glycosylation pathway in humans and microalgae can be used to bridge both systems in order to produce biopharmaceuticals with humanized glycosylation profiles within the microalgal organisms. The results of the research underline microalgae's potential to overcome some of the limitations associated with traditional biopharmaceutical production systems. The study may help in the creation of a cost-effective and scalable means of producing quality biopharmaceuticals by genetically modifying microalgae to produce glycoproteins with N-glycosylation that is compatible with humans. Such improvements will benefit biopharmaceutical production and provide the biopharmaceutical sector with a novel, green, and efficient expression platform. This work is therefore a thorough investigation of the viability of microalgae as an efficient platform for producing biopharmaceutical glycoproteins. Based on an in-depth bioinformatic analysis of microalgal N-glycosylation pathways, a platform for their engineering to produce human-compatible glycoproteins is set out in this work. The findings obtained in this research will have significant implications for the biopharmaceutical industry by opening up a new way of developing safer, more efficient, and economically more feasible biopharmaceutical manufacturing platforms.
Keywords: microalgae, glycoproteins, post-translational modification, genome
Procedia PDF Downloads 24
960 Autophagy in the Midgut Epithelium of Spodoptera exigua Hübner (Lepidoptera: Noctuidae) Larvae Exposed to Various Cadmium Concentration - 6-Generational Exposure
Authors: Magdalena Maria Rost-Roszkowska, Alina Chachulska-Żymełka, Monika Tarnawska, Maria Augustyniak, Alina Kafel, Agnieszka Babczyńska
Abstract:
Autophagy is a form of cell remodeling in which the internalization of organelles into vacuoles called autophagosomes occurs. Autophagosomes are the targets of lysosomes, thus causing digestion of cytoplasmic components. Eventually, it can lead to the death of the entire cell. However, in response to several stress factors, e.g., starvation or heavy metals (e.g., cadmium), autophagy can also act as a pro-survival factor, protecting the cell against its death. The main aim of our studies was to check whether the process of autophagy, which could appear in the midgut epithelium after Cd treatment, becomes fixed during the following generations of insects. As a model animal, we chose the beet armyworm Spodoptera exigua Hübner (Lepidoptera: Noctuidae), a well-known polyphagous pest of many vegetable crops. We analyzed specimens at the final (5th) larval stage, due to its hyperphagia, which results in a great amount of assimilated cadmium. The culture consisted of two strains: a control strain (K) fed a standard diet, and a cadmium strain (Cd) fed a standard diet supplemented with cadmium (44 mg Cd per kg of dry weight of food) for 146 generations. In addition, control insects were transferred to Cd-supplemented diets (5, 10, 20, and 44 mg Cd per kg of dry weight of food). Therefore, we obtained the Cd1, Cd2, Cd3 and KCd experimental groups. Autophagy was examined using transmission electron microscopy. During this process, degenerated organelles are surrounded by a membranous phagophore and enclosed in an autophagosome. Eventually, after the autophagosome fuses with a lysosome, an autolysosome is formed and the digestion of organelles begins. During the 1st year of the experiment, we analyzed specimens of 6 generations in all the lines. 
The intensity of autophagy depends significantly on the generation, the tissue, and the cadmium concentration in the insect rearing medium. In the 1st through 6th generations, the intensity of autophagy in the midguts from the cadmium-exposed strains decreased gradually in the following order of strains: Cd1, Cd2, Cd3 and KCd. The highest proportion of cells with autophagy was observed in Cd1 and Cd2. However, it was still higher than the percentage of cells with autophagy in the same tissues of insects from the control and multigenerational cadmium strains. This may indicate that during the 6-generational exposure to various Cd concentrations, a stable tolerance to cadmium was not established. The study has been financed by the National Science Centre Poland, grant no 2016/21/B/NZ8/00831.
Keywords: autophagy, cell death, digestive system, ultrastructure
Procedia PDF Downloads 233
959 Groundwater Contamination and Fluorosis: A Comprehensive Analysis
Authors: Rajkumar Ghosh, Bhabani Prasad Mukhopadhay
Abstract:
Groundwater contamination with fluoride has emerged as a global concern affecting millions of people, leading to the widespread occurrence of fluorosis. It affects bones and teeth, leading to dental and skeletal fluorosis. This study presents a comprehensive analysis of the relationship between groundwater contamination and fluorosis. It delves into the causes of fluoride contamination in groundwater, its spatial distribution, and adverse health impacts of fluorosis on affected communities. Fluoride contamination in groundwater can be attributed to both natural and anthropogenic sources. Geogenic sources involve the dissolution of fluoride-rich minerals present in the aquifer materials. On the other hand, anthropogenic activities such as industrial discharges, agricultural practices, and improper disposal of fluoride-containing waste contribute to the contamination of groundwater. The spatial distribution of fluoride contamination varies widely across different regions and geological formations. High fluoride levels are commonly observed in areas with fluorine-rich geological deposits. Additionally, agricultural and industrial centres often exhibit elevated fluoride concentrations due to anthropogenic contributions. Excessive fluoride ingestion during tooth development leads to dental fluorosis, characterized by enamel defects, discoloration, and dental caries. The severity of dental fluorosis varies based on fluoride exposure levels during tooth development. Long-term consumption of fluoride-contaminated water causes skeletal fluorosis, resulting in bone and joint pain, decreased joint mobility, and skeletal deformities. In severe cases, skeletal fluorosis can lead to disability and reduced quality of life. Various defluoridation techniques such as activated alumina, bone char, and reverse osmosis have been employed to reduce fluoride concentrations in drinking water. 
These methods effectively remove fluoride, but their implementation requires careful consideration of cost, maintenance, and sustainability. Diversifying water sources, such as rainwater harvesting and surface water supply, can reduce the reliance on fluoride-contaminated groundwater, especially in regions with high fluoride concentrations. Groundwater contamination with fluoride remains a significant public health challenge, leading to the widespread occurrence of fluorosis globally. This scientific report emphasizes the importance of understanding the relationship between groundwater contamination and fluorosis. Implementing effective mitigation strategies and preventive measures is crucial to combat fluorosis and ensure sustainable access to safe drinking water for communities worldwide. Collaborative efforts between government agencies, local communities, and scientific researchers are essential to address this issue and safeguard the health of vulnerable populations. Additionally, the report explores various mitigation strategies and preventive measures to address the issue and offers recommendations for sustainable management of groundwater resources to combat fluorosis effectively.
Keywords: fluorosis, fluoride contamination, groundwater contamination, groundwater resources
Procedia PDF Downloads 96
958 Architectural Design as Knowledge Production: A Comparative Science and Technology Study of Design Teaching and Research at Different Architecture Schools
Authors: Kim Norgaard Helmersen, Jan Silberberger
Abstract:
Questions of style and reproducibility in relation to architectural design are not only continuously debated; the very concepts can seem quite provocative to architects, who like to think of architectural design as depending on intuition, ideas, and individual personalities. This standpoint, dominant in architectural discourse, is challenged in the present paper, which presents early findings from a comparative STS-inspired research study of architectural design teaching and research at different architecture schools in varying national contexts. Within a philosophy-of-science framework, the paper reflects on empirical observations of design teaching at the Royal Academy of Fine Arts in Copenhagen and presents a tentative theoretical framework for the on-going research project. The framework suggests that architecture, as a field of knowledge production, is mainly dominated by three epistemological positions, which will be presented and discussed. Besides serving as a loosely structured framework for future data analysis, the proposed framework brings forth the argument that architecture can be roughly divided into different schools of thought, like the traditional science disciplines. Without reducing the complexity of the discipline, describing its main intellectual positions should prove fruitful for the future development of architecture as a theoretical discipline, moving architectural critique beyond discussions of taste preferences. Unlike in traditional science disciplines, there is no community-wide, shared pool of codified references in architecture; architects instead reference art projects, buildings, and famous architects when positioning their standpoints. While these inscriptions work as an architectural reference system, comparable to the codified theories referenced in traditional academic writing, they are not used systematically in the same way. 
As a result, architectural critique is often reduced to discussions of taste and subjectivity rather than epistemological positioning. Architects are often criticized as judges of taste and accused of a rationality rooted in culturally relative aesthetic concepts of taste closely linked to questions of style, but arguably their supposedly subjective reasoning in fact forms part of larger systems of thought. Putting architectural ‘styles’ under a loupe, and tracing their philosophical roots, can potentially open up a black box in architectural theory. Besides ascertaining and recognizing the existence of specific ‘styles’, and thereby schools of thought, in current architectural discourse, the study could potentially also point at some mutations of the conventional – something actually ‘new’ – of potentially high value for architectural design education.
Keywords: architectural theory, design research, science and technology studies (STS), sociology of architecture
Procedia PDF Downloads 130
957 Pulsed-Wave Doppler Ultrasonographic Assessment of the Maximum Blood Velocity in Common Carotid Artery in Horses after Administration of Ketamine and Acepromazine
Authors: Saman Ahani, Aboozar Dehghan, Roham Vali, Hamid Salehian, Amin Ebrahimi
Abstract:
Pulsed-wave (PW) Doppler ultrasonography is a non-invasive, relatively accurate imaging technique that can measure blood flow velocity. Images can be obtained from the common carotid artery, one of the main vessels supplying blood to vital organs. In horses, factors such as susceptibility to depression of the cardiovascular system and their large muscular mass render them vulnerable to changes in blood velocity. One of the most important factors causing blood velocity changes is the administration of anesthetic drugs, including ketamine and acepromazine. Thus, in this study, the pulsed-wave Doppler technique was used to assess the maximum blood velocity in the common carotid artery following administration of ketamine and acepromazine. Six male and six female healthy Kurdish horses weighing 351 ± 46 kg (mean ± SD) and aged 9.2 ± 1.7 years (mean ± SD) were housed under animal welfare guidelines. After fasting for six hours, the normal blood flow velocity in the common carotid artery was measured using a pulsed-wave Doppler ultrasonography machine (BK Medical, Denmark) and a high-frequency linear transducer (12 MHz), without applying any sedative drugs, as a control. The same procedure was repeated after each individual received the following medications: 1.1 or 2.2 mg/kg ketamine (Pfizer, USA), and 0.5 or 1 mg/kg acepromazine (RACEHORSE MEDS, Ukraine), with an interval of 21 days between the administration of each dose and/or drug. The ultrasonographic study was done five (T5) and fifteen (T15) minutes after injecting each dose intravenously. Lastly, the statistical analysis was performed using SPSS software version 22 for Windows, and a P value less than 0.05 was considered statistically significant. 
Five minutes after administration of ketamine (1.1, 2.2 mg/kg), blood velocity decreased to 38.44 and 34.53 cm/s in males and 39.06 and 34.10 cm/s in females, in comparison to the control group (39.59 and 40.39 cm/s in males and females, respectively), while administration of 0.5 mg/kg acepromazine led to a significant rise (73.15 and 55.80 cm/s in males and females, respectively) (p<0.05). This means that the most drastic change in blood velocity, regardless of sex, was produced by the latter dose/drug. For both medications and both sexes, the higher dose led to a lower blood velocity compared to the lower dose of the same drug. In all experiments in this study, the blood velocity approached its normal value by T15. In another study comparing the blood velocity changes caused by ketamine and acepromazine in the femoral arteries, the most drastic changes were attributed to ketamine; in this experiment, however, the maximum blood velocity was observed following administration of acepromazine, measured via the common carotid artery. Therefore, further experiments with the same medications are suggested, using pulsed-wave Doppler to measure the blood velocity changes in both the femoral and common carotid arteries simultaneously.
Keywords: Acepromazine, common carotid artery, horse, ketamine, pulsed-wave doppler ultrasonography
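The significance test behind a comparison like the one above (same horses measured at baseline and after the drug, p < 0.05 threshold) is a paired t-test. The sketch below computes the paired t statistic by hand from illustrative placeholder velocities; the study itself used SPSS, and these are not its raw data.

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical per-horse carotid blood velocities (cm/s): baseline vs. five
# minutes after 0.5 mg/kg acepromazine. Illustrative values only.
baseline = [39.2, 40.1, 39.8, 40.5, 39.5, 40.3]
after    = [72.8, 73.5, 71.9, 74.2, 73.0, 72.4]

diffs = [a - b for a, b in zip(after, baseline)]
n = len(diffs)
# Paired t statistic: mean difference over its standard error, df = n - 1.
t_stat = mean(diffs) / (stdev(diffs) / sqrt(n))

# Two-tailed critical value for df = 5 at alpha = 0.05 is 2.571, so the change
# is significant at the study's threshold when |t| exceeds it.
significant = abs(t_stat) > 2.571
```

With a dedicated statistics package one would instead read off the exact p-value, but the decision rule (compare |t| to the critical value for the given degrees of freedom) is the same one SPSS applies.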
Procedia PDF Downloads 128
956 Nonlinear Response of Tall Reinforced Concrete Shear Wall Buildings under Wind Loads
Authors: Mahtab Abdollahi Sarvi, Siamak Epackachi, Ali Imanpour
Abstract:
Reinforced concrete shear walls are commonly used as the lateral load-resisting system of mid- to high-rise office or residential buildings around the world. The design of such systems is often governed by wind rather than seismic effects, in particular in low-to-moderate seismic regions. The current design philosophy in the majority of building codes requires elastic response of lateral load-resisting systems, including reinforced concrete shear walls, when subjected to the rare design wind load, resulting in significantly large wall sections needed to meet strength requirements and drift limits. The latter can strongly influence the design of upper stories due to the stringent drift limits specified by building codes, adding substantial cost to the construction of the wall. However, such walls may offer limited to moderate over-strength and ductility owing to their large reserve capacity, provided that they are designed and detailed to appropriately develop that over-strength and ductility under extreme wind loads. Exploiting this reserve would significantly reduce construction time and costs while maintaining structural integrity under gravity loads and both frequently-occurring and less frequent wind events. This paper aims to investigate the over-strength and ductility capacity of several hypothetical office buildings located in Edmonton, Canada, drawing on earthquake design philosophy. The selected models are 10- to 25-story buildings with three types of reinforced concrete shear wall configurations: rectangular, barbell, and flanged. The buildings are designed according to the National Building Code of Canada. Fiber-based numerical models of the walls are then developed in Perform 3D, and the lateral nonlinear behavior of the walls is evaluated through nonlinear static (pushover) analysis. Ductility and over-strength of the structures are obtained from the results of the pushover analyses.
The results confirmed moderate nonlinear capacity of reinforced concrete shear walls under extreme wind loads, even as lateral displacements of the walls exceed the serviceability limit states defined in the ASCE Prestandard for Performance-Based Wind Design. The results indicate that the limited nonlinear response observed in reinforced concrete shear walls can be exploited to economize the design of such systems under wind loads.
Keywords: concrete shear wall, high-rise buildings, nonlinear static analysis, response modification factor, wind load
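Over-strength and ductility are typically extracted from a pushover curve by idealizing it bilinearly: over-strength as the ratio of idealized yield strength to design base shear, and displacement ductility as the ratio of ultimate to yield displacement. A hedged sketch of these standard definitions, with illustrative numbers rather than the paper's Perform 3D results:

```python
def overstrength_and_ductility(v_design, v_yield, delta_yield, delta_ultimate):
    """Over-strength = idealized yield base shear / design base shear;
    displacement ductility = ultimate displacement / yield displacement."""
    return v_yield / v_design, delta_ultimate / delta_yield

# Illustrative bilinear-idealization values for one wall (kN, mm); not from the study
omega, mu = overstrength_and_ductility(v_design=2000, v_yield=3200,
                                       delta_yield=45, delta_ultimate=110)
print(f"over-strength = {omega:.2f}, ductility = {mu:.2f}")
```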
Procedia PDF Downloads 107
955 Sentiment Analysis of Tourist Online Reviews Concerning Lisbon Cultural Patrimony, as a Contribution to the City Attractiveness Evaluation
Authors: Joao Ferreira Do Rosario, Maria De Lurdes Calisto, Ana Teresa Machado, Nuno Gustavo, Rui Gonçalves
Abstract:
The tourism sector is increasingly important to the economic performance of countries and a relevant theme for academic research, increasing the importance of understanding how and why tourists evaluate tourism destinations. The city of Lisbon is currently a tourist destination of excellence on the European and worldwide stage, registering significant growth in the economic weight of its tourist activities within the Gross Value Added of the region. Although there is research on the feedback of those who visit tourist sites, and different methodologies for studying tourist sites have been applied, this research seeks to be innovative in its objective of obtaining insights on the competitiveness, in terms of attractiveness, of the city of Lisbon as a tourist destination, based on the feedback of tourists on the Facebook pages of the most visited museums and monuments of Lisbon, an interpretation that is relevant to the development of tourist attraction strategies. The intangible dimension of the tourism offer, due to its unique condition of simultaneous production and consumption, makes eWOM particularly relevant. The testimony of consumers is thus a decisive factor in the decision-making and buying process in tourism. Online social networks are among the platforms most used by tourists to evaluate the attractions of a tourism destination (e.g., cultural and historical heritage), with this user-generated feedback providing relevant information about customer-tourists. This information is related to the tourist experience, representing the true voice of the customer. Furthermore, this voice, perceived by others as genuine in contrast to marketing messages, may have a powerful word-of-mouth influence on other potential tourists.
The relevance of online review sharing, however, becomes particularly complex considering social media users' different profiles, the different sources of information available, and the reputation associated with each source. In the light of these trends, our research focuses on tourists' feedback on the Facebook pages of the most visited museums and monuments of Lisbon that contribute to its attractiveness as a tourism destination. Sentiment analysis is the methodology selected for this research, using publicly available information in the online context, which was deemed an appropriate non-participatory observation method. Data will be collected from the Facebook pages of two museums (Museu dos Coches and Museu de Arte Antiga) and three monuments (Mosteiro dos Jerónimos, Torre de Belém, and Panteão Nacional) during a period of one year. The research results will help in the evaluation of the considered places by tourists and their contribution to the city's attractiveness, and will present insights helpful for management decisions regarding these museums and monuments. The results of this study will also contribute to a better knowledge of the tourism sector, namely the identification of attributes in the evaluation and choice of the city of Lisbon as a tourist destination. Further research will evaluate Lisbon's attraction points for tourists in categories beyond museums and monuments, evaluate tourist feedback from other sources such as TripAdvisor, and apply the same methodology to other cities and regions.
Keywords: Lisbon tourism, opinion mining, sentiment analysis, tourism location attractiveness evaluation
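The abstract does not specify the sentiment classifier to be used; as one hedged illustration, here is a minimal lexicon-based scorer of the kind often used as a baseline in review mining (both the lexicon and the reviews below are invented examples, not the study's data):

```python
# Minimal lexicon-based sentiment baseline; a real study would use a full
# lexicon or a trained model. All words and reviews here are invented examples.
POSITIVE = {"beautiful", "stunning", "friendly", "amazing", "recommend"}
NEGATIVE = {"crowded", "expensive", "rude", "disappointing", "dirty"}

def sentiment_score(review: str) -> int:
    """Positive-word count minus negative-word count; >0 positive, <0 negative."""
    words = review.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

reviews = [
    "Stunning monastery, friendly staff, highly recommend",
    "Beautiful but crowded and expensive",
]
for r in reviews:
    print(r, "->", sentiment_score(r))
```

Aggregating such scores per museum or monument page over the collection year would yield the comparative attractiveness signal the study is after.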
Procedia PDF Downloads 238
954 Material Use and Life Cycle GHG Emissions of Different Electrification Options for Long-Haul Trucks
Authors: Nafisa Mahbub, Hajo Ribberink
Abstract:
Electrification of long-haul trucks has been discussed as a potential decarbonization strategy. These trucks will require large batteries because of their weight and long daily driving distances. Around 245 million battery electric vehicles are predicted to be on the road by the year 2035. This huge increase in the number of electric vehicles (EVs) will require intensive mining operations for metals and other materials to manufacture millions of batteries. These operations will add significant environmental burdens, and there is a significant risk that the mining sector will not be able to meet the demand for battery materials, leading to higher prices. Since the battery is the most expensive component of an EV, technologies that enable electrification with smaller battery sizes have substantial potential to reduce material use and the associated environmental and cost burdens. One such technology is the 'electrified road' (eroad), where vehicles receive power while they are driving, for instance through an overhead catenary (OC) wire (like trolleybuses and electric trains), through wireless (inductive) chargers embedded in the road, or by connecting to an electrified rail in or on the road surface. This study assessed the total material use and associated life cycle GHG emissions of two types of eroads (overhead catenary and in-road wireless charging) for long-haul trucks in Canada and compared them to electrification using stationary plug-in fast charging. As different electrification technologies require different amounts of material for charging infrastructure and for the truck batteries, the study included the contributions of both in the total material use. The study developed a bottom-up model comparing the three charging scenarios: plug-in fast chargers, overhead catenary, and in-road wireless charging.
The investigated materials for charging technology and batteries were copper (Cu), steel (Fe), aluminium (Al), and lithium (Li). For the plug-in fast charging technology, different charging scenarios ranging from overnight charging (350 kW) to megawatt (MW) charging (2 MW) were investigated. A 500 km stretch of highway (one lane of in-road charging per direction) was considered to estimate the material use for the overhead catenary and inductive charging technologies. The study assumed trucks needing an 800 kWh battery under the plug-in charger scenario but only a 200 kWh battery under the OC and inductive charging scenarios. Results showed that, overall, the inductive charging scenario has the lowest material use, followed by the OC and plug-in charger scenarios, respectively. Material use for the OC and plug-in charger scenarios was 50-70% higher than for the inductive charging scenario for the overall system, including the charging infrastructure and battery. The life cycle GHG emissions from the construction and installation of the charging technology materials were also investigated.
Keywords: charging technology, eroad, GHG emissions, material use, overhead catenary, plug in charger
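A bottom-up comparison of the kind described can be sketched as fleet battery material plus charging-infrastructure material per scenario. The intensities and fleet size below are illustrative placeholders, not the study's figures; only the battery sizes (800 kWh vs. 200 kWh) come from the abstract:

```python
def total_material(n_trucks, battery_kwh, infra_kg):
    """Total kg of one material: fleet batteries plus charging infrastructure.
    The per-kWh intensity is an illustrative assumption, not the study's data."""
    BATTERY_KG_PER_KWH = 1.2  # assumed copper intensity of a battery pack
    return n_trucks * battery_kwh * BATTERY_KG_PER_KWH + infra_kg

# Illustrative copper comparison for a 500 km corridor and an assumed 1000 trucks;
# infrastructure masses are placeholders chosen only to show the trade-off.
scenarios = {
    "plug-in fast charging": total_material(1000, 800, infra_kg=50_000),
    "overhead catenary":     total_material(1000, 200, infra_kg=400_000),
    "in-road wireless":      total_material(1000, 200, infra_kg=250_000),
}
for name, kg in sorted(scenarios.items(), key=lambda kv: kv[1]):
    print(f"{name}: {kg / 1000:.0f} t")
```

The structure makes the study's qualitative finding easy to see: eroad scenarios trade heavier per-kilometre infrastructure for a fourfold reduction in per-truck battery material, which dominates once the fleet is large.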
Procedia PDF Downloads 51
953 Collateral Impact of Water Resources Development in an Arsenic Affected Village of Patna District
Authors: Asrarul H. Jeelani
Abstract:
Arsenic contamination of groundwater and its health implications in the lower Gangetic plain of the Indian states started being reported in the 1980s. The same period was declared the first water decade (1981-1990), aiming to achieve 'water for all.' To fulfill this aim, the Indian government, with the support of international agencies, installed millions of hand-pumps through water resources development programs. The hand-pumps improve the accessibility of groundwater, but over-extraction increases the chances of drawing in trivalent arsenic, which is more toxic than the pentavalent arsenic of dug well water in the Gangetic plain and has different physical manifestations. Now, after three decades, Bihar (middle Gangetic plain) is also facing arsenic contamination of groundwater and its health implications. Objective: This interdisciplinary research attempts to understand the health and social implications of arsenicosis among different castes in Haldi Chhapra village and to find the association of these ramifications with water resources development. Methodology: The study used a concurrent quantitative-dominant mixed method (QUAN+qual). The researcher employed a household survey, social mapping, interviews, and participatory interactions, and used secondary data for retrospective analysis of hand-pumps and the implications of arsenicosis. Findings: The study found that 88.5% (115) of households have hand-pumps as their source of water, while 13.8% use purified bottled water and 3.6% use combinations of hand-pump, bottled water, and dug well water for drinking purposes. Among the population, 3.65% of individuals have arsenicosis, and 2.72% of children between the ages of 5 and 15 years are affected. The caste variable also emerged through quantitative as well as geophysical location analysis: 5.44% of individuals with manifested arsenicosis belong to a scheduled caste (SC), 3.89% to an extremely backward caste (EBC), 2.57% to a backward caste (BC), and 3% to others.
Of the three clusters of arsenic-poisoned locations, two belong to SC and EBC communities. The village, as an arsenic-affected one, faces discrimination, and affected individuals also face discrimination, isolation, stigma, and difficulty in getting married. The forceful intervention to install hand-pumps in the first water decade, and the later restructuring of the dug wells, destroyed a conventional method of dug well cleaning. Conclusion: The common manifestation of arsenicosis has increased by 1.3% within a six-year span in the village. This raises the need for a proper surveillance system in the village. It is imperative to consider the social structure in any arsenic mitigation program, as this research reveals caste to be a significant factor. The health and social implications found in the study are retrospectively analyzed as the collateral impact of water resource development programs in the village.
Keywords: arsenicosis, caste, collateral impact, water resources
Procedia PDF Downloads 108
952 Convention Refugees in New Zealand: Being Trapped in Immigration Limbo without the Right to Obtain a Visa
Authors: Saska Alexandria Hayes
Abstract:
Multiple Convention Refugees in New Zealand are stuck in a state of immigration limbo due to a lack of defined immigration policies. The Refugee Convention of 1951 does not confer a right to be issued a permanent right to live and work in the country of asylum. A gap in New Zealand's immigration law and policy has left Convention Refugees without the right to obtain a resident or temporary entry visa. The significant lack of literature on this topic suggests that the lack of visa options for Convention Refugees in New Zealand is a widely unknown or unacknowledged issue. Refugees in New Zealand enjoy the right of non-refoulement contained in Article 33 of the Refugee Convention 1951, whether lawful or unlawful. However, a number of rights contained in the Refugee Convention 1951, such as the right to gainful employment and social security, are limited to refugees who maintain lawful immigration status. If a Convention Refugee is denied a resident visa, the only temporary entry visa a Convention Refugee can apply for in New Zealand is discretionary. The appeal cases heard at the Immigration and Protection Tribunal establish that Immigration New Zealand has declined resident and discretionary temporary entry visa applications by Convention Refugees for failing to meet the health or character immigration instructions. The inability of a Convention Refugee to gain residency in New Zealand creates a dependence on the issue of discretionary temporary entry visas to maintain lawful status. The appeal cases record that this reliance has led to Convention Refugees' lawful immigration status being called into question, temporarily depriving them of the rights the Refugee Convention 1951 grants to lawful refugees. In one case, the process of applying for a discretionary temporary entry visa led to a lawful Convention Refugee being temporarily deprived of the right to social security, breaching Article 24 of the Refugee Convention 1951.
The judiciary has stated that constant reliance on the issue of discretionary temporary entry visas for Convention Refugees can lead to a breach of New Zealand's international obligations under Article 7 of the International Covenant on Civil and Political Rights. The appeal cases suggest that, despite successful judicial proceedings, at least three persons have been made to rely on the issue of discretionary temporary entry visas, potentially indefinitely. The appeal cases establish that a Convention Refugee can be denied a discretionary temporary entry visa and become unlawful. Unlawful status could ultimately breach New Zealand's obligations under Article 33 of the Refugee Convention 1951, as it would procedurally deny Convention Refugees asylum. It would force them to choose between the right of non-refoulement and leaving New Zealand to seek the ability to access all the human rights contained in the Universal Declaration of Human Rights elsewhere. This paper discusses how the current system has given rise to these breaches and emphasizes the need to create a designated temporary entry visa category for Convention Refugees.
Keywords: domestic policy, immigration, migration, New Zealand
Procedia PDF Downloads 102
951 Methylphenidate Use by Canadian Children and Adolescents and the Associated Adverse Reactions
Authors: Ming-Dong Wang, Abigail F. Ruby, Michelle E. Ross
Abstract:
Methylphenidate is a first-line treatment for attention deficit hyperactivity disorder (ADHD), a common mental health disorder in children and adolescents. Over the last several decades, the rate of children and adolescents using ADHD medication has been increasing in many countries. A recent study found that the prevalence of ADHD medication use among children aged 3-18 years increased in 13 different world regions between 2001 and 2015, with the absolute increase ranging from 0.02 to 0.26% per year. The goal of this study was to examine the use of methylphenidate by Canadian children and its associated adverse reactions. Methylphenidate use information for young Canadians aged 0-14 years was extracted from IQVIA data on prescriptions dispensed by pharmacies between April 2014 and June 2020. Adverse reaction information associated with methylphenidate use was extracted from the Canada Vigilance database for the same period. Methylphenidate use trends were analyzed by sex, age group (0-4 years, 5-9 years, and 10-14 years), and geographical location (province). The common classes of adverse reactions associated with methylphenidate use were sorted, and the relative risks associated with methylphenidate use were estimated by comparison with two second-line amphetamine medications for ADHD. This study revealed that among Canadians aged 0-14 years, every 100 people used about 25 prescriptions (or 23,000 mg) of methylphenidate per year during the study period, and use increased with time. Boys used almost three times more methylphenidate than girls. The amount of drug used increased with age: Canadians aged 10-14 years used nearly three times as much as those aged 5-9 years. Seasonal methylphenidate use patterns were apparent among young Canadians, but the seasonal trends differed among the three age groups.
Methylphenidate use varied from region to region, and the highest use was observed in Quebec, where it was at least double that of any other province. During the study period, Health Canada received 304 adverse reaction reports associated with the use of methylphenidate by Canadians aged 0-14 years. The number of adverse reaction reports received for boys was 3.5 times higher than that for girls. The three most common adverse reaction classes were psychiatric disorders; nervous system disorders; and injury, poisoning and procedural complications. The most commonly reported adverse reaction for boys was aggression (11.2%), while for girls it was tremor (9.6%). The safety profile, in terms of adverse reaction classes associated with methylphenidate use, was similar to that of the selected control products. Methylphenidate is a commonly used pharmaceutical product among young Canadians, particularly in the province of Quebec, and boys used approximately three times more of this product than girls. Future investigation is needed to determine what factors are associated with the observed geographic variations in Canada.
Keywords: adverse reaction risk, methylphenidate, prescription trend, use variation
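The usage rates above are simple normalizations per 100 persons. A hedged sketch of the arithmetic, with placeholder counts chosen only to reproduce the reported order of magnitude (they are not IQVIA's actual figures):

```python
def per_100(count, population):
    """Annual count expressed per 100 persons."""
    return 100 * count / population

# Placeholder figures, not the study's data
population = 5_000_000     # assumed number of Canadians aged 0-14
prescriptions = 1_250_000  # assumed prescriptions dispensed per year
rate = per_100(prescriptions, population)
print(f"{rate:.1f} prescriptions per 100 persons per year")

# The abstract's own figures (25 prescriptions and 23,000 mg per 100 persons)
# imply an average dispensed amount of 23,000 / 25 = 920 mg per prescription.
print(f"implied average: {23_000 / 25:.0f} mg per prescription")
```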
Procedia PDF Downloads 160
950 Forensic Investigation: The Impact of Biometric-Based Solution in Combatting Mobile Fraud
Authors: Mokopane Charles Marakalala
Abstract:
Research shows that mobile fraud grew exponentially in South Africa during the lockdown caused by the COVID-19 pandemic. According to the South African Banking Risk Information Centre (SABRIC), fraudulent online banking and transactions resulted in a sharp increase in cybercrime since the beginning of the lockdown, resulting in huge losses to the banking industry in South Africa. While the Financial Intelligence Centre Act, 38 of 2001, regulates financial transactions, it is evident that criminals are using technology to their advantage. Money-laundering ranks among the major crimes, not only in South Africa but worldwide. This paper focuses on the impact of biometric-based solutions in combatting mobile fraud at SABRIC. SABRIC faced the challenge of successful mobile fraud: cybercriminals could hijack a mobile device and use it to gain access to sensitive personal data and accounts. Cybercriminals constantly loot the depths of cyberspace in search of victims to attack. Millions of people worldwide use online banking to do their regular bank-related transactions quickly and conveniently. This was supported by SABRIC, which regularly highlighted incidents of mobile fraud, corruption, and maladministration, with customers failing to secure their banking online and thus being vulnerable to falling prey to fraud scams such as mobile fraud. Criminals have made use of digital platforms since the development of technology. In 2017, 13,438 incidents involving banking apps, internet banking, and mobile banking caused the sector to suffer gross losses of more than R250,000,000. The affected parties are left to point fingers at one another while the fraudster makes off with the money. A non-probability sampling method (purposive sampling) was used in selecting participants, and data were collected through telephone calls and virtual interviews.
The results indicate that there is a relationship between remote online banking and the increase in money-laundering, as the system allows transactions to take place with limited verification processes. This paper highlights the significance of developing prevention mechanisms, capacity development, and strategies for both financial institutions and law enforcement agencies in South Africa to reduce crimes such as money-laundering. The researcher recommends that awareness among bank staff be increased through the provision of requisite and adequate training.
Keywords: biometric-based solution, investigation, cybercrime, forensic investigation, fraud, combatting
Procedia PDF Downloads 101
949 A Research on the Improvement of Small and Medium-Sized City in Early-Modern China (1895-1927): Taking Southern Jiangsu as an Example
Authors: Xiaoqiang Fu, Baihao Li
Abstract:
In 1895, the defeat in the First Sino-Japanese War prompted a trend of comprehensive and systematic study of Western models in China. In urban planning and construction, an urban reform movement sprang up slowly, aimed at renovating and reconstructing traditional cities into modern cities similar to the concessions. During the movement, the Chinese traditional city initiated a process of modern urban planning for its modernization. Meanwhile, the traditional planning morphology and system started to disintegrate, while Western forms and technology became the paradigm. The improvement of existing cities had therefore become the prototype of urban planning in early modern China. Current research on the movement concentrates mainly on large cities, concessions, railway hub cities, and similar special cities; systematic research on the large number of traditional small and medium-sized cities remains a blank. This paper takes the improvement constructions of small and medium-sized cities in the southern region of Jiangsu Province as its research object. First, the criteria for small and medium-sized cities are based on the administrative levels of general office and cities at the county level. Second, Southern Jiangsu is a suitable research object: the southern area of Jiangsu Province, called Southern Jiangsu for short, was the most economically developed region in Jiangsu and one of the most economically developed and most urbanized regions in China. As one of the most developed agricultural areas in ancient China, Southern Jiangsu formed a large number of traditional small and medium-sized cities. In early modern times, with the help of Shanghai's economic radiation, its geographical advantage, and a powerful economic foundation, Southern Jiangsu became an important birthplace of Chinese national industry.
Furthermore, the strong business atmosphere promoted widespread urban improvement practices unmatched in other regions. Meanwhile, the example of Shanghai, Zhenjiang, Suzhou, and other port cities became the improvement pattern for small and medium-sized cities in Southern Jiangsu. This paper analyzes the reform movement of the small and medium-sized cities in Southern Jiangsu (1895-1927), including its subjects, objects, laws, and technologies and the influence of political and social factors. Finally, this paper reveals the formation mechanism and characteristics of the urban improvement movement in early modern China. According to the paper, the improvement of small and medium-sized cities was a kind of gestation of local city planning culture in early modern China, fusing imported models with endogenous development.
Keywords: early modern China, improvement of small-medium city, southern region of Jiangsu province, urban planning history of China
Procedia PDF Downloads 260
948 Improving Student Learning in a Math Bridge Course through Computer Algebra Systems
Authors: Alejandro Adorjan
Abstract:
Universities are motivated to understand the factors contributing to low retention of engineering undergraduates. While the number of precollege students entering engineering increases, the number of engineering graduates continues to decrease, and attrition rates for engineering undergraduates remain high. Calculus 1 (C1) is the entry point of most undergraduate engineering science programs and often a prerequisite for Computing Curricula courses. Mathematics continues to be a major hurdle for engineering students, and many students who drop out of engineering specifically cite Calculus as one of the most influential factors in that decision. In this context, creating course activities that increase retention and motivate students to obtain better final results is a challenge. Calculus 1 at Universidad ORT Uruguay focuses on developing several competencies in Software Engineering students, such as capacity for synthesis, abstraction, and problem solving (based on the ACM/AIS/IEEE curricula). Every semester we try to reflect on our practice and to answer the following research question: what kind of teaching approach in Calculus 1 can we design to retain students and obtain better results? Since 2010, Universidad ORT Uruguay has offered a six-week, non-compulsory summer bridge course of preparatory math (to bridge the math gap between high school and university). Last semester was the first time the Department of Mathematics offered the course while students were enrolled in C1. Traditional lectures in this bridge course led students to simply transcribe notes from the blackboard. Last semester we instead proposed a hands-on lab course using GeoGebra (interactive geometry and computer algebra system (CAS) software) as a math-driven development tool. Students worked in a computer laboratory class and completed most of the tasks and topics in GeoGebra. As a result of this approach, several pros and cons were found.
The weekly hours of mathematics were excessive for students, and, as the course was non-compulsory, attendance decreased with time. Nevertheless, the activity succeeded in improving final test results, and most students expressed pleasure at working with this methodology. This technology-oriented teaching approach strengthens the math competencies students need for Calculus 1 and improves student performance, engagement, and self-confidence. It is important as teachers to reflect on our practice, including innovative proposals with the objective of engaging students, increasing retention, and obtaining better results. The high degree of motivation and engagement of participants with this methodology exceeded our initial expectations, so we plan to experiment with more groups during the summer to validate these preliminary results.
Keywords: calculus, engineering education, PreCalculus, Summer Program
Procedia PDF Downloads 290
947 Assessment of Physical Learning Environments in ECE: Interdisciplinary and Multivocal Innovation for Chilean Kindergartens
Authors: Cynthia Adlerstein
Abstract:
The physical learning environment (PLE) has been considered, after family and educators, the third teacher. There have been conflicting and converging viewpoints on the role of the physical dimensions of places of learning in facilitating educational innovation and quality. Despite the different approaches, PLE has been widely recognized as a key factor in the quality of the learning experience and in the levels of learning achievement in ECE. The conceptual frameworks of the field assume that PLE consists of a complex web of factors that shape the overall conditions for learning, and that much more interdisciplinary and complementary research and development methodologies are required. Although the relevance of PLE attracts broad international consensus, in Chile it remains under-researched and weakly regulated by public policy. Gaining deeper contextual understanding and more thoughtfully designed recommendations requires innovative assessment tools that cross cultural and disciplinary boundaries to produce new hybrid approaches and improvements. When considering a PLE-based change process for ECE improvement, a central question is: what dimensions, variables, and indicators could allow a comprehensive assessment of PLE in Chilean kindergartens? Based on a grounded theory social justice inquiry, we adopted a mixed-methods design that enabled a multivocal and interdisciplinary construction of data. Using in-depth interviews, discussion groups, questionnaires, and documentary analysis, we elicited the PLE discourses of politicians, early childhood practitioners, experts in architectural design and ergonomics, ECE stakeholders, and 3- to 5-year-olds. A constant comparison method enabled the construction of the dimensions, variables, and indicators through which PLE assessment is possible.
Subsequently, the instrument was applied to a sample of 125 early childhood classrooms to test reliability (internal consistency) and validity (content and construct). As a result, an interdisciplinary and multivocal tool for assessing physical learning environments was constructed and validated for Chilean kindergartens. The tool is structured upon 7 dimensions (wellbeing, flexible, empowerment, inclusiveness, symbolically meaningful, pedagogically intentioned, institutional management), 19 variables, and 105 indicators that are assessed through observation and registration on a mobile app. The overall reliability of the instrument is .938, while the consistency of each dimension varies between .773 (inclusive) and .946 (symbolically meaningful). The validation process, through expert opinion and factorial analysis (chi-square test), has shown that the dimensions of the assessment tool reflect the factors of physical learning environments. The constructed assessment tool for kindergartens highlights the significance of the physical environment in early childhood educational settings. The relevance of the instrument lies in its interdisciplinary approach to PLE and in its capability to guide innovative learning environments based on educational habitability. Though further analyses are required for concurrent validation and standardization, the tool has been considered by practitioners and ECE stakeholders an intuitive, accessible, and remarkable instrument to raise awareness of PLE and of the equitable distribution of learning opportunities.
Keywords: Chilean kindergartens, early childhood education, physical learning environment, third teacher
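The internal-consistency figures quoted (.938 overall, .773 to .946 per dimension) are the kind of statistic computed as Cronbach's alpha. A minimal standard-library sketch of the formula, applied to made-up item scores rather than the study's data:

```python
import statistics

def cronbach_alpha(items):
    """items: one inner list of scores per indicator, aligned by respondent.
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    item_var = sum(statistics.variance(scores) for scores in items)
    return k / (k - 1) * (1 - item_var / statistics.variance(totals))

# Made-up ratings of 3 indicators across 5 classrooms (not the study's data)
items = [
    [4, 5, 3, 5, 4],
    [4, 4, 3, 5, 4],
    [5, 5, 3, 4, 4],
]
print(round(cronbach_alpha(items), 3))
```

Values above roughly .7 are conventionally read as acceptable internal consistency, which is why the per-dimension range reported (.773 to .946) supports the tool's reliability claim.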
Procedia PDF Downloads 357
946 Through Additive Manufacturing: A New Perspective for the Mass Production of Made in Italy Products
Authors: Elisabetta Cianfanelli, Paolo Pupparo, Maria Claudia Coppola
Abstract:
Recent evolutions in innovation processes and in the intrinsic tendencies of product development lead to new considerations on the design flow. The instability and complexity that characterize contemporary life define new problems in the production of products, stimulating at the same time the adoption of new solutions across the entire design process. The advent of additive manufacturing, but also of IoT and AI technologies, continuously confronts us with new paradigms regarding design as a social activity. From the point of view of application, these technologies raise a whole series of problems and considerations immanent to design thinking. Addressing these problems may require some initial intuition and the use of a provisional set of rules or plausible strategies, i.e., heuristic reasoning. At the same time, however, the evolution of digital technology and the computational speed of new design tools describe a new and contrary design framework in which to operate. It is therefore interesting to understand the opportunities and boundaries of the new man-algorithm relationship. This contribution investigates the man-algorithm relationship starting from the state of the art of the Made in Italy model; the best-known fields of application are described, with a focus on specific cases in which the mutual relationship between man and AI becomes a new driving force of innovation for entire production chains. On the other hand, the use of algorithms could absorb many design phases, such as the definition of shape, dimensions, proportions, materials, static verifications, and simulations. Operating in this context therefore becomes a strategic action, capable of defining fundamental choices for the design of product systems in the near future. If there is a human-algorithm combination within a new integrated system, quantitative values can be controlled in relation to qualitative and material values. 
The trajectory described therefore becomes a new design horizon in which to operate, where it is interesting to highlight the good practices that already exist. In this context, the designer developing new forms can experiment with ways still unexpressed in the project and can define a new synthesis and simplification of algorithms, so that each artifact carries a signature that defines it in all its parts, emotional and structural. This signature of the designer, a combination of values and design culture, will be internal to the algorithms and able to relate to digital technologies, creating a generative dialogue for design purposes. The envisaged result indicates a new vision of digital technologies, no longer understood only as custodians of vast quantities of information, but also as valid integrated tools in close relationship with the design culture.
Keywords: decision making, design heuristics, product design, product design process, design paradigms
Procedia PDF Downloads 119
945 Extrudable Foamed Concrete: General Benefits in Prefabrication and Comparison in Terms of Fresh Properties and Compressive Strength with Classic Foamed Concrete
Authors: D. Falliano, G. Ricciardi, E. Gugliandolo
Abstract:
Foamed concrete belongs to the category of lightweight concrete. It is characterized by a density generally ranging from 200 to 2000 kg/m³ and typically comprises cement, water, preformed foam, fine sand and possibly fine particles such as fly ash or silica fume. The foam component mixed with the cement paste gives rise to a system of air voids in the cementitious matrix. The peculiar characteristics of foamed concrete elements can be summarized in the following aspects: 1) lightness, which allows reducing the dimensions of the resisting frame structure and is advantageous in refurbishment or seismic retrofitting in seismically vulnerable areas; 2) thermal insulating properties, especially at low densities; 3) good fire resistance compared to ordinary concrete; 4) improved workability; 5) cost-effectiveness due to the use of rather simple constituents that are easily available locally. Classic foamed concrete cannot be extruded, as dimensional stability in the green state is not achievable, and this severely limits the possibility of industrializing it through a simple and cost-effective process characterized by flexibility and high production capacity. In fact, the viscosity-enhancing agents (VEA) used to extrude traditional concrete cause the air bubbles in foamed concrete to collapse, so that it is impossible to extrude a lightweight product. These requirements have suggested the study of a particular additive that modifies the rheology of the fresh foamed concrete paste by increasing cohesion and viscosity and, at the same time, stabilizes the bubbles in the cementitious matrix, in order to allow dimensional stability in the green state and, consequently, the extrusion of a lightweight product. There are plans to submit the additive’s formulation to patent. 
In addition to the general benefits of the extrusion process, extrudable foamed concrete allows further limits to be overcome: the elimination of formworks, and an expanded application spectrum due to the possibility of extrusion in a density range between 200 and 2000 kg/m³, which allows the prefabrication of both structural and non-structural constructive elements. This contribution also presents the significant differences between extrudable and classic foamed concrete fresh properties in terms of slump. Plastic air content, plastic density, hardened density and compressive strength have also been evaluated. The outcomes show that there are no substantial differences between extrudable and classic foamed concrete compressive strengths.
Keywords: compressive strength, extrusion, foamed concrete, fresh properties, plastic air content, slump
Procedia PDF Downloads 174
944 An Inquiry of the Impact of Flood Risk on Housing Market with Enhanced Geographically Weighted Regression
Authors: Lin-Han Chiang Hsieh, Hsiao-Yi Lin
Abstract:
This study aims to determine the impact of the disclosure of a flood potential map on housing prices. The disclosure is supposed to mitigate market failure by reducing information asymmetry. On the other hand, opponents argue that the official disclosure of simulated results will only create unnecessary disturbances in the housing market. This study identifies the impact of the disclosure of the flood potential map by comparing the hedonic price of flood potential before and after the disclosure. The flood potential map used in this study was published by the Taipei municipal government in 2015 and is the result of a comprehensive simulation based on geographical, hydrological, and meteorological factors. The residential property sales data from 2013 to 2016 used in this study are collected from the actual sales price registration system of the Department of Land Administration (DLA). The result shows that the impact of flood potential on the residential real estate market is statistically significant both before and after the disclosure. But the trend is clearer after the disclosure, suggesting that the disclosure does have an impact on the market. Also, the result shows that the impact of flood potential differs by the severity and frequency of precipitation. The negative impact of a relatively mild, high-frequency flood potential is stronger than that of a heavy, low-probability flood potential. The result indicates that home buyers are more concerned about the frequency than the intensity of flooding. Another contribution of this study is methodological. The classic hedonic price analysis with OLS regression suffers from two spatial problems: the endogeneity problem caused by omitted spatial variables, and the heterogeneity problem arising from the presumption that regression coefficients are spatially constant. These two problems are seldom considered in a single model. 
This study addresses the endogeneity and heterogeneity problems together by combining the spatial fixed-effect model and geographically weighted regression (GWR). A series of studies indicates that the hedonic price of certain environmental assets varies spatially when GWR is applied. Since the endogeneity problem is usually not considered in typical GWR models, it is arguable that omitted spatial variables might bias the results of GWR models. By combining the spatial fixed-effect model and GWR, this study concludes that the effect of the flood potential map is highly sensitive to location, even after controlling for spatial autocorrelation. The main policy implication of this result is that it is improper to estimate the potential benefit of a flood prevention policy by simply multiplying the hedonic price of flood risk by the number of houses, because the effect of flood prevention might vary dramatically by location.
Keywords: flood potential, hedonic price analysis, endogeneity, heterogeneity, geographically-weighted regression
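As a sketch of the core GWR step described above, the toy example below fits a separate weighted least-squares regression at each location using a Gaussian distance kernel. The housing data are synthetic and every variable name is an illustrative assumption, not the study's dataset:

```python
import numpy as np

def gwr_coefficients(X, y, coords, bandwidth):
    """Local WLS fit at each observation with a Gaussian distance kernel."""
    n = X.shape[0]
    Xd = np.column_stack([np.ones(n), X])           # add intercept column
    betas = np.empty((n, Xd.shape[1]))
    for i in range(n):
        d = np.linalg.norm(coords - coords[i], axis=1)
        w = np.exp(-(d / bandwidth) ** 2)           # Gaussian kernel weights
        XtW = Xd.T * w                              # equivalent to Xd.T @ diag(w)
        betas[i] = np.linalg.solve(XtW @ Xd, XtW @ y)
    return betas

# Synthetic data: price ~ flood risk, with the penalty varying over space
rng = np.random.default_rng(0)
coords = rng.uniform(0, 10, size=(100, 2))
flood = rng.uniform(0, 1, 100)
true_effect = -5 - coords[:, 0]                     # penalty grows eastward
price = 100 + true_effect * flood + rng.normal(0, 0.5, 100)
betas = gwr_coefficients(flood.reshape(-1, 1), price, coords, bandwidth=2.0)
# betas[:, 1] holds the spatially varying flood-risk coefficient
```

The local slope estimates track the spatially varying true effect, which is exactly the heterogeneity that a single global OLS coefficient would average away; the study's actual model additionally layers spatial fixed effects on top of this step.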
Procedia PDF Downloads 290
943 The Sea Striker: The Relevance of Small Assets Using an Integrated Conception with Operational Performance Computations
Authors: Gaëtan Calvar, Christophe Bouvier, Alexis Blasselle
Abstract:
This paper presents the Sea Striker, a compact hydrofoil designed to address some of the issues raised by the recent evolutions of naval missions, threats and operation theatres in modern warfare. Able to perform a wide range of operations, the Sea Striker is a 40-meter stealth surface combatant equipped with a gas turbine and aft and forward foils to reach high speeds. The Sea Striker's stealthiness results from the combination of a composite structure, the exterior design, and the advanced integration of sensors. The ship is fitted with a powerful and adaptable combat system, ensuring a versatile and efficient response to modern threats. Lightly manned with a core crew of 10, this hydrofoil is highly automated and can be remotely piloted for special forces operations or transit. Such a ship is not new: it has been used in the past by different navies, for example by the US Navy with the USS Pegasus. Nevertheless, recent evolutions in science and technology on the one hand, and the emergence of new missions, threats and operation theatres on the other, put forward this concept as an answer to today's operational challenges. Indeed, even if multiple opinions and analyses can be given regarding modern warfare and naval surface operations, general observations and tendencies can be drawn, such as the major increase in sensor and weapon types, ranges and, more generally, capacities; the emergence of new versatile and evolving threats and enemies, such as asymmetric groups, drone swarms or hypersonic missiles; and the growing number of operation theatres located in coastal and shallow waters. This research was carried out through a complete study of the ship together with several operational performance computations, in order to justify the relevance of using ships like the Sea Striker in naval surface operations. 
For the selected scenarios, the conception process enabled the performance, namely a "Measure of Efficiency" in the NATO framework, to be measured for two different kinds of models: a centralized, classic model using large and powerful ships; and a distributed model relying on several Sea Strikers. After this stage, a comparison of the two models was performed. Lethal, agile, stealthy, compact and fitted with a complete set of sensors, the Sea Striker is a new major player in modern warfare and constitutes a very attractive option between the naval unit and the combat helicopter, enabling high operational performance at a reduced cost.
Keywords: surface combatant, compact, hydrofoil, stealth, velocity, lethal
Procedia PDF Downloads 117
942 Implementation of Hybrid Curriculum in Canadian Dental Schools to Manage Child Abuse and Neglect
Authors: Priyajeet Kaur Kaleka
Abstract:
Introduction: A dentist is often the first responder in the battle for a patient’s healthy body and may be the first health professional to observe signs of child abuse, be it physical, emotional, and/or sexual mistreatment. Therefore, it is an ethical responsibility for the dental clinician to detect and report suspected cases of child abuse and neglect (CAN). The main reasons for not reporting suspected cases of CAN are, with special emphasis on the third: 1) uncertainty of the diagnosis, 2) lack of knowledge of the reporting procedure, and 3) the fact that child abuse and neglect has somewhat remained a subject of ignorance among dental professionals because of a lack of advanced clinical training. Given these epidemic proportions, there is scope for further research on dental school curriculum design. Purpose: This study aimed to assess the knowledge and attitude of dentists in Canada regarding signs and symptoms of child abuse and neglect (CAN) and reporting procedures, and whether the educational strategies followed by dental schools address this sensitive issue. In pursuit of that aim, this abstract summarizes the evidence related to this question. Materials and Methods: Data were collected through a specially designed questionnaire adapted and modified from the author’s previous cross-sectional study on CAN, which was conducted in Pune, India, in 2016 and is available in the PubMed database. Design: A random sample was drawn from the targeted population of registered dentists and dental students in Canada regarding their knowledge, professional responsibilities, and behavior concerning child abuse. The questionnaire was distributed to 200 members, of whom a total of 157 subjects were included in the final sample for statistical analysis, yielding a response rate of 78.5%. Results: Despite having theoretical information on signs and symptoms, 55% of the participants indicated they are not confident in detecting child physical abuse cases. 
90% of respondents believed that recognizing and handling CAN cases should be a part of undergraduate training. Only 4.5% of the participants correctly identified all signs of abuse, owing to inadequate formal training in dental schools and workplaces. Although nearly 96.3% agreed that it is a dentist’s legal responsibility to report CAN, only a small percentage of the participants had reported an abuse case in the past, while 72% stated that the most common factor that might prevent a dentist from reporting a case was doubt over the diagnosis. Conclusion: The goal is to motivate dental schools to deal with this critical issue and provide their students with thorough training to strengthen their capability to care for and protect children. Educational institutions should make efforts to spread awareness among dental students regarding the management and tackling of CAN. Clinical Significance: There should be modifications in the dental school curriculum focusing on problem-based learning models to assist graduates in fulfilling their legal and professional responsibilities. CAN literacy should be incorporated into the dental curriculum, which will eventually help future dentists break this intergenerational cycle of violence.
Keywords: abuse, child abuse and neglect, dentist knowledge, dental school curriculum, problem-based learning
Procedia PDF Downloads 200
941 The Influence of Nutritional and Immunological Status on the Prognosis of Head and Neck Cancer
Authors: Ching-Yi Yiu, Hui-Chen Hsu
Abstract:
Objectives: Head and neck cancer (HNC) is a major global health problem. Despite advances in diagnosis and treatment, the overall survival of HNC is still low. Growing recognition of the interaction between the host immune system and cancer cells has led to a better understanding of the processes of tumor initiation, progression and metastasis. Many systemic inflammatory responses have been shown to play a crucial role in cancer progression. The pre- and post-treatment nutritional and immunological status of HNC patients is a reliable prognostic indicator of tumor outcome and survival. Methods: Between July 2020 and June 2022, we enrolled 60 HNC patients, including 59 males and 1 female, at Chi Mei Medical Center, Liouying, Taiwan. The age distribution was from 37 to 81 years old (y/o), with a mean age of 57.6 y/o. We evaluated the pre- and post-treatment nutritional and immunological status of these HNC patients using body weight, body weight loss, body mass index (BMI), whole blood count including hemoglobin (Hb), lymphocyte, neutrophil and platelet counts, and biochemistry including prealbumin, albumin and C-reactive protein (CRP), measured before treatment and at 3 and 6 months post-treatment. We calculated the neutrophil-to-lymphocyte ratio (NLR) and platelet-to-lymphocyte ratio (PLR) to assess how these biomarkers influence the outcomes of HNC patients. Results: There were 21 cases (35%) of carcinoma of the hypopharynx, 9 cases of carcinoma of the larynx, 6 cases each of carcinoma of the tonsil and tongue, 5 cases each of carcinoma of the soft palate and tongue base, 2 cases each of carcinoma of the buccal mucosa, retromolar trigone and mouth floor, and 1 case each of carcinoma of the hard palate and lower lip. There were 15 stage I cases, 13 stage II, 6 stage III, 10 stage IVA, and 16 stage IVB. All patients received surgery, chemoradiation therapy or combined therapy. 
There were wound infections in 6 cases, pharyngocutaneous (PC) fistula in 2 cases, flap necrosis in 2 cases, and mortality in 6 cases. In the wound infection group, the average BMI is 20.4 kg/m², the average Hb is 12.9 g/dL, the average albumin is 3.5 g/dL, the average NLR is 6.78, and the average PLR is 243.5. In the PC fistula and flap necrosis group, the average BMI is 21.65 kg/m², the average Hb is 11.7 g/dL, the average albumin is 3.15 g/dL, the average NLR is 13.28, and the average PLR is 418.84. In the mortality group, the average BMI is 22.3 kg/m², the average Hb is 13.58 g/dL, the average albumin is 3.77 g/dL, the average NLR is 6.06, and the average PLR is 275.5. Conclusion: HNC is a challenging public health problem worldwide, especially in Taiwan, an area with a high prevalence of betel nut consumption. Besides the well-established risk factors of smoking, drinking and betel nut use, other biomarkers may serve as significant prognosticators of HNC outcomes. We conclude that when the average BMI is less than 22 kg/m², the average Hb is lower than 12.0 g/dL, the average albumin is lower than 3.3 g/dL, the average NLR is higher than 3, and the average PLR is more than 170, surgical complications and mortality increase and the prognosis of HNC patients is poor.
Keywords: nutritional, immunological, neutrophil-to-lymphocyte ratio, platelet-to-lymphocyte ratio
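As a trivial illustrative sketch (the counts below are hypothetical, not the study's patients), the two ratios used above are computed directly from absolute blood counts:

```python
def inflammation_ratios(neutrophils, lymphocytes, platelets):
    """Return (NLR, PLR) from absolute counts expressed in the same units."""
    return neutrophils / lymphocytes, platelets / lymphocytes

# Hypothetical counts (x 10^3 cells/uL) for a single patient
nlr, plr = inflammation_ratios(neutrophils=6.6, lymphocytes=1.5, platelets=260.0)
# Cut-offs referenced in the study: NLR around 3 and PLR around 170
high_risk = nlr > 3 and plr > 170
```

For these made-up counts, NLR is 4.4 and PLR is about 173, so both exceed the cut-offs and the patient would be flagged as higher risk under the study's criteria.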
Procedia PDF Downloads 79
940 The Application of Raman Spectroscopy in Olive Oil Analysis
Authors: Silvia Portarena, Chiara Anselmi, Chiara Baldacchini, Enrico Brugnoli
Abstract:
Extra virgin olive oil (EVOO) is a complex matrix mainly composed of fatty acids and other minor compounds, among which carotenoids are well known for their antioxidative function, a key mechanism of protection against cancer, cardiovascular diseases, and macular degeneration in humans. EVOO composition in terms of such constituents is generally the result of a complex combination of genetic, agronomic and environmental factors. To selectively improve the quality of EVOOs, the role of each factor on its biochemical composition needs to be investigated. By selecting fruits from four different cultivars similarly grown and harvested, it was demonstrated that Raman spectroscopy, combined with chemometric analysis, is able to discriminate the different cultivars, also as a function of the harvest date, based on the relative content and composition of fatty acids and carotenoids. In particular, a correct classification of up to 94.4% of samples, according to cultivar and maturation stage, was obtained. Moreover, by using gas chromatography and high-performance liquid chromatography as reference techniques, the Raman spectral features further allowed models to be built, based on partial least squares regression, that were able to predict the relative amounts of the main fatty acids and the main carotenoids in EVOO, with high coefficients of determination. Besides genetic factors, climatic parameters, such as light exposure, distance from the sea, temperature, and amount of precipitation, could have a strong influence on EVOO composition in terms of both major and minor compounds. This suggests that Raman spectra could act as a specific fingerprint for the geographical discrimination and authentication of EVOO. To understand the influence of the environment on EVOO Raman spectra, samples from seven regions along the Italian coasts were selected and analyzed. 
In particular, a dual approach was used, combining Raman spectroscopy and isotope ratio mass spectrometry (IRMS) with principal component and linear discriminant analysis. A correct classification of 82% of EVOOs based on their regional geographical origin was obtained. Raman spectra were obtained with a Super Labram spectrometer equipped with an argon laser (514.5 nm wavelength). Analyses of stable isotope ratios were performed using an isotope ratio mass spectrometer connected to an elemental analyzer and to a pyrolysis system. These studies demonstrate that Raman spectroscopy is a valuable and useful technique for the analysis of EVOO. In combination with statistical analysis, it makes it possible to assess specific samples' content and allows oils to be classified according to their geographical and varietal origin.
Keywords: authentication, chemometrics, olive oil, Raman spectroscopy
Procedia PDF Downloads 332
939 Safety Validation of Black-Box Autonomous Systems: A Multi-Fidelity Reinforcement Learning Approach
Authors: Jared Beard, Ali Baheri
Abstract:
As autonomous systems become more prominent in society, ensuring their safe application becomes increasingly important. This is clearly illustrated by autonomous cars traveling through a crowded city or robots traversing a warehouse with heavy equipment. Human environments can be complex, with high-dimensional state and action spaces. This gives rise to two problems: analytic solutions may not be possible, and in simulation-based approaches, searching the entirety of the problem space can be computationally intractable, ruling out formal methods. To overcome this, approximate solutions may seek to find failures or estimate their likelihood of occurrence. One such approach is adaptive stress testing (AST), which uses reinforcement learning to induce failures in the system. Its premise is that a learned model can be used to help find new failure scenarios, making better use of simulations. Despite these strengths, AST fails to find particularly sparse failures and can be inclined to find solutions similar to those found previously. To help overcome this, multi-fidelity learning can be used to alleviate this overuse of information. That is, information from lower-fidelity simulations can be used to build up samples less expensively and to cover the solution space more effectively, finding a broader set of failures. Recent work in multi-fidelity learning has passed information bidirectionally using "knows what it knows" (KWIK) reinforcement learners to minimize the number of samples in high-fidelity simulators (thereby reducing computation time and load). The contribution of this work, then, is the development of a bidirectional multi-fidelity AST framework. Such an algorithm uses multi-fidelity KWIK learners in an adversarial context to find failure modes. 
Thus far, a KWIK learner has been used to train an adversary in a grid world to prevent an agent from reaching its goal, demonstrating the utility of KWIK learners in an AST framework. The next step is the implementation of the bidirectional multi-fidelity AST framework described above. Testing will be conducted in a grid world containing an agent attempting to reach a goal position and an adversary tasked with intercepting the agent, as demonstrated previously. Fidelities will be modified by adjusting the size of a time step, with higher fidelity effectively allowing for more responsive closed-loop feedback. Results will compare the single KWIK AST learner with the multi-fidelity algorithm with respect to the number of samples, distinct failure modes found, and relative effect of learning after a number of trials.
Keywords: multi-fidelity reinforcement learning, multi-fidelity simulation, safety validation, falsification
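The grid-world setup sketched above can be illustrated with a deliberately tiny stand-in. The example below uses plain tabular Q-learning (not a KWIK learner, which the study actually uses) to train an adversary to intercept an agent that follows a fixed path; the grid, path, rewards and hyperparameters are all illustrative assumptions:

```python
import random

# Agent follows a fixed path on a 3x3 grid: along the top row, then down
AGENT_PATH = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2)]
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1), (0, 0)]  # up, down, left, right, stay

def step(pos, a):
    """Apply an action, clamping the adversary to the 3x3 grid."""
    return (min(max(pos[0] + a[0], 0), 2), min(max(pos[1] + a[1], 0), 2))

random.seed(0)
Q = {}  # (time, adversary_position) -> list of action values
for _ in range(5000):
    adv = (2, 0)
    for t in range(len(AGENT_PATH) - 1):
        if adv == AGENT_PATH[t]:            # interception: a failure is found
            break
        q = Q.setdefault((t, adv), [0.0] * len(ACTIONS))
        a = random.randrange(len(ACTIONS))  # uniform off-policy exploration
        nxt = step(adv, ACTIONS[a])
        reward = 1.0 if nxt == AGENT_PATH[t + 1] else 0.0
        next_q = Q.get((t + 1, nxt), [0.0] * len(ACTIONS))
        q[a] += 0.5 * (reward + 0.9 * max(next_q) - q[a])  # Q-learning update
        adv = nxt

# Greedy rollout with the learned values: does the adversary intercept?
adv, caught = (2, 0), False
for t in range(len(AGENT_PATH)):
    if adv == AGENT_PATH[t]:
        caught = True
        break
    if t < len(AGENT_PATH) - 1:
        q = Q.get((t, adv), [0.0] * len(ACTIONS))
        adv = step(adv, ACTIONS[max(range(len(ACTIONS)), key=q.__getitem__)])
```

After training, the greedy rollout reproduces an interception, i.e. a found "failure"; in the AST setting the reward would instead encode proximity to a system-level failure event, and the learner would be replaced by the KWIK variant operating across fidelities.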
Procedia PDF Downloads 157
938 Family Firm Internationalization: Identification of Alternative Success Pathways
Authors: Sascha Kraus, Wolfgang Hora, Philipp Stieg, Thomas Niemand, Ferdinand Thies, Matthias Filser
Abstract:
In most countries, small and medium-sized enterprises (SME) are the backbone of the economy due to their impact on job creation, innovation and wealth creation. Moreover, the ongoing globalization makes it inevitable, even for SME that traditionally focused on their domestic markets, to internationalize their business activities in order to realize further growth and survive in international markets. Thus, internationalization has become one of the most common growth strategies for SME and has received increasing scholarly attention over the last two decades. On the downside, internationalization can also be regarded as the most complex strategy that a firm can undertake. Family firms in particular, which are often characterized by limited financial capital, a risk-averse nature and limited growth aspirations, are likely to face greater challenges when taking the pathway to internationalization. Especially the triangulation of family, ownership, and management (so-called 'familiness') manifests in a unique behavior and decision-making process, often characterized by the importance given to noneconomic goals, that distinguishes a family firm from other businesses. Taking this into account, the concept of socio-emotional wealth (SEW) has evolved to describe the behavior of family firms. In order to investigate how different internal and external firm characteristics shape the internationalization success of family firms, we drew on a sample of 297 small and medium-sized family firms from Germany, Austria, Switzerland, and Liechtenstein. We include SEW as an essential family firm characteristic and add entrepreneurial orientation (EO) and absorptive capacity (AC) as the two major intra-organizational characteristics, as well as collaboration intensity (CI) and relational knowledge (RK) as the two major external network characteristics. 
Based on previous research, we assume that these characteristics are important in explaining the internationalization success of family firm SME. Regarding the data analysis, we applied fuzzy-set qualitative comparative analysis (fsQCA), an approach that allows the identification of configurations of firm characteristics and is specifically used to study complex causal relationships where traditional regression techniques reach their limits. Results indicate that several combinations of these family firm characteristics can lead to international success, with no single characteristic permanently required. Instead, there are many roads for family firms to walk down to achieve internationalization success. Consequently, our data indicate that family-owned SME are heterogeneous and that internationalization is a complex and dynamic process. Results further show that network-related characteristics occur in all sets and thus represent an essential element in the internationalization process of family-owned SME. The contribution of our study is twofold, as we investigate different forms of international expansion for family firms and how to improve them. First, we broaden the understanding of the intersection between family firm and SME internationalization with respect to major intra-organizational and network-related variables. Second, from a practical perspective, we offer family firm owners a basis for setting up internal capabilities to achieve international success.
Keywords: entrepreneurial orientation, family firm, fsQCA, internationalization, socio-emotional wealth
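The fsQCA analysis above rests on two standard set-theoretic measures: the consistency and coverage of a configuration with respect to the outcome. A minimal sketch, with hypothetical calibrated membership scores for six firms (not the study's data):

```python
import numpy as np

def consistency(x, y):
    """Degree to which configuration membership x implies outcome membership y."""
    return np.minimum(x, y).sum() / x.sum()

def coverage(x, y):
    """Share of the outcome accounted for by the configuration."""
    return np.minimum(x, y).sum() / y.sum()

# Hypothetical calibrated memberships for 6 firms; the configuration is
# "high EO AND high relational knowledge" (fuzzy AND = element-wise minimum)
eo = np.array([0.9, 0.8, 0.2, 0.7, 0.1, 0.6])
rk = np.array([0.8, 0.9, 0.3, 0.6, 0.2, 0.9])
success = np.array([0.9, 0.7, 0.4, 0.8, 0.3, 0.7])
config = np.minimum(eo, rk)
cons, cov = consistency(config, success), coverage(config, success)
# cons is approximately 0.97 here: the configuration is a near-sufficient path
```

In an fsQCA study, configurations passing a consistency threshold (often around 0.8) are retained as candidate "pathways" to the outcome, which is how multiple alternative success combinations, as reported above, can coexist.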
Procedia PDF Downloads 241
937 Enhance Concurrent Design Approach through a Design Methodology Based on an Artificial Intelligence Framework: Guiding Group Decision Making to Balanced Preliminary Design Solution
Authors: Loris Franchi, Daniele Calvi, Sabrina Corpino
Abstract:
This paper presents a design methodology in which stakeholders are assisted in the exploration of a so-called negotiation space, aiming at the maximization of both group social welfare and each stakeholder’s perceived utility. The outcome is fewer design iterations needed for design convergence while obtaining a higher solution effectiveness. During the early stage of a space project, not only the knowledge about the system but also the decision outcomes are often unknown. The scenario is exacerbated by the fact that decisions taken at this stage imply delayed costs. Hence, it is necessary to have a clear definition of the problem under analysis, especially in the initial definition. This can be obtained thanks to a robust generation and exploration of design alternatives. This process must consider that design usually involves various individuals who take decisions affecting one another. Effective coordination among these decision-makers is critical, and finding a mutually agreed solution will reduce the iterations involved in the design process. To handle this scenario, the paper proposes a design methodology which aims to speed up the process of raising the mission concept's maturity level. This push is obtained thanks to a guided exploration of the negotiation space, which involves the autonomous exploration and optimization of trade opportunities among stakeholders via artificial intelligence algorithms. The negotiation space is generated via a multidisciplinary collaborative optimization method, infused with game theory and multi-attribute utility theory. In particular, game theory is able to model the negotiation process to reach the equilibria among stakeholder needs. Because of the huge dimension of the negotiation space, a collaborative optimization framework with an evolutionary algorithm has been integrated in order to guide the game process to efficiently and rapidly search for the Pareto equilibria among stakeholders. 
Finally, the concept of utility constitutes the mechanism to bridge the language barrier between experts of different backgrounds and differing needs, using the elicited and modeled needs to evaluate a multitude of alternatives. To highlight the benefits of the proposed methodology, the paper presents the design of a CubeSat mission for the observation of the lunar radiation environment. The derived solution is able to balance all stakeholder needs and guarantees the effectiveness of the selected mission concept thanks to its robustness to change. The benefits provided by the proposed design methodology are highlighted, and further developments are proposed.
Keywords: concurrent engineering, artificial intelligence, negotiation in engineering design, multidisciplinary optimization
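A minimal sketch of the selection mechanism described above — sampling a negotiation space, filtering Pareto-nondominated designs, and picking the design that maximizes the product of stakeholder utilities (a Nash-bargaining-style welfare criterion) — is shown below. The utility functions, variables and numbers are illustrative assumptions, and plain random sampling stands in for the evolutionary search:

```python
import numpy as np

rng = np.random.default_rng(42)

# Two hypothetical stakeholders with opposing preferences over one
# normalized design variable x (e.g. payload fraction vs. cost margin)
def u_science(x):
    return x ** 0.5          # prefers a larger payload fraction

def u_cost(x):
    return (1 - x) ** 0.5    # prefers a cheaper, smaller payload

# Random sampling as a stand-in for the evolutionary search of the space
X = rng.uniform(0, 1, 200)
U = np.column_stack([u_science(X), u_cost(X)])

# Pareto filter: keep designs not dominated in both utilities
nondominated = [i for i in range(len(X))
                if not any((U[j] >= U[i]).all() and (U[j] > U[i]).any()
                           for j in range(len(X)))]

# Nash-bargaining-style selection: maximize the product of utilities;
# with these symmetric utilities the best design sits near x = 0.5
best = max(nondominated, key=lambda i: U[i, 0] * U[i, 1])
```

The product-of-utilities criterion is one simple way to encode "group social welfare plus individual perceived utility"; the methodology in the paper replaces both the sampling and the selection with a game-theoretic equilibrium search over many coupled disciplines.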
Procedia PDF Downloads 136