Search results for: conventional neural network

5272 Investigations on Utilization of Chrome Sludge, Chemical Industry Waste, in Cement Manufacturing and Its Effect on Clinker Mineralogy

Authors: Suresh Vanguri, Suresh Palla, Prasad G., Ramaswamy V., Kalyani K. V., Chaturvedi S. K., Mohapatra B. N., Sunder Rao TBVN

Abstract:

The utilization of industrial waste materials and by-products in the cement industry helps conserve natural resources besides avoiding the problems arising from waste dumping. The use of non-carbonate materials as raw mix components in clinker manufacturing is identified as one of the key areas to reduce greenhouse gas (GHG) emissions. Chrome sludge is a waste material generated from the manufacturing process of sodium dichromate. This paper presents studies on the use of chrome sludge in clinker manufacturing and its impact on the development of clinker mineral phases and on cement properties. Chrome sludge was found to contain substantial amounts of CaO, Fe2O3 and Al2O3 and was therefore used to replace some conventional sources of alumina and iron in the raw mix. Different mixes were prepared by varying the chrome sludge content from 0 to 5%, and the mixes were evaluated for burnability. Laboratory-prepared clinker samples were evaluated for qualitative and quantitative mineralogy using X-ray diffraction (XRD). Optical microscopy was employed to study the distribution of clinker phases, their granulometry and mineralogy. Since chrome sludge also contains considerable amounts of chromium, studies were conducted on the leachability of heavy elements in the chrome sludge as well as in the resultant cement samples. Estimation of heavy elements, including chromium, was carried out using ICP-OES. Further, the valence state of chromium, Cr(III) and Cr(VI), was studied using conventional chemical analysis methods coupled with UV-VIS spectroscopy. Assimilation of chromium in the clinker phases was investigated using SEM-EDXA studies. Bulk cement was prepared from the clinker to study the effect of chrome sludge on cement properties such as setting time, soundness and strength development, against the control cement. Studies indicated that chrome sludge can be successfully utilized, and its content needs to be optimized based on raw material characteristics.

Keywords: chrome sludge, leaching, mineralogy, non-carbonate materials

Procedia PDF Downloads 217
5271 Finding a Redefinition of the Relationship between Rural and Urban Knowledge

Authors: Bianca Maria Rulli, Lenny Valentino Schiaretti

Abstract:

The considerable recent urbanization has increasingly sharpened environmental and social problems all over the world. In recent years, many answers to the alarming trends in modern cities have emerged: a drastic reduction in the rate of growth is becoming essential for future generations, and small-scale economies are considered more adaptive and sustainable. According to the concept of degrowth, cities should consider moving beyond the centralization of urban living by redefining the relationship between rural and urban knowledge; growing food in cities fundamentally contributes to the increase of social and ecological resilience. Through an innovative approach, this research combines the benefits of urban agriculture (increase of biological diversity, shorter and thus more efficient supply chains, food security) and temporary land use. Together, they stimulate collaborative practices to satisfy the changing needs of communities and stakeholders. The concept proposes a coherent strategy to create a sustainable development of urban spaces, introducing a productive green network to link specific areas in the city. By shifting the current relationship between architecture and landscape, the former process of ground consumption is deeply revised. Temporary modules can be used as concrete tools to create temporal areas of innovation, transforming vacant or marginal spaces into potential laboratories for the development of the city. The only permanent ground traces, such as foundations, are minimized in order to allow future land re-use. The aim is to describe a new mindset regarding the quality of space in the metropolis which allows, in a completely flexible way, greenery and urban farming to be brought back into the cities. The wide possibilities of the research are analyzed in two different case studies. The first is a regeneration/connection project designated for social housing; the second concerns the use of temporary modules to answer the potential needs of social structures. The intention of the productive green network is to link the different vacant spaces to each other as well as to the entire urban fabric. This also generates a potential improvement of the current situation of underprivileged and disadvantaged persons.

Keywords: degrowth, green network, land use, temporary building, urban farming

Procedia PDF Downloads 503
5270 Scope of Virtualization

Authors: Pavneet Kaur, Palak Sharma

Abstract:

Virtualization is a term that basically describes the creation of a virtual version of something, such as an operating system or a network. Virtualization is a technology that has been in use since the 1970s, but with new developments and inventions, it is now useful in education, software development, etc. This paper will give a basic introduction to virtualization, along with its various categories. It will also describe the use of virtualization in software engineering and its various benefits and shortcomings.

Keywords: virtualization, hardware, software, OS

Procedia PDF Downloads 369
5269 Improvements in Double Q-Learning for Anomalous Radiation Source Searching

Authors: Bo-Bin Xiao, Chia-Yi Liu

Abstract:

In the task of searching for anomalous radiation sources, personnel holding radiation detectors may be exposed to unnecessary radiation risk, so automated search by machines becomes a necessary development. This research uses several sophisticated deep reinforcement learning algorithms, namely double Q-learning, dueling networks, and NoisyNet, to search for radiation sources. The simulation environment is a 10×10 grid with one shielding wall set in it, and the AI model is developed by training for 1 million episodes. In each training episode, the radiation source position, the radiation source intensity, the agent position, the shielding wall position, and the shielding wall length are all set randomly. The three algorithms are applied to run AI model training in four environments where the training shielding wall is a full-shielding wall, a lead wall, a concrete wall, or a lead wall and a concrete wall appearing randomly. The 12 best-performing AI models are selected by observing the reward value during the training period and are evaluated by comparing them with a gradient search algorithm. The results show that the performance of the AI models, no matter which algorithm is used, is far better than that of the gradient search algorithm. In addition, as the simulation environment becomes more complex, the AI model applying double DQN combined with the dueling and NoisyNet algorithms performs better.
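As a rough illustration of the techniques named above, the sketch below combines a dueling value/advantage head with a double Q-learning target in PyTorch. The state encoding (a flattened 10×10 grid), layer sizes, and four-action move set are assumptions made for the example, not the authors' implementation.

```python
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    """Dueling head: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""
    def __init__(self, state_dim=100, n_actions=4):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(state_dim, 128), nn.ReLU())
        self.value = nn.Linear(128, 1)              # state value V(s)
        self.advantage = nn.Linear(128, n_actions)  # action advantages A(s, a)

    def forward(self, x):
        h = self.feature(x)
        a = self.advantage(h)
        return self.value(h) + a - a.mean(dim=1, keepdim=True)

def double_q_target(online, target, reward, next_state, gamma=0.99):
    """Double Q-learning: the online net selects the action, the target net evaluates it."""
    with torch.no_grad():
        best = online(next_state).argmax(dim=1, keepdim=True)
        return reward + gamma * target(next_state).gather(1, best).squeeze(1)
```

The decoupling of action selection from action evaluation is what reduces the overestimation bias that plain Q-learning suffers from, which is the idea the abstract builds on.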

Keywords: double Q learning, dueling network, NoisyNet, source searching

Procedia PDF Downloads 113
5268 Experimental Determination of Shear Strength Properties of Lightweight Expanded Clay Aggregates Using Direct Shear and Triaxial Tests

Authors: Mahsa Shafaei Bajestani, Mahmoud Yazdani, Aliakbar Golshani

Abstract:

Artificial lightweight aggregates have a wide range of applications in industry and engineering. Nowadays, the usage of this material in geotechnical activities, especially as backfill in retaining walls, has been growing due to the specific characteristics which make it a competent alternative to conventional geotechnical materials. In practice, a material with lower weight but higher shear strength parameters would be ideal as backfill behind retaining walls because of the important roles that these parameters play in decreasing the overall active lateral earth pressure. In this study, two types of Lightweight Expanded Clay Aggregates (LECA) produced in the Leca factory are investigated. LECA is made in a rotary kiln by heating natural clay at different temperatures up to 1200 °C, making quasi-spherical aggregates with sizes ranging from 0 to 25 mm. The loose bulk density of these aggregates is between 300 and 700 kg/m³. The purpose of this research is to determine the stress-strain behavior, shear strength parameters, and energy absorption of LECA materials. Direct shear tests were conducted at five normal stresses of 25, 50, 75, 100, and 200 kPa. In addition, conventional triaxial compression tests were conducted at confining pressures of 50, 100, and 200 kPa to examine stress-strain behavior. The experimental results show a high internal angle of friction and even a considerable amount of nominal cohesion despite the granular structure of LECA. These desirable properties, along with the intrinsic low density of these aggregates, make LECA a very suitable material in geotechnical applications. Furthermore, the results demonstrate that lightweight aggregates may have high energy absorption, making them an excellent alternative material for seismic isolation.
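For readers unfamiliar with how shear strength parameters come out of a direct shear series, a Mohr-Coulomb line is fitted to the peak shear stress measured at each normal stress: the cohesion is the intercept, and the friction angle comes from the slope. A minimal sketch with illustrative numbers, not the paper's measurements:

```python
import numpy as np

# Hypothetical direct-shear results: peak shear stress at each applied normal stress (kPa).
sigma_n = np.array([25, 50, 75, 100, 200])
tau = np.array([32, 51, 68, 88, 160])  # illustrative values only

# Mohr-Coulomb fit: tau = c + sigma_n * tan(phi)
slope, c = np.polyfit(sigma_n, tau, 1)
phi = np.degrees(np.arctan(slope))
print(f"cohesion c = {c:.1f} kPa, friction angle phi = {phi:.1f} deg")
```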

Keywords: expanded clay, direct shear test, triaxial test, shear properties, energy absorption

Procedia PDF Downloads 166
5267 Extraction of Phycocyanin from Spirulina platensis by Isoelectric Point Precipitation and Salting Out for Scale Up Processes

Authors: Velasco-Rendón María Del Carmen, Cuéllar-Bermúdez Sara Paulina, Parra-Saldívar Roberto

Abstract:

Phycocyanin is a blue pigment protein with fluorescent activity produced by cyanobacteria. It has recently been studied to determine its anticancer, antioxidant and anti-inflammatory potential. In 2014, it was approved as a Generally Recognized As Safe (GRAS) protein pigment for the food industry. Therefore, phycocyanin shows potential for the food, nutraceutical, pharmaceutical and diagnostics industries. Conventional phycocyanin extraction includes buffer solutions and ammonium sulphate followed by chromatography or ATPS for protein separation. The further purification steps are therefore time-consuming, energy intensive and not suitable for scale-up processing. This work presents an alternative to conventional methods that also allows large-scale application with commercially available equipment. The extraction was performed by exposing the dry biomass to mechanical cavitation and salting out with NaCl, in order to use an edible reagent. Isoelectric point precipitation was also used, by addition of HCl and neutralization with NaOH. The results were measured and compared in terms of phycocyanin concentration, purity and extraction yield. Results showed that the best extraction condition was salting out with 0.20 M NaCl after 30 minutes of cavitation, with a concentration in the supernatant of 2.22 mg/ml, a purity of 3.28 and a recovery from the crude extract of 81.27%. Mechanical cavitation presumably increased the solvent-biomass contact, making the crude extract visibly dark blue after centrifugation. Compared to other systems, our process has fewer purification steps, similar concentrations in the phycocyanin-rich fraction and higher purity. The contaminants present in our process are edible NaCl and low pH, which can be neutralized. It can also be adapted to a semi-continuous process with commercially available equipment. These characteristics make this process an appealing alternative for phycocyanin extraction as a pigment for the food industry.
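The abstract reports concentration and purity values without naming the assay; a common spectrophotometric route, assuming the widely used Bennett and Bogorad (1973) equation for crude extracts and the A620/A280 purity ratio, would look like this:

```python
def phycocyanin_mg_per_ml(a615, a652):
    """Bennett & Bogorad (1973) estimate for crude cyanobacterial extracts."""
    return (a615 - 0.474 * a652) / 5.34

def purity(a620, a280):
    """Purity ratio: pigment absorbance over total protein absorbance."""
    return a620 / a280

# Illustrative absorbance readings, not the paper's measurements.
print(phycocyanin_mg_per_ml(1.50, 0.40), purity(1.50, 0.46))
```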

Keywords: extraction, phycocyanin, precipitation, scale-up

Procedia PDF Downloads 438
5266 Prediction of Road Accidents in Qatar by 2022

Authors: M. Abou-Amouna, A. Radwan, L. Al-kuwari, A. Hammuda, K. Al-Khalifa

Abstract:

There is growing concern over increasing incidences of road accidents and the consequent loss of human life in Qatar. In light of the future planned event in Qatar, the 2022 World Cup, Qatar should take into consideration the future deaths caused by road accidents, and past trends should be considered to give a reasonable picture of what may happen in the future. Qatar's roads should be arranged and paved in a way that accommodates the high population expected at that time, since there will be a huge number of visitors from around the world. Qatar should also consider the risks of road accidents raised in that period and plan to maintain high-level safety strategies. Given the increase in the number of road accidents in Qatar from 1995 until 2012, the elements affecting and causing road accidents are studied in detail. This paper aims to identify and examine the factors that have a strong effect on causing road accidents in the state of Qatar and to predict the total number of road accidents in Qatar in 2022. Alternative methods are discussed, and the most applicable ones according to previous research are selected for further study. The methods that suit the existing case in Qatar were the multiple linear regression model (MLR) and the artificial neural network (ANN). These methods are analyzed and their findings compared. We conclude that by using MLR the number of accidents in 2022 will reach 355,226, and by using ANN, 216,264 accidents. We conclude that MLR gave better results than ANN because the artificial neural network does not fit data with large ranges of variation well.
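A minimal sketch of the two compared model families, using scikit-learn and hypothetical yearly counts standing in for the 1995-2012 Qatar records:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

# Hypothetical yearly accident counts (illustrative trend plus noise).
rng = np.random.default_rng(0)
t = np.arange(0, 18).reshape(-1, 1)  # years since 1995
accidents = np.linspace(50_000, 290_000, 18) + rng.normal(0, 5_000, 18)

mlr = LinearRegression().fit(t, accidents)
ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(t, accidents)

t_2022 = [[2022 - 1995]]
print("MLR 2022:", mlr.predict(t_2022)[0])
print("ANN 2022:", ann.predict(t_2022)[0])  # neural nets extrapolate poorly beyond the training range
```

The last comment is the crux of the paper's conclusion: a linear model extrapolates the trend directly, while a small neural network has no basis for predictions outside the range it was trained on.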

Keywords: road safety, prediction, accident, model, Qatar

Procedia PDF Downloads 258
5265 Control of Oil Content of Fried Zucchini Slices by Partial Predrying and Process Optimization

Authors: E. Karacabey, Ş. G. Özçelik, M. S. Turan, C. Baltacıoğlu, E. Küçüköner

Abstract:

The main concern about deep-fat-fried food materials is their high final oil content absorbed during the frying process and/or after the cooling period, since a diet including a high content of oil is considered unhealthy by consumers. Different methods have been evaluated to decrease the oil content of fried foodstuffs. One promising method is partial drying of the food material before frying. The present study aimed to control and decrease the final oil content of zucchini slices by means of partial drying and to optimize the process conditions. Conventional oven drying was used to decrease the moisture content of zucchini slices to a certain extent. Process performance in terms of oil uptake was evaluated by comparing the oil content of predried and then fried zucchini slices with that determined for directly fried ones. For the predrying and frying processes, the controlled variables were the pair of oven temperature and weight loss and the pair of frying oil temperature and time, respectively. Zucchini slices were also directly fried for sensory evaluations revealing the preferred properties of the final product in terms of surface color, moisture content, texture and taste. The properties of the directly fried zucchini slices taking the highest score at the end of the sensory evaluation were determined and used as targets in the optimization procedure. Response surface methodology was used for process optimization. The properties determined after sensory evaluation were selected as targets, while the oil content was to be minimized. Results indicated that the final oil content of zucchini slices could be reduced from 58% to 46% by controlling the conditions of the predrying and frying processes. As a result, it is suggested that predrying could be one choice to reduce the oil content of fried zucchini slices for a healthier diet. This project (113R015) has been supported by TUBITAK.
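Response surface methodology boils down to fitting a second-order polynomial to responses measured over a designed set of factor combinations and then optimizing on that fitted surface. A minimal sketch with coded factors and illustrative oil-content responses, not the paper's data:

```python
import numpy as np

# Coded factor levels (x1 = temperature, x2 = time) and oil-content responses (%),
# all illustrative stand-ins for a real experimental design.
x1 = np.array([-1, -1, 1, 1, 0, 0, 0, -1, 1])
x2 = np.array([-1, 1, -1, 1, 0, -1, 1, 0, 0])
y = np.array([58, 54, 52, 50, 46, 49, 48, 53, 49])

# Second-order response surface: y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("fitted coefficients:", np.round(coef, 2))
```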

Keywords: health process, optimization, response surface methodology, oil uptake, conventional oven

Procedia PDF Downloads 366
5264 Using Crowd-Sourced Data to Assess Safety in Developing Countries: The Case Study of Eastern Cairo, Egypt

Authors: Mahmoud Ahmed Farrag, Ali Zain Elabdeen Heikal, Mohamed Shawky Ahmed, Ahmed Osama Amer

Abstract:

Crowd-sourced data refers to data that is collected and shared by a large number of individuals or organizations, often through the use of digital technologies such as mobile devices and social media. The shortage of crash data collection in developing countries makes it difficult to fully understand and address road safety issues in these regions. In developing countries, crowd-sourced data can be a valuable tool for improving road safety, particularly in urban areas where the majority of road crashes occur. This study is, to the best of our knowledge, the first to develop safety performance functions using crowd-sourced data by adopting a negative binomial structure model and the Full Bayes model to investigate traffic safety for urban road networks and provide insights into the impact of roadway characteristics. Furthermore, as part of the safety management process, network screening was carried out by applying two different methods to rank the most hazardous road segments: the PCR method (adopted in the Highway Capacity Manual, HCM) as well as a graphical method using GIS tools, in order to compare and validate. Lastly, recommendations were suggested for policymakers to ensure safer roads.
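Safety performance functions of the negative binomial family are typically fitted as log-linear models of exposure variables. A minimal sketch with statsmodels, using simulated segment data in place of the crowd-sourced volumes (variable names and coefficients are illustrative):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
aadt = rng.uniform(5e3, 6e4, n)        # traffic volume per segment (illustrative)
length_km = rng.uniform(0.2, 3.0, n)   # segment length
crashes = rng.poisson(np.exp(-6 + 0.8 * np.log(aadt) + 0.5 * np.log(length_km)))

# SPF of the form E[crashes] = exp(b0) * AADT^b1 * L^b2, negative binomial errors.
X = sm.add_constant(np.column_stack([np.log(aadt), np.log(length_km)]))
spf = sm.GLM(crashes, X, family=sm.families.NegativeBinomial()).fit()
print(spf.params)
```

Segments whose observed counts sit far above the SPF prediction are the ones flagged during network screening.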

Keywords: crowdsourced data, road crashes, safety performance functions, Full Bayes models, network screening

Procedia PDF Downloads 52
5263 Homeless Population Modeling and Trend Prediction Through Identifying Key Factors and Machine Learning

Authors: Shayla He

Abstract:

Background and Purpose: According to Chamie (2017), it is estimated that no fewer than 150 million people, or about 2 percent of the world's population, are homeless. The homeless population in the United States has grown rapidly in the past four decades. In New York City, the sheltered homeless population has increased from 12,830 in 1983 to 62,679 in 2020. Knowing the trend of the homeless population is crucial for helping states and cities make affordable housing plans and other community service plans ahead of time to better prepare for the situation. This study utilized data from New York City, examined the key factors associated with homelessness, and developed systematic modeling to predict future homeless populations. Using the best model developed, named HP-RNN, an analysis of the homeless population change during the months of 2020 and 2021, which were impacted by the COVID-19 pandemic, was conducted. Moreover, HP-RNN was tested on data from Seattle. Methods: The methodology involves four phases in developing robust prediction methods. Phase 1 gathered and analyzed raw data on the homeless population and demographic conditions from five urban centers. Phase 2 identified the key factors that contribute to the rate of homelessness. In Phase 3, three models were built using Linear Regression, Random Forest, and a Recurrent Neural Network (RNN), respectively, to predict the future trend of society's homeless population. Each model was trained and tuned on the dataset from New York City, with accuracy measured by Mean Squared Error (MSE). In Phase 4, the final phase, the best model from Phase 3 was evaluated using the data from Seattle that was not part of the model training and tuning process in Phase 3. Results: Compared to the Linear Regression based model used by HUD et al. (2019), HP-RNN significantly improved the prediction metrics, raising the Coefficient of Determination (R2) from -11.73 to 0.88 and reducing MSE by 99%. HP-RNN was then validated on the data from Seattle, WA, which showed a peak error of 14.5% between the actual and the predicted count. Finally, the modeling results were used to predict the trend during the COVID-19 pandemic. They show a good correlation between the actual and the predicted homeless population, with a peak error of less than 8.6%. Conclusions and Implications: This is the first work to apply an RNN to model the time series of homeless-related data. The model shows a close correlation between the actual and the predicted homeless population. There are two major implications of this result. First, the model can be used to predict the homeless population for the next several years, and the prediction can help states and cities plan ahead on affordable housing allocation and other community services to better prepare for the future. Moreover, this prediction can serve as a reference to policymakers and legislators as they seek to make changes that may impact the factors closely associated with the future homeless population trend.
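The abstract does not publish HP-RNN's architecture; the sketch below shows only the generic shape of an RNN regressor for monthly counts in PyTorch, with feature count, hidden size, and window length chosen arbitrarily for illustration.

```python
import torch
import torch.nn as nn

class HPRNNSketch(nn.Module):
    """Hypothetical reconstruction; the real HP-RNN architecture is the authors'."""
    def __init__(self, n_features=4, hidden=32):
        super().__init__()
        self.rnn = nn.RNN(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):              # x: (batch, months, key factors)
        out, _ = self.rnn(x)
        return self.head(out[:, -1])   # next-period homeless count

model = HPRNNSketch()
x = torch.randn(8, 24, 4)              # 8 samples, 24-month windows, 4 key factors
loss = nn.MSELoss()(model(x), torch.randn(8, 1))   # MSE, as used for tuning in Phase 3
loss.backward()
```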

Keywords: homeless, prediction, model, RNN

Procedia PDF Downloads 121
5262 Assessment of Taiwan Railway Occurrences Investigations Using Causal Factor Analysis System and Bayesian Network Modeling Method

Authors: Lee Yan Nian

Abstract:

A safety investigation differs from an administrative investigation in that the former is conducted by an independent agency, and its purpose is to prevent accidents in the future, not to apportion blame or determine liability. Before October 2018, Taiwan railway occurrences were investigated by the local supervisory authority. A characteristic of this kind of investigation is that enforcement actions, such as administrative penalties, are usually imposed on those persons or units involved in the occurrence. On October 21, 2018, following a Taiwan Railway accident which caused 18 fatalities and injured another 267, it was quickly decided to establish an agency to independently investigate this catastrophic railway accident. The Taiwan Transportation Safety Board (TTSB) was then established on August 1, 2019 to take charge of investigating major aviation, marine, railway and highway occurrences. The objective of this study is to assess the effectiveness of safety investigations conducted by the TTSB. In this study, the major railway occurrence investigation reports published by the TTSB are used for modeling and analysis. According to the classification of railway occurrences investigated by the TTSB, accident types of Taiwan railway occurrences can be categorized into derailment, fire, signal passed at danger, and others. A Causal Factor Analysis System (CFAS) developed by the TTSB is used to identify the influencing causal factors and their causal relationships in the investigation reports. All terminologies used in the CFAS are equivalent to the Human Factors Analysis and Classification System (HFACS) terminologies, except for "Technical Events", which was added to classify causal factors resulting from mechanical failure. Accordingly, the Bayesian network structure of each occurrence category is established based on the causal factors identified in the CFAS. In the Bayesian networks, the prior probabilities of the identified causal factors are obtained from the number of times they appear in the investigation reports. The Conditional Probability Table of each node is determined from domain experts' experience and judgement. The resulting networks are quantitatively assessed under different scenarios to evaluate their forward prediction and backward diagnostic capabilities. Finally, the established Bayesian network of derailment is assessed using investigation reports of the same accident, which was investigated by the TTSB and by the local supervisory authority respectively. Based on the assessment results, the findings of the administrative investigation are more closely tied to errors of front-line personnel than to organizational factors. A safety investigation can identify not only the unsafe acts of individuals but also the in-depth causal factors of organizational influences. The results show that the proposed methodology can identify differences between safety investigations and administrative investigations. Therefore, effective intervention strategies in associated areas can be better addressed for safety improvement and future accident prevention through safety investigation.
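To make the forward-prediction and backward-diagnosis idea concrete, the toy sketch below enumerates a two-parent network by hand; the factor names, priors, and conditional probability table are illustrative stand-ins, not values from the TTSB reports.

```python
# Toy network: organizational influence (O) and technical event (T) are parents
# of derailment (D). All probabilities below are illustrative only.
p_o, p_t = 0.3, 0.2
p_d = {(True, True): 0.9, (True, False): 0.5,
       (False, True): 0.4, (False, False): 0.05}

def prior(o, t):
    return (p_o if o else 1 - p_o) * (p_t if t else 1 - p_t)

# Forward prediction: marginal probability of derailment.
p_derail = sum(prior(o, t) * p_d[(o, t)]
               for o in (True, False) for t in (True, False))

# Backward diagnosis: P(O = true | derailment observed), by Bayes' rule.
p_o_given_d = sum(prior(True, t) * p_d[(True, t)] for t in (True, False)) / p_derail
print(round(p_derail, 3), round(p_o_given_d, 3))
```

Real networks have many more nodes, but the two query directions are exactly these: marginalize forward to predict, condition on an observed outcome to diagnose.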

Keywords: administrative investigation, Bayesian network, causal factor analysis system, safety investigation

Procedia PDF Downloads 123
5261 A Complex Network Approach to Structural Inequality of Educational Deprivation

Authors: Harvey Sanchez-Restrepo, Jorge Louca

Abstract:

Equity and education are a major focus of government policies around the world due to their relevance for addressing the sustainable development goals launched by UNESCO. In this research, we developed a primary analysis of a data set of more than one hundred educational and non-educational factors associated with learning, coming from a census-based large-scale assessment carried out in Ecuador with 1,038,328 students, their families, teachers, and school directors, throughout 2014-2018. Each participating student was assessed by a standardized computer-based test. Learning outcomes were calibrated through item response theory with a two-parameter logistic model to obtain raw scores that were re-scaled and synthesized into a learning index (LI). Our objective was to develop a network for modelling educational deprivation and to analyze the structure of inequality gaps, as well as their relationship with socioeconomic status, school financing, and students' ethnicity. Results from the model show that 348,270 students did not develop the minimum skills (prevalence rate = 0.215) and that Afro-Ecuadorian, Montuvio and Indigenous students exhibited the highest prevalence, with 0.312, 0.278 and 0.226, respectively. Regarding the socioeconomic status (SES) of students, the modularity classes show clearly that the system is out of equilibrium: the first decile (the poorest) exhibits a prevalence rate of 0.386 while the rate for decile ten (the richest) is 0.080, showing an intense negative relationship between learning and SES given by R = -0.58 (p < 0.001). Another interesting and unexpected result is the average weighted degree (426.9) for both private and public schools attended by Afro-Ecuadorian students, the groups that got the highest PageRank (0.426), pointing out that they suffer the highest educational deprivation due to discrimination, even when belonging to the richest decile. The model also found the factors which explain deprivation through the highest PageRank and the greatest degree of connectivity for the first decile; they are: financial bonus for attending school, computer access, internet access, number of children, living with at least one parent, access to books, reading books, phone access, time for homework, teachers arriving late, paid work, positive expectations about schooling, and mother's education. These results provide very accurate and clear knowledge about the variables affecting the poorest students and the inequalities they produce, from which needs profiles might be defined, as well as actions on the factors that can be influenced. Finally, these results confirm that network analysis is fundamental for educational policy, especially when linking reliable microdata with social macro-parameters, because it allows us to infer how gaps in educational achievement are driven by students' context at the time of assigning resources.
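The two-parameter logistic model mentioned above gives the probability of a correct response as a function of student ability and two item parameters (discrimination a, difficulty b). A minimal sketch, with an illustrative 0-100 rescaling standing in for the paper's learning index:

```python
import numpy as np

def p_correct(theta, a, b):
    """Two-parameter logistic (2PL) IRT model: P(correct | ability theta)."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Rescale calibrated abilities to a bounded index (scaling scheme assumed, not the paper's).
theta = np.array([-2.0, -0.5, 0.0, 1.2])
li = 100 * (theta - theta.min()) / (theta.max() - theta.min())
print(p_correct(theta, a=1.3, b=0.2), li)
```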

Keywords: complex network, educational deprivation, evidence-based policy, large-scale assessments, policy informatics

Procedia PDF Downloads 124
5260 Re-Conceptualizing the Indigenous Learning Space for Children in Bangladesh Placing Built Environment as Third Teacher

Authors: Md. Mahamud Hassan, Shantanu Biswas Linkon, Nur Mohammad Khan

Abstract:

Over the last three decades, the primary education system in Bangladesh has experienced significant improvement, but it has failed to cope with different social and cultural aspects, which presents many challenges for children, families, and the public school system. Neglecting our own contextual learning environment, it is a matter of sorrow that much attention has been paid to the physical, outcome-focused model, which is nothing but mere infrastructural development, and too little to an environment that suits children's psychology and improves their social, emotional, physical, and moral competency. In South Asia, the symbol of education was never the little red house of colonial architecture but "a Guru sitting under a tree", whereas a responsive and inclusive design approach could help to create more innovative learning environments. Such an approach incorporates how the built, natural, and cultural environment shapes the learner; in turn, learners shape the learning. This research will be conducted to: i) identify the major issues and drawbacks of government policy for primary education development programs; ii) explore and evaluate the morphology of the conventional model of school; and iii) propose an alternative model in a collaborative design process with the stakeholders for maximizing the relationship between the physical learning environment and learners by treating "the built environment" as "the third teacher." Based on observation, this research will try to find out to what extent built and natural environments can be utilized as teaching tools for a more optimal learning environment. It should also be evident that there is a significant gap between state policy, predetermined educational specifications, and the implementation process in response to stakeholders' involvement. The outcome of this research will contribute to a people- and place-sensitive design approach through a more thoughtful and responsive architectural process.

Keywords: built environment, conventional planning, indigenous learning space, responsive design

Procedia PDF Downloads 107
5259 Deep Learning for Qualitative and Quantitative Grain Quality Analysis Using Hyperspectral Imaging

Authors: Ole-Christian Galbo Engstrøm, Erik Schou Dreier, Birthe Møller Jespersen, Kim Steenstrup Pedersen

Abstract:

Grain quality analysis is a multi-parameterized problem that includes a variety of qualitative and quantitative parameters such as grain type classification, damage type classification, and nutrient regression. Currently, these parameters require human inspection, a multitude of instruments employing a variety of sensor technologies and predictive model types, or destructive and slow chemical analysis. This paper investigates the feasibility of applying near-infrared hyperspectral imaging (NIR-HSI) to grain quality analysis. For this study, two datasets of NIR hyperspectral images in the wavelength range of 900-1700 nm have been used. Both datasets contain images of sparsely and densely packed grain kernels. The first dataset contains ~87,000 image crops of bulk wheat samples from 63 harvests where the protein value has been determined by the FOSS Infratec NOVA, which is the industry gold standard for protein content estimation in bulk samples of cereal grain. The second dataset consists of ~28,000 image crops of bulk grain kernels from seven different wheat varieties and a single rye variety. In the first dataset, protein regression is the problem to solve, while variety classification is the problem to solve in the second dataset. Deep convolutional neural networks (CNNs) have the potential to utilize spatio-spectral correlations within a hyperspectral image to simultaneously estimate the qualitative and quantitative parameters. CNNs can autonomously derive meaningful representations of the input data, reducing the need for the advanced preprocessing techniques required for classical chemometric model types such as artificial neural networks (ANNs) and partial least-squares regression (PLS-R). A comparison between different CNN architectures utilizing 2D and 3D convolution is conducted. These results are compared to the performance of ANNs and PLS-R. Additionally, a variety of preprocessing techniques from image analysis and chemometrics are tested. These include centering, scaling, standard normal variate (SNV), Savitzky-Golay (SG) filtering, and detrending. The results indicate that the combination of NIR-HSI and CNNs has the potential to be the foundation for an automatic system unifying qualitative and quantitative grain quality analysis within a single sensor technology and predictive model type.
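Two of the chemometric preprocessing steps listed, SNV and Savitzky-Golay filtering, are compact enough to show directly; the band count and filter settings below are assumptions made for the example:

```python
import numpy as np
from scipy.signal import savgol_filter

def snv(spectra):
    """Standard normal variate: center and scale each spectrum individually."""
    mu = spectra.mean(axis=1, keepdims=True)
    sd = spectra.std(axis=1, keepdims=True)
    return (spectra - mu) / sd

# Dummy data: 10 pixel spectra with an assumed 224 bands across 900-1700 nm.
spectra = np.random.rand(10, 224)
pre = savgol_filter(snv(spectra), window_length=11, polyorder=2, axis=1)
```

The point of the paper is that a CNN can often learn to compensate for the scatter and baseline effects these steps remove, whereas PLS-R and shallow ANNs usually cannot.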

Keywords: deep learning, grain analysis, hyperspectral imaging, preprocessing techniques

Procedia PDF Downloads 99
5258 A Levinasian Perspective on the Field of Applied Ethics

Authors: Payman Tajalli, Steven Segal

Abstract:

Applied ethics is an area of ethics which is looked upon most favorably as the most appropriate and useful for educational purposes; after all, if ethics finds no application, would any investment of time, effort and finance by the educational institutions be warranted? The current approaches to ethics in business and management often entail appealing to various types of moral theories, and to this end almost every major philosophical approach has been enlisted. In this paper, we look at ethics through the philosophy of Emmanuel Levinas to argue that since ethics is 'first philosophy' it can neither be rule-based nor rule-governed, not something that can be worked out first and then applied to a given situation; hence the overwhelming emphasis on 'applied ethics' as a field of study in business and management education is unjustified. True ethics is not applied ethics. This assertion does not mean that the teaching of ethical theories and philosophies needs to be abandoned; rather, it is the acceptance of the fact that an increase in cognitive awareness of such theories and ethical models and frameworks, or the mastering of techniques and procedures for ethical decision making, will not effect the desired ethical transformation in our students. Levinas himself argued for an ethics without a foundation, not one that requires us to go 'beyond good and evil' as Nietzsche contended, but rather an ethics which necessitates going 'before good and evil'. Such an ethics does not provide us with a set of methods or techniques or a decision tree that enables us to determine the rightness of an action and what we ought to do; rather, it is about a way of being, an ethical posture or approach one takes in the inter-subjective relationship with the other that holds the promise of ethical conduct. Ethics in this Levinasian sense is one of infinite and unconditional responsibility for the other person in relationship, an ethics which is not subject to negotiation, calculation or reciprocity, and as such it can neither be applied nor taught through conventional pedagogy, with its focus on knowledge transfer from teacher to student; to this end, Levinas offers a non-maieutic, non-conventional approach to pedagogy. The paper concludes that from a Levinasian perspective on ethics and education, we may need to guide our students to move away from the clear and objective professionalism of management and applied ethics towards a murkier individual spiritualism. For Levinas, this is 'the Copernican revolution' in ethics.

Keywords: business ethics, ethics education, Levinas, maieutic teaching, ethics without foundation

Procedia PDF Downloads 323
5257 A Comparative Study of Deep Learning Methods for COVID-19 Detection

Authors: Aishrith Rao

Abstract:

COVID-19 is a pandemic which has resulted in thousands of deaths around the world and a huge impact on the global economy. Testing is a huge issue, as test kits have limited availability and are expensive to manufacture. Using deep learning methods on radiology images to detect the coronavirus is extremely economical and time-saving, as these images contain information about the spread of the virus in the lungs, and the approach can be used in areas with a lack of testing facilities. This paper focuses on binary classification and multi-class classification of COVID-19 and other diseases such as pneumonia, tuberculosis, etc. Different deep learning methods such as VGG-19, COVID-Net, ResNet + SVM, Deep CNN, DarkCovidNet, etc., have been used, and their accuracy has been compared using the Chest X-Ray dataset.
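A typical way such comparisons adapt an ImageNet-pretrained backbone to chest X-rays is transfer learning; the sketch below shows this for VGG-19 with torchvision, with the frozen-feature choice and two-class head being assumptions rather than the paper's exact setup:

```python
import torch.nn as nn
from torchvision import models

# Transfer-learning sketch for binary COVID / non-COVID chest X-ray classification.
model = models.vgg19(weights=models.VGG19_Weights.DEFAULT)
for p in model.features.parameters():
    p.requires_grad = False                  # freeze the pretrained convolutional features
model.classifier[6] = nn.Linear(4096, 2)     # replace the 1000-class head with 2 classes
```

For the multi-class setting described in the paper, only the output dimension of the new head changes.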

Keywords: deep learning, computer vision, radiology, COVID-19, ResNet, VGG-19, deep neural networks

Procedia PDF Downloads 160
5256 Resilience of Infrastructure Networks: Maintenance of Bridges in Mountainous Environments

Authors: Lorenza Abbracciavento, Valerio De Biagi

Abstract:

Infrastructures are key elements to ensure the operational functionality of the transport system. The collapse of a single bridge or, equivalently, a tunnel can lead an entire motorway to be considered completely inaccessible. As a consequence, the paralysis of the communications network entails several important drawbacks for the community. Recent events have demonstrated that ensuring the functional continuity of strategic infrastructures during and after a catastrophic event makes a significant difference in terms of life and economic losses. Moreover, it has been observed that RC structures located in mountain environments show a worse state of conservation compared to structures of the same typology and age located in temperate climates. Because of its morphology, in fact, the mountain environment is particularly exposed to severe collapse and deterioration phenomena, generally natural hazards, e.g. rock falls, and meteorological hazards, e.g. freeze-thaw cycles or heavy snow. For these reasons, deep investigation of the characteristics of these processes becomes of fundamental importance to provide smart and sustainable solutions and make the infrastructure system more resilient. In this paper, the design of a monitoring system in mountainous environments is presented and analyzed in its parts. The method not only takes into account the peculiar climatic conditions, but is also integrated with and interacts with the surrounding environment.

Keywords: structural health monitoring, resilience of bridges, mountain infrastructures, infrastructural network, maintenance

Procedia PDF Downloads 77
5255 Translation Quality Assessment in Fansubbed English-Chinese Swearwords: A Corpus-Based Study of the Big Bang Theory

Authors: Qihang Jiang

Abstract:

Fansubbing, the combination of fan and subtitling, is one of the main branches of Audiovisual Translation (AVT), which has kindled more and more interest among researchers in the AVT field in recent decades. In particular, the quality of so-called non-professional translation seems questionable due to the non-transparent qualifications of subtitlers in a huge community network. This paper attempts to figure out how YYeTs, aka 'ZiMuZu', the largest fansubbing group in China, translates swearwords from English to Chinese for its fans of the prevalent American sitcom The Big Bang Theory, taking cultural, social and political elements into account in the context of China. By building a bilingual corpus containing both the source and target texts, this paper found that most of the original swearwords were translated in a toned-down manner, probably due to Chinese audiences' cultural and social network features as well as the strict censorship of the Chinese government. Additionally, House's (2015) newly revised model of Translation Quality Assessment (TQA) was applied and examined. Results revealed that most of the subtitled swearwords achieved their pragmatic functions and exerted a communicative effect on audiences. In conclusion, this paper enriches the empirical research concerning House's new TQA model, gives a full picture of the subtitling of swearwords in the AVT field and provides a practical guide for practitioners in their career of subtitling.

Keywords: corpus-based approach, fansubbing, pragmatic functions, swearwords, translation quality assessment

Procedia PDF Downloads 143
5254 Global Navigation Satellite System and Precise Point Positioning as Remote Sensing Tools for Monitoring Tropospheric Water Vapor

Authors: Panupong Makvichian

Abstract:

Global Navigation Satellite Systems (GNSS) are nowadays a common technology that improves navigation functions in our lives. Additionally, GNSS is also being employed as an accurate atmospheric sensor these days. Meteorology is a practical application of GNSS that goes unnoticed in the background of people's lives. GNSS Precise Point Positioning (PPP) is a positioning method that requires data from a single dual-frequency receiver and precise information about satellite positions and satellite clocks. In addition, careful attention to mitigating various error sources is required. All the above data are combined in a sophisticated mathematical algorithm. This research demonstrates how GNSS and the PPP method are capable of providing high-precision estimates, such as 3D positions or zenith tropospheric delays (ZTDs). ZTDs combined with pressure and temperature information allow us to estimate the water vapor in the atmosphere as precipitable water vapor (PWV). If the process is replicated for a network of GNSS sensors, we can create thematic maps that allow extracting water content information at any location within the network area. All of the above is possible thanks to advances in GNSS data processing. Therefore, we are able to use GNSS data for climatic trend analysis and the acquisition of further knowledge about the atmospheric water content.
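The ZTD-to-PWV conversion alluded to here is commonly done by subtracting a Saastamoinen-type hydrostatic delay and scaling the wet remainder with the Bevis et al. (1992) conversion factor; the constants below are standard literature values recalled from memory, and the inputs are example numbers, so treat the sketch as illustrative:

```python
import math

def zhd_saastamoinen(p_hpa, lat_rad, h_m):
    """Zenith hydrostatic delay [m] from surface pressure (Saastamoinen/Davis form)."""
    return 0.0022768 * p_hpa / (1 - 0.00266 * math.cos(2 * lat_rad) - 2.8e-7 * h_m)

def pwv_from_ztd(ztd_m, p_hpa, tm_k, lat_rad, h_m):
    """Precipitable water vapor [m] via the Bevis et al. (1992) factor."""
    zwd = ztd_m - zhd_saastamoinen(p_hpa, lat_rad, h_m)   # wet delay
    k2p, k3 = 0.221, 3739.0        # K/Pa and K^2/Pa (refractivity constants, hPa -> Pa)
    rho_w, r_v = 1000.0, 461.5     # water density kg/m^3, vapor gas constant J/(kg K)
    pi = 1e6 / (rho_w * r_v * (k3 / tm_k + k2p))          # ~0.15, dimensionless
    return pi * zwd

# Example inputs: ZTD 2.40 m, 1013 hPa, mean temperature 270 K, 45 deg latitude, sea level.
print(pwv_from_ztd(2.40, 1013.0, 270.0, math.radians(45), 0.0) * 1000, "mm")
```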

Keywords: GNSS, precise point positioning, Zenith tropospheric delays, precipitable water vapor

Procedia PDF Downloads 198
5253 Design and Tooth Contact Analysis of Face Gear Drive with Modified Tooth Surface in Helicopter Transmission

Authors: Kazumasa Kawasaki, Isamu Tsuji, Hiroshi Gunbara

Abstract:

A face gear drive is composed of a spur or helical pinion that is in mesh with a face gear and transfers power and motion between intersecting or skew axes. Due to the peculiarity of the face gear drive in shunt and confluence drives, it shows potential advantages for application in helicopter transmissions. The advantage of such applications is the possibility of splitting the torque, which is significant where a pinion drives two face gears to provide an accurate division of power and motion. This mechanism greatly reduces weight and cost compared to a conventional design. This has therefore led to revived interest, and the face gear drive has been utilized in substitution for bevel and hypoid gears in limited cases. The face gear drive with a spur or a helical pinion is newly designed in order to determine an effective meshing area under the design parameters and specific design dimensions. The face gear has two unique dimensions which control the face width of the tooth: the outside and inside diameters of the face gear. On the other hand, it is necessary to modify the tooth surfaces of the face gear drive in order to avoid the influences of alignment errors on the tooth contact patterns in practical use. In this case, the pinion tooth surfaces are usually modified in the conventional method. However, it is hard to control the tooth contact pattern intentionally and to adjust the position of the pinion axis in the meshing of the gear pair. Therefore, a method of modifying the tooth surfaces of the face gear is proposed. Moreover, based on tooth contact analysis, the tooth contact pattern and transmission errors of the designed face gear drive are analyzed, and the influences of alignment errors on the tooth contact patterns and transmission errors are investigated. These results showed that the tooth contact patterns and transmission errors were controllable, and a face gear drive which is insensitive to alignment errors can be obtained.

Keywords: alignment error, face gear, gear design, helicopter transmission, tooth contact analysis

Procedia PDF Downloads 437
5252 Design of a Real Time Closed Loop Simulation Test Bed on a General Purpose Operating System: Practical Approaches

Authors: Pratibha Srivastava, Chithra V. J., Sudhakar S., Nitin K. D.

Abstract:

A closed-loop system comprises a controller, a response system, and an actuating system. The controller, which is the system under test for us, excites the actuators based on feedback from the sensors in a periodic manner. The sensors should provide the feedback to the System Under Test (SUT) within a deterministic time after excitation of the actuators. Any delay or miss in the generation of the response or the acquisition of excitation pulses may lead to control loop computation errors, which can be catastrophic in certain cases. Such systems are categorised as hard real-time systems and need special strategies. The real-time operating systems available on the market may be the best solutions for such kinds of simulations, but they pose limitations such as the lack of the X Window System, graphical interfaces, and other user tools. In this paper, we present strategies that can be used on a general purpose operating system (bare Linux kernel) to achieve deterministic deadlines and hence gain the added advantages of a GPOS with real-time features. Techniques are discussed for making the time-critical application run with the highest priority in an uninterrupted manner, reducing network latency for distributed architectures, and handling real-time data acquisition, data storage and retrieval, user interactions, etc.
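One of the strategies described, giving the time-critical loop the highest priority with a deterministic period, can be sketched even from Python on Linux; a production system would more likely use C with mlockall and clock_nanosleep, and the priority, core, and period values below are illustrative:

```python
import os
import time

# SCHED_FIFO real-time policy at high priority, pinned to one core.
# Linux-only; requires root or CAP_SYS_NICE.
os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(80))
os.sched_setaffinity(0, {2})

PERIOD_NS = 10_000_000                 # assumed 10 ms control period
next_tick = time.monotonic_ns()
for _ in range(1000):
    # ... acquire sensor feedback, compute control law, excite actuators ...
    next_tick += PERIOD_NS             # absolute deadlines avoid drift accumulation
    delay_s = (next_tick - time.monotonic_ns()) / 1e9
    if delay_s > 0:
        time.sleep(delay_s)            # a non-positive delay here is a deadline miss
```

Scheduling against absolute tick times rather than sleeping a fixed duration is what keeps the loop period deterministic even when one iteration runs long.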

Keywords: real time data acquisition, real time kernel preemption, scheduling, network latency

Procedia PDF Downloads 147
5251 Numerical Modelling and Experiment of a Composite Single-Lap Joint Reinforced by Multifunctional Thermoplastic Composite Fastener

Authors: Wenhao Li, Shijun Guo

Abstract:

Carbon fibre reinforced composites are progressively replacing metal structures in modern civil aircraft. This is because composite materials have a large potential for weight saving compared with metal. However, the weight saving achieved to date in composite structures is far less than the theoretical potential due to many uncertainties in structural integrity and safety concerns. Unlike conventional metallic structures, composite components are bonded together along joints where structural integrity is a major concern. To ensure safety, metal fasteners are used to reinforce composite bonded joints. One of the solutions for a significant weight saving of composite structures is to develop an effective technology for an on-board Structural Health Monitoring (SHM) system. By monitoring the real-life stress status of composite structures during service, the safety margin set in the structural design can be reduced with confidence. It provides a safeguard to minimize the need for programmed inspections and allows maintenance to be need-driven, rather than usage-driven. The aim of this paper is to develop a smart composite joint. The key technology is a multifunctional thermoplastic composite fastener (MTCF). The MTCF will replace some of the existing metallic fasteners in the most critical locations distributed over the aircraft composite structures to reinforce the joints and form an on-board SHM network system. Each of the MTCFs will work as a unit of the AU and AE technology. The proposed MTCF technology has been patented and developed by Prof. Guo at Cranfield University, UK, over the past few years. The manufactured MTCF has been successfully employed in the composite SLJ (Single-Lap Joint). In terms of structural integrity, the hybrid SLJ reinforced by the MTCF achieves a 19.1% improvement in ultimate failure strength in comparison to the bonded SLJ. By increasing the diameter or rearranging the lay-up sequence of the MTCF, the hybrid SLJ reinforced by the MTCF is able to achieve an ultimate strength equivalent to that reinforced by a titanium fastener. The ultimate strength predicted in simulation is in good agreement with the test results. In terms of structural health monitoring, a signal from the MTCF was measured well before the mechanical failure load. This signal provides a warning of an initial crack in the joint which could not be detected by the strain gauge until the final failure.

Keywords: composite single-lap joint, crack propagation, multifunctional composite fastener, structural health monitoring

Procedia PDF Downloads 163
5250 The Postcognitivist Era in Cognitive Psychology

Authors: C. Jameke

Abstract:

During the cognitivist era in cognitive psychology, a theory of internal rules and symbolic representations was posited as an account of human cognition. This type of cognitive architecture had its heyday during the 1970s and 80s, but it has now been largely abandoned in favour of subsymbolic architectures (e.g. connectionism), non-representational frameworks (e.g. dynamical systems theory), and statistical approaches such as Bayesian theory. In this presentation I describe this changing landscape of research and comment on the increasing influence of neuroscience on cognitive psychology. I then briefly review a few recent developments in connectionism and neurocomputation relevant to cognitive psychology, and critically discuss the assumption made by some researchers in these frameworks that higher-level aspects of human cognition are simply emergent properties of massively large distributed neural networks.

Keywords: connectionism, emergentism, postcognitivist, representations, subsymbolic architecture

Procedia PDF Downloads 578
5249 Neural Correlates of Arabic Digits Naming

Authors: Fernando Ojedo, Alejandro Alvarez, Pedro Macizo

Abstract:

In the present study, we explored electrophysiological correlates of Arabic digits naming to determine semantic processing of numbers. Participants named Arabic digits grouped by category or intermixed with exemplars of other semantic categories while the N400 event-related potential was examined. Around 350-450 ms after the presentation of Arabic digits, brain waves were more positive in anterior regions and more negative in posterior regions when stimuli were grouped by category relative to the mixed condition. Contrary to what was found in other studies, electrophysiological results suggested that the production of numerals involved semantic mediation.

Keywords: Arabic digit naming, event-related potentials, semantic processing, number production

Procedia PDF Downloads 582
5248 Modeling Engagement with Multimodal Multisensor Data: The Continuous Performance Test as an Objective Tool to Track Flow

Authors: Mohammad H. Taheri, David J. Brown, Nasser Sherkat

Abstract:

Engagement is one of the most important factors in determining successful outcomes and deep learning in students. Existing approaches to detecting student engagement involve periodic human observations that are subject to inter-rater reliability issues. Our solution uses real-time multimodal multisensor data labeled by objective performance outcomes to infer the engagement of students. The study involves four students with a combined diagnosis of cerebral palsy and a learning disability who took part in a 3-month trial over 59 sessions. Multimodal multisensor data were collected while they participated in a continuous performance test. Eye gaze, electroencephalogram, body pose, and interaction data were used to create a model of student engagement through objective labeling from the continuous performance test outcomes. In order to achieve this, a type of continuous performance test is introduced, the Seek-X type. Nine features were extracted, including high-level handpicked compound features. Using leave-one-out cross-validation, a series of different machine learning approaches were evaluated. Overall, the random forest classification approach achieved the best classification results. Using random forest, 93.3% classification accuracy for engagement and 42.9% accuracy for disengagement were achieved. We compared these results to outcomes from different models: AdaBoost, decision tree, k-Nearest Neighbor, naïve Bayes, neural network, and support vector machine. We showed that using a multisensor approach achieved higher accuracy than using features from any reduced set of sensors. We found that using high-level handpicked features can improve the classification accuracy in every sensor mode. Our approach is robust to both sensor fallout and occlusions. The single most important sensor feature for the classification of engagement and distraction was shown to be eye gaze. It has been shown that we can accurately predict the level of engagement of students with learning disabilities in a real-time approach that is not subject to inter-rater reliability or human observation and is not reliant on a single mode of sensor input. This will help teachers design interventions for a heterogeneous group of students, where teachers cannot possibly attend to each of their individual needs. Our approach can be used to identify those with the greatest learning challenges so that all students are supported to reach their full potential.
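A minimal sketch of the best-performing setup described above, random forest classification under leave-one-out cross-validation with scikit-learn, on synthetic stand-ins for the 59 sessions and nine features:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(59, 9))       # 59 sessions x 9 features (gaze, EEG, pose, ...): synthetic
y = rng.integers(0, 2, size=59)    # engaged / disengaged labels from the CPT outcomes: synthetic

scores = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=LeaveOneOut())
print("leave-one-out accuracy:", scores.mean())
```

Leave-one-out is the natural choice here because, with only 59 labeled sessions, every sample is needed for training while still testing on unseen data.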

Keywords: affective computing in education, affect detection, continuous performance test, engagement, flow, HCI, interaction, learning disabilities, machine learning, multimodal, multisensor, physiological sensors, student engagement

Procedia PDF Downloads 94
5247 Experimental Study of an Isobaric Expansion Heat Engine with Hydraulic Power Output for Conversion of Low-Grade-Heat to Electricity

Authors: Maxim Glushenkov, Alexander Kronberg

Abstract:

The isobaric expansion (IE) process is an alternative to the conventional gas/vapor expansion accompanied by a pressure decrease that is typical of all state-of-the-art heat engines. The elimination of the expansion stage that produces useful work means that the most critical and expensive parts of ORC systems (turbine, screw expander, etc.) are also eliminated. In many cases, IE heat engines can be more efficient than conventional expansion machines. In addition, IE machines have a very simple, reliable, and inexpensive design. They can also perform all the known operations of existing heat engines and provide usable energy in a very convenient hydraulic or pneumatic form. This paper reports measurements made with the engine operating as a heat-to-shaft-power or electricity converter and a comparison of the experimental results to a thermodynamic model. Experiments were carried out at heat source temperatures in the range 30-85 °C and a heat sink temperature around 20 °C; refrigerant R134a was used as the engine working fluid. The pressure difference generated by the engine varied from 2.5 bar at a heat source temperature of 40 °C to 23 bar at a heat source temperature of 85 °C. Using a differential piston, the generated pressure was quadrupled to pump hydraulic oil through a hydraulic motor that generates shaft power and is connected to an alternator. At a frequency of about 0.5 Hz, the engine operates with useful powers up to 1 kW and an oil pumping flowrate of 7 L/min. Depending on the temperature of the heat source, the obtained efficiency was 3.5-6%. This efficiency looks very high considering such a low temperature difference (10-65 °C) and low power (< 1 kW). The engine's observed performance is in good agreement with the predictions of the model. The results are very promising, showing that the engine is a simple and low-cost alternative to ORC plants and other known energy conversion systems, especially in the low temperature (< 100 °C) and low power (< 500 kW) range where other known technologies are not economic. Thus, low-grade solar and geothermal energy, biomass combustion, and waste heat with a temperature above 30 °C can be harnessed in various energy conversion processes.
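The reported operating point is self-consistent: quadrupling the 23 bar peak pressure with the differential piston and pumping 7 L/min of oil gives a hydraulic power close to the stated 1 kW. A quick check, assuming the peak pressure and peak flow coincide:

```python
# Hydraulic power = pressure difference x volumetric flow rate.
dp_pa = 23e5 * 4           # 23 bar quadrupled by the differential piston, in Pa
flow_m3s = 7 / 1000 / 60   # 7 L/min in m^3/s
power_w = dp_pa * flow_m3s
print(f"hydraulic power = {power_w:.0f} W")  # ~1073 W, consistent with "up to 1 kW"
```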

Keywords: isobaric expansion, low-grade heat, heat engine, renewable energy, waste heat recovery

Procedia PDF Downloads 226
5246 Transfer Function Model-Based Predictive Control for Nuclear Core Power Control in PUSPATI TRIGA Reactor

Authors: Mohd Sabri Minhat, Nurul Adilla Mohd Subha

Abstract:

The 1 MWth PUSPATI TRIGA Reactor (RTP) at the Malaysia Nuclear Agency has been operating for more than 35 years. The existing core power control uses a conventional controller known as the Feedback Control Algorithm (FCA). It is technically challenging to keep the core power output always stable and operating within acceptable error bands to meet the safety demands of the RTP. Currently, the system could be considered unsatisfactory in its power tracking performance, and there is still significant room for improvement. Hence, a new core power control design is very important to improve the current performance in tracking and regulating reactor power by controlling the movement of control rods in a way that suits the demands of highly sensitive nuclear reactor power control. In this paper, the proposed Model Predictive Control (MPC) law was applied to control the core power. The model for core power control was based on mathematical models of the reactor core, MPC, and a control rod selection algorithm. The mathematical models of the reactor core were based on the point kinetics model, thermal hydraulic models, and reactivity models. The proposed MPC was presented in a transfer function model of the reactor core according to perturbation theory. The transfer function model-based predictive control (TFMPC) was developed to design the core power control with predictions based on a T-filter towards the real-time implementation of MPC on hardware. This paper introduces the sensitivity functions for the TFMPC feedback loop to reduce the impact on the input actuation signal and demonstrates the behaviour of TFMPC in terms of disturbance and noise rejection. Comparisons of both tracking and regulating performance between the conventional controller and TFMPC were made using MATLAB and analysed. In conclusion, the proposed TFMPC has satisfactory performance in tracking and regulating core power for controlling a nuclear reactor with high reliability and safety.
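The reactor transfer function itself is specific to the RTP, but the receding-horizon idea behind MPC can be sketched on a stand-in first-order discrete model; the plant coefficients below are illustrative, and a brute-force search replaces the QP solver a real TFMPC implementation would use:

```python
import numpy as np

# Stand-in plant: y[k+1] = a*y[k] + b*u[k] (illustrative coefficients, not the RTP model).
a, b = 0.95, 0.08

def predict(y0, u_seq):
    y, out = y0, []
    for u in u_seq:
        y = a * y + b * u
        out.append(y)
    return np.array(out)

def mpc_step(y0, setpoint, horizon=10, candidates=np.linspace(-1, 1, 41)):
    # Evaluate constant control moves over the horizon and keep the cheapest.
    costs = [np.sum((predict(y0, [u] * horizon) - setpoint) ** 2) for u in candidates]
    return candidates[int(np.argmin(costs))]

y = 0.0
for k in range(50):
    u = mpc_step(y, setpoint=1.0)  # apply only the first move, then re-plan (receding horizon)
    y = a * y + b * u
print(round(y, 3))
```

The defining feature, visible in the loop, is that only the first move of each optimized sequence is applied before the prediction is redone from the newly measured power level.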

Keywords: core power control, model predictive control, PUSPATI TRIGA reactor, TFMPC

Procedia PDF Downloads 241
5245 Short-Term Impact of a Return to Conventional Tillage on Soil Microbial Attributes

Authors: Promil Mehra, Nanthi Bolan, Jack Desbiolles, Risha Gupta

Abstract:

Agricultural practices affect the soil physical and chemical properties, which in turn influence soil microorganisms as a function of the soil biological environment. Very little information is available on the short-term (2-year) impact of intermittent tillage on soil biochemical properties when returning to conventional tillage (CT) from a continuing no-till (NT) cropping system. Therefore, the contributions made by different microorganisms (fungi, bacteria) were also investigated in order to identify the effective changes in soil microbial activity under a South Australian dryland farming system. This study was conducted to understand the impact of microbial dynamics on soil organic carbon (SOC) under NT and CT systems treated with different levels of mulching (0, 2.5 and 5 t/ha). Our incubation experiment demonstrated that the cumulative CO2 emitted from the CT system was 34.5% higher than from the NT system. Respiration from the surface layer (0-10 cm) was significantly (P<0.05) higher: by 8.5% and 15.8% under CT, and by 8% and 18.9% under NT, relative to the 10-20 and 20-30 cm depths, respectively. Further, the dehydrogenase enzyme activity (DHA) and microbial biomass carbon (MBC) were both significantly (P<0.05) lower under CT, i.e., by 7.4%, 7.2% and 6.0% (DHA) and by 19.7%, 15.7% and 4% (MBC) across the different mulching levels (0, 2.5 and 5 t/ha), respectively. In general, under both tillage systems the enzyme activity and MBC decreased with increasing depth (0-10, 10-20 and 20-30 cm) and with increasing mulching rate (0, 2.5 and 5 t/ha). From the perspective of microbial stress, stress was 28.6% higher under the CT system than under the NT system. The activities of the different microorganisms, namely fungi and bacteria, were determined by substrate-induced inhibition respiration using the antibiotics cycloheximide (16 mg/g of soil) and streptomycin sulphate (14 mg/g of soil), with the evolved CO2 trapped in an alkali (0.5 M NaOH) solution. The microbial activities were confirmed through a plating technique, where it was found that bacterial activity was 46.2% and 38.9% higher than fungal activity under the CT and NT systems, respectively. In conclusion, changes in the relative abundance and activity of different microorganisms (bacteria and fungi) under different tillage systems could significantly affect C cycling and storage, owing to their unique structures and differential interactions with the soil physical properties.
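The selective-inhibition partitioning described above can be sketched in a few lines. A minimal example with hypothetical respiration rates (not the study's data), assuming cycloheximide suppresses the fungal contribution and streptomycin sulphate the bacterial one:

```python
# Sketch of the substrate-induced inhibition calculation used to partition
# fungal vs. bacterial activity. All numbers are hypothetical placeholders.

def partition_activity(resp_total, resp_with_fungicide, resp_with_bactericide):
    """Return (fungal, bacterial) shares of substrate-induced respiration."""
    fungal = resp_total - resp_with_fungicide       # respiration lost when fungi are inhibited
    bacterial = resp_total - resp_with_bactericide  # respiration lost when bacteria are inhibited
    return fungal, bacterial

# Hypothetical CO2-C evolution rates (ug CO2-C per g soil per h)
fungal, bacterial = partition_activity(10.0, 7.2, 5.9)
print(f"bacterial activity exceeds fungal by {100 * (bacterial - fungal) / fungal:.1f} %")
```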

Keywords: tillage, soil respiration, MBC, fungal-bacterial activity

Procedia PDF Downloads 261
5244 Upgrades for Hydric Supply in Water System Distribution: Use of the Bayesian Network and Technical Expedients

Authors: Elena Carcano, James Ball

Abstract:

This work details the strategies adopted by the Italian Water Utilities for distributing water in emergency conditions, which range from earthquakes and droughts to floods and fires. Several water bureaus located across the national territory were interviewed, and the collected information was compiled into a database of potential interventions. The work discusses the actions adopted by water utilities, which are generally prioritized to minimize the social, temporal, and economic burden that the damaged and nearby areas must bear. Actions are defined using the Bayesian Network approach, which constitutes the hard core of any decision support system. The Bayesian Networks identify the interventions appropriate to real and most likely risk cases. The added value of this research consists in supplying the national bureau in charge of managing havoc and catastrophic situations, namely the Protezione Civile, with a unified action plan, so that actions can be handled uniformly despite differing local laws or contradictory customs that would otherwise squander recovery conditions, proper technical service, and economic aid. The paper is organized as follows: section 1 states the introduction; section 2 provides a brief discussion of Bayesian Networks (BNs); section 3 introduces the adopted methodology; and the final sections present the results and draw conclusions.
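To illustrate how a BN can rank interventions given observed emergency evidence, the sketch below performs inference by enumeration on a toy two-node network. The structure, event names, and all probabilities are invented for illustration; they are not the utilities' calibrated values:

```python
# Toy Bayesian-network inference by enumeration: given observed pipe damage,
# compute the posterior over emergency events and the expected adequacy of
# each candidate intervention. All numbers below are invented placeholders.

p_event = {"earthquake": 0.05, "drought": 0.20, "none": 0.75}   # prior P(event)
p_damage = {"earthquake": 0.60, "drought": 0.10, "none": 0.02}  # P(pipe damage | event)

# P(intervention is adequate | event) -- hypothetical decision table
p_adequate = {
    ("earthquake", "tanker_supply"): 0.80,
    ("earthquake", "network_rerouting"): 0.40,
    ("drought", "tanker_supply"): 0.55,
    ("drought", "network_rerouting"): 0.30,
    ("none", "tanker_supply"): 0.50,
    ("none", "network_rerouting"): 0.70,
}

# Posterior over events given that pipe damage was observed (Bayes' rule).
joint = {e: p_event[e] * p_damage[e] for e in p_event}
z = sum(joint.values())
posterior = {e: v / z for e, v in joint.items()}

# Rank interventions by expected adequacy under the posterior.
for action in ("tanker_supply", "network_rerouting"):
    score = sum(posterior[e] * p_adequate[(e, action)] for e in posterior)
    print(f"{action}: expected adequacy {score:.2f}")
```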

Keywords: hierarchical process, strategic plan, water emergency conditions, water supply

Procedia PDF Downloads 160
5243 An Extended Domain-Specific Modeling Language for Marine Observatory Relying on Enterprise Architecture

Authors: Charbel Aoun, Loic Lagadec

Abstract:

A Sensor Network (SN) can be viewed as operating in two phases: (1) observation/measurement, i.e., the accumulation of gathered data at each sensor node; and (2) transfer of the collected data to a processing center (e.g., Fusion Servers) within the SN. An underwater sensor network is therefore a sensor network deployed underwater to monitor underwater activity. The deployed sensors, such as hydrophones, are responsible for registering underwater activity and transferring it to more advanced components. The process of data exchange between these components defines the Marine Observatory (MO) concept, which provides information on ocean state, phenomena, and processes. The first step towards implementing this concept is defining the environmental constraints and the required tools and components (Marine Cables, Smart Sensors, Data Fusion Server, etc.). The logical and physical components used in these observatories perform critical functions such as the localization of underwater moving objects. These functions can be orchestrated with other services (e.g., military or civilian response). In this paper, we present an extension to our MO meta-model that is used to generate a design tool (ArchiMO). We propose new constraints to be taken into consideration at design time and illustrate our proposal with an example from the MO domain. Additionally, we generate the corresponding simulation code using our self-developed domain-specific model compiler. On the one hand, this illustrates our approach of relying on an Enterprise Architecture (EA) framework that respects multiple views, stakeholder perspectives, and domain specificity. On the other hand, it helps reduce both the complexity of and the time spent on the design activity, while preventing design modeling errors when porting the activity to the MO domain. In conclusion, this work aims to demonstrate that the design of complex systems can be improved through the use of MDE technologies and a domain-specific modeling language with its associated tooling. The major improvement is to provide an early validation step, via models and simulation, to consolidate the system design.
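As an illustration of the localization function named above, the sketch below estimates a source position from range measurements at fixed hydrophones via linearized least squares. The geometry and range values are invented, and real observatories typically work from time-difference-of-arrival rather than direct ranges; this is a sketch of the general technique, not the ArchiMO or NS-3 pipeline:

```python
import numpy as np

# Localize a source in 2D from range estimates at fixed hydrophones.
# Coordinates and ranges below are invented for illustration.
hydrophones = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
ranges = np.array([70.7, 70.7, 70.7, 70.7])   # metres; consistent with (50, 50)

# Linearize ||x - h_i||^2 = r_i^2 by subtracting the first equation:
#   2 (h_i - h_0) . x = (|h_i|^2 - |h_0|^2) - (r_i^2 - r_0^2),  i = 1..n-1
A = 2 * (hydrophones[1:] - hydrophones[0])
b = (np.sum(hydrophones[1:] ** 2, axis=1) - np.sum(hydrophones[0] ** 2)
     - (ranges[1:] ** 2 - ranges[0] ** 2))

position, *_ = np.linalg.lstsq(A, b, rcond=None)
print(position)   # ~[50. 50.]
```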

Keywords: smart sensors, data fusion, distributed fusion architecture, sensor networks, domain specific modeling language, enterprise architecture, underwater moving object, localization, marine observatory, NS-3, IMS

Procedia PDF Downloads 177