Search results for: equivalent linear model

15635 A Process Model for Online Trip Reservation System

Authors: Sh. Wafa, M. Alanoud, S. Liyakathunisa

Abstract:

Online booking for a trip or hotel has become an indispensable travel tool today, and people tend to select air travel as their first choice when going on a long trip. People's shopping behavior has been greatly changed by the advent of social networks, and traditional ticket booking methods are considered outdated given current tools and technology. A web-based booking framework is a must-have for any tour or travel business that spends a great deal of time answering telephone calls, sending messages, or considering hiring more staff. In this paper, we propose a process model for online trip reservation for our designed web application. Our proposed system is highly beneficial and helps reduce time and cost for customers.

Keywords: trip, hotel, reservation, process model, time, cost, web app

Procedia PDF Downloads 192
15634 Effect of White Roofing on Refrigerated Buildings

Authors: Samuel Matylewicz, K. W. Goossen

Abstract:

The deployment of white or cool (high-albedo) roofing is a common energy-saving recommendation for a variety of buildings all over the world. Here, the effect of a white roof on the energy savings of an ice rink facility in the northeastern US is determined by measuring the effect of solar irradiance on the consumption of the rink's ice refrigeration system. The consumption of the refrigeration system was logged over a year, along with multiple weather vectors, and a statistical model was applied. The experimental model indicates that the expected savings on refrigeration consumption from replacing the existing grey roof with a white roof is only 4.7%. This overall result of the statistical model is confirmed by isolated instances of otherwise similar weather days, cloudy vs. sunny, where there was no measurable difference in refrigeration consumption within the noise of the local data, which was a few percent. This compares with a simple theoretical calculation that indicates 30% savings. The difference is attributed to a lack of convective cooling of the roof in the theoretical model. The best experimental model shows relative effects of the weather vectors dry bulb temperature, solar irradiance, wind speed, and relative humidity on refrigeration consumption of 1, 0.026, 0.163, and -0.056, respectively. This result can inform decisions to apply white roofing to refrigerated buildings in general.
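
To illustrate the kind of multiple linear regression such a weather-vector model implies, the sketch below fits logged refrigeration consumption against the four weather vectors named in the abstract. All data are synthetic placeholders generated for the example, not the rink's measurements, and the embedded coefficients exist only so the fit has something to recover.

```python
import numpy as np

# Hypothetical sketch: regress hourly refrigeration consumption on the four
# logged weather vectors. The synthetic data below stand in for the rink's
# year-long log; they are not the authors' dataset.
rng = np.random.default_rng(0)
n = 8760  # one year of hourly records (assumption)
X = np.column_stack([
    rng.normal(15, 10, n),    # dry bulb temperature [deg C]
    rng.uniform(0, 900, n),   # solar irradiance [W/m^2]
    rng.gamma(2, 2, n),       # wind speed [m/s]
    rng.uniform(20, 90, n),   # relative humidity [%]
])
y = (50 + 1.0 * X[:, 0] + 0.026 * X[:, 1] + 0.163 * X[:, 2]
     - 0.056 * X[:, 3] + rng.normal(0, 5, n))   # synthetic consumption [kW]

# Ordinary least squares via numpy's lstsq
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("intercept and weather-vector coefficients:", coef.round(3))
```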

Keywords: cool roofs, solar cooling load, refrigerated buildings, energy-efficient building envelopes

Procedia PDF Downloads 115
15633 Optimizing Telehealth Internet of Things Integration: A Sustainable Approach through Fog and Cloud Computing Platforms for Energy Efficiency

Authors: Yunyong Guo, Sudhakar Ganti, Bryan Guo

Abstract:

The swift proliferation of telehealth Internet of Things (IoT) devices has sparked concerns regarding energy consumption and the need for streamlined data processing. This paper presents an energy-efficient model that integrates telehealth IoT devices into a platform based on fog and cloud computing. This integrated system provides a sustainable and robust solution to address the challenges. Our model strategically utilizes fog computing as a localized data processing layer and leverages cloud computing for resource-intensive tasks, resulting in a significant reduction in overall energy consumption. The incorporation of adaptive energy-saving strategies further enhances the efficiency of our approach. Simulation analysis validates the effectiveness of our model in improving energy efficiency for telehealth IoT systems, particularly when integrated with localized fog nodes and both private and public cloud infrastructures. Subsequent research endeavors will concentrate on refining the energy-saving model, exploring additional functional enhancements, and assessing its broader applicability across various healthcare and industry sectors.

Keywords: energy-efficient, fog computing, IoT, telehealth

Procedia PDF Downloads 60
15632 Optimised Path Recommendation for a Real Time Process

Authors: Likewin Thomas, M. V. Manoj Kumar, B. Annappa

Abstract:

A traditional execution process follows the path of execution drawn by the process analyst without observing the behaviour of resources and other real-time constraints. Identifying the process model, predicting the behaviour of resources, and recommending the optimal path of execution for a real-time process are challenging tasks. The proposed AlfyMiner (αyMiner) gives a new dimension to process execution with two novel techniques, the Process Model Analyser (PMAMiner) and the Resource Behaviour Analyser (RBAMiner), for recommending the probable path of execution. PMAMiner discovers the next probable activity for the currently executing activity in an online process: a variant matching technique identifies the set of candidate next activities, from which the next probable activity is selected using a decision tree model. RBAMiner identifies the resource suitable for performing the discovered next probable activity and observes its behaviour in terms of load and performance, using a polynomial regression model, and waiting time, using queueing theory. Based on the observed behaviour, αyMiner recommends the probable path of execution, comprising the next probable activity and the best-suited resource for performing it. Experiments were conducted on process logs of the CoSeLoG Project; 72% accuracy was obtained in identifying and recommending the next probable activity, and the efficiency of resource performance was improved by 59% by decreasing resource load.
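
A purely illustrative sketch of the decision-tree step described above (not the authors' αyMiner implementation): it predicts a next activity from the current activity and one case attribute. The event log, activity names, and attribute are invented for the example.

```python
from sklearn.tree import DecisionTreeClassifier

# Illustrative sketch only: predicting the next activity from the current
# activity and a simple case attribute, loosely following the decision-tree
# step described in the abstract. The event data below are invented.
events = [
    ("register", "low",  "check"),
    ("register", "high", "escalate"),
    ("check",    "low",  "approve"),
    ("check",    "high", "reject"),
    ("approve",  "low",  "archive"),
]
activities = sorted({e[0] for e in events} | {e[2] for e in events})
priorities = ["low", "high"]

X = [[activities.index(a), priorities.index(p)] for a, p, _ in events]
y = [activities.index(nxt) for _, _, nxt in events]

clf = DecisionTreeClassifier().fit(X, y)
pred = clf.predict([[activities.index("check"), priorities.index("low")]])
print("predicted next activity:", activities[pred[0]])
```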

Keywords: cross-organization process mining, process behaviour, path of execution, polynomial regression model

Procedia PDF Downloads 316
15631 Kinetic Modeling Study and Scale-Up of Biogas Generation Using Garden Grass and Cattle Dung as Feedstock

Authors: Tumisang Seodigeng, Hilary Rutto

Abstract:

In this study, we investigate the use of a laboratory batch digester to derive kinetic parameters for the anaerobic digestion of garden grass and cattle dung. Laboratory experimental data from a 5-liter batch digester operating at a mesophilic temperature of 32 °C are used to derive parameters for a Michaelis-Menten kinetic model. These fitted kinetics are further used to predict the scale-up parameters of a batch digester using the DynoChem modeling and scale-up software. The scale-up model results are compared with performance data from 20-liter, 50-liter, and 200-liter batch digesters. The Michaelis-Menten kinetic model proves to be a very good and easy-to-use model for kinetic parameter fitting in DynoChem and can accurately predict the scale-up performance of the 20-liter and 50-liter batch reactors based on parameters fitted on the 5-liter batch reactor.
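
A minimal sketch of fitting Michaelis-Menten-type kinetics, V = Vmax·S/(Km + S), to rate-versus-substrate data with SciPy; the data points below are synthetic placeholders rather than the authors' digester measurements, and the fitted constants are illustrative only.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hedged sketch: fit V = Vmax * S / (Km + S) to synthetic (substrate, rate)
# pairs standing in for the 5-liter digester data.
def michaelis_menten(S, Vmax, Km):
    return Vmax * S / (Km + S)

S = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])          # substrate [g/L]
rate = np.array([0.28, 0.45, 0.63, 0.78, 0.88, 0.94])  # biogas rate [L/d]

popt, pcov = curve_fit(michaelis_menten, S, rate, p0=[1.0, 1.0])
Vmax, Km = popt
print(f"Vmax = {Vmax:.3f} L/d, Km = {Km:.3f} g/L")
```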

Keywords: biogas, kinetics, DynoChem scale-up, Michaelis-Menten

Procedia PDF Downloads 479
15630 Investigation of Martensitic Transformation Zone at the Crack Tip of NiTi under Mode-I Loading Using Microscopic Image Correlation

Authors: Nima Shafaghi, Gunay Anlaş, C. Can Aydiner

Abstract:

A realistic understanding of the martensitic phase transition under complex stress states is key to accurately describing the mechanical behavior of shape memory alloys (SMAs). Particularly around the sharply changing stress fields at the tip of a crack, the size, nature, and shape of the transformed zones are of great interest. There is significant variation among analytical models in their predictions of the size and shape of the transformation zone. As the fully transformed region remains inside a very small boundary at the tip of the crack, experimental validation requires microscopic resolution. Here, the crack tip vicinity of a NiTi compact tension specimen has been monitored in situ with microscopic image correlation at 20× magnification. With nominal 15-micrometer grains and an optical resolution of 0.2 micrometers per pixel, the strains at the crack tip are mapped with intra-grain detail. The transformation regions are then deduced using an equivalent strain formulation.
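
A minimal sketch of one common equivalent (von Mises-type) strain measure computed over a DIC strain field; the abstract does not state the exact formulation used, so the incompressibility assumption and the threshold below are illustrative choices, and the strain maps are synthetic.

```python
import numpy as np

# Equivalent strain from an in-plane strain field (exx, eyy, exy), assuming
# incompressibility for the out-of-plane component (e_zz = -(e_xx + e_yy)).
# This is one common choice, not necessarily the authors' formulation.
def equivalent_strain(exx, eyy, exy):
    ezz = -(exx + eyy)
    return np.sqrt(2.0 / 3.0 * (exx**2 + eyy**2 + ezz**2 + 2.0 * exy**2))

# Synthetic 64x64 strain maps standing in for the measured field near the tip
rng = np.random.default_rng(1)
exx = rng.normal(0.0, 0.01, (64, 64))
eyy = -0.4 * exx
exy = 0.2 * exx
eq = equivalent_strain(exx, eyy, exy)
print("fraction of pixels above a 1.2% threshold:", (eq > 0.012).mean())
```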

Keywords: digital image correlation, fracture, martensitic phase transition, mode I, NiTi, transformation zone

Procedia PDF Downloads 340
15629 Implementation of a Non-Poissonian Model in a Low-Seismicity Area

Authors: Ludivine Saint-Mard, Masato Nakajima, Gloria Senfaute

Abstract:

In areas of low to moderate seismicity, probabilistic seismic hazard analysis frequently uses a Poisson approach, which assumes independence of events in time and space to determine the annual probability of earthquake occurrence. Nevertheless, in countries with a high seismic rate, such as Japan, non-Poissonian models, which assume that the next earthquake occurrence depends on the date of the previous one, are frequently used. The objective of this paper is to apply a non-Poissonian model in a region of low to moderate seismicity to get feedback on the following questions: can we overcome the lack of data to determine some key parameters, and can we deal with the uncertainties so as to apply this methodology widely in an industrial context? The Brownian Passage Time model was applied to a fault located in France, and we conclude that even if the lack of data can be overcome with some calculations, the amount of uncertainty and the number of scenarios lead to numerous branches in the PSHA, making this method difficult to apply on a large scale in low to moderate seismicity areas and in an industrial context.
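
As a hedged sketch of a Brownian Passage Time renewal calculation, the fragment below computes the conditional probability of rupture in the next time window given the elapsed time since the last event, using SciPy's inverse Gaussian distribution. The mean recurrence interval and aperiodicity are illustrative values, not parameters of the French fault studied.

```python
from scipy.stats import invgauss

# BPT (inverse Gaussian) renewal model: probability that the next earthquake
# occurs within dt years, given te years have elapsed since the last event.
T_mean = 3000.0   # mean recurrence interval [years] (assumption)
alpha = 0.5       # aperiodicity (assumption)
te, dt = 1500.0, 50.0

# In scipy's parameterisation an inverse Gaussian with mean T_mean and
# aperiodicity alpha has mu = alpha**2 and scale = T_mean / alpha**2.
dist = invgauss(mu=alpha**2, scale=T_mean / alpha**2)
p_cond = (dist.cdf(te + dt) - dist.cdf(te)) / dist.sf(te)
print(f"conditional probability of rupture in next {dt:.0f} yr: {p_cond:.4f}")
```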

Keywords: probabilistic seismic hazard, non-Poissonian model, earthquake occurrence, low seismicity

Procedia PDF Downloads 41
15628 Model-Based Approach as Support for Product Industrialization: Application to an Optical Sensor

Authors: Frederic Schenker, Jonathan J. Hendriks, Gianluca Nicchiotti

Abstract:

From a product industrialization perspective, the end product should always be at the peak of technological advancement and developed in the shortest time possible. The constant growth of complexity and a shorter time-to-market thus call for important changes on both the technical and the business level. Undeniably, the common understanding of the system is clouded by its complexity, which leads to a communication gap between the engineers and the sales department. This communication link is therefore important to maintain, and the information exchange between departments must increase to ensure a punctual and flawless delivery to the end customer. This evolution brings engineers to reason with more hindsight and to plan ahead. In this sense, they use new viewpoints to represent the data and to express the model deliverables in an understandable way, so that the different stakeholders may identify their needs and ideas. This article focuses on the use of Model-Based Systems Engineering (MBSE) from a perspective of system industrialization, reconnecting engineering with the sales team. The modeling method used and presented in this paper concentrates on representing the needs of the customer as closely as possible: first, by providing a technical solution to the sales team to help them elaborate commercial offers without omitting technicalities; second, by simulating a vast number of possibilities across a wide range of components, making the model a dynamic tool for powerful analysis and optimization. Thus, the model is no longer only a technical tool for the engineers, but a way to maintain and solidify the communication between departments using different views of the model. The MBSE contribution to cost optimization during New Product Introduction (NPI) activities is made explicit through a case study describing the support provided by system models to architectural choices during the industrialization of a novel optical sensor.

Keywords: analytical model, architecture comparison, MBSE, product industrialization, SysML, system thinking

Procedia PDF Downloads 142
15627 A Domain Specific Modeling Language Semantic Model for Artefact Orientation

Authors: Bunakiye R. Japheth, Ogude U. Cyril

Abstract:

Since the process of transforming user requirements into modeling constructs is not very well supported by domain-specific frameworks, it becomes necessary to integrate domain requirements with specific architectures to achieve an integrated, customizable solution space via artifact orientation. Domain-specific modeling language specifications in model-driven engineering focus on requirements within a particular domain, which can be tailored to aid the domain expert in expressing domain concepts effectively. Modeling processes based on domain-specific language formalisms are highly volatile due to dependencies on domain concepts or the process models used. A capable solution is given by artifact orientation, which stresses the results rather than a strict dependence on complicated platforms for model creation and development. Based on this premise, domain-specific methods for producing artifacts, without having to account for the complexity and variability of platforms for model definition, can be integrated to support customizable development. In this paper, we discuss methods for integrating these capabilities and necessities within a common structure and semantics, contributing a metamodel for artifact orientation that leads to a reusable software layer with a concrete syntax capable of capturing design intent from the domain expert. The concepts forming the language formalism are established from models drawn from the oil and gas pipeline industry.

Keywords: control process, metrics of engineering, structured abstraction, semantic model

Procedia PDF Downloads 126
15626 Multi-Objective Production Planning Problem: A Case Study of Certain and Uncertain Environment

Authors: Ahteshamul Haq, Srikant Gupta, Murshid Kamal, Irfan Ali

Abstract:

This case study designs and builds a multi-objective production planning model for a hardware firm with both certain and uncertain data. During interactions with the manager of the firm, it was indicated that some of the parameters may be vague. This vagueness in the formulated model is handled using the concepts of fuzzy set theory. Triangular and trapezoidal fuzzy numbers are used to represent the uncertainty in the collected data. The fuzzy quantities are defuzzified into crisp form using the well-known graded mean integration representation method. The proposed model attempts to maximize the production of the firm and the profit from the manufactured items, and to minimize the inventory carrying costs, in both the certain and the uncertain environment. The recommended optimal plan is determined via a fuzzy programming approach, and the formulated models are solved with the optimization software LINGO 16.0 to obtain the optimal production plan. The proposed model yields an efficient compromise solution with the overall satisfaction of the decision maker.
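
A small sketch of the graded mean integration representation (GMIR) defuzzification named above, following the commonly cited Chen-Hsieh formulas for triangular and trapezoidal fuzzy numbers; the fuzzy cost figures are invented examples, not the firm's data.

```python
# Graded mean integration representation (GMIR) defuzzification sketch.
def gmir_triangular(a, b, c):
    """Crisp value of a triangular fuzzy number (a, b, c)."""
    return (a + 4 * b + c) / 6.0

def gmir_trapezoidal(a, b, c, d):
    """Crisp value of a trapezoidal fuzzy number (a, b, c, d)."""
    return (a + 2 * b + 2 * c + d) / 6.0

# e.g. a vaguely known unit production cost "around 40-45" (invented numbers)
print(gmir_triangular(38, 42, 47))        # ~42.17
print(gmir_trapezoidal(38, 40, 45, 48))   # ~42.67
```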

Keywords: production planning problem, multi-objective optimization, fuzzy programming, fuzzy sets

Procedia PDF Downloads 194
15625 The Optimal Order Policy for the Newsvendor Model under Worker Learning

Authors: Sunantha Teyarachakul

Abstract:

We consider the worker-learning Newsvendor Model under lost sales for unmet demand, with the research objective of proposing the cost-minimizing order policy and lot size, scheduled to arrive at the beginning of the selling period. In general, the Newsvendor Model is used to find the optimal order quantity for perishable items such as fashionable products or those with seasonal demand or short life cycles. Technically, it applies when product demand is stochastic and limited to a single selling season, and when there is only a one-time opportunity for the vendor to purchase, possibly with long ordering lead times. Our work differs from the classical Newsvendor Model in that we incorporate the human factor (specifically worker learning) and its influence on unit processing costs into the model. We describe this using the well-known Wright's learning curve. Most of the assumptions of the classical Newsvendor Model are maintained in our work, such as constant per-unit costs of leftover and shortage, zero initial inventory, and continuous time. Our problem is challenging in that the best order quantity in the classical model, which balances the over-stocking and under-stocking costs, is no longer optimal. Specifically, when the cost savings from worker learning are added to the expected total cost, the convexity of the cost function will likely not be maintained. This calls for a new way of determining the optimal order policy. In response to these challenges, we found a number of characteristics of the expected cost function and its derivatives, which we then used in formulating the optimal ordering policy. Examples of such characteristics are: the optimal order quantity exists and is unique if the demand follows a Uniform distribution; if the demand follows a Beta distribution with certain properties of its parameters, the second derivative of the expected cost function has at most two roots; and there exists a specific lot size that satisfies the first-order condition. Our research results could be helpful for the analysis of supply chain coordination and of the periodic review system for similar problems.
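
A numerical sketch, not the authors' analytical policy: it evaluates the expected one-period cost of an order quantity when the per-unit processing cost follows Wright's learning curve and demand is Uniform, then searches a grid for the minimiser. All cost parameters and the demand range are assumptions made for illustration.

```python
import numpy as np

# Expected cost of ordering Q units: cumulative processing cost under
# Wright's learning curve plus Monte Carlo estimates of the expected
# overage (leftover) and shortage (lost-sales) costs. All values invented.
a, lr = 5.0, 0.85                 # first-unit cost, 85% learning rate
b = np.log(lr) / np.log(2.0)      # Wright's exponent
h, p = 1.0, 4.0                   # unit overage and shortage costs
D_lo, D_hi = 0.0, 100.0           # Uniform demand support

def expected_cost(Q, n_mc=50_000, seed=0):
    units = np.arange(1, int(Q) + 1)
    processing = np.sum(a * units**b)            # cumulative processing cost
    D = np.random.default_rng(seed).uniform(D_lo, D_hi, n_mc)
    overage = h * np.maximum(Q - D, 0.0)
    shortage = p * np.maximum(D - Q, 0.0)
    return processing + overage.mean() + shortage.mean()

grid = np.arange(1, 151)
costs = [expected_cost(Q) for Q in grid]
print("cost-minimising order quantity on this grid:", grid[int(np.argmin(costs))])
```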

Keywords: inventory management, Newsvendor model, order policy, worker learning

Procedia PDF Downloads 401
15624 Obsession of Time and the New Musical Ontologies. The Concert for Saxophone, Daniel Kientzy and Orchestra by Myriam Marbe

Authors: Dutica Luminita

Abstract:

For the composer Myriam Marbe, musical time and memory represent two complementary phenomena with a decisive impact on the establishment of new musical ontologies. Summarizing the most important achievements of contemporary compositional techniques, her vision of the microform presented in the Concert for Daniel Kientzy, saxophone and orchestra transcends linear, unidirectional time in favour of a flexible, multi-vectorial discourse with spiral developments, where the sound substance is auto(re)generated by analogy with the fundamental processes of memory. The conceptual model is of an archetypal essence, the composer being concerned with identifying the mechanisms of the creative process, especially those specific to collective creation (of oral tradition). Hence the spontaneity of expression, the improvisatory tint, the free rhythm, the micro-interval intonation, and a coloristic-timbral universe dominated by multiphonics and unique sound effects. Hence also the atmosphere of ritual, purged, however, of its primary connotations and reprojected into a wonderful spectacular space. The Concert is a work of artistic maturity and commands respect, among other things, for the timbral diversity of the three species of saxophone required by the composer (baritone, sopranino, and alto); in Part III, Daniel Kientzy performs on two saxophones simultaneously. Myriam Marbe's score contains deeply spiritualized music, full of archetypal symbols, a music whose drama suggests a truly cinematographic movement.

Keywords: archetype, chronogenesis, concert, multiphonics

Procedia PDF Downloads 528
15623 A Bayesian Hierarchical Poisson Model with an Underlying Cluster Structure for the Analysis of Measles in Colombia

Authors: Ana Corberan-Vallet, Karen C. Florez, Ingrid C. Marino, Jose D. Bermudez

Abstract:

In 2016, the Region of the Americas was declared free of measles, a viral disease that can cause severe health problems. However, since 2017, measles has reemerged in Venezuela and has subsequently reached neighboring countries. In 2018, twelve American countries reported confirmed cases of measles. Governmental and health authorities in Colombia, a country that shares the longest land boundary with Venezuela, are aware of the need for a strong response to restrict the spread of the epidemic. In this work, we apply a Bayesian hierarchical Poisson model with an underlying cluster structure to describe disease incidence in Colombia. Concretely, the proposed methodology provides relative risk estimates at the department level and identifies clusters of disease, which facilitates the implementation of targeted public health interventions. Socio-demographic factors, such as the percentage of migrants, gross domestic product, and entry routes, are included in the model to better describe the incidence of disease. Since the model does not impose any spatial correlation at any level of the model hierarchy, it avoids the spatial confounding problem and provides a suitable framework to estimate the fixed-effect coefficients associated with spatially structured covariates.

Keywords: Bayesian analysis, cluster identification, disease mapping, risk estimation

Procedia PDF Downloads 139
15622 The Imminent Other in Anna Deavere Smith’s Performance

Authors: Joy Shihyi Huang

Abstract:

This paper discusses the concept of community in Anna Deavere Smith’s performance, one that challenges and explores existing notions of justice and the other. In contrast to unwavering assumptions of essentialism that have helped to propel a discourse on moral agency within the black community, Smith employs postmodern ideas in which the theatrical attributes of doubling and repetition are conceptualized as part of what Marvin Carlson coined as a ‘memory machine.’ Her dismissal of the need for linear time, such as that regulated by Aristotle’s The Poetics and its concomitant ethics, values, and emotions as a primary ontological and epistemological construct produced by the existing African American historiography, demonstrates an urgency to produce an alternative communal self to override metanarratives in which the African Americans’ lives are contained and sublated by specific historical confines. Drawing on Emmanuel Levinas’ theories in ethics, specifically his notion of ‘proximity’ and ‘the third,’ the paper argues that Smith enacts a new model of ethics by launching an acting method that eliminates the boundary of self and other. Defying psychological realism, Smith conceptualizes an approach to acting that surpasses the mere mimetic value of invoking a ‘likeness’ of an actor to a character, which as such, resembles the mere attribution of various racial or sexual attributes in identity politics. Such acting, she contends, reduces the other to a representation of, at best, an ultimate rendering of me/my experience. She instead appreciates ‘unlikeness,’ recognizes the unavoidable actor/character gap as a power that humbles the self, whose irreversible journey to the other carves out its own image.

Keywords: Anna Deavere Smith, Emmanuel Levinas, other, performance

Procedia PDF Downloads 141
15621 Influence of a Company’s Dynamic Capabilities on Its Innovation Capabilities

Authors: Lovorka Galetic, Zeljko Vukelic

Abstract:

The advanced concepts of strategic and innovation management regarding company dynamic and innovation capabilities, and the achievement of their mutual alignment and a synergy effect, are important elements in business today. This paper analyses the theory and empirically investigates the influence of a company's dynamic capabilities on its innovation capabilities. A new multidimensional model of dynamic capabilities is presented, consisting of five factors appropriate to real-time requirements, while innovation capabilities are considered pursuant to the official OECD and Eurostat standards. After an examination of dynamic and innovation capabilities indicated their theoretical links, an empirical study testing the model and examining the influence of a company's dynamic capabilities on its innovation capabilities showed significant results. In the study, a research model was posed to relate company dynamic and innovation capabilities. One side of the model features the variables that are the determinants of dynamic capabilities, defined through their factors, while the other side features the determinants of innovation capabilities pursuant to the official standards. With regard to the research model, five hypotheses were set. The study was performed in late 2014 on a representative sample of large and very large Croatian enterprises with a minimum of 250 employees. The research instrument was a questionnaire administered to company top management. For both variables, the position of the company was assessed in comparison to industry competitors on a five-point scale. In order to test the hypotheses, correlation tests were performed to determine whether there is a correlation between each individual factor of company dynamic capabilities and the existence of its innovation capabilities, in line with the research model. The results indicate a strong correlation between a company's possession of dynamic capabilities, in terms of the factors of the new multidimensional model presented in this paper, and its possession of innovation capabilities. Based on the results, all five hypotheses were accepted. Ultimately, it was concluded that there is a strong association between the dynamic and innovation capabilities of a company.

Keywords: dynamic capabilities, innovation capabilities, competitive advantage, business results

Procedia PDF Downloads 292
15620 Examination of State of Repair of Buildings in Private Housing Estates in Enugu Metropolis, Enugu State Nigeria

Authors: Umeora Chukwunonso Obiefuna

Abstract:

The private sector in housing provision continually takes steps towards cushioning the effect of the housing shortage in Nigeria by establishing housing estates, since the government alone cannot provide housing for everyone. This research examined and reports findings on the state of repair of buildings in private housing estates in the Enugu metropolis, Enugu State, Nigeria. The objectives of the study were to examine the physical condition of the building fabric and to appraise the performance of the infrastructural services provided in the buildings. A questionnaire was used as the research instrument to elicit data from respondents. Stratified sampling of the estates based on building type was adopted as the sampling method. Findings show that most buildings require minor repairs to make them fit for habitation and sound enough to ensure the well-being of residents. In addition, four of the nine independent variables investigated significantly explained residual variation in the dependent variable, the state of repair of the buildings in the study area. These variables are: Average Monthly Income of Residents (AMIR), Length of Stay of the Residents in the estates (LSY), Type of Wall Finishes on the buildings (TWF), and Time Taken to Respond to Residents' Complaints by the estate managers (TTRC). With this, a linear model was established for predicting the state of repair of buildings in private housing estates in the study area, which would assist in identifying the variables most useful for predicting the state of repair of the buildings.

Keywords: building, housing estate, private, repair

Procedia PDF Downloads 125
15619 Machine Learning Techniques in Seismic Risk Assessment of Structures

Authors: Farid Khosravikia, Patricia Clayton

Abstract:

The main objective of this work is to evaluate the advantages and disadvantages of various machine learning techniques in two key steps of seismic hazard and risk assessment for different types of structures. The first step is the development of ground-motion models, which are used for forecasting ground-motion intensity measures (IMs) given source characteristics, source-to-site distance, and local site conditions for future events. IMs such as peak ground acceleration and velocity (PGA and PGV, respectively), as well as 5% damped elastic pseudo-spectral accelerations at different periods (PSA), are indicators of the strength of shaking at the ground surface. Typically, linear regression-based models, with pre-defined equations and coefficients, are used in ground motion prediction. However, due to the restrictions of linear regression methods, such models may not capture the more complex nonlinear behaviors that exist in the data. Thus, this study comparatively investigates the potential benefits of employing other machine learning techniques as the statistical method in ground motion prediction, such as Artificial Neural Networks, Random Forests, and Support Vector Machines. The results indicate that these algorithms satisfy physically sound characteristics, such as magnitude scaling and distance dependency, without requiring pre-defined equations or coefficients. Moreover, it is shown that, when sufficient data are available, all the alternative algorithms tend to provide more accurate estimates than the conventional linear regression-based method, and Random Forest in particular outperforms the other algorithms; however, the conventional method is a better tool when limited data are available. Second, it is investigated how machine learning techniques could be beneficial for developing probabilistic seismic demand models (PSDMs), which provide the relationship between the structural demand responses (e.g., component deformations, accelerations, internal forces, etc.) and the ground motion IMs. In the risk framework, such models are used to develop fragility curves estimating the probability of exceeding pre-defined damage limit states, and therefore control the reliability of the predictions in the risk assessment. In this study, machine learning algorithms such as Artificial Neural Networks, Random Forests, and Support Vector Machines are adopted and trained on the demand parameters to derive PSDMs. It is observed that such models can provide more accurate predictions in a relatively shorter amount of time compared to conventional methods. Moreover, they can be used for sensitivity analysis of fragility curves with respect to many modeling parameters without necessarily requiring more intensive numerical response-history analyses.
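
As a hedged sketch of the data-driven ground-motion modeling compared above, the fragment below trains a Random Forest to predict ln(PGA) from magnitude, distance, and site stiffness; the records are synthetic and the generating functional form is purely illustrative, not a published ground-motion model.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic ground-motion records: the functional form below only provides
# plausible magnitude scaling and distance decay for the demonstration.
rng = np.random.default_rng(0)
n = 2000
M = rng.uniform(3.0, 7.5, n)          # moment magnitude
R = rng.uniform(5.0, 200.0, n)        # source-to-site distance [km]
vs30 = rng.uniform(180.0, 760.0, n)   # site shear-wave velocity [m/s]
ln_pga = (1.2 * M - 1.6 * np.log(R) - 0.3 * np.log(vs30 / 760.0)
          - 4.0 + rng.normal(0.0, 0.5, n))

X = np.column_stack([M, R, vs30])
gmm = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, ln_pga)
print("predicted ln(PGA) for M6.5 at 30 km on stiff soil:",
      gmm.predict([[6.5, 30.0, 500.0]])[0])
```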

Keywords: artificial neural network, machine learning, random forest, seismic risk analysis, seismic hazard analysis, support vector machine

Procedia PDF Downloads 89
15618 Machine Learning Methods for Flood Hazard Mapping

Authors: Stefano Zappacosta, Cristiano Bove, Maria Carmela Marinelli, Paola di Lauro, Katarina Spasenovic, Lorenzo Ostano, Giuseppe Aiello, Marco Pietrosanto

Abstract:

This paper proposes a novel neural network approach to flood hazard mapping. The core of the model is a machine learning component fed by frequency ratios, namely statistical correlations between flood event occurrences and a selected number of topographic properties. The proposed hybrid model can be used to classify four increasing levels of hazard. Its classification capability was compared with the flood hazard maps of the River Basin Plans (PAI) designed by the Italian Institute for Environmental Protection and Research, ISPRA (Istituto Superiore per la Protezione e la Ricerca Ambientale). The study area of Piemonte, an Italian region, has been considered without loss of generality. The frequency ratios may be used as a standalone block to model flood hazard; nevertheless, combining them with a neural network improves the classification power by several percentage points, and the approach may be proposed as a basic tool for modeling flood hazard maps in a wider scope.
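
A minimal sketch of the frequency-ratio statistic that feeds the neural network: for each class of a topographic factor (slope bins here), the ratio between its share of flooded cells and its share of all cells. The raster values and flood mask are invented; only the statistic follows the description above.

```python
import numpy as np

# Frequency ratio per class = (% of flooded cells in class) / (% of all cells in class)
rng = np.random.default_rng(2)
slope = rng.uniform(0.0, 30.0, 10_000)                  # slope [deg] per cell
flooded = (slope < 5.0) & (rng.random(10_000) < 0.4)    # synthetic flood mask

bins = np.array([0.0, 2.0, 5.0, 10.0, 30.0])
cls = np.digitize(slope, bins) - 1
for k in range(len(bins) - 1):
    share_flood = flooded[cls == k].sum() / max(flooded.sum(), 1)
    share_total = (cls == k).sum() / slope.size
    fr = share_flood / share_total if share_total > 0 else float("nan")
    print(f"slope {bins[k]:>4.0f}-{bins[k+1]:>4.0f} deg: FR = {fr:.2f}")
```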

Keywords: flood modeling, hazard map, neural networks, hydrogeological risk, flood risk assessment

Procedia PDF Downloads 160
15617 Using SNAP and RADTRAD to Establish the Analysis Model for Maanshan PWR Plant

Authors: J. R. Wang, H. C. Chen, C. Shih, S. W. Chen, J. H. Yang, Y. Chiang

Abstract:

In this study, we focus on the establishment of the analysis model for the Maanshan PWR nuclear power plant (NPP) using the RADTRAD and SNAP codes together with the FSAR, manuals, and other data. In order to evaluate the cumulative dose at the Exclusion Area Boundary (EAB) and the Low Population Zone (LPZ) outer boundary, the Maanshan NPP RADTRAD/SNAP model was used to perform the analysis of the DBA LOCA case. The RADTRAD analysis results were similar to the FSAR data and were lower than the failure criteria of 10 CFR 100.11 (a total radiation dose to the whole body of 250 mSv; a total radiation dose to the thyroid from iodine exposure of 3000 mSv).

Keywords: RADionuclide Transport, Removal, And Dose estimation (RADTRAD), symbolic nuclear analysis package (SNAP), dose, PWR

Procedia PDF Downloads 446
15616 Effect of Acid-Basic Treatments of Lignocellulosic Material Forest Wastes Wild Carob on Ethyl Violet Dye Adsorption

Authors: Abdallah Bouguettoucha, Derradji Chebli, Tariq Yahyaoui, Hichem Attout

Abstract:

The effect of acid and basic treatments of a lignocellulosic material (forest waste wild carob) on Ethyl Violet adsorption was investigated. It was found that surface chemistry plays an important role in Ethyl Violet (EV) adsorption. HCl treatment produces more active acidic surface groups, such as carboxylic and lactone groups, resulting in an increase in the adsorption of the EV dye. The adsorption efficiency was higher for the lignocellulosic material treated with HCl than for that treated with KOH: the maximum biosorption capacities were 170 and 130 mg/g, respectively, at pH 6. It was also found that the time to reach equilibrium is less than 25 min for both treated materials. The adsorption of the basic dye (i.e., Ethyl Violet or Basic Violet 4) was carried out while varying several process parameters, such as initial concentration, pH, and temperature. The adsorption process is well described by a pseudo-second-order reaction model, showing that boundary layer resistance was not the rate-limiting step, as confirmed by intraparticle diffusion, since the linear plot of Qt versus t^0.5 did not pass through the origin. In addition, the experimental data were more accurately expressed by the Sips equation than by the Langmuir and Freundlich isotherms. The values of ΔG° and ΔH° confirmed that the adsorption of EV on the acid- and base-treated forest waste wild carob was spontaneous and endothermic in nature. The positive values of ΔS° suggest an increase in randomness at the treated lignocellulosic material-solution interface during the adsorption process.
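
A hedged sketch of the pseudo-second-order kinetic fit mentioned above, using the linearised form t/qt = 1/(k2·qe²) + t/qe; the uptake data are synthetic placeholders rather than the wild-carob measurements.

```python
import numpy as np

# Linearised pseudo-second-order fit: plot t/qt against t, then
# slope = 1/qe and intercept = 1/(k2 * qe^2). Data below are invented.
t = np.array([2.0, 5.0, 10.0, 15.0, 20.0, 25.0])          # time [min]
qt = np.array([95.0, 130.0, 152.0, 160.0, 164.0, 166.0])  # uptake [mg/g]

slope, intercept = np.polyfit(t, t / qt, 1)
qe = 1.0 / slope                   # equilibrium capacity [mg/g]
k2 = slope**2 / intercept          # rate constant [g/(mg*min)]
print(f"qe = {qe:.1f} mg/g, k2 = {k2:.4f} g/(mg*min)")
```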

Keywords: adsorption, isotherm models, thermodynamic parameters, wild carob

Procedia PDF Downloads 263
15615 Circular Economy and Remedial Frameworks in Contract Law

Authors: Reza Beheshti

Abstract:

This paper examines remedies for defective manufactured goods in commercial circular economic transactions. The linear ‘take-make-dispose’ model fits well with the conventional remedial framework in which damages are considered the primary remedy. Damages under English Sales Law encourages buyers to look for a substitute seller with broadly similar goods to the ones agreed on in the original contract, enter into contract with this new seller and hence terminate the original contract. By doing so, the buyer ends the contractual relationship. This seems contrary to the core principles of the circular economy: keeping products, components, and materials in longer use, which can partly be achieved by product refurbishment. This process involves returning a product to good working condition by replacing or repairing major components that are faulty or close to failure and making ‘cosmetic’ changes to update the appearance of a product. This remedy has not been widely accepted or applied in commercial cases, which in turn flags up the secondary nature of performance-related remedies. This paper critically analyses the laws concerning the seller’s duty to cure in English law and the extent to which they correspond with core principles of the circular economy. In addition, this paper takes into account the potential of circular economic transactions being characterised as something other than sales. In such situations, the likely outcome will be a license to use products, which may limit the choice of remedy further. Consequently, this paper suggests an outline remedial framework specifically for commercial circular economic transactions in manufactured goods.

Keywords: circular economy, contract law, remedies, English Sales Law

Procedia PDF Downloads 131
15614 Mathematical Modeling of Drip Emitter Discharge of Trapezoidal Labyrinth Channel

Authors: N. Philipova

Abstract:

The influence of the geometric parameters of a trapezoidal labyrinth channel on the emitter discharge is investigated in this work. Among the geometric parameters of the labyrinth channel, the impacts of the dentate angle, the dentate spacing, and the dentate height are studied. Numerical simulations of the water flow are performed according to a central cubic composite design using the commercial codes GAMBIT and FLUENT. The inlet pressure of the dripper is set to 1 bar. The objective of this paper is to derive a mathematical model of the emitter discharge depending on the dentate angle, the dentate spacing, and the dentate height of the labyrinth channel. The obtained mathematical model is a second-order polynomial reporting two-way interactions among the geometric parameters. The dentate spacing has the most important and positive influence on the emitter discharge, followed by the joint effect of the dentate spacing and the dentate height. The dentate angle, in the observed interval, has no significant effect on the emitter discharge. The obtained model can be used as a basis for future emitter design.
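
An illustrative sketch of the model structure described above: emitter discharge expressed as a full second-order polynomial in dentate angle, spacing, and height with two-way interaction terms, fitted by least squares. The design points and coefficients are invented; only the polynomial form follows the abstract.

```python
import numpy as np

# Synthetic "design points" standing in for the CFD runs; the response is
# generated with arbitrary coefficients so the fit has structure to recover.
rng = np.random.default_rng(3)
n = 60
angle = rng.uniform(30, 60, n)       # dentate angle [deg]
spacing = rng.uniform(1, 3, n)       # dentate spacing [mm]
height = rng.uniform(0.5, 1.5, n)    # dentate height [mm]
q = 0.8 + 0.30 * spacing + 0.12 * spacing * height + rng.normal(0, 0.02, n)

# Full quadratic design matrix: 1, x_i, x_i^2, x_i * x_j
X = np.column_stack([
    np.ones(n), angle, spacing, height,
    angle**2, spacing**2, height**2,
    angle * spacing, angle * height, spacing * height,
])
coef, *_ = np.linalg.lstsq(X, q, rcond=None)
print("fitted spacing-by-height interaction coefficient:", round(coef[-1], 3))
```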

Keywords: drip irrigation, labyrinth channel hydrodynamics, numerical simulations, Reynolds stress model

Procedia PDF Downloads 172
15613 Multiscale Cohesive Zone Modeling of Composite Microstructure

Authors: Vincent Iacobellis, Kamran Behdinan

Abstract:

A finite element cohesive zone model is used to predict the temperature-dependent material properties of a polyimide matrix composite with a unidirectional carbon fiber arrangement. The cohesive zone parameters were obtained from previous research involving an atomistic-to-continuum multiscale simulation of the fiber-matrix interface using the bridging cell multiscale method. The goals of the research were both to investigate the effect of temperature change on the composite behavior under transverse loading and to validate the use of cohesive parameters obtained from atomistic-to-continuum multiscale modeling to predict fiber-matrix interfacial cracking. From the multiscale model, cohesive zone parameters (i.e., maximum traction and energy of separation) were obtained by modeling the interface between the coarse-grained polyimide matrix and the graphite-based carbon fiber. The cohesive parameters from this simulation were used in a cohesive zone model of the composite microstructure in order to predict the properties of the macroscale composite with respect to changes in temperature ranging from 21 °C to 316 °C. Good agreement was found between the microscale RUC model and experimental results for stress-strain response, stiffness, and material strength at low and high temperatures. Examination of the deformation of the composite through localized crack initiation at the fiber-matrix interface also agreed with experimental observations of similar phenomena. Overall, the cohesive zone model was shown to be effective at modeling the composite properties under transverse loading, and the use of cohesive zone parameters obtained from the multiscale simulation was validated.
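
A minimal sketch of a bilinear traction-separation law, a common way a cohesive zone is parameterised by the maximum traction and energy of separation named above; the numerical values are assumptions for illustration, not the parameters extracted from the bridging cell simulation.

```python
import numpy as np

# Bilinear traction-separation law: linear rise to the peak traction, then
# linear softening such that the enclosed area equals the separation energy.
t_max = 60.0e6        # peak traction [Pa] (assumption)
G_c = 150.0           # separation energy [J/m^2] (assumption)
K0 = 1.0e15           # initial penalty stiffness [Pa/m] (assumption)

d0 = t_max / K0               # separation at damage initiation
df = 2.0 * G_c / t_max        # separation at full failure (triangle area = G_c)

def traction(delta):
    delta = np.asarray(delta, dtype=float)
    rising = K0 * delta
    softening = t_max * (df - delta) / (df - d0)
    return np.where(delta <= d0, rising, np.clip(softening, 0.0, None))

print("traction at half the failure opening [Pa]:", traction(df / 2.0))
```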

Keywords: cohesive zone model, fiber-matrix interface, microscale damage, multiscale modeling

Procedia PDF Downloads 467
15612 Positive Obligations of the State Concerning the Protection of Human Rights

Authors: Monika Florczak-Wator

Abstract:

The model of positive obligations of the state concerning the protection of the rights of an individual was created in the jurisprudence of the German Federal Constitutional Court in the 1970s. That model assumes that the state should protect an individual against infringement of their fundamental rights by another individual. It is based on the idea of a modification of the function and duties of the state towards the individual and society. Initially, the state was perceived as the main potential infringer of the fundamental rights of an individual, which gave rise to obligations of a negative nature (the obligation of non-interference); at present, however, the state is perceived as a guarantor and protector of the fundamental rights of an individual, which gives rise to obligations of a positive nature (the obligation of protection). Examination of selected judicial decisions of that court will enable us to determine what the obligation of protection specifically entails, when it is triggered, and whether it is accompanied by claims of an individual requesting the state to take actions protecting their fundamental rights against infringement by private entities. The comparative perspective for the German model of positive obligations of the state will be the analogous model present in the jurisprudence of the European Court of Human Rights. Its inclusion in the research is justified because the Convention, similarly to the constitution, focuses on the protection of an individual against infringement of their rights by the state, and both models have been developed within the respective jurisprudence over several decades. Analysis of the provisions of the Constitution of the Republic of Poland as well as judgments of the Polish Constitutional Tribunal will allow for a presentation of the application of the model of the protective duties of the state in Poland.

Keywords: human rights, horizontal relationships, constitution, state protection

Procedia PDF Downloads 468
15611 The Use of Hec Ras One-Dimensional Model and Geophysics for the Determination of Flood Zones

Authors: Ayoub El Bourtali, Abdessamed Najine, Amrou Moussa Benmoussa

Abstract:

It is becoming more and more necessary to manage flood risk, and such management must include all stakeholders and all available means. The goal of this work is to map the vulnerability of the Oued Derna–Tagzirt flood zone, located in a semi-arid region, by implementing predictive models and flood control measures; this allows for the development of flood risk prevention plans. In this study, a resistivity survey was conducted over the area to locate and evaluate soil characteristics in order to calculate discharges and prevent flooding in the study area. A one-dimensional (1D) hydrodynamic model of the Derna River was developed in HEC-RAS 5.0.4 using a combination of survey data, spatially extracted cross-sections, and recorded river flows. The study area has been hit by several extreme floods, causing much loss of property and life. This research focuses on the most recent flood events; based on the collected data, the water level, river flow, and river cross-sections were analyzed, and a set of flood levels was obtained as the output of the hydraulic model, together with the accuracy of the simulated flood levels and velocities.

Keywords: Derna River, 1D hydrodynamic model, flood modelling, HEC-RAS 5.0.4

Procedia PDF Downloads 294
15610 Comparison of Two-Phase Critical Flow Models for Estimation of Leak Flow Rate through Cracks

Authors: Tadashi Watanabe, Jinya Katsuyama, Akihiro Mano

Abstract:

The estimation of leak flow rates through narrow cracks in structures is important for nuclear reactor safety, since the leak flow could be detected before the occurrence of a loss-of-coolant accident. The two-phase critical leak flow rates are calculated using a system analysis code, and two representative non-homogeneous critical flow models, the Henry-Fauske model and the Ransom-Trapp model, are compared. The pressure decrease and vapor generation in the crack, as well as the leak flow rates, are found to be larger for the Henry-Fauske model. It is shown that the leak flow rates are not affected by the structural temperature but are affected largely by the roughness of the crack surface.

Keywords: crack, critical flow, leak, roughness

Procedia PDF Downloads 164
15609 A Development of Science Instructional Model Based on Stem Education Approach to Enhance Scientific Mind and Problem Solving Skills for Primary Students

Authors: Prasita Sooksamran, Wareerat Kaewurai

Abstract:

STEM is an integrated teaching approach promoted by the Ministry of Education in Thailand. STEM Education integrates the teaching of Science, Technology, Engineering, and Mathematics, and Thai teachers have questioned how to integrate STEM into the classroom. Therefore, the main objective of this study is to develop a science instructional model based on the STEM approach to enhance the scientific mind and problem-solving skills of primary students. This study is participatory action research and follows two steps: 1) developing the model and 2) seeking the advice of experts regarding the teaching model. Development of the instructional model began with the collection and synthesis of information from relevant documents, related research, and other sources in order to create a prototype instructional model, followed by an examination of the validity and relevance of the instructional model by a panel of nine experts. The findings were as follows. 1. The developed instructional model comprised principles, an objective, content, operational procedures, and learning evaluation. The principles were: 1) learning based on the natural curiosity of primary school children, leading to knowledge inquiry, understanding, and knowledge construction; 2) learning based on the interrelation between people and the environment; 3) learning based on concrete learning experiences, exploration, and the seeking of knowledge; 4) learning based on the self-construction of knowledge, creativity, and innovation; and 5) relating findings to real life and solving real-life problems. The objective of the model is to enhance the scientific mind and problem-solving skills, and children are evaluated according to their achievements. Lesson content is based on science as a core subject, integrated with technology and mathematics at the grade 6 level according to The Basic Education Core Curriculum 2008 guidelines. The operational procedures consist of six steps: 1) Curiosity, 2) Collection of data, 3) Collaborative planning, 4) Creativity and innovation, 5) Criticism, and 6) Communication and service. The learning evaluation is an authentic assessment based on continuous evaluation of all the material taught. 2. The experts agreed that the Science Instructional Model based on the STEM Education Approach had an excellent level of validity and relevance (mean 4.67, S.D. 0.50).

Keywords: instructional model, STEM education, scientific mind, problem solving

Procedia PDF Downloads 177
15608 Forecasting Container Throughput: Using Aggregate or Terminal-Specific Data?

Authors: Gu Pang, Bartosz Gebka

Abstract:

We forecast the demand for total container throughput at Indonesia's largest seaport, Tanjung Priok Port. We propose four univariate forecasting models: SARIMA, the additive Seasonal Holt-Winters, the multiplicative Seasonal Holt-Winters, and the Vector Error Correction Model. Our aim is to provide insights into whether forecasts of total container throughput obtained from the historical aggregated port throughput time series are superior to forecasts of the total throughput obtained by summing up the best individual terminal forecasts. We test the monthly port and individual terminal container throughput time series between 2003 and 2013. The performance of the forecasting models is evaluated based on the Mean Absolute Error and the Root Mean Squared Error. Our results show that the multiplicative Seasonal Holt-Winters model produces the most accurate forecasts of total container throughput, whereas SARIMA generates the worst in-sample model fit. The Vector Error Correction Model provides the best model fits and forecasts for individual terminals. Our results also show that total container throughput forecasts based on modelling the total throughput time series are consistently better than those obtained by combining the forecasts generated by terminal-specific models. The forecasts of total throughput until the end of 2018 provide essential insight for strategic decision-making on the expansion of the port's capacity and the construction of new container terminals at Tanjung Priok Port.
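
A hedged sketch of the multiplicative Seasonal Holt-Winters forecast identified above as the most accurate for total throughput, using statsmodels; the monthly series is synthetic, standing in for the Tanjung Priok data, and the 12-month forecast horizon is an arbitrary choice.

```python
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Synthetic monthly throughput series (2003-2013, 132 months) with an
# additive trend and a multiplicative seasonal pattern.
rng = np.random.default_rng(4)
months = np.arange(132)
seasonal = 1.0 + 0.15 * np.sin(2 * np.pi * months / 12)
y = (300 + 2.0 * months) * seasonal * rng.normal(1.0, 0.02, months.size)

model = ExponentialSmoothing(
    y, trend="add", seasonal="mul", seasonal_periods=12
).fit()
print("next 12 months of forecast throughput:", model.forecast(12).round(1))
```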

Keywords: SARIMA, Seasonal Holt-Winters, Vector Error Correction Model, container throughput

Procedia PDF Downloads 490
15607 Predicting the Exposure Level of Airborne Contaminants in Occupational Settings via the Well-Mixed Room Model

Authors: Alireza Fallahfard, Ludwig Vinches, Stephane Halle

Abstract:

In the workplace, the exposure level of airborne contaminants should be evaluated due to health and safety issues. This can be done with numerical models or experimental measurements, but the numerical approach is useful when it is challenging to perform experiments. One of the simplest models is the well-mixed room (WMR) model, which has shown its usefulness for predicting inhalation exposure in many situations. However, since the WMR is limited to gases and vapors, it cannot be used to predict exposure to aerosols. The main objective is to modify the WMR model to expand its application to exposure scenarios involving aerosols. To reach this objective, the standard WMR model has been modified to consider the deposition of particles by gravitational settling and by Brownian and turbulent deposition. Three deposition models were implemented. The time-dependent concentrations of airborne particles predicted by the model were compared to experimental results obtained in a 0.512 m³ chamber. Polystyrene particles of 1, 2, and 3 µm in aerodynamic diameter were generated with a nebulizer under two different air change rates (ACH). The well-mixed condition and the chamber ACH were determined by the tracer gas decay method. The mean friction velocity on the chamber surfaces, one of the input variables of the deposition models, was determined by computational fluid dynamics (CFD) simulation. In the experimental procedure, particles were generated until a steady-state condition was reached (emission period); generation then stopped, and concentration measurements continued until the background concentration was reached (decay period). The results of the tracer gas decay tests revealed that the ACHs of the chamber were 1.4 and 3.0 and that the well-mixed condition was achieved. The CFD results showed that the average mean friction velocities and their standard deviations for the lowest and highest ACH were (8.87 ± 0.36) × 10⁻² m/s and (8.88 ± 0.38) × 10⁻² m/s, respectively. The numerical results indicated that the difference between the deposition rates predicted by the three deposition models was less than 2%. The experimental and numerical aerosol concentrations were compared for the emission period and the decay period. In both periods, the prediction accuracy of the modified model improved in comparison with the classic WMR model; however, there is still a difference between the actual and predicted values. During the emission period, the modified WMR results closely follow the experimental data, but the model significantly overestimates the experimental results during the decay period. This finding is mainly due to an underestimation of the deposition rate in the model and to uncertainties related to the measurement devices and the particle size distribution. Comparing the experimental and numerical deposition rates revealed that the actual particle deposition rate is significant, while the deposition mechanisms considered in the model account for a rate ten times lower than the experimental value. Thus, particle deposition is significant, will affect the airborne concentration in occupational settings, and should be considered in airborne exposure prediction models. The role of other removal mechanisms should be investigated.
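
A minimal sketch of a well-mixed room balance extended with a particle deposition loss term, dC/dt = S/V − (λv + λd)·C, integrated with forward Euler; the emission rate and deposition rate are assumptions for illustration, while the chamber volume and one ACH value are taken from the abstract.

```python
import numpy as np

# Well-mixed room balance with ventilation and deposition losses.
V = 0.512                        # chamber volume [m^3] (from the abstract)
ach = 3.0                        # air changes per hour (one tested value)
lambda_v = ach / 3600.0          # ventilation loss rate [1/s]
lambda_d = 0.3 / 3600.0          # deposition loss rate [1/s] (assumption)
S = 5.0e3                        # particle emission rate [#/s] (assumption)

dt, T = 1.0, 3 * 3600            # 1 s steps over 3 h
C = np.zeros(int(T / dt))
for i in range(1, C.size):
    source = S / V if i * dt < 3600 else 0.0   # 1 h emission, then decay
    C[i] = C[i - 1] + dt * (source - (lambda_v + lambda_d) * C[i - 1])

print("concentration at end of emission period [#/m^3]:", round(C[3599], 1))
```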

Keywords: aerosol, CFD, exposure assessment, occupational settings, well-mixed room model, zonal model

Procedia PDF Downloads 90
15606 Impact of Audit Committee on Real Earnings Management: Cases of Netherlands

Authors: Sana Masmoudi Mardassi, Yosra Makni Fourati

Abstract:

Regulators highlight the importance of the Audit Committee (AC) as a key internal corporate governance mechanism. One of the most important roles of this committee is to oversee the financial reporting process. The purpose of this paper is to examine the link between the characteristics of an audit committee and financial reporting quality by investigating whether audit committee characteristics are associated with improved financial reporting quality, especially reduced Real Earnings Management. The current study uses panel data from 80 non-financial companies listed on the Amsterdam Stock Exchange during the period between 2010 and 2017. To measure audit committee characteristics, four proxies were used: audit committee independence, financial expertise, gender diversity, and the number of AC meetings. A linear regression model was used to identify the influence of this set of audit committee characteristics on real earnings management after controlling for audit committee size, leverage, firm size, loss, growth, and board size. This research provides empirical evidence of the association between audit committee independence, financial expertise, gender diversity, and meetings and Real Earnings Management (REM) as a proxy for financial reporting quality. The study finds that independence and AC gender diversity are strongly related to financial reporting quality; in fact, these two characteristics constrain REM. The results also suggest that AC financial expertise reduces, to some extent, the likelihood of engaging in REM. These conclusions support the audit committee requirements under the Dutch Corporate Governance Code regarding gender diversity and AC meetings.

Keywords: audit committee, financial expertise, independence, real earnings management

Procedia PDF Downloads 150