Search results for: stochastic pi calculus
42 Nanoporous Metals Reinforced with Fullerenes
Authors: Deniz Ezgi Gülmez, Mesut Kirca
Abstract:
Nanoporous (np) metals have attracted considerable attention owing to their cellular morphological features at the atomistic scale, which yield an ultra-high specific surface area and hence great potential for diverse applications such as catalysis, electrocatalysis, sensing, mechanics, and optics. As one of the carbon-based nanostructures, fullerenes are another class of outstanding nanomaterials that have been extensively investigated due to their remarkable chemical, mechanical, and optical properties. In this study, the idea of improving the mechanical behavior of nanoporous metals by the inclusion of fullerenes, which yields a new metal-carbon nanocomposite material, is examined and discussed. With this motivation, the tensile mechanical behavior of nanoporous metals reinforced with carbon fullerenes is investigated by classical molecular dynamics (MD) simulations. Atomistic models of the nanoporous metals with ultrathin ligaments are obtained through a stochastic process simply based on the intersection of spherical volumes, which has been used previously in the literature. According to this technique, the atoms within the ensemble of intersecting spherical volumes are removed from the pristine solid block of the selected metal, which results in porous structures with spherical cells. Following this, fullerene units are added into the cellular voids to obtain the final atomistic configurations for the numerical tensile tests. Several numerical specimens are prepared with different numbers of fullerenes per cell and with varied fullerene sizes. The LAMMPS code was used to perform classical MD simulations of uniaxial tension experiments on np models filled with fullerenes. The interactions between the metal atoms are modeled using the embedded atom method (EAM), while the adaptive intermolecular reactive empirical bond order (AIREBO) potential is employed for the interactions of the carbon atoms. Furthermore, atomic interactions between the metal and carbon atoms are represented by a Lennard-Jones potential with appropriate parameters. In conclusion, the ultimate goal of the study is to present the effects of fullerenes embedded into the cellular structure of np metals on the tensile response of the porous metals. The results are believed to be informative and instructive for experimentalists seeking to synthesize hybrid nanoporous materials with improved properties and multifunctional characteristics.
Keywords: fullerene, intersecting spheres, molecular dynamics, nanoporous metals
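A minimal Python sketch of the intersecting-spheres pore-generation step described above, assuming a simple-cubic stand-in lattice rather than the actual FCC metal and LAMMPS data files used in the study; the lattice spacing, sphere counts, and radii are illustrative values, not parameters from the paper:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Build a simple-cubic block of "metal" atom sites (a stand-in for the
# pristine lattice that would feed a real LAMMPS model).
a = 4.08            # lattice spacing in angstrom (illustrative value)
n = 25              # 25 x 25 x 25 sites
grid = np.arange(n) * a
atoms = np.array(np.meshgrid(grid, grid, grid)).reshape(3, -1).T

# Stochastic intersecting-spheres step: carve spherical voids by removing
# every atom that falls inside any randomly placed sphere.
n_spheres = 40
centres = rng.uniform(0.0, n * a, size=(n_spheres, 3))
radii = rng.uniform(8.0, 15.0, size=n_spheres)      # pore radii, angstrom

keep = np.ones(len(atoms), dtype=bool)
for c, r in zip(centres, radii):
    keep &= np.linalg.norm(atoms - c, axis=1) > r

porous = atoms[keep]
porosity = 1.0 - len(porous) / len(atoms)
print(f"atoms kept: {len(porous)}, porosity ~ {porosity:.2f}")

# Fullerene "reinforcement" placeholder: tag the larger voids so a later step
# could insert C60/C180 coordinates at their centres.
fullerene_sites = centres[radii > 10.0]
print(f"cells tagged for fullerene insertion: {len(fullerene_sites)}")
```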
Procedia PDF Downloads 239

41 The Usage of Bridge Estimator for HEGY Seasonal Unit Root Tests
Authors: Huseyin Guler, Cigdem Kosar
Abstract:
The aim of this study is to propose a Bridge estimator for seasonal unit root tests. Seasonality is an important factor in many economic time series. Some variables contain seasonal patterns, and forecasts that ignore important seasonal patterns have a high variance. Therefore, it is very important to eliminate seasonality from seasonal macroeconomic data. There are several methods for eliminating the impact of seasonality in time series. One of them is filtering the data. However, this method leads to undesired consequences in unit root tests, especially if the data are generated by a stochastic seasonal process. Another method is the use of seasonal dummy variables. Some seasonal patterns result from stationary seasonal processes, which can be modelled with seasonal dummies, but if the seasonal pattern varies and changes over time, so that the seasonal process is non-stationary, deterministic seasonal dummies are inadequate to capture it. It is not suitable to use seasonal dummies to model such seasonally non-stationary series; instead, it is necessary to take seasonal differences if there are seasonal unit roots in the series. Different methods have been proposed in the literature to test for seasonal unit roots, such as the Dickey, Hasza, Fuller (DHF) and Hylleberg, Engle, Granger, Yoo (HEGY) tests. The HEGY test can also be used to test for seasonal unit roots at different frequencies (monthly, quarterly, and semiannual). Another issue in unit root tests is lag selection. Lagged dependent variables are added to the model in seasonal unit root tests, as in ordinary unit root tests, to overcome the autocorrelation problem. In this case, it is necessary to choose the lag length and determine any deterministic components (i.e., a constant and trend) first, and then use the proper model to test for seasonal unit roots. However, this two-step procedure might lead to size distortions and a lack of power in seasonal unit root tests. Recent studies show that Bridge estimators are good at selecting the optimal lag length while differentiating non-stationary from stationary models for non-seasonal data. The advantage of this estimator is the elimination of the two-step nature of conventional unit root tests, which leads to a gain in size and power. In this paper, the Bridge estimator is proposed to test seasonal unit roots in a HEGY model. A Monte Carlo experiment is conducted to determine the efficiency of this approach and to compare its size and power with those of the HEGY test. Since the Bridge estimator performs well in model selection, our approach may lead to some gain in size and power over the HEGY test.
Keywords: bridge estimators, HEGY test, model selection, seasonal unit root
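For orientation, the sketch below sets up the standard quarterly HEGY test regression with OLS on a simulated seasonal random walk; the Bridge-penalised lag selection that is the actual contribution of the paper is not implemented here, and the variable transformations and lag ordering follow the textbook HEGY formulation rather than the authors' specification:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def hegy_regression(y, aug_lags=4):
    """Baseline quarterly HEGY test regression (OLS version). The paper's
    Bridge-penalised estimator would replace the augmentation-lag selection."""
    y = pd.Series(np.asarray(y, dtype=float))
    y1 = y + y.shift(1) + y.shift(2) + y.shift(3)          # zero frequency
    y2 = -(y - y.shift(1) + y.shift(2) - y.shift(3))       # semiannual frequency
    y3 = -(y - y.shift(2))                                 # annual frequency
    y4 = y - y.shift(4)                                    # dependent variable

    X = pd.DataFrame({
        "pi1": y1.shift(1), "pi2": y2.shift(1),
        "pi3": y3.shift(2), "pi4": y3.shift(1),
    })
    for j in range(1, aug_lags + 1):                       # augmentation lags
        X[f"dy4_lag{j}"] = y4.shift(j)
    X = sm.add_constant(X)

    data = pd.concat([y4.rename("y4"), X], axis=1).dropna()
    return sm.OLS(data["y4"], data.drop(columns="y4")).fit()

# Illustrative use on a simulated quarterly series with a seasonal unit root
rng = np.random.default_rng(0)
e = rng.normal(size=200)
y = np.zeros(200)
for t in range(4, 200):
    y[t] = y[t - 4] + e[t]
res = hegy_regression(y)
print(res.tvalues[["pi1", "pi2", "pi3", "pi4"]])   # inputs to the HEGY t/F tests
```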
Procedia PDF Downloads 342

40 Further Development of Offshore Floating Solar and Its Design Requirements
Authors: Madjid Karimirad
Abstract:
Floating solar was not very well-known in the renewable energy field a decade ago; however, there has been tremendous growth internationally with a Compound Annual Growth Rate (CAGR) of nearly 30% in recent years. To reach the goal of global net-zero emission by 2050, all renewable energy sources including solar should be used. Considering that 40% of the world’s population lives within 100 kilometres of the coasts, floating solar in coastal waters is an obvious energy solution. However, this requires more robust floating solar solutions. This paper tries to enlighten the fundamental requirements in the design of floating solar for offshore installations from the hydrodynamic and offshore engineering points of view. In this regard, a closer look at dynamic characteristics, stochastic behaviour and nonlinear phenomena appearing in this kind of structure is a major focus of the current article. Floating solar structures are alternative and very attractive green energy installations with (a) Less strain on land usage for densely populated areas; (b) Natural cooling effect with efficiency gain; and (c) Increased irradiance from the reflectivity of water. Also, floating solar in conjunction with the hydroelectric plants can optimise energy efficiency and improve system reliability. The co-locating of floating solar units with other types such as offshore wind, wave energy, tidal turbines as well as aquaculture (fish farming) can result in better ocean space usage and increase the synergies. Floating solar technology has seen considerable developments in installed capacities in the past decade. Development of design standards and codes of practice for floating solar technologies deployed on both inland water-bodies and offshore is required to ensure robust and reliable systems that do not have detrimental impacts on the hosting water body. Floating solar will account for 17% of all PV energy produced worldwide by 2030. To enhance the development, further research in this area is needed. This paper aims to discuss the main critical design aspects in light of the load and load effects that the floating solar platforms are subjected to. The key considerations in hydrodynamics, aerodynamics and simultaneous effects from the wind and wave load actions will be discussed. The link of dynamic nonlinear loading, limit states and design space considering the environmental conditions is set to enable a better understanding of the design requirements of fast-evolving floating solar technology.Keywords: floating solar, offshore renewable energy, wind and wave loading, design space
Procedia PDF Downloads 80

39 A Study on Inverse Determination of Impact Force on a Honeycomb Composite Panel
Authors: Hamed Kalhori, Lin Ye
Abstract:
In this study, an inverse method was developed to reconstruct the magnitude and duration of impact forces exerted to a rectangular carbon fibre-epoxy composite honeycomb sandwich panel. The dynamic signals captured by Piezoelectric (PZT) sensors installed on the panel remotely from the impact locations were utilized to reconstruct the impact force generated by an instrumented hammer through an extended deconvolution approach. Two discretized forms of convolution integral are considered; the traditional one with an explicit transfer function and the modified one without an explicit transfer function. Deconvolution, usually applied to reconstruct the time history (e.g. magnitude) of a stochastic force at a defined location, is extended to identify both the location and magnitude of the impact force among a number of potential impact locations. It is assumed that a number of impact forces are simultaneously exerted to all potential locations, but the magnitude of all forces except one is zero, implicating that the impact occurs only at one location. The extended deconvolution is then applied to determine the magnitude as well as location (among the potential ones), incorporating the linear superposition of responses resulted from impact at each potential location. The problem can be categorized into under-determined (the number of sensors is less than that of impact locations), even-determined (the number of sensors equals that of impact locations), or over-determined (the number of sensors is greater than that of impact locations) cases. For an under-determined case, it comprises three potential impact locations and one PZT sensor for the rectangular carbon fibre-epoxy composite honeycomb sandwich panel. Assessments are conducted to evaluate the factors affecting the precision of the reconstructed force. Truncated Singular Value Decomposition (TSVD) and the Tikhonov regularization are independently chosen to regularize the problem to find the most suitable method for this system. The selection of optimal value of the regularization parameter is investigated through L-curve and Generalized Cross Validation (GCV) methods. In addition, the effect of different width of signal windows on the reconstructed force is examined. It is observed that the impact force generated by the instrumented impact hammer is sensitive to the impact locations of the structure, having a shape from a simple half-sine to a complicated one. The accuracy of the reconstructed impact force is evaluated using the correlation co-efficient between the reconstructed force and the actual one. Based on this criterion, it is concluded that the forces reconstructed by using the extended deconvolution without an explicit transfer function together with Tikhonov regularization match well with the actual forces in terms of magnitude and duration.Keywords: honeycomb composite panel, deconvolution, impact localization, force reconstruction
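A compact sketch of the deconvolution idea with zeroth-order Tikhonov regularisation on synthetic data; the regularisation parameter is fixed here for brevity, whereas the study selects it via the L-curve or GCV, and the impulse response is a made-up decaying oscillator rather than a measured PZT transfer function:

```python
import numpy as np

def tikhonov_deconvolve(response, impulse_response, lam):
    """Reconstruct a force history f from a measured response r = G f, where G
    is the convolution (transfer) matrix built from the impulse response.
    Zeroth-order Tikhonov regularisation stabilises the inversion."""
    n = len(response)
    h = np.asarray(impulse_response, dtype=float)
    G = np.zeros((n, n))
    for i in range(n):                      # lower-triangular Toeplitz matrix
        m = min(i + 1, len(h))
        G[i, i - m + 1:i + 1] = h[:m][::-1]
    # Regularised normal equations: (G^T G + lam^2 I) f = G^T r
    return np.linalg.solve(G.T @ G + lam**2 * np.eye(n), G.T @ response)

# Synthetic check: a half-sine impact passed through a decaying oscillator
dt, n = 1e-4, 400
t = np.arange(n) * dt
f_true = np.where(t < 2e-3, np.sin(np.pi * t / 2e-3), 0.0)
h = np.exp(-400 * t) * np.sin(2 * np.pi * 800 * t)       # assumed impulse response
r = np.convolve(f_true, h)[:n]
r += 1e-3 * r.std() * np.random.default_rng(2).normal(size=n)   # measurement noise

f_rec = tikhonov_deconvolve(r, h, lam=0.1)               # lam: L-curve/GCV in practice
print("correlation coefficient:", round(np.corrcoef(f_true, f_rec)[0, 1], 3))
```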
Procedia PDF Downloads 536

38 User Experience in Relation to Eye Tracking Behaviour in VR Gallery
Authors: Veslava Osinska, Adam Szalach, Dominik Piotrowski
Abstract:
Contemporary VR technologies allow users to explore virtual 3D spaces where they can work, socialize, learn, and play. Users' interaction with the GUI and the pictures displayed involves perceptual and cognitive processes that can be monitored with neuroadaptive technologies. These modalities provide valuable information about the users' intentions, situational interpretations, and emotional states, which can be used to adapt an application or interface accordingly. Virtual galleries outfitted with specialized assets have been designed using the Unity engine within the BITSCOPE project, in the frame of the CHIST-ERA IV program. Users' interaction with gallery objects raises questions about their visual interest in artworks and styles. Moreover, attention, curiosity, and other emotional states can be monitored and analyzed. Natural gaze behavior data and eye position were recorded by the built-in eye-tracking module of the HTC Vive VR headset. Eye-gaze results are grouped according to various user behavior schemes, and the corresponding perceptual-cognitive styles are recognized. In parallel, usability tests and surveys were adopted to identify the basic features of a user-centered interface for virtual environments across most of the project timeline. A total of sixty participants were selected from distinct university faculties and secondary schools. Users' prior knowledge about art was evaluated during a pretest, and in this way the level of art sensitivity was described. Data were collected over two months. Each participant gave written informed consent before participation. In the data analysis, nonlinear algorithms such as multidimensional scaling and the more recent t-distributed Stochastic Neighbor Embedding (t-SNE) were used to reduce the high-dimensional data to a relatively low-dimensional subspace. In this way, digital art objects can be classified by multimodal time characteristics of eye-tracking measures, revealing signatures that describe selected artworks. The current research establishes the optimal place on the aesthetic-utility scale, because contemporary interfaces of most applications need to be designed in both functional and aesthetic ways. The study also concerns an analysis of visual experience for subsamples of visitors differentiated, e.g., in terms of frequency of museum visits and cultural interests. Eye-tracking data may also show how to better allocate artefacts and paintings or increase their visibility where possible.
Keywords: eye tracking, VR, UX, visual art, virtual gallery, visual communication
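A brief sketch of the dimensionality-reduction step using t-SNE, with entirely hypothetical gaze features standing in for the HTC Vive eye-tracking measures; the feature names and the perplexity setting are assumptions, not values from the study:

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)

# Hypothetical per-participant, per-artwork gaze features extracted from the
# eye-tracking stream (names and distributions are illustrative only).
n_records = 300
features = np.column_stack([
    rng.poisson(12, n_records),          # fixation count
    rng.gamma(2.0, 150.0, n_records),    # mean fixation duration [ms]
    rng.gamma(3.0, 800.0, n_records),    # dwell time on artwork [ms]
    rng.normal(5.0, 1.5, n_records),     # mean saccade amplitude [deg]
    rng.normal(0.3, 0.1, n_records),     # pupil-diameter variability
    rng.poisson(3, n_records),           # number of revisits
])

X = StandardScaler().fit_transform(features)

# Non-linear reduction to 2-D for grouping viewing-behaviour schemes;
# perplexity is a tuning choice, not a value reported in the study.
embedding = TSNE(n_components=2, perplexity=30, init="pca",
                 random_state=0).fit_transform(X)
print(embedding.shape)    # (300, 2) coordinates ready for clustering/plotting
```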
Procedia PDF Downloads 45

37 Economic Efficiency of Cassava Production in Nimba County, Liberia: An Output-Oriented Approach
Authors: Kollie B. Dogba, Willis Oluoch-Kosura, Chepchumba Chumo
Abstract:
In Liberia, many of the agricultural households cultivate cassava for either sustenance purposes, or to generate farm income. Many of the concentrated cassava farmers reside in Nimba, a north-eastern County that borders two other economies: the Republics of Cote D’Ivoire and Guinea. With a high demand for cassava output and products in emerging Asian markets coupled with an objective of the Liberia agriculture policies to increase the competitiveness of valued agriculture crops; there is a need to examine the level of resource-use efficiency for many agriculture crops. However, there is a scarcity of information on the efficiency of many agriculture crops, including cassava. Hence the study applying an output-oriented method seeks to assess the economic efficiency of cassava farmers in Nimba County, Liberia. A multi-stage sampling technique was employed to generate a sample for the study. From 216 cassava farmers, data related to on-farm attributes, socio-economic and institutional factors were collected. The stochastic frontier models, using the Translog functional forms, of production and revenue, were used to determine the level of revenue efficiency and its determinants. The result showed that most of the cassava farmers are male (60%). Many of the farmers are either married, engaged or living together with a spouse (83%), with a mean household size of nine persons. Farmland is prevalently obtained by inheritance (95%), average farm size is 1.34 hectares, and most cassava farmers did not access agriculture credits (76%) and extension services (91%). The mean cassava output per hectare is 1,506.02 kg, which estimates average revenue of L$23,551.16 (Liberian dollars). Empirical results showed that the revenue efficiency of cassava farmers varies from 0.1% to 73.5%; with the mean revenue efficiency of 12.9%. This indicates that on average, there is a vast potential of 87.1% to increase the economic efficiency of cassava farmers in Nimba by improving technical and allocative efficiencies. For the significant determinants of revenue efficiency, age and group membership had negative effects on revenue efficiency of cassava production; while farming experience, access to extension, formal education, and average wage rate have positive effects. The study recommends the setting-up and incentivizing of farmer field schools for cassava farmers to primarily share their farming experiences with others and to learn robust cultivation techniques of sustainable agriculture. Also, farm managers and farmers should consider a fix wage rate in labor contracts for all stages of cassava farming.Keywords: economic efficiency, frontier production and revenue functions, Nimba County, Liberia, output-oriented approach, revenue efficiency, sustainable agriculture
Procedia PDF Downloads 128

36 Monitoring Potential Temblor Localities as a Supplemental Risk Control System
Authors: Mikhail Zimin, Svetlana Zimina, Maxim Zimin
Abstract:
Without question, the basic method of prevention of human and material losses is the provision for adequate strength of constructions. At the same time, seismic load has a stochastic character. So, at all times, there is little danger of earthquake forces exceeding the selected design load. This risk is very low, but the consequences of such events may be extremely serious. Very dangerous are also occasional mistakes in seismic zoning, soil conditions changing before temblors, and failure to take into account hazardous natural phenomena caused by earthquakes. Besides, it is known that temblors detrimentally affect the environmental situation in regions where they occur, resulting in panic and worsening various disease courses. It may lead to mistakes of personnel of hazardous production facilities like the production and distribution of gas and oil, which may provoke severe accidents. In addition, gas and oil pipelines often have long mileage and cross many perilous zones by contrast with buildings. This situation increases the risk of heavy accidents. In such cases, complex monitoring of potential earthquake localities would be relevant. Even though the number of successful real-time forecasts of earthquakes is not great, it is well in excess, such as may be under random guessing. Experimental performed time-lapse study and analysis consist of searching seismic, biological, meteorological, and light earthquake precursors, processing such data with the help of fuzzy sets, collecting weather information, utilizing a database of terrain, and computing risk of slope processes under the temblor in a given setting. Works were done in a real-time environment and broadly acceptable results took place. Observations from already in-place seismic recording systems are used. Furthermore, a look back study of precursors of known earthquakes is done. Situations before Ashkhabad, Tashkent, and Haicheng seismic events are analyzed. Fairish findings are obtained. Results of earthquake forecasts can be used for predicting dangerous natural phenomena caused by temblors such as avalanches and mudslides. They may also be utilized for prophylaxis of some diseases and their complications. Relevant software is worked out too. It should be emphasized that such control does not require serious financial expenses and can be performed by a small group of professionals. Thus, complex monitoring of potential earthquake localities, including short-term earthquake forecasts and analysis of possible hazardous consequences of temblors, may further the safety of pipeline facilities.Keywords: risk, earthquake, monitoring, forecast, precursor
Procedia PDF Downloads 24

35 Predicting Polyethylene Processing Properties Based on Reaction Conditions via a Coupled Kinetic, Stochastic and Rheological Modelling Approach
Authors: Kristina Pflug, Markus Busch
Abstract:
Being able to predict polymer properties and processing behavior based on the applied operating reaction conditions in one of the key challenges in modern polymer reaction engineering. Especially, for cost-intensive processes such as the high-pressure polymerization of low-density polyethylene (LDPE) with high safety-requirements, the need for simulation-based process optimization and product design is high. A multi-scale modelling approach was set-up and validated via a series of high-pressure mini-plant autoclave reactor experiments. The approach starts with the numerical modelling of the complex reaction network of the LDPE polymerization taking into consideration the actual reaction conditions. While this gives average product properties, the complex polymeric microstructure including random short- and long-chain branching is calculated via a hybrid Monte Carlo-approach. Finally, the processing behavior of LDPE -its melt flow behavior- is determined in dependence of the previously determined polymeric microstructure using the branch on branch algorithm for randomly branched polymer systems. All three steps of the multi-scale modelling approach can be independently validated against analytical data. A triple-detector GPC containing an IR, viscosimetry and multi-angle light scattering detector is applied. It serves to determine molecular weight distributions as well as chain-length dependent short- and long-chain branching frequencies. 13C-NMR measurements give average branching frequencies, and rheological measurements in shear and extension serve to characterize the polymeric flow behavior. The accordance of experimental and modelled results was found to be extraordinary, especially taking into consideration that the applied multi-scale modelling approach does not contain parameter fitting of the data. This validates the suggested approach and proves its universality at the same time. In the next step, the modelling approach can be applied to other reactor types, such as tubular reactors or industrial scale. Moreover, sensitivity analysis for systematically varying process conditions is easily feasible. The developed multi-scale modelling approach finally gives the opportunity to predict and design LDPE processing behavior simply based on process conditions such as feed streams and inlet temperatures and pressures.Keywords: low-density polyethylene, multi-scale modelling, polymer properties, reaction engineering, rheology
Procedia PDF Downloads 125

34 Forecasting Residential Water Consumption in Hamilton, New Zealand
Authors: Farnaz Farhangi
Abstract:
Many people in New Zealand believe that access to water is inexhaustible, a belief that comes from a history of virtually unrestricted access to it. For a region like Hamilton, one of New Zealand’s fastest-growing cities, it is crucial for policy makers to know about future water consumption and the implementation of rules and regulations such as universal water metering. Hamilton residents use water freely and have little idea of how much water they use. Hence, one of the proposed objectives of this research is forecasting water consumption using different methods. Residential water consumption time series exhibit seasonal and trend variations. Seasonality is the pattern caused by repeating events such as weather conditions in summer and winter, public holidays, etc. The problem with this seasonal fluctuation is that it dominates the other time series components and makes it difficult to determine other variations (such as the effect of educational campaigns, regulation, etc.) in the series. Apart from seasonality, a stochastic trend is also combined with seasonality and affects the forecasting results in different ways. According to the forecasting literature, preprocessing (de-trending and de-seasonalization) is essential to obtain better forecasting results, while other researchers argue that seasonally non-adjusted data should be used. Hence, I address the question: is preprocessing essential? A wide range of forecasting methods exists, each with different pros and cons. In this research, I apply double seasonal ARIMA and an Artificial Neural Network (ANN), considering diverse elements such as seasonality and calendar effects (public and school holidays), and combine their results to find the best predicted values. My hypothesis is examined by comparing the accuracy and robustness of the combined method (hybrid model) and the individual methods. In order to use ARIMA, the data should be stationary. ANN also has successful forecasting applications for seasonal and trend time series. Using a hybrid model is a way to improve the accuracy of the methods. Because water demand is dominated by different seasonalities, I combine different methods in order to assess their sensitivity to weather conditions, calendar effects, and other seasonal patterns. The advantage of this combination is the reduction of errors through averaging over the individual models. It is also useful when the accuracy of each forecasting model is uncertain, and it can ease the problem of model selection. Using daily residential water consumption data from January 2000 to July 2015 in Hamilton, I show how the predictions of the different methods vary. ANN produces more accurate forecasts than the other methods, and preprocessing is essential when seasonal time series are used. Using the hybrid model reduces average forecasting errors and increases performance.
Keywords: artificial neural network (ANN), double seasonal ARIMA, forecasting, hybrid model
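A simplified sketch of the hybrid idea, combining a seasonal ARIMA forecast with an ANN forecast by simple averaging; only a single weekly seasonality is modelled (the study uses a double-seasonal specification with calendar dummies), the data are synthetic, and the ANN uses one-step-ahead lag features for brevity:

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX
from sklearn.neural_network import MLPRegressor

# Hypothetical daily consumption series (litres/person/day); real input would be
# the 2000-2015 Hamilton records plus public/school-holiday dummies.
rng = np.random.default_rng(3)
n = 800
t = np.arange(n)
y = (250 + 40 * np.sin(2 * np.pi * t / 365.25)
     + 10 * np.sin(2 * np.pi * t / 7) + rng.normal(0, 8, n))
train, test = y[:-28], y[-28:]

# Model 1: seasonal ARIMA (only the weekly seasonality is shown here)
sarima = SARIMAX(train, order=(1, 0, 1), seasonal_order=(1, 0, 1, 7),
                 trend="c").fit(disp=False)
f_sarima = np.asarray(sarima.forecast(steps=28))

# Model 2: ANN on lagged values (1-, 7- and 365-day lags as inputs)
lags = [1, 7, 365]
idx = np.arange(365, n)
X_all = np.column_stack([y[idx - l] for l in lags])
y_all = y[idx]
ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000,
                   random_state=0).fit(X_all[:-28], y_all[:-28])
f_ann = ann.predict(X_all[-28:])          # one-step-ahead lags, for simplicity

# Hybrid: average the two forecasts to reduce the error of either single model
f_hybrid = (f_sarima + f_ann) / 2.0
for name, f in [("SARIMA", f_sarima), ("ANN", f_ann), ("hybrid", f_hybrid)]:
    print(f"{name:>6} MAE: {np.mean(np.abs(f - test)):.2f}")
```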
Procedia PDF Downloads 339

33 Measures of Reliability and Transportation Quality on an Urban Rail Transit Network in Case of Links’ Capacities Loss
Authors: Jie Liu, Jinqu Cheng, Qiyuan Peng, Yong Yin
Abstract:
Urban rail transit (URT) plays a significant role in dealing with traffic congestion and environmental problems in cities. However, equipment failure and obstruction of links often lead to URT links’ capacities loss in daily operation. It affects the reliability and transport service quality of URT network seriously. In order to measure the influence of links’ capacities loss on reliability and transport service quality of URT network, passengers are divided into three categories in case of links’ capacities loss. Passengers in category 1 are less affected by the loss of links’ capacities. Their travel is reliable since their travel quality is not significantly reduced. Passengers in category 2 are affected by the loss of links’ capacities heavily. Their travel is not reliable since their travel quality is reduced seriously. However, passengers in category 2 still can travel on URT. Passengers in category 3 can not travel on URT because their travel paths’ passenger flow exceeds capacities. Their travel is not reliable. Thus, the proportion of passengers in category 1 whose travel is reliable is defined as reliability indicator of URT network. The transport service quality of URT network is related to passengers’ travel time, passengers’ transfer times and whether seats are available to passengers. The generalized travel cost is a comprehensive reflection of travel time, transfer times and travel comfort. Therefore, passengers’ average generalized travel cost is used as transport service quality indicator of URT network. The impact of links’ capacities loss on transport service quality of URT network is measured with passengers’ relative average generalized travel cost with and without links’ capacities loss. The proportion of the passengers affected by links and betweenness of links are used to determine the important links in URT network. The stochastic user equilibrium distribution model based on the improved logit model is used to determine passengers’ categories and calculate passengers’ generalized travel cost in case of links’ capacities loss, which is solved with method of successive weighted averages algorithm. The reliability and transport service quality indicators of URT network are calculated with the solution result. Taking Wuhan Metro as a case, the reliability and transport service quality of Wuhan metro network is measured with indicators and method proposed in this paper. The result shows that using the proportion of the passengers affected by links can identify important links effectively which have great influence on reliability and transport service quality of URT network; The important links are mostly connected to transfer stations and the passenger flow of important links is high; With the increase of number of failure links and the proportion of capacity loss, the reliability of the network keeps decreasing, the proportion of passengers in category 3 keeps increasing and the proportion of passengers in category 2 increases at first and then decreases; When the number of failure links and the proportion of capacity loss increased to a certain level, the decline of transport service quality is weakened.Keywords: urban rail transit network, reliability, transport service quality, links’ capacities loss, important links
Procedia PDF Downloads 128

32 Risk Assessment of Flood Defences by Utilising Condition Grade Based Probabilistic Approach
Authors: M. Bahari Mehrabani, Hua-Peng Chen
Abstract:
Management and maintenance of coastal defence structures during the expected life cycle have become a real challenge for decision makers and engineers. Accurate evaluation of the current condition and future performance of flood defence structures is essential for effective practical maintenance strategies on the basis of available field inspection data. Moreover, as coastal defence structures age, it becomes more challenging to implement maintenance and management plans to avoid structural failure. Therefore, condition inspection data are essential for assessing damage and forecasting deterioration of ageing flood defence structures in order to keep the structures in an acceptable condition. The inspection data for flood defence structures are often collected using discrete visual condition rating schemes. In order to evaluate future condition of the structure, a probabilistic deterioration model needs to be utilised. However, existing deterioration models may not provide a reliable prediction of performance deterioration for a long period due to uncertainties. To tackle the limitation, a time-dependent condition-based model associated with a transition probability needs to be developed on the basis of condition grade scheme for flood defences. This paper presents a probabilistic method for predicting future performance deterioration of coastal flood defence structures based on condition grading inspection data and deterioration curves estimated by expert judgement. In condition-based deterioration modelling, the main task is to estimate transition probability matrices. The deterioration process of the structure related to the transition states is modelled according to Markov chain process, and a reliability-based approach is used to estimate the probability of structural failure. Visual inspection data according to the United Kingdom Condition Assessment Manual are used to obtain the initial condition grade curve of the coastal flood defences. The initial curves then modified in order to develop transition probabilities through non-linear regression based optimisation algorithms. The Monte Carlo simulations are then used to evaluate the future performance of the structure on the basis of the estimated transition probabilities. Finally, a case study is given to demonstrate the applicability of the proposed method under no-maintenance and medium-maintenance scenarios. Results show that the proposed method can provide an effective predictive model for various situations in terms of available condition grading data. The proposed model also provides useful information on time-dependent probability of failure in coastal flood defences.Keywords: condition grading, flood defense, performance assessment, stochastic deterioration modelling
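A small sketch of the Markov-chain deterioration and Monte Carlo failure-probability steps; the five-grade transition matrix below is illustrative and not the matrix fitted from the UK condition-assessment data in the paper:

```python
import numpy as np

# Condition grades 1 (good) ... 5 (very poor/failed); rows give yearly
# transition probabilities between grades (illustrative values only).
P = np.array([
    [0.90, 0.08, 0.02, 0.00, 0.00],
    [0.00, 0.88, 0.09, 0.03, 0.00],
    [0.00, 0.00, 0.85, 0.11, 0.04],
    [0.00, 0.00, 0.00, 0.82, 0.18],
    [0.00, 0.00, 0.00, 0.00, 1.00],   # grade 5 treated as absorbing (failure)
])

def state_distribution(p0, P, years):
    """Propagate the condition-grade probability vector over `years` steps."""
    p = np.asarray(p0, dtype=float)
    for _ in range(years):
        p = p @ P
    return p

def monte_carlo_failure_prob(p0, P, years, n_sims=20_000, seed=1):
    """Estimate the probability of reaching grade 5 within `years` by simulation."""
    rng = np.random.default_rng(seed)
    states = rng.choice(5, size=n_sims, p=p0)
    failed = np.zeros(n_sims, dtype=bool)
    for _ in range(years):
        u = rng.random(n_sims)
        cum = np.cumsum(P[states], axis=1)
        states = (u[:, None] > cum).sum(axis=1)   # inverse-CDF sampling per asset
        failed |= states == 4
    return failed.mean()

p0 = [1.0, 0.0, 0.0, 0.0, 0.0]                    # new asset in grade 1
print("grade distribution after 30 years:", state_distribution(p0, P, 30).round(3))
print("P(failure within 30 years):", monte_carlo_failure_prob(p0, P, 30))
```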
Procedia PDF Downloads 235

31 Effects of Temperature and the Use of Bacteriocins on Cross-Contamination from Animal Source Food Processing: A Mathematical Model
Authors: Benjamin Castillo, Luis Pastenes, Fernando Cerdova
Abstract:
The contamination of food by microbial agents is a common problem in the industry, especially regarding the elaboration of animal source products. Incorrect manipulation of the machinery or on the raw materials can cause a decrease in production or an epidemiological outbreak due to intoxication. In order to improve food product quality, different methods have been used to reduce or, at least, to slow down the growth of the pathogens, especially deteriorated, infectious or toxigenic bacteria. These methods are usually carried out under low temperatures and short processing time (abiotic agents), along with the application of antibacterial substances, such as bacteriocins (biotic agents). This, in a controlled and efficient way that fulfills the purpose of bacterial control without damaging the final product. Therefore, the objective of the present study is to design a secondary mathematical model that allows the prediction of both the biotic and abiotic factor impact associated with animal source food processing. In order to accomplish this objective, the authors propose a three-dimensional differential equation model, whose components are: bacterial growth, release, production and artificial incorporation of bacteriocins and changes in pH levels of the medium. These three dimensions are constantly being influenced by the temperature of the medium. Secondly, this model adapts to an idealized situation of cross-contamination animal source food processing, with the study agents being both the animal product and the contact surface. Thirdly, the stochastic simulations and the parametric sensibility analysis are compared with referential data. The main results obtained from the analysis and simulations of the mathematical model were to discover that, although bacterial growth can be stopped in lower temperatures, even lower ones are needed to eradicate it. However, this can be not only expensive, but counterproductive as well in terms of the quality of the raw materials and, on the other hand, higher temperatures accelerate bacterial growth. In other aspects, the use and efficiency of bacteriocins are an effective alternative in the short and medium terms. Moreover, an indicator of bacterial growth is a low-level pH, since lots of deteriorating bacteria are lactic acids. Lastly, the processing times are a secondary agent of concern when the rest of the aforementioned agents are under control. Our main conclusion is that when acclimating a mathematical model within the context of the industrial process, it can generate new tools that predict bacterial contamination, the impact of bacterial inhibition, and processing method times. In addition, the mathematical modeling proposed logistic input of broad application, which can be replicated on non-meat food products, other pathogens or even on contamination by crossed contact of allergen foods.Keywords: bacteriocins, cross-contamination, mathematical model, temperature
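A toy version of a three-state model of this kind, integrated with scipy; the functional forms, parameter values, and temperature dependence below are illustrative assumptions only, not the equations developed in the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

def model(t, y, T):
    """Illustrative three-state cross-contamination model:
    N  - bacterial density on the product/contact surface [CFU/cm^2]
    B  - bacteriocin concentration (released + artificially added) [AU/ml]
    pH - medium pH, pulled down by lactic-acid-producing bacteria
    T  - processing temperature [deg C], treated as a constant parameter."""
    N, B, pH = y
    mu_max = 0.8 * np.exp(-((T - 35.0) / 12.0) ** 2)   # growth peaks near 35 C
    K = 1e7                                            # carrying capacity
    dN = mu_max * N * (1.0 - N / K) - 4e-3 * B * N     # growth minus inactivation
    dB = 1e-6 * N - 0.05 * B                           # release minus decay
    dpH = -2e-8 * N * (pH - 4.0)                       # acidification, floor ~pH 4
    return [dN, dB, dpH]

y0 = [1e3, 0.5, 6.5]                 # initial contamination, bacteriocin dose, pH
for T in (4.0, 15.0, 30.0):          # chilled vs. temperature-abuse scenarios
    sol = solve_ivp(model, (0.0, 48.0), y0, args=(T,))
    print(f"T = {T:4.1f} C -> bacteria after 48 h: {sol.y[0, -1]:,.0f} CFU/cm^2")
```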
Procedia PDF Downloads 145

30 Assessing Children’s Probabilistic and Creative Thinking in a Non-formal Learning Context
Authors: Ana Breda, Catarina Cruz
Abstract:
Daily, we face unpredictable events, often attributed to chance, as there is no justification for such an occurrence. Chance, understood as a source of uncertainty, is present in several aspects of human life, such as weather forecasts, dice rolling, and lottery. Surprisingly, humans and some animals can quickly adjust their behavior to handle efficiently doubly stochastic processes (random events with two layers of randomness, like unpredictable weather affecting dice rolling). This adjustment ability suggests that the human brain has built-in mechanisms for perceiving, understanding, and responding to simple probabilities. It also explains why current trends in mathematics education include probability concepts in official curriculum programs, starting from the third year of primary education onwards. In the first years of schooling, children learn to use a certain type of (specific) vocabulary, such as never, always, rarely, perhaps, likely, and unlikely, to help them to perceive and understand the probability of some events. These are keywords of crucial importance for their perception and understanding of probabilities. The development of the probabilistic concepts comes from facts and cause-effect sequences resulting from the subject's actions, as well as the notion of chance and intuitive estimates based on everyday experiences. As part of a junior summer school program, which took place at a Portuguese university, a non-formal learning experiment was carried out with 18 children in the 5th and 6th grades. This experience was designed to be implemented in a dynamic of a serious ice-breaking game, to assess their levels of probabilistic, critical, and creative thinking in understanding impossible, certain, equally probable, likely, and unlikely events, and also to gain insight into how the non-formal learning context influenced their achievements. The criteria used to evaluate probabilistic thinking included the creative ability to conceive events classified in the specified categories, the ability to properly justify the categorization, the ability to critically assess the events classified by other children, and the ability to make predictions based on a given probability. The data analysis employs a qualitative, descriptive, and interpretative-methods approach based on students' written productions, audio recordings, and researchers' field notes. This methodology allowed us to conclude that such an approach is an appropriate and helpful formative assessment tool. The promising results of this initial exploratory study require a future research study with children from these levels of education, from different regions, attending public or private schools, to validate and expand our findings.Keywords: critical and creative thinking, non-formal mathematics learning, probabilistic thinking, serious game
Procedia PDF Downloads 28

29 Modeling and Analysis Of Occupant Behavior On Heating And Air Conditioning Systems In A Higher Education And Vocational Training Building In A Mediterranean Climate
Authors: Abderrahmane Soufi
Abstract:
The building sector is the largest consumer of energy in France, accounting for 44% of French consumption. To reduce energy consumption and improve energy efficiency, France implemented an energy transition law targeting 40% energy savings by 2030 in the tertiary building sector. Building simulation tools are used to predict the energy performance of buildings but the reliability of these tools is hampered by discrepancies between the real and simulated energy performance of a building. This performance gap lies in the simplified assumptions of certain factors, such as the behavior of occupants on air conditioning and heating, which is considered deterministic when setting a fixed operating schedule and a fixed interior comfort temperature. However, the behavior of occupants on air conditioning and heating is stochastic, diverse, and complex because it can be affected by many factors. Probabilistic models are an alternative to deterministic models. These models are usually derived from statistical data and express occupant behavior by assuming a probabilistic relationship to one or more variables. In the literature, logistic regression has been used to model the behavior of occupants with regard to heating and air conditioning systems by considering univariate logistic models in residential buildings; however, few studies have developed multivariate models for higher education and vocational training buildings in a Mediterranean climate. Therefore, in this study, occupant behavior on heating and air conditioning systems was modeled using logistic regression. Occupant behavior related to the turn-on heating and air conditioning systems was studied through experimental measurements collected over a period of one year (June 2023–June 2024) in three classrooms occupied by several groups of students in engineering schools and professional training. Instrumentation was provided to collect indoor temperature and indoor relative humidity in 10-min intervals. Furthermore, the state of the heating/air conditioning system (off or on) and the set point were determined. The outdoor air temperature, relative humidity, and wind speed were collected as weather data. The number of occupants, age, and sex were also considered. Logistic regression was used for modeling an occupant turning on the heating and air conditioning systems. The results yielded a proposed model that can be used in building simulation tools to predict the energy performance of teaching buildings. Based on the first months (summer and early autumn) of the investigations, the results illustrate that the occupant behavior of the air conditioning systems is affected by the indoor relative humidity and temperature in June, July, and August and by the indoor relative humidity, temperature, and number of occupants in September and October. Occupant behavior was analyzed monthly, and univariate and multivariate models were developed.Keywords: occupant behavior, logistic regression, behavior model, mediterranean climate, air conditioning, heating
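A minimal sketch of the logistic-regression step on synthetic 10-minute records; the predictors follow those named in the abstract (indoor temperature, indoor relative humidity, number of occupants), but the data and coefficients are simulated, not the measured classroom data:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(11)
n = 5000
df = pd.DataFrame({
    "indoor_temp": rng.normal(27.0, 3.0, n),      # deg C
    "indoor_rh": rng.normal(55.0, 10.0, n),       # %
    "occupants": rng.integers(0, 30, n),
})
# Synthetic "turn on AC" label with an assumed logistic dependence
logit = -20.0 + 0.6 * df["indoor_temp"] + 0.05 * df["indoor_rh"] + 0.05 * df["occupants"]
df["ac_on"] = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X_train, X_test, y_train, y_test = train_test_split(
    df[["indoor_temp", "indoor_rh", "occupants"]], df["ac_on"],
    test_size=0.3, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
p_on = model.predict_proba(X_test)[:, 1]           # P(turn on AC | conditions)
print("coefficients:", dict(zip(X_train.columns, model.coef_[0].round(3))))
print("AUC:", round(roc_auc_score(y_test, p_on), 3))
```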
Procedia PDF Downloads 62

28 Accounting for Downtime Effects in Resilience-Based Highway Network Restoration Scheduling
Authors: Zhenyu Zhang, Hsi-Hsien Wei
Abstract:
Highway networks play a vital role in post-disaster recovery for disaster-damaged areas. Damaged bridges in such networks can disrupt the recovery activities by impeding the transportation of people, cargo, and reconstruction resources. Therefore, rapid restoration of damaged bridges is of paramount importance to long-term disaster recovery. In the post-disaster recovery phase, the key to restoration scheduling for a highway network is prioritization of bridge-repair tasks. Resilience is widely used as a measure of the ability to recover with which a network can return to its pre-disaster level of functionality. In practice, highways will be temporarily blocked during the downtime of bridge restoration, leading to the decrease of highway-network functionality. The failure to take downtime effects into account can lead to overestimation of network resilience. Additionally, post-disaster recovery of highway networks is generally divided into emergency bridge repair (EBR) in the response phase and long-term bridge repair (LBR) in the recovery phase, and both of EBR and LBR are different in terms of restoration objectives, restoration duration, budget, etc. Distinguish these two phases are important to precisely quantify highway network resilience and generate suitable restoration schedules for highway networks in the recovery phase. To address the above issues, this study proposes a novel resilience quantification method for the optimization of long-term bridge repair schedules (LBRS) taking into account the impact of EBR activities and restoration downtime on a highway network’s functionality. A time-dependent integer program with recursive functions is formulated for optimally scheduling LBR activities. Moreover, since uncertainty always exists in the LBRS problem, this paper extends the optimization model from the deterministic case to the stochastic case. A hybrid genetic algorithm that integrates a heuristic approach into a traditional genetic algorithm to accelerate the evolution process is developed. The proposed methods are tested using data from the 2008 Wenchuan earthquake, based on a regional highway network in Sichuan, China, consisting of 168 highway bridges on 36 highways connecting 25 cities/towns. The results show that, in this case, neglecting the bridge restoration downtime can lead to approximately 15% overestimation of highway network resilience. Moreover, accounting for the impact of EBR on network functionality can help to generate a more specific and reasonable LBRS. The theoretical and practical values are as follows. First, the proposed network recovery curve contributes to comprehensive quantification of highway network resilience by accounting for the impact of both restoration downtime and EBR activities on the recovery curves. Moreover, this study can improve the highway network resilience from the organizational dimension by providing bridge managers with optimal LBR strategies.Keywords: disaster management, highway network, long-term bridge repair schedule, resilience, restoration downtime
Procedia PDF Downloads 151

27 Stochastic Nuisance Flood Risk for Coastal Areas
Authors: Eva L. Suarez, Daniel E. Meeroff, Yan Yong
Abstract:
The U.S. Federal Emergency Management Agency (FEMA) developed flood maps based on experts’ experience and estimates of the probability of flooding. Current flood-risk models evaluate flood risk with regional and subjective measures without impact from torrential rain and nuisance flooding at the neighborhood level. Nuisance flooding occurs in small areas in the community, where a few streets or blocks are routinely impacted. This type of flooding event occurs when torrential rainstorm combined with high tide and sea level rise temporarily exceeds a given threshold. In South Florida, this threshold is 1.7 ft above Mean Higher High Water (MHHW). The National Weather Service defines torrential rain as rain deposition at a rate greater than 0.3-inches per hour or three inches in a single day. Data from the Florida Climate Center, 1970 to 2020, shows 371 events with more than 3-inches of rain in a day in 612 months. The purpose of this research is to develop a data-driven method to determine comprehensive analytical damage-avoidance criteria that account for nuisance flood events at the single-family home level. The method developed uses the Failure Mode and Effect Analysis (FMEA) method from the American Society of Quality (ASQ) to estimate the Damage Avoidance (DA) preparation for a 1-day 100-year storm. The Consequence of Nuisance Flooding (CoNF) is estimated from community mitigation efforts to prevent nuisance flooding damage. The Probability of Nuisance Flooding (PoNF) is derived from the frequency and duration of torrential rainfall causing delays and community disruptions to daily transportation, human illnesses, and property damage. Urbanization and population changes are related to the U.S. Census Bureau's annual population estimates. Data collected by the United States Department of Agriculture (USDA) Natural Resources Conservation Service’s National Resources Inventory (NRI) and locally by the South Florida Water Management District (SFWMD) track the development and land use/land cover changes with time. The intent is to include temporal trends in population density growth and the impact on land development. Results from this investigation provide the risk of nuisance flooding as a function of CoNF and PoNF for coastal areas of South Florida. The data-based criterion provides awareness to local municipalities on their flood-risk assessment and gives insight into flood management actions and watershed development.Keywords: flood risk, nuisance flooding, urban flooding, FMEA
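A bare-bones sketch of the FMEA-style scoring described above, where risk is the product of the consequence (CoNF) and probability (PoNF) scores; the 1-10 scales and example scores are placeholders rather than the calibrated criteria developed in the study:

```python
from dataclasses import dataclass

@dataclass
class NuisanceFloodRisk:
    """FMEA-style scoring sketch: risk = consequence x probability."""
    consequence: int   # CoNF, 1 (strong community mitigation) .. 10 (none)
    probability: int   # PoNF, 1 (rare torrential rain + high tide) .. 10 (frequent)

    @property
    def risk_priority(self) -> int:
        return self.consequence * self.probability

# Example: single-family homes where the street floods when daily rain exceeds
# 3 inches and the tide is above 1.7 ft MHHW (hypothetical scores).
homes = {
    "elevated_home_with_drainage_upgrades": NuisanceFloodRisk(3, 6),
    "low_lying_home_no_mitigation": NuisanceFloodRisk(8, 7),
}
for name, score in homes.items():
    print(f"{name}: CoNF={score.consequence}, PoNF={score.probability}, "
          f"risk={score.risk_priority}")
```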
Procedia PDF Downloads 100

26 Calculation of Organ Dose for Adult and Pediatric Patients Undergoing Computed Tomography Examinations: A Software Comparison
Authors: Aya Al Masri, Naima Oubenali, Safoin Aktaou, Thibault Julien, Malorie Martin, Fouad Maaloul
Abstract:
Introduction: The increased number of performed 'Computed Tomography (CT)' examinations raise public concerns regarding associated stochastic risk to patients. In its Publication 102, the ‘International Commission on Radiological Protection (ICRP)’ emphasized the importance of managing patient dose, particularly from repeated or multiple examinations. We developed a Dose Archiving and Communication System that gives multiple dose indexes (organ dose, effective dose, and skin-dose mapping) for patients undergoing radiological imaging exams. The aim of this study is to compare the organ dose values given by our software for patients undergoing CT exams with those of another software named "VirtualDose". Materials and methods: Our software uses Monte Carlo simulations to calculate organ doses for patients undergoing computed tomography examinations. The general calculation principle consists to simulate: (1) the scanner machine with all its technical specifications and associated irradiation cases (kVp, field collimation, mAs, pitch ...) (2) detailed geometric and compositional information of dozens of well identified organs of computational hybrid phantoms that contain the necessary anatomical data. The mass as well as the elemental composition of the tissues and organs that constitute our phantoms correspond to the recommendations of the international organizations (namely the ICRP and the ICRU). Their body dimensions correspond to reference data developed in the United States. Simulated data was verified by clinical measurement. To perform the comparison, 270 adult patients and 150 pediatric patients were used, whose data corresponds to exams carried out in France hospital centers. The comparison dataset of adult patients includes adult males and females for three different scanner machines and three different acquisition protocols (Head, Chest, and Chest-Abdomen-Pelvis). The comparison sample of pediatric patients includes the exams of thirty patients for each of the following age groups: new born, 1-2 years, 3-7 years, 8-12 years, and 13-16 years. The comparison for pediatric patients were performed on the “Head” protocol. The percentage of the dose difference were calculated for organs receiving a significant dose according to the acquisition protocol (80% of the maximal dose). Results: Adult patients: for organs that are completely covered by the scan range, the maximum percentage of dose difference between the two software is 27 %. However, there are three organs situated at the edges of the scan range that show a slightly higher dose difference. Pediatric patients: the percentage of dose difference between the two software does not exceed 30%. These dose differences may be due to the use of two different generations of hybrid phantoms by the two software. Conclusion: This study shows that our software provides a reliable dosimetric information for patients undergoing Computed Tomography exams.Keywords: adult and pediatric patients, computed tomography, organ dose calculation, software comparison
Procedia PDF Downloads 165

25 The Data Quality Model for the IoT based Real-time Water Quality Monitoring Sensors
Authors: Rabbia Idrees, Ananda Maiti, Saurabh Garg, Muhammad Bilal Amin
Abstract:
IoT devices are the basic building blocks of an IoT network and generate an enormous volume of real-time, high-speed data to help organizations and companies take intelligent decisions. Integrating this enormous amount of data from multiple sources and transferring it to the appropriate client is fundamental to IoT development. Handling this huge quantity of devices along with the huge volume of data is very challenging. IoT devices are battery-powered and resource-constrained; to provide energy-efficient communication, they go to sleep or wake up periodically and aperiodically depending on the traffic load in order to reduce energy consumption. Sometimes these devices get disconnected due to battery depletion. If a node is not available in the network, the IoT network provides incomplete, missing, and inaccurate data. Moreover, many IoT applications, like vehicle tracking and patient tracking, require the IoT devices to be mobile. Due to this mobility, if the distance of the device from the sink node becomes greater than required, the connection is lost, and other devices join the network to replace the broken-down or departed devices. This makes IoT devices dynamic in nature, which brings uncertainty and unreliability into the IoT network and hence produces poor-quality data. Because of this dynamic nature, the actual reason for abnormal data is often unknown. If data are of poor quality, decisions are likely to be unsound. It is therefore highly important to process data and estimate data quality before using it in IoT applications. In the past, many researchers tried to estimate data quality and proposed several Machine Learning (ML), stochastic, and statistical methods to analyze stored data in the data-processing layer, without focusing on the challenges and issues that arise from the dynamic nature of IoT devices and how they impact data quality. A comprehensive review of the impact of the dynamic nature of IoT devices on data quality is carried out in this research, and a data quality model that can deal with this challenge and produce good-quality data is presented. This research presents the data quality model for sensors monitoring water quality. DBSCAN clustering and weather sensors are used to build the data quality model for the sensors monitoring water quality. An extensive study has been conducted on the relationship between the data of the weather sensors and the sensors monitoring the water quality of lakes and beaches. A detailed theoretical analysis is presented, describing the correlation between the independent data streams of the two sets of sensors. With the help of this analysis and DBSCAN, a data quality model is prepared. The model encompasses five dimensions of data quality: it detects and removes outliers, assesses completeness and patterns of missing values, and checks the accuracy of the data with the help of the clusters’ positions. Finally, a statistical analysis is performed on the clusters formed as a result of DBSCAN, and consistency is evaluated through the Coefficient of Variation (CoV).
Keywords: clustering, data quality, DBSCAN, Internet of Things (IoT)
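A short sketch of the DBSCAN-plus-CoV step on synthetic joint weather/water-quality records; eps, min_samples, and the feature set are assumptions, not the values tuned in the study:

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

# Hypothetical joint records from a weather sensor and a water-quality sensor
# at the same site: [air temp, humidity, water temp, turbidity].
rng = np.random.default_rng(5)
normal = np.column_stack([
    rng.normal(22, 2, 480), rng.normal(60, 5, 480),
    rng.normal(19, 1.5, 480), rng.normal(4, 0.8, 480),
])
# A few faulty or missing-node readings appear as outliers far from the pattern
faulty = np.column_stack([
    rng.normal(22, 2, 20), rng.normal(60, 5, 20),
    rng.normal(35, 3, 20), rng.normal(40, 5, 20),
])
X = np.vstack([normal, faulty])
Xs = StandardScaler().fit_transform(X)

labels = DBSCAN(eps=0.9, min_samples=10).fit_predict(Xs)
outliers = labels == -1
print(f"flagged as poor-quality/outlier: {outliers.sum()} of {len(X)} records")

# Consistency check per cluster via the coefficient of variation (CoV)
for k in sorted(set(labels) - {-1}):
    members = X[labels == k]
    cov = members.std(axis=0) / members.mean(axis=0)
    print(f"cluster {k}: size={len(members)}, CoV per variable={cov.round(3)}")
```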
Procedia PDF Downloads 141

24 Controlling Deforestation in the Densely Populated Region of Central Java Province, Banjarnegara District, Indonesia
Authors: Guntur Bagus Pamungkas
Abstract:
As a tropical country that is naturally rich in forest land, Indonesia has always been in the world's spotlight due to its significantly increasing rate of deforestation. On the one hand, its forests are a mainstay for maintaining the sustainability of the earth's ecosystem functions. On the other hand, they also cover various potential sources of the global economy and can therefore become the target of investors of different scales seeking to exploit them excessively. It is no wonder that disasters of various kinds keep emerging. In fact, deforestation does not only occur in the forest areas of Indonesia's main islands but also affects Java, the most densely populated island in the world. Java retains only about 9.8% of Indonesia's total forest land due to its long history of deforestation, especially in Central Java Province, the most densely populated area of Java. Not surprisingly, this province is among the areas with the highest frequency of related disasters, landslides in particular. One of the areas that frequently experiences them is Banjarnegara District, especially in its mountainous areas lying between 1000 and 3000 meters above sea level, where remnants of forest land can still easily be found. Among them there even remains a largely untouched tropical rain forest, whose area also covers part of the neighboring district of Pekalongan and which is considered one of the world's remaining little paradises. The district's landscape is indeed beautiful, especially in the Dieng area, a major tourist destination in Central Java Province after Borobudur Temple. However, landslide hazards threaten this district every year; a few decades ago, a tragic event even buried a settlement together with its inhabitants. This research aims to contribute to the concept of effective forest management by monitoring the remaining forest areas in this district. The research monitors deforestation rates using the Stochastic Cellular Automata-Markov Chain (SCA-MC) method, which provides a spatial simulation of land use and land cover changes (LULCC). This geospatial process uses the Landsat-8 OLI image product with Thermal Infra-Red Sensor (TIRS) Band 10 for 2020 and Landsat 5 TM with TIRS Band 6 for 2010. It is also integrated with physical and social geography issues using the QGIS 2.18.11 application with the MOLUSCE plugin, which serves to classify and calculate the area of land use and cover, especially forest areas, using the LULCC method to calculate the rate of forest reduction in Banjarnegara District over 2010-2020. Since the area's dependence on the use of forest land is quite high, concepts and preventive actions are needed, such as the rehabilitation and reforestation of critical lands through proper monitoring and targeted forest management to restore the ecosystem in the future.
Keywords: deforestation, populous area, LULCC method, proper control and effective forest management
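A compact sketch of the Markov-chain half of SCA-MC: transition probabilities are cross-tabulated from two classified land-cover maps and used to project class areas forward; the rasters below are synthetic stand-ins for the classified Landsat scenes, and the cellular-automata spatial allocation step is omitted:

```python
import numpy as np

# Land-cover codes: 0 = forest, 1 = agriculture/settlement, 2 = other.
# The two "rasters" stand in for the classified 2010 and 2020 maps.
rng = np.random.default_rng(9)
lc_2010 = rng.choice([0, 1, 2], size=(200, 200), p=[0.45, 0.40, 0.15])
lc_2020 = lc_2010.copy()
deforested = (lc_2010 == 0) & (rng.random((200, 200)) < 0.12)   # assumed 12% loss
lc_2020[deforested] = 1

# Markov-chain step: transition probabilities from the 2010->2020 cross-tabulation
n_classes = 3
counts = np.zeros((n_classes, n_classes))
for i in range(n_classes):
    for j in range(n_classes):
        counts[i, j] = np.sum((lc_2010 == i) & (lc_2020 == j))
P = counts / counts.sum(axis=1, keepdims=True)
print("2010->2020 transition matrix:\n", P.round(3))

# Project class shares one more decade ahead (2030); the CA step that decides
# *where* each change lands spatially is not shown here.
area_2020 = np.bincount(lc_2020.ravel(), minlength=n_classes) / lc_2020.size
area_2030 = area_2020 @ P
print("projected 2030 shares (forest, agri/settlement, other):", area_2030.round(3))
```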
Procedia PDF Downloads 13623 Stochastic Approach for Technical-Economic Viability Analysis of Electricity Generation Projects with Natural Gas Pressure Reduction Turbines
Authors: Roberto M. G. Velásquez, Jonas R. Gazoli, Nelson Ponce Jr, Valério L. Borges, Alessandro Sete, Fernanda M. C. Tomé, Julian D. Hunt, Heitor C. Lira, Cristiano L. de Souza, Fabio T. Bindemann, Wilmar Wounnsoscky
Abstract:
Nowadays, society is working toward reducing energy losses and greenhouse gas emissions, as well as seeking clean energy sources, as a result of the constant increase in energy demand and emissions. Energy is lost in the gas pressure reduction stations at the delivery points of natural gas distribution systems (city gates). Installing pressure reduction turbines (PRT) in parallel with the static reduction valves at the city gates enhances the energy efficiency of the system by recovering the enthalpy of the pressurized natural gas, so that the pressure-lowering process yields shaft work and generates electrical power. Currently, the Brazilian natural gas transportation network is 9,409 km long, and the system includes 16 national and 3 international natural gas processing plants and more than 143 delivery points to final consumers. The potential of installing PRTs in Brazil is thus 66 MW of power, which could avoid the emission of 235,800 tons of CO2 and generate 333 GWh of electricity per year. On the other hand, the economic viability analysis of such energy efficiency projects is commonly carried out based on estimates of the project's cash flow obtained from forecasts of several variables. Usually, the cash flow analysis is performed using representative values of these variables, yielding a deterministic set of financial indicators for the project. However, in most cases, these variables cannot be predicted with sufficient accuracy, so the risk associated with the calculated financial return must be considered to a greater or lesser degree. This paper presents an approach to the technical-economic viability analysis of PRT projects that explicitly considers the uncertainties associated with the input parameters of the financial model, such as the gas pressure at the delivery point, the amount of energy generated by the PRT, and the future price of energy, among others, using sensitivity analysis techniques, scenario analysis, and Monte Carlo methods. In the latter case, estimates of several financial risk indicators, as well as their empirical probability distributions, can be obtained, resulting in a methodology for the financial risk analysis of PRT projects. The results of this paper allow a more accurate assessment of the financial feasibility of potential PRT projects in Brazil. The methodology will be tested at the Cuiabá thermoelectric plant, located in the state of Mato Grosso, Brazil, and can be applied to study the potential in other countries.Keywords: pressure reduction turbine, natural gas pressure drop station, energy efficiency, electricity generation, monte carlo methods
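A minimal Monte Carlo sketch of the kind of viability analysis described above: uncertain inputs (generated energy, energy price, operating cost) are sampled and an empirical NPV distribution is produced. All distributions, costs, the discount rate, and the project lifetime are illustrative assumptions, not figures from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
n, years, capex, rate = 100_000, 15, 2.0e6, 0.10    # hypothetical project figures

energy = rng.normal(5.0, 0.8, n).clip(min=0)        # GWh/year generated by the PRT (assumed)
price = rng.lognormal(np.log(60), 0.25, n)          # USD/MWh future energy price (assumed)
opex = rng.normal(0.15e6, 0.02e6, n)                # USD/year operating cost (assumed)

annual_cashflow = energy * 1_000 * price - opex     # GWh -> MWh
discount = np.sum(1 / (1 + rate) ** np.arange(1, years + 1))
npv = -capex + annual_cashflow * discount           # empirical NPV distribution

print(f"P(NPV > 0) = {np.mean(npv > 0):.2%}")
print("NPV 5th/50th/95th percentiles [MUSD]:", np.round(np.percentile(npv, [5, 50, 95]) / 1e6, 2))
```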
Procedia PDF Downloads 11322 Fuzzy Availability Analysis of a Battery Production System
Authors: Merve Uzuner Sahin, Kumru D. Atalay, Berna Dengiz
Abstract:
In today's competitive market, there are many alternative products that can be used in a similar manner and for a similar purpose. Therefore, the utility of a product is an important issue for the preferability of the brand. This utility can be measured in terms of functionality, durability, and reliability, all of which are affected by the system capabilities. Reliability is an important system design criterion for manufacturers seeking high availability. Availability is the probability that a system (or a component) is operating properly and performing its function at a specific point in time or over a specific period of time. System availability provides valuable input for estimating the production rate so that the company can realize its production plan. When only the corrective maintenance downtime of the system is considered, the mean time between failures (MTBF) and the mean time to repair (MTTR) are used to obtain system availability. The MTBF and MTTR values are also important measures for reliability engineers and practitioners working on a system, since they allow system performance to be improved by adopting suitable maintenance strategies. Conventional availability analysis requires the failure and repair time probability distributions of each component in the system to be known. Generally, however, companies do not have statistics or quality control departments to store such a large amount of data, and real events or situations are described deterministically instead of using stochastic data for the complete description of real systems. Fuzzy set theory is an alternative that can be used to analyze the uncertainty and vagueness in real systems. The aim of this study is to present a novel approach to compute system availability by representing MTBF and MTTR as fuzzy numbers. Based on experience with the system, three different spreads of MTBF and MTTR (15%, 20%, and 25%) are chosen to obtain the lower and upper limits of the fuzzy numbers. To the best of our knowledge, the proposed method is the first application that uses fuzzy MTBF and fuzzy MTTR for fuzzy system availability estimation. The method is easy for practitioners working in industry to apply to any repairable production system, and it enables reliability engineers, managers, and practitioners to analyze system performance in a more consistent and logical manner based on fuzzy availability. The paper presents a real case study of a repairable multi-stage production line in a lead-acid battery production factory in Turkey, focusing on the wet-charging battery process, which has a higher production level than the other battery types. In this system, components can exist in only two states, working or failed, and it is assumed that when a component fails, it becomes as good as new after repair. Instead of classical methods, using fuzzy set theory and obtaining intervals for these measures is very useful for system managers and practitioners to analyze system qualifications and find better results for their working conditions. Thus, much more detailed information about the system characteristics is obtained.Keywords: availability analysis, battery production system, fuzzy sets, triangular fuzzy numbers (TFNs)
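A small sketch, with assumed nominal values, of the fuzzy availability idea described above: MTBF and MTTR are turned into triangular fuzzy numbers using the symmetric spreads (15%, 20%, 25%), and interval arithmetic yields a fuzzy availability A = MTBF / (MTBF + MTTR). The nominal MTBF and MTTR values are placeholders, not the plant's data.

```python
def tfn(nominal, spread):
    """Triangular fuzzy number (low, mode, high) built from a relative spread."""
    return (nominal * (1 - spread), nominal, nominal * (1 + spread))

def fuzzy_availability(mtbf, mttr):
    low = mtbf[0] / (mtbf[0] + mttr[2])      # pessimistic: low MTBF, high MTTR
    mode = mtbf[1] / (mtbf[1] + mttr[1])
    high = mtbf[2] / (mtbf[2] + mttr[0])     # optimistic: high MTBF, low MTTR
    return (low, mode, high)

for spread in (0.15, 0.20, 0.25):
    A = fuzzy_availability(tfn(120.0, spread), tfn(8.0, spread))  # hours, assumed values
    print(f"spread {spread:.0%}: availability = ({A[0]:.3f}, {A[1]:.3f}, {A[2]:.3f})")
```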
Procedia PDF Downloads 22521 University Building: Discussion about the Effect of Numerical Modelling Assumptions for Occupant Behavior
Authors: Fabrizio Ascione, Martina Borrelli, Rosa Francesca De Masi, Silvia Ruggiero, Giuseppe Peter Vanoli
Abstract:
The refurbishment of public buildings is one of the key factors of the energy efficiency policy of European States. Educational buildings account for the largest share of the oldest building stock, with interesting potential for demonstrating best practice with regard to high performance and low- and zero-carbon design, and for becoming exemplar cases within the community. In this context, this paper discusses the critical issue of dealing with the energy refurbishment of a university building in the heating-dominated climate of southern Italy. More in detail, the importance of using validated models is examined by proposing an analysis of the uncertainties due to modelling assumptions, mainly referring to the adoption of stochastic schedules for occupant behavior and equipment or lighting usage. Today, most commercial tools provide designers with a library of possible schedules with which thermal zones can be described. Very often, users do not take care to differentiate thermal zones or to modify and adapt the predefined profiles, and the design results are affected, positively or negatively, without any warning. Data such as occupancy schedules, internal loads, and the interaction between people and windows or plant systems represent some of the largest sources of variability in energy modelling and in the interpretation of calibration results. This is mainly due to the adoption of discrete, standardized, and conventional schedules, with important consequences for the prediction of energy consumption. The problem is difficult to examine and to solve. In this paper, a sensitivity analysis is presented to understand the order of magnitude of the error committed when the deterministic schedules used for occupancy, internal loads, and the lighting system are varied. This is a typical uncertainty for a case study such as the one presented here, where there is no regulation system for the HVAC plant and thus the occupants cannot interact with it. More in detail, starting from the adopted schedules, created from questionnaire responses and allowing a good calibration of the energy simulation model, several different scenarios are tested. Two types of analysis are presented: first, the reference building is compared with these scenarios in terms of the percentage difference in the projected total electric energy need and natural gas demand; then the different consumption entries are analyzed and, for the most interesting cases, the calibration indexes are also compared. Moreover, the same simulations are performed for the optimal refurbishment solution, and the resulting variation in the predicted energy saving and global cost reduction is highlighted. This parametric study underlines the effect of the modelling assumptions made when describing thermal zones on the evaluation of performance indexes.Keywords: energy simulation, modelling calibration, occupant behavior, university building
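To make the scenario comparison concrete, the toy Python sketch below recomputes an annual electric energy need under alternative occupancy/lighting schedule multipliers and reports the percentage difference from a calibrated reference. The simplified load model and the scenario multipliers are invented for illustration and are not taken from the study.

```python
hours_per_year = 8760
base_load_kw = 35.0                          # assumed baseline internal load of the building

# hypothetical scenarios: scaling factors applied to the calibrated reference schedules
scenarios = {"reference": 1.00, "standard_office_schedule": 1.12,
             "low_occupancy": 0.85, "extended_lighting": 1.08}

reference = base_load_kw * scenarios["reference"] * hours_per_year
for name, factor in scenarios.items():
    energy = base_load_kw * factor * hours_per_year
    diff = 100 * (energy - reference) / reference
    print(f"{name:>26s}: {energy / 1000:8.1f} MWh  ({diff:+.1f}% vs reference)")
```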
Procedia PDF Downloads 14120 Microgrid Design Under Optimal Control With Batch Reinforcement Learning
Authors: Valentin Père, Mathieu Milhé, Fabien Baillon, Jean-Louis Dirion
Abstract:
Microgrids offer potential solutions to meet the need for local grid stability and to increase the autonomy of isolated networks through the integration of intermittent renewable energy production and storage facilities. In such a context, sizing production and storage for a given network is a complex task that depends heavily on input data such as the power load profile and renewable resource availability. This work aims at developing an operating cost computation methodology for different microgrid designs based on the use of deep reinforcement learning (RL) algorithms to tackle the optimal operation problem in stochastic environments. RL is a data-based sequential decision control method built on Markov decision processes that enables random variables to be taken into account for control at a chosen time scale. Agents trained via RL constitute a promising class of Energy Management Systems (EMS) for the operation of microgrids with energy storage. Microgrid sizing (or design) is generally performed by minimizing investment costs and the operational costs arising from the EMS behavior. The latter might include economic aspects (power purchase, facilities aging), social aspects (load curtailment), and ecological aspects (carbon emissions). The sizing variables are related to major constraints on the optimal operation of the network by the EMS. In this work, an islanded-mode microgrid is considered. Renewable generation is provided by photovoltaic panels; an electrochemical battery ensures short-term electricity storage, and the controllable unit is a hydrogen tank used for long-term storage. The proposed approach focuses on the transfer of agent learning for the near-optimal approximation of the operating cost with deep RL for each microgrid size. As with most data-based algorithms, the training step in RL requires significant computation time. The objective of this work is thus to study the potential of Batch-Constrained Q-learning (BCQ) for the optimal sizing of microgrids and, in particular, to reduce the computation time of the operating cost estimation across several microgrid configurations. BCQ is an offline RL algorithm that is known to be data efficient and can learn better policies than online RL algorithms from the same buffer. The general idea is to use the learned policies of agents trained in similar environments to constitute a buffer; the buffer is then used to train BCQ, so that the agent can learn without further interaction sampling. A comparison between online RL and the presented method is performed based on the score per environment and on the computation time.Keywords: batch-constrained reinforcement learning, control, design, optimal
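A highly simplified sketch of the transfer idea: transitions logged by agents trained on similar microgrid configurations fill a fixed buffer, and a new agent is trained purely offline from that buffer. For brevity the sketch uses plain fitted Q-iteration with linear models over a discretised hydrogen-tank action, not the full BCQ algorithm; the state variables, rewards, and buffer contents are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_actions = 3                                # e.g. charge / idle / discharge the hydrogen tank

# fixed buffer of (state, action, reward, next_state) tuples logged by agents
# previously trained on similar microgrid configurations
states = rng.random((5000, 4))               # [load, PV output, battery SoC, H2 level] (normalised)
actions = rng.integers(0, n_actions, 5000)
rewards = -rng.random(5000)                  # negative operating cost per step
next_states = rng.random((5000, 4))

# offline fitted Q-iteration with one linear model per action: no new interaction sampling
W = np.zeros((n_actions, 4))
for _ in range(30):
    q_next = np.max(next_states @ W.T, axis=1)
    targets = rewards + 0.9 * q_next
    for a in range(n_actions):
        idx = actions == a
        W[a], *_ = np.linalg.lstsq(states[idx], targets[idx], rcond=None)

print("greedy actions on 5 sample states:", np.argmax(states[:5] @ W.T, axis=1))
```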
Procedia PDF Downloads 12419 Using Convolutional Neural Networks to Distinguish Different Sign Language Alphanumerics
Authors: Stephen L. Green, Alexander N. Gorban, Ivan Y. Tyukin
Abstract:
Within the past decade, using Convolutional Neural Networks (CNNs) to create deep learning systems capable of translating sign language into text has been a breakthrough in breaking the communication barrier for deaf-mute people. Conventional research on this subject has been concerned with training the network to recognize the fingerspelling gestures of a given language and produce their corresponding alphanumerics. One of the problems with the currently developing technology is that images are scarce, with little variation in the gestures presented to the recognition program, often skewed towards single skin tones and hand sizes, which makes the fingerspelling of a percentage of the population harder to detect. In addition, current gesture detection programs are only trained on one fingerspelling language, despite there being one hundred and forty-two known variants so far. All of this limits the traditional exploitation of current technologies such as CNNs, due to their large number of required parameters. This work presents a technology that aims to resolve this issue by combining a pretrained legacy AI system for a generic object recognition task with a corrector method to uptrain the legacy network. This is a computationally efficient procedure that does not require large volumes of data, even when covering a broad range of sign languages such as American Sign Language, British Sign Language and Chinese Sign Language (Pinyin). Implementing recent results on measure concentration, namely the stochastic separation theorem, the AI system is treated as an operator mapping an input from the set of images u ∈ U to an output in the set of predicted class labels q ∈ Q, identifying the alphanumeric that q represents and the language it comes from. These inputs and outputs, along with the internal variables z ∈ Z, represent the system's current state, which implies a mapping that assigns an element x ∈ ℝⁿ to the triple (u, z, q). As all xᵢ are i.i.d. vectors drawn from a product measure distribution, over a period of time the AI generates a large set of measurements xᵢ, called S, that are grouped into two categories: the correct predictions M and the incorrect predictions Y. Once the network has made its predictions, a corrector can be applied by centering S and Y, i.e., subtracting their means. The data are then regularized by applying the Kaiser rule to the resulting eigenmatrix and whitened before being split into pairwise, positively correlated clusters. Each of these clusters produces a unique hyperplane, and if any element x falls outside the region bounded by these hyperplanes, it is reported as an error. As a result of this methodology, a self-correcting recognition process is created that can identify fingerspelling from a variety of sign languages and successfully identify the corresponding alphanumeric and the language the gesture originates from, which no other neural network has been able to replicate.Keywords: convolutional neural networks, deep learning, shallow correctors, sign language
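A condensed sketch of the corrector workflow as described above, not the authors' implementation: internal measurements of the legacy network are split into correct (M) and erroneous (Y) responses, centred, reduced with a Kaiser-rule PCA, whitened, and a single linear hyperplane (a Fisher-like direction, used here in place of the per-cluster hyperplanes) flags likely errors. The data, dimensions, and threshold rule are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.normal(0, 1, (900, 64))              # internal states of correct predictions
Y = rng.normal(0.8, 1, (100, 64))            # internal states of known errors

S = np.vstack([M, Y])
S_c = S - S.mean(axis=0)                     # centring

# Kaiser rule: keep principal components with eigenvalue above the mean eigenvalue
cov = np.cov(S_c, rowvar=False)
eigval, eigvec = np.linalg.eigh(cov)
keep = eigval > eigval.mean()
Z = (S_c @ eigvec[:, keep]) / np.sqrt(eigval[keep])   # projection plus whitening

zM, zY = Z[:len(M)], Z[len(M):]
w = zY.mean(axis=0) - zM.mean(axis=0)        # separating direction between M and Y
threshold = 0.5 * (zM @ w).mean() + 0.5 * (zY @ w).mean()

flagged = (Z @ w) > threshold                # report these predictions as errors
print(f"flagged {flagged[len(M):].mean():.0%} of known errors, "
      f"{flagged[:len(M)].mean():.0%} false alarms")
```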
Procedia PDF Downloads 10118 Ground Motion Modeling Using the Least Absolute Shrinkage and Selection Operator
Authors: Yildiz Stella Dak, Jale Tezcan
Abstract:
Ground motion models that relate a strong motion parameter of interest to a set of predictive seismological variables describing the earthquake source, the propagation path of the seismic wave, and the local site conditions constitute a critical component of seismic hazard analyses. When a sufficient number of strong motion records are available, ground motion relations are developed using statistical analysis of the recorded ground motion data. In regions lacking a sufficient number of recordings, a synthetic database is developed using stochastic, theoretical, or hybrid approaches. Regardless of the manner in which the database was developed, ground motion relations are developed using regression analysis. Development of a ground motion relation is a challenging process which inevitably requires the modeler to make subjective decisions regarding the inclusion criteria for the recordings, the functional form of the model, and the set of seismological variables to be included in the model. Because these decisions are critically important to the validity and the applicability of the model, there is continuous interest in procedures that will facilitate the development of ground motion models. This paper proposes the use of the Least Absolute Shrinkage and Selection Operator (LASSO) in selecting the set of predictive seismological variables to be used in developing a ground motion relation. The LASSO can be described as a penalized regression technique with a built-in capability of variable selection. Similar to ridge regression, the LASSO is based on the idea of shrinking the regression coefficients to reduce the variance of the model. Unlike ridge regression, where the coefficients are shrunk but never set equal to zero, the LASSO sets some of the coefficients exactly to zero, effectively performing variable selection. Given a set of candidate input variables and the output variable of interest, LASSO allows ranking the input variables in terms of their relative importance, thereby facilitating the selection of the set of variables to be included in the model. Because the risk of overfitting increases as the ratio of the number of predictors to the number of recordings increases, selection of a compact set of variables is important in cases where a small number of recordings are available. In addition, identification of a small set of variables can improve the interpretability of the resulting model, especially when there is a large number of candidate predictors. A practical application of the proposed approach is presented, using more than 600 recordings from the National Geospatial-Intelligence Agency (NGA) database, where the effect of a set of seismological predictors on the 5% damped maximum direction spectral acceleration is investigated. The candidate predictors considered are magnitude, Rrup, and Vs30. Using LASSO, the relative importance of the candidate predictors has been ranked. Regression models with increasing levels of complexity were constructed using the one, two, three, and four best predictors, and the models' ability to explain the observed variance in the target variable has been compared. The bias-variance trade-off in the context of model selection is discussed.Keywords: ground motion modeling, least absolute shrinkage and selection operator, penalized regression, variable selection
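A short sketch of LASSO-based predictor ranking in the spirit of the approach above, using scikit-learn on synthetic data standing in for the strong motion records (magnitude, Rrup, Vs30 and an added nuisance variable). The target model and coefficients are invented purely to show how coefficients shrink to zero along the regularisation path; this is not the paper's dataset or result.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
n = 600
X = np.column_stack([rng.uniform(4, 8, n),          # magnitude
                     rng.uniform(1, 200, n),        # Rrup [km]
                     rng.uniform(150, 1500, n),     # Vs30 [m/s]
                     rng.normal(size=n)])           # irrelevant predictor
names = ["M", "Rrup", "Vs30", "noise"]
# hypothetical log spectral acceleration as the target variable
y = 1.2 * X[:, 0] - 0.012 * X[:, 1] - 0.001 * X[:, 2] + rng.normal(0, 0.3, n)

Xs = StandardScaler().fit_transform(X)              # standardise before penalised regression
for alpha in (0.5, 0.1, 0.01):                      # stronger alpha -> more coefficients at zero
    coefs = Lasso(alpha=alpha).fit(Xs, y).coef_
    print(f"alpha={alpha}: " + ", ".join(f"{nm}={c:+.3f}" for nm, c in zip(names, coefs)))
```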
Procedia PDF Downloads 33017 An in silico Approach for Exploring the Intercellular Communication in Cancer Cells
Authors: M. Cardenas-Garcia, P. P. Gonzalez-Perez
Abstract:
Intercellular communication is a necessary condition for cellular functions, and it allows a group of cells to survive as a population. Through this interaction, the cells work in a coordinated and collaborative way, which facilitates their survival. Cancerous cells take advantage of intercellular communication to preserve their malignancy, since through these physical unions they can send signals of malignancy. The Wnt/β-catenin signaling pathway plays an important role in the formation of intercellular communications and is also involved in a large number of cellular processes such as proliferation, differentiation, adhesion, cell survival, and cell death. The modeling and simulation of cellular signaling systems have found valuable support in a wide range of modeling approaches, covering a wide spectrum that ranges from mathematical models (e.g., ordinary differential equations, statistical methods, and numerical methods) to computational models (e.g., process algebras for modeling behavior and variation in molecular systems). Based on these models, different simulation tools have been developed, from mathematical to computational ones. The study of cellular and molecular processes in cancer has also found valuable support in different simulation tools that, covering the spectrum mentioned above, have allowed in silico experimentation on this phenomenon at the cellular and molecular levels. In this work, we simulate and explore the complex interaction patterns of intercellular communication in cancer cells using the Cellulat bioinformatics tool, a computational simulation tool developed by us and motivated by two key elements: 1) a biochemically inspired model of self-organizing coordination in tuple spaces, and 2) Gillespie's algorithm, a stochastic simulation algorithm typically used to mimic systems of chemical/biochemical reactions in an efficient and accurate way. The main idea behind the Cellulat simulation tool is to provide an in silico experimentation environment that complements and guides in vitro experimentation on intra- and intercellular signaling networks. Unlike most cell signaling simulation tools, such as E-Cell, BetaWB and Cell Illustrator, which provide abstractions to model only intracellular behavior, Cellulat is appropriate for modeling both intracellular signaling and intercellular communication, providing the abstractions required to model, and as a result simulate, the interaction mechanisms that involve two or more cells, which is essential in the scenario discussed in this work. During the development of this work, we demonstrated the application of our computational simulation tool (Cellulat) to the modeling and simulation of intercellular communication between normal and cancerous cells and, in this way, proposed key molecules that may prevent malignant signals from reaching the cells that surround the tumor cells. In this manner, we identified the significant role that the Wnt/β-catenin signaling pathway plays in cellular communication and, therefore, in the dissemination of cancer cells. Using in silico experiments, we verified how the inhibition of this signaling pathway prevents the cells that surround a cancerous cell from being transformed.Keywords: cancer cells, in silico approach, intercellular communication, key molecules, modeling and simulation
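A minimal Gillespie stochastic simulation sketch, of the kind Cellulat builds on, for a toy two-reaction system (ligand binding and unbinding at the cell membrane). The species, reactions, molecule counts, and rate constants are illustrative assumptions only, not the Wnt/β-catenin model used in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.array([100, 50, 0])                   # [ligand, receptor, complex] molecule counts (assumed)
k = np.array([0.002, 0.1])                   # binding and unbinding rate constants (assumed)
stoich = np.array([[-1, -1, +1],             # ligand + receptor -> complex
                   [+1, +1, -1]])            # complex -> ligand + receptor
t, t_end = 0.0, 50.0

while t < t_end:
    a = np.array([k[0] * x[0] * x[1], k[1] * x[2]])   # reaction propensities
    a0 = a.sum()
    if a0 == 0:
        break
    t += rng.exponential(1 / a0)                      # time to the next reaction
    x = x + stoich[rng.choice(2, p=a / a0)]           # fire one reaction, update counts

print(f"t={t:.1f}: ligand={x[0]}, receptor={x[1]}, complex={x[2]}")
```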
Procedia PDF Downloads 25116 Improving Fingerprinting-Based Localization System Using Generative AI
Authors: Getaneh Berie Tarekegn, Li-Chia Tai
Abstract:
With the rapid advancement of artificial intelligence, low-power built-in sensors on Internet of Things devices, and communication technologies, location-aware services have become increasingly popular and have permeated every aspect of people’s lives. Global navigation satellite systems (GNSSs) are the default method of providing continuous positioning services for ground and aerial vehicles, as well as consumer devices (smartphones, watches, notepads, etc.). However, the environment affects satellite positioning systems, particularly indoors, in dense urban and suburban cities enclosed by skyscrapers, or when deep shadows obscure satellite signals. This is because (1) indoor environments are more complicated due to the presence of many objects surrounding them; (2) reflection within the building is highly dependent on the surrounding environment, including the positions of objects and human activity; and (3) satellite signals often cannot reach indoor environments, since GNSS signals do not have enough power to penetrate building walls. GPS is also highly power-hungry, which poses a severe challenge for battery-powered IoT devices. Due to these challenges, IoT applications are limited. Consequently, precise, seamless, and ubiquitous Positioning, Navigation and Timing (PNT) systems are crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities. Their applications include traffic monitoring, emergency alarms, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. In this article, we present a semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization. We also employ a reliable signal fingerprint feature extraction method with t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduced the workload of site surveying required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of GAILoc is less than 0.39 m, and more than 90% of the errors are less than 0.82 m. According to numerical results, SRCLoc improves positioning performance and reduces radio map construction costs significantly compared to traditional methods.Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine
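A brief sketch of the fingerprint feature-extraction step mentioned above: hybrid WLAN/LTE received-signal-strength vectors collected at a few reference points are embedded with t-SNE so that measurements from the same location cluster together. The fingerprint values, number of reference points, and signal parameters are random placeholders, not survey data from the paper.

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(5)
n_rp, n_meas, n_aps = 10, 40, 30             # reference points, samples per point, WLAN+LTE sources
centers = rng.uniform(-90, -40, (n_rp, n_aps))          # mean RSS per reference point [dBm]
X = np.vstack([c + rng.normal(0, 3, (n_meas, n_aps)) for c in centers])  # noisy fingerprints
labels = np.repeat(np.arange(n_rp), n_meas)

emb = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)
for rp in range(3):                          # spread of each reference point in the embedding
    pts = emb[labels == rp]
    print(f"RP {rp}: centroid={np.round(pts.mean(axis=0), 1)}, std={np.round(pts.std(axis=0), 1)}")
```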
Procedia PDF Downloads 4415 Uncertainty Quantification of Crack Widths and Crack Spacing in Reinforced Concrete
Authors: Marcel Meinhardt, Manfred Keuser, Thomas Braml
Abstract:
Cracking of reinforced concrete is a complex phenomenon induced by direct loads or restraints affecting reinforced concrete structures as soon as the tensile strength of the concrete is exceeded. Hence, it is important to predict where cracks will be located and how they will propagate. The bond theory and the crack formulas in the current design codes, for example DIN EN 1992-1-1, are all based on the assumption that the reinforcement bars are embedded in homogeneous concrete, without taking into account the influence of the transverse reinforcement and the real stress situation. However, it can often be observed that real structures such as walls, slabs, or beams show a crack spacing that is oriented to the transverse reinforcement bars or to the stirrups. In most Finite Element Analysis studies, the smeared crack approach is used for crack prediction. The disadvantage of this model is that the typical strain localization of a crack cannot be seen at the element level. Crack propagation in concrete is a discontinuous process characterized by different factors, such as the initial random distribution of defects or the scatter of material properties. Such behavior calls for adequate models and simulation methods, because traditional mechanical approaches deal mainly with average material parameters. This paper is concerned with the modelling of the initiation and propagation of cracks in reinforced concrete structures, considering the influence of the transverse reinforcement and the real stress distribution in reinforced concrete (R/C) beams/plates in bending. Therefore, a parameter study was carried out to investigate: (I) the influence of the transverse reinforcement on the stress distribution in the concrete in bending and (II) the crack initiation as a function of the diameter and mutual spacing of the transverse reinforcement bars. The numerical investigations of crack initiation and propagation were carried out on a 2D reinforced concrete structure subjected to quasi-static loading and given boundary conditions. To model the uncertainty in the tensile strength of concrete in the Finite Element Analysis, correlated normally and lognormally distributed random fields with different correlation lengths were generated. The paper also presents and discusses different methods to generate random fields, e.g., the Covariance Matrix Decomposition method. For all computations, a plastic constitutive law with softening was used to model the crack initiation and the damage of the concrete in tension. It was found that the distributions of crack spacing and crack widths are highly dependent on the random field used. These distributions are validated against experimental studies on R/C panels which were carried out at the Laboratory for Structural Engineering at the University of the German Armed Forces in Munich. Also, a recommendation is given for the parameters of the random field that realistically model the uncertainty of the tensile strength. The aim of this research was to show a method by which the localization of strains and cracks, as well as the influence of the transverse reinforcement on crack initiation and propagation, can be captured in Finite Element Analysis.Keywords: crack initiation, crack modelling, crack propagation, cracks, numerical simulation, random fields, reinforced concrete, stochastic
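A compact sketch of the Covariance Matrix Decomposition method mentioned above: a one-dimensional lognormal random field for the concrete tensile strength is generated by Cholesky-factorising an exponential covariance matrix of the underlying Gaussian field. The mean, coefficient of variation, and correlation length are assumed values for illustration, not the recommended parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(11)
x = np.linspace(0.0, 5.0, 200)               # element centroids along the member [m]
mean_ft, cov_ft, L_corr = 3.0, 0.20, 0.5     # tensile strength [MPa], CoV, correlation length [m]

# lognormal parameters and exponential correlation of the underlying Gaussian field
sigma_ln = np.sqrt(np.log(1 + cov_ft**2))
mu_ln = np.log(mean_ft) - 0.5 * sigma_ln**2
C = sigma_ln**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / L_corr)

Lc = np.linalg.cholesky(C + 1e-10 * np.eye(len(x)))    # covariance matrix decomposition
ft = np.exp(mu_ln + Lc @ rng.standard_normal(len(x)))  # one realisation of f_t(x) [MPa]
print(f"min={ft.min():.2f}, mean={ft.mean():.2f}, max={ft.max():.2f} MPa")
```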
Procedia PDF Downloads 15813 Case-Based Reasoning for Modelling Random Variables in the Reliability Assessment of Existing Structures
Authors: Francesca Marsili
Abstract:
The reliability assessment of existing structures with probabilistic methods is becoming an increasingly important and frequent engineering task. However, probabilistic reliability methods require exhaustive knowledge of the stochastic modeling of the variables involved in the assessment; at the moment, standards for the modeling of these variables are absent, which represents an obstacle to the dissemination of probabilistic methods. The framework according to which probability distribution functions (PDFs) are established is Bayesian statistics, which uses Bayes' theorem: a prior PDF for the considered parameter is established based on information derived from the design stage and on qualitative judgments based on the engineer's past experience; the prior model is then updated with the results of investigations carried out on the considered structure, such as material testing and the determination of actions and structural properties. The application of Bayesian statistics raises two different kinds of problems: 1. the results of the updating depend on the engineer's previous experience; 2. the prior PDF can be updated only if the structure has been tested and quantitative data that can be statistically manipulated have been collected; performing tests is always an expensive and time-consuming operation, and if the considered structure is an ancient building, destructive tests could compromise its cultural value and should therefore be avoided. In order to solve these problems, an interesting research path is to investigate Artificial Intelligence (AI) techniques that can be useful for automating the modeling of variables and for updating material parameters without performing destructive tests. Among these, one that deserves particular attention in relation to the object of this study is Case-Based Reasoning (CBR). In this application, cases are represented by existing buildings where material tests have already been carried out and updated PDFs for the material mechanical parameters have been computed through a Bayesian analysis. Each case is then composed of a qualitative description of the material under assessment and the posterior PDFs that describe its material properties. The problem to be solved is the definition of PDFs for the material parameters involved in the reliability assessment of the considered structure. A CBR system represents a good candidate for automating the modelling of variables because: 1. engineers already estimate material properties based on the experience collected during the assessment of similar structures, or based on similar cases collected in the literature or in databases; 2. material tests carried out on structures can easily be collected from laboratory databases or from the literature; 3. the system provides the user with a reliable probabilistic description of the variables involved in the assessment, which also serves as a tool in support of the engineer's qualitative judgments. Automated modeling of variables can help spread the probabilistic reliability assessment of existing buildings in common engineering practice and target the best interventions and further tests on the structure; CBR represents a technique which may help to achieve this.Keywords: reliability assessment of existing buildings, Bayesian analysis, case-based reasoning, historical structures
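A small sketch of the Bayesian updating step on which the CBR cases are built: a normal prior for the mean material strength (from the design stage or a similar case) is updated with a handful of test results, assuming a known measurement standard deviation (conjugate normal-normal update). All numbers are illustrative assumptions, not data from an assessed structure.

```python
import numpy as np

prior_mu, prior_sigma = 30.0, 5.0            # prior mean strength and its std [MPa] (assumed)
sigma_test = 4.0                             # assumed std of individual test results [MPa]
tests = np.array([26.5, 28.0, 27.2, 29.1])   # hypothetical core test results [MPa]

n = len(tests)
post_var = 1.0 / (1.0 / prior_sigma**2 + n / sigma_test**2)
post_mu = post_var * (prior_mu / prior_sigma**2 + tests.sum() / sigma_test**2)

print(f"prior:     N({prior_mu:.1f}, {prior_sigma:.2f}^2)")
print(f"posterior: N({post_mu:.1f}, {np.sqrt(post_var):.2f}^2)")
```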
Procedia PDF Downloads 339