Search results for: solution construction algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 12172

742 Sludge Marvel (Densification): The Ultimate Solution For Doing More With Less Effort!

Authors: Raj Chavan

Abstract:

At present, the United States is home to more than 14,000 Water Resource Recovery Facilities (WRRFs), of which approximately 35% have implemented nutrient limits of some kind. These WRRFs contribute 10 to 15% of the total nutrient burden to surface waters in the United States and account for approximately 1% of total power demand and 2% of total greenhouse gas (GHG) emissions. Several factors have pushed the development of densification technologies toward more compact and energy-efficient nutrient removal processes. Existing facilities are being subjected to stricter nutrient removal requirements prior to surface water discharge, and many require either capacity expansion or biomass densification to achieve greater treatability within the same footprint. Densification of activated sludge as a method for nutrient removal and process intensification at WRRFs has garnered considerable attention in recent times. The technology is based on aerobic sludge granules, within which the biological processes take place. The possibility of generating granular sludge through continuous (or conventional) activated sludge (CAS) processes, or of densifying biomass by converting activated sludge flocs into denser biomass aggregates, has generated considerable interest as an exceptionally efficient intensification technique. This presentation aims to furnish attendees with a foundational comprehension of densification through the illustration of practical concerns and insights. The following topics will be discussed: What are some potential techniques for producing and preserving densified granules? What processes are responsible for the densification of biological flocs? How do physical selectors contribute to the process by which biological flocs become denser? What viable strategies exist for the management of densified biological flocs, and which design parameters of physical selectors influence the retention of densified biological flocs? How can operational solutions for floc and granule customization be determined in order to meet capacity and performance objectives? The answers to these pivotal questions will be derived from existing full-scale treatment facilities, bench-scale and pilot-scale investigations, and existing literature data. By the conclusion of the presentation, the audience will possess a fundamental comprehension of the densification concept and its significance in attaining effective effluent treatment. Case studies pertaining to the design and operation of densification processes will also be incorporated into the presentation.

Keywords: densification, intensification, nutrient removal, granular sludge

Procedia PDF Downloads 67
741 Evaluation of the Effect of Learning Disabilities and Accommodations on the Prediction of the Exam Performance: Ordinal Decision-Tree Algorithm

Authors: G. Singer, M. Golan

Abstract:

Providing students with learning disabilities (LD) with extra time to grant them equal access to the exam is a necessary but insufficient condition to compensate for their LD; there should also be a clear indication that the additional time was actually used. For example, if students with LD use more time than students without LD and yet receive lower grades, this may indicate that a different accommodation is required. If they achieve higher grades but use the same amount of time, then the effectiveness of the accommodation has not been demonstrated. The main goal of this study is to evaluate the effect of including parameters related to LD and extended exam time, along with other commonly-used characteristics (e.g., student background and ability measures such as high-school grades), on the ability of ordinal decision-tree algorithms to predict exam performance. We use naturally-occurring data collected from hundreds of undergraduate engineering students. The sub-goals are i) to examine the improvement in prediction accuracy when the indicator of exam performance includes 'actual time used' in addition to the conventional indicator (exam grade) employed in most research; ii) to explore the effectiveness of extended exam time on exam performance for different courses and for LD students with different profiles (i.e., sets of characteristics). This is achieved by using the patterns (i.e., subgroups) generated by the algorithms to identify pairs of subgroups that differ in just one characteristic (e.g., course or type of LD) but have different outcomes in terms of exam performance (grade and time used). Since grade and time used to exhibit an ordering form, we propose a method based on ordinal decision-trees, which applies a weighted information-gain ratio (WIGR) measure for selecting the classifying attributes. Unlike other known ordinal algorithms, our method does not assume monotonicity in the data. The proposed WIGR is an extension of an information-theoretic measure, in the sense that it adjusts to the case of an ordinal target and takes into account the error severity between two different target classes. Specifically, we use ordinal C4.5, random-forest, and AdaBoost algorithms, as well as an ensemble technique composed of ordinal and non-ordinal classifiers. Firstly, we find that the inclusion of LD and extended exam-time parameters improves prediction of exam performance (compared to specifications of the algorithms that do not include these variables). Secondly, when the indicator of exam performance includes 'actual time used' together with grade (as opposed to grade only), the prediction accuracy improves. Thirdly, our subgroup analyses show clear differences in the effect of extended exam time on exam performance among different courses and different student profiles. From a methodological perspective, we find that the ordinal decision-tree based algorithms outperform their conventional, non-ordinal counterparts. Further, we demonstrate that the ensemble-based approach leverages the strengths of each type of classifier (ordinal and non-ordinal) and yields better performance than each classifier individually.
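As an illustration of the kind of ordinal-aware split criterion described above, the following sketch computes a weighted information-gain ratio in which child nodes that still mix distant performance grades are down-weighted. The weighting scheme, function names and toy labels are illustrative assumptions, not the authors' exact WIGR formulation.

```python
# Illustrative sketch (not the authors' exact WIGR): an ordinal-aware
# information-gain ratio in which children that still mix distant grade
# classes are penalized more heavily. Labels are assumed to be integers
# ordered by exam-performance level.
import numpy as np

def entropy(y):
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def ordinal_weight(y_parent, y_child):
    """Down-weight children that still mix distant ordinal classes."""
    classes = np.unique(y_parent)
    if len(classes) < 2 or len(y_child) == 0:
        return 1.0
    spread = np.mean(np.abs(y_child[:, None] - y_child[None, :]))   # mean class distance
    max_spread = classes.max() - classes.min()
    return 1.0 - spread / (2.0 * max_spread)                        # heuristic severity weight

def weighted_gain_ratio(y, groups):
    """y: ordinal labels of the parent node; groups: list of child label arrays."""
    n = len(y)
    remainder = sum(len(g) / n * entropy(g) for g in groups if len(g))
    gain = entropy(y) - remainder
    split_info = -sum(len(g) / n * np.log2(len(g) / n) for g in groups if len(g))
    weight = np.mean([ordinal_weight(y, g) for g in groups if len(g)])
    return weight * gain / split_info if split_info > 0 else 0.0

# toy usage: grades 0 (fail) .. 3 (excellent) split by a candidate attribute
y = np.array([0, 0, 1, 1, 2, 2, 3, 3])
print(weighted_gain_ratio(y, [y[:4], y[4:]]))
```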

Keywords: actual exam time usage, ensemble learning, learning disabilities, ordinal classification, time extension

Procedia PDF Downloads 99
740 Bread-Making Properties of Rice Flour Dough Using Fatty Acid Salt

Authors: T. Hamaishi, Y. Morinaga, H. Morita

Abstract:

Introduction: Rice consumption in Japan has decreased, and the Japanese government has recommended the use of rice flour in order to expand the consumption of rice. There are two major protein components present in wheat flour, called gliadin and glutenin. Gluten forms when water is added to flour and the mixture is mixed; as mixing continues, glutenin interacts with gliadin to form the viscoelastic matrix of gluten. Rice flour bread does not expand as much as wheat flour bread because rice flour does not contain gluten and therefore cannot build a gluten network in the dough. In recent years, some food additives have been used as dough-improving agents in bread making; surfactants in particular are effective in improving dough extensibility. Therefore, we focused on fatty acid salts, which are anionic surfactants. A fatty acid salt is a salt consisting of a fatty acid and an alkali, and it is the main component of soap. According to JECFA (the FAO/WHO Joint Expert Committee on Food Additives), salts of myristic (C14), palmitic (C16) and stearic (C18) acids may be used as food additives; they have been evaluated, and the ADI was 'not specified'. In this study, we investigated improving the bread-making properties of rice flour dough by adding a fatty acid salt. Materials and methods: The fatty acid salt sample was myristic acid (C14) dissolved in KOH solution to a concentration of 350 mM and pH 10.5 (potassium myristate, C14K). The rice dough consisted of 100 g of flour (rice flour and wheat gluten), 5 g of sugar, 1.7 g of salt, 1.7 g of dry yeast, 80 mL of water and the fatty acid salt. Mixing was performed by hand, 500 times. The concentration of C14K in the dough was 10% relative to flour weight, and the amount of gluten in the dough was 20% or 30% relative to flour weight. A dough expansion ability test was performed to measure the physical properties of the bread dough according to the Baker's Yeast methods of the Japan Yeast Industry Association. In this test, 150 g of dough was loaded into the bottom of a cylinder and fermented at 30 °C and 85% humidity for 120 min in an incubator; the height of the expanded dough was measured to determine its expansion ability. Results and Conclusion: The expansion ability of rice dough with gluten contents of 20% and 30% was 316 mL and 341 mL, respectively, after 120 min. When C14K was added to the rice dough, the expansion abilities were 314 mL and 368 mL after 120 min; there was no significant difference. It has conventionally been known that rice flour dough should contain about 20% gluten, and a considerable improvement of dough expansion ability had previously been achieved when C14K was added to wheat flour. The experimental results show that adding C14K to rice dough with a gluten content of 20% or more did not improve its bread-making properties. In conclusion, it is suggested that rice bread made with a gluten content of 20% or more forms a sufficient gluten network without C14K.

Keywords: expansion ability, fatty acid salt, gluten, rice flour dough

Procedia PDF Downloads 242
739 Study on Adding Story and Seismic Strengthening of Old Masonry Buildings

Authors: Youlu Huang, Huanjun Jiang

Abstract:

A large number of old masonry buildings built in the last century still remain in cities, giving rise to problems of poor safety, obsolescence, and poor habitability. In recent years, many old buildings have been renovated by refurbishing façades, strengthening, and adding floors. However, most projects only address a single problem, and it is difficult to comprehensively solve the combined problems of poor safety and lack of building functions. Therefore, a comprehensive functional renovation program was put forward in which a reinforced concrete frame story is added at the bottom by integrally lifting the building, after which the building is strengthened. Based on field measurements and the YJK calculation software, the seismic performance of an actual three-story masonry structure in Shanghai was assessed. The results show that the material strength of the masonry is low and that the bearing capacity of some masonry walls does not meet the code requirements. An elastoplastic time history analysis of the structure was carried out using SAP2000. The results show that under the rare earthquake of intensity 7, the structure reaches the 'serious damage' performance level. Based on the code requirements for the stiffness ratio of the bottom frame (the ratio of the lateral stiffness of the transition masonry story to that of the frame story), the bottom frame story was designed. The integral lifting process of the masonry building was introduced on the basis of many engineering examples. Two retrofit schemes for the bottom frame structure were proposed: strengthening with a steel-reinforced mesh mortar surface layer (SRMM) and base isolation. Time history analyses of the two structures under the frequent, fortification, and rare earthquakes were conducted in SAP2000. For the bottom frame structure, the results show that the seismic response of the masonry floors is significantly reduced by both methods compared to the original masonry structure. Previous earthquake damage has shown that the bottom frame is vulnerable to serious damage under a strong earthquake. The analysis results show that under the rare earthquake, the inter-story displacement angle of the bottom frame story meets the 1/100 limit of the seismic code. The inter-story drift of the masonry floors of the base-isolated structure under different levels of earthquake is similar to that of the structure with SRMM, while the base-isolation scheme better protects the bottom frame. Both retrofit methods significantly improve the seismic performance of the bottom frame structure.
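As a minimal illustration of the code check quoted above, the sketch below compares the inter-story drift angle of the bottom frame story with the 1/100 limit; the story height and displacements are placeholders, not values from the SAP2000 analyses.

```python
# Minimal sketch of the drift check cited in the abstract: the inter-story
# displacement angle of the bottom frame story is compared with the 1/100
# elastoplastic limit of the seismic code. The numbers below are placeholder
# values, not results from the reported analyses.
story_height_mm = 3600.0                    # assumed bottom-frame story height
top_disp_mm, bottom_disp_mm = 28.0, 0.0     # hypothetical rare-earthquake displacements

drift_ratio = (top_disp_mm - bottom_disp_mm) / story_height_mm
limit = 1.0 / 100.0                         # drift-angle limit for the frame story

print(f"drift angle = 1/{1.0 / drift_ratio:.0f}")
print("OK" if drift_ratio <= limit else "exceeds 1/100 limit")
```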

Keywords: old buildings, adding story, seismic strengthening, seismic performance

Procedia PDF Downloads 118
738 Platform Virtual for Joint Amplitude Measurement Based in MEMS

Authors: Mauro Callejas-Cuervo, Andrea C. Alarcon-Aldana, Andres F. Ruiz-Olaya, Juan C. Alvarez

Abstract:

Motion capture (MC) is the construction of a precise and accurate digital representation of a real motion. MC systems have been used in recent years in a wide range of applications, from film special effects and animation, interactive entertainment and medicine, to high-level competitive sport, where maximum performance and low injury risk during training and competition are sought. This paper presents a technological platform based on inertial and magnetic sensors, intended for joint amplitude monitoring and telerehabilitation processes with an efficient compromise between cost and technical capability. The particular features of our platform offer the possibility of high social impact by making telerehabilitation accessible to large population sectors in marginal socio-economic conditions, especially in underdeveloped countries where, in contrast to developed countries, specialists are scarce and high technology is unavailable or non-existent. The platform integrates high-resolution, low-cost inertial and magnetic sensors with adequate user interfaces and communication protocols to provide a diagnosis service over the web or other available communication networks. The amplitude information generated by the sensors is transferred to a computing device with interfaces that make it accessible to inexperienced personnel, providing high social value. Amplitude measurements from the virtual platform showed a good fit to the respective reference system. Analyzing the robotic arm results (estimation error RMSE 1 = 2.12° and estimation error RMSE 2 = 2.28°), it can be observed that during arm motion in either direction the estimation error is negligible; in fact, error appears only during direction reversal, which can easily be explained by the nature of inertial sensors and their relation to acceleration. Inertial sensors present a time-constant delay that acts as a first-order filter, attenuating signals at large acceleration values, as is the case at a change of direction of motion. A damped response of the virtual platform can be seen in other images, where the error analysis shows that at maximum amplitude an underestimation of amplitude is present, whereas at minimum amplitude an overestimation is observed. This work presents and describes the virtual platform as a motion capture system suitable for telerehabilitation, with the cost-quality and precision-accessibility relations optimized. These characteristics, achieved by efficiently using state-of-the-art, accessible generic technology in sensors and hardware, together with adequate software for capture, transmission, analysis and visualization, provide the capacity to offer good telerehabilitation services, reaching large, more or less marginal populations where technologies and specialists are not available but basic communication networks are accessible.
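The accuracy figures quoted above are root-mean-square errors against the reference system; a minimal sketch of that computation is shown below, with placeholder angle series standing in for the measured data.

```python
# Sketch of the accuracy metric reported for the platform: root-mean-square
# error between joint amplitudes estimated from the inertial/magnetic sensors
# and the reference system (e.g., the robotic arm). The angle series below
# are placeholders, not the measured data.
import numpy as np

def rmse(estimated_deg, reference_deg):
    estimated_deg = np.asarray(estimated_deg, dtype=float)
    reference_deg = np.asarray(reference_deg, dtype=float)
    return np.sqrt(np.mean((estimated_deg - reference_deg) ** 2))

reference = np.linspace(0, 90, 50)                                  # hypothetical flexion sweep
estimated = reference + np.random.normal(0, 2.1, reference.size)    # simulated sensor noise
print(f"estimation error RMSE = {rmse(estimated, reference):.2f} deg")
```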

Keywords: inertial sensors, joint amplitude measurement, MEMS, telerehabilitation

Procedia PDF Downloads 257
737 Deep Learning Approach for Colorectal Cancer’s Automatic Tumor Grading on Whole Slide Images

Authors: Shenlun Chen, Leonard Wee

Abstract:

Tumor grading is an essential reference for colorectal cancer (CRC) staging and survival prognostication. The widely used World Health Organization (WHO) grading system defines the histological grade of CRC adenocarcinoma based on the density of glandular formation on whole slide images (WSI). Tumors are classified as well-, moderately-, poorly- or un-differentiated depending on the percentage of the tumor that is gland-forming: >95%, 50-95%, 5-50% and <5%, respectively. However, manually grading WSIs is a time-consuming process and is prone to observer error due to subjective judgment and unnoticed regions. Furthermore, pathologists' grading is usually coarse, whereas a finer and continuous differentiation grade may help to stratify CRC patients better. In this study, a deep learning based automatic differentiation grading algorithm was developed and evaluated by survival analysis. Firstly, a gland segmentation model was developed for segmenting gland structures; gland regions of WSIs were delineated and used for differentiation annotation. Tumor regions were annotated by experienced pathologists as high-, medium-, low-differentiation or normal tissue, corresponding to tumor with clear, unclear or no gland structure and to non-tumor, respectively. A differentiation prediction model was then developed on these human annotations. Finally, all enrolled WSIs were processed by the gland segmentation model and the differentiation prediction model. The differentiation grade was calculated from the models' predictions of tumor regions and tumor differentiation status according to the WHO definitions. If a patient had multiple WSIs, the highest differentiation grade was chosen. Additionally, the differentiation grade was normalized to a scale between 0 and 1. The Cancer Genome Atlas colon adenocarcinoma project (TCGA-COAD) was enrolled in this study. For the gland segmentation model, the area under the receiver operating characteristic curve (ROC AUC) reached 0.981 and accuracy reached 0.932 in the validation set. For the differentiation prediction model, ROC AUC reached 0.983, 0.963, 0.963 and 0.981 and accuracy reached 0.880, 0.923, 0.668 and 0.881 for the low-, medium-, high-differentiation and normal tissue groups in the validation set. Four hundred and one patients were selected after removing WSIs without gland regions and patients without follow-up data. The concordance index reached 0.609. An optimized cut-off point of 51% was found by the 'maxstat' method, almost the same as the WHO system's cut-off point of 50%. Both the WHO cut-off point and the optimized cut-off point performed impressively in Kaplan-Meier curves, and both p-values of the log-rank test were below 0.005. In this study, the gland structure of WSIs and the differentiation status of tumor regions were shown to be predictable by deep learning. A finer and continuous differentiation grade can also be automatically calculated with the above models. The differentiation grade was shown to stratify CRC patients well in survival analysis, with an optimized cut-off point almost the same as that of the WHO tumor grading system. A tool for automatically calculating the differentiation grade may show potential in therapy decision making and personalized treatment.
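A sketch of how a continuous differentiation grade could be combined from the two models' outputs is given below: the gland-forming fraction of the tumor area is mapped onto the WHO cut points and reported on a 0-1 scale. The masks and tile handling are simplified assumptions, not the authors' exact pipeline.

```python
# Sketch (assumptions, not the authors' exact pipeline): combine the outputs
# of a gland-segmentation model and a tumor model into a continuous grade.
# The grade is taken as the gland-forming fraction of the tumor area, mapped
# onto the WHO cut points (>95 %, 50-95 %, 5-50 %, <5 %) and also reported
# normalized to [0, 1].
import numpy as np

def differentiation_grade(gland_mask, tumor_mask):
    """Both inputs are boolean pixel masks of the same WSI tile."""
    tumor_px = tumor_mask.sum()
    if tumor_px == 0:
        return None, None
    gland_fraction = np.logical_and(gland_mask, tumor_mask).sum() / tumor_px
    if gland_fraction > 0.95:
        who = "well differentiated"
    elif gland_fraction >= 0.50:
        who = "moderately differentiated"
    elif gland_fraction >= 0.05:
        who = "poorly differentiated"
    else:
        who = "undifferentiated"
    return gland_fraction, who   # gland_fraction already lies in [0, 1]

# toy masks standing in for model predictions on one tile
tumor = np.ones((64, 64), dtype=bool)
gland = np.zeros((64, 64), dtype=bool)
gland[:, :40] = True
print(differentiation_grade(gland, tumor))   # (0.625, 'moderately differentiated')
```

For a patient with several WSIs, the per-slide grades would be aggregated (the abstract keeps the highest grade) before survival analysis.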

Keywords: colorectal cancer, differentiation, survival analysis, tumor grading

Procedia PDF Downloads 133
736 Microgrid Design Under Optimal Control With Batch Reinforcement Learning

Authors: Valentin Père, Mathieu Milhé, Fabien Baillon, Jean-Louis Dirion

Abstract:

Microgrids offer potential solutions to meet the need for local grid stability and to increase the autonomy of isolated networks through the integration of intermittent renewable energy production and storage facilities. In such a context, sizing production and storage for a given network is a complex task that depends strongly on input data such as the power load profile and renewable resource availability. This work aims at developing an operating-cost computation methodology for different microgrid designs, based on deep reinforcement learning (RL) algorithms, to tackle the optimal operation problem in stochastic environments. RL is a data-based sequential decision control method built on Markov decision processes that enables random variables to be taken into account for control at a chosen time scale. Agents trained via RL constitute a promising class of Energy Management Systems (EMS) for the operation of microgrids with energy storage. Microgrid sizing (or design) is generally performed by minimizing investment costs and the operational costs arising from the EMS behavior. The latter may include economic aspects (power purchase, facility aging), social aspects (load curtailment), and ecological aspects (carbon emissions). The sizing variables impose major constraints on the optimal operation of the network by the EMS. In this work, an islanded microgrid is considered. Renewable generation is provided by photovoltaic panels; an electrochemical battery ensures short-term electricity storage. The controllable unit is a hydrogen tank that is used as a long-term storage unit. The proposed approach focuses on transferring agent learning to approximate the near-optimal operating cost with deep RL for each microgrid size. Like most data-based algorithms, the training step in RL requires substantial computation time. The objective of this work is thus to study the potential of Batch-Constrained Q-learning (BCQ) for the optimal sizing of microgrids and, in particular, to reduce the computation time of operating-cost estimation over several microgrid configurations. BCQ is an offline RL algorithm that is known to be data-efficient and can learn better policies than online RL algorithms from the same buffer. The general idea is to use the learned policies of agents trained in similar environments to constitute a buffer. This buffer is used to train BCQ, so agent learning can be performed without updates during interaction sampling. A comparison between online RL and the presented method is performed based on the score per environment and on the computation time.
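To make the batch-constrained idea concrete, the sketch below shows a tabular analogue: Q-values are fitted from a fixed buffer only, and the bootstrapped maximum is restricted to actions the buffer supports in each state. It is an illustrative simplification, not the deep BCQ agent used for the microgrid EMS.

```python
# Minimal discrete sketch of the batch-constrained idea: Q-values are fitted
# purely from a fixed buffer of transitions collected by previously trained
# agents, and the bootstrapped maximization is restricted to actions that the
# buffer itself supports in each state. Illustrative tabular analogue only.
import random
from collections import defaultdict

GAMMA, ALPHA, TAU = 0.99, 0.1, 0.3   # discount, learning rate, support threshold

def train_bcq_tabular(buffer, n_actions, epochs=50):
    """buffer: list of (state, action, reward, next_state) with hashable states."""
    Q = defaultdict(lambda: [0.0] * n_actions)
    counts = defaultdict(lambda: [0] * n_actions)
    for s, a, _, _ in buffer:                      # estimate behaviour support
        counts[s][a] += 1
    def allowed(s):
        c = counts[s]
        m = max(c) if any(c) else 0
        return [a for a in range(n_actions) if m and c[a] / m >= TAU] or list(range(n_actions))
    for _ in range(epochs):
        random.shuffle(buffer)
        for s, a, r, s2 in buffer:
            target = r + GAMMA * max(Q[s2][b] for b in allowed(s2))
            Q[s][a] += ALPHA * (target - Q[s][a])
    return Q

# toy usage: 2 states, 2 actions (e.g., charge / discharge the long-term storage)
buffer = [(0, 0, 1.0, 1), (1, 1, 0.0, 0), (0, 0, 1.0, 1), (1, 0, 0.5, 0)]
Q = train_bcq_tabular(buffer, n_actions=2)
print({s: [round(v, 2) for v in Q[s]] for s in Q})
```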

Keywords: batch-constrained reinforcement learning, control, design, optimal

Procedia PDF Downloads 118
735 Retrospective Cartography of Tbilisi and Surrounding Area

Authors: Dali Nikolaishvili, Nino Khareba, Mariam Tsitsagi

Abstract:

Tbilisi has been the capital of Georgia since the 5ᵗʰ century. In the historical past the city area was covered by forest; nowadays the situation has changed dramatically. Dozens of problems are caused by the damage and destruction of the green cover. At first glance the solution seems uncomplicated (planting trees and creating green quarters), but given the increasing tendency to build up open areas, the problem remains unsolved. Finding ways to overcome such obstacles is important even for protecting the health of society. The main aim of the research was retrospective cartography of the forest area of Tbilisi using GIS technology and remote sensing. Research on the dynamics of forest cover in Tbilisi and its surroundings included the following steps. The survey was mainly based on the retrospective mapping method. The next step was studying, comparing and identifying the narrative sources using GIS technology. The last step was analysis of the changes from the 1980s to the present day on the basis of interpretation of remotely sensed images. After creating a unified cartographic basis, the maps and plans of different periods were linked to this geodatabase. Data about green parks, individual old plants growing in private yards and respondents' information (according to a questionnaire created in advance) were added to the basic database, as well as the general plan of Tbilisi and scientific works. On the basis of analysis of historical sources, including cartographic ones, forest-cover maps for different periods were made. In addition, a catalogue of individual green parks (location, area, typical composition, name and so on) was compiled, which formed the basis of several thematic maps. Areas with a high rate of green-area degradation were identified. Several maps depicting the dynamics of the forest cover of Tbilisi were created and analyzed. Methods of linking the data of old cartographic sources to the modern basis were also developed, the results of which may be used in the urban planning of Tbilisi. Understanding, perceiving and analyzing the real condition of the green cover in Tbilisi and its problems will, in turn, help in taking appropriate measures for the maintenance of ancient plants, developing forests and properly planning parks, squares, and recreational sites, because a healthy environment is the main condition of human health and contributes to the rational development of the city.
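As an illustration of the change-analysis step, the sketch below compares two classified forest masks (one from the 1980s imagery, one recent) pixel by pixel to quantify forest loss and gain; the rasters and the 30 m pixel size are assumptions, not the project's actual data.

```python
# Sketch of the change-detection step described above: two classified
# land-cover rasters (one from the 1980s imagery, one recent) are compared
# pixel by pixel to quantify forest loss and gain. The arrays and the 30 m
# pixel size are placeholders for the real classified scenes.
import numpy as np

PIXEL_AREA_HA = 30 * 30 / 10_000          # assumed 30 m pixels, expressed in hectares

def forest_change(mask_1980s, mask_recent):
    """Boolean rasters: True where the pixel is classified as forest."""
    loss = np.logical_and(mask_1980s, ~mask_recent).sum() * PIXEL_AREA_HA
    gain = np.logical_and(~mask_1980s, mask_recent).sum() * PIXEL_AREA_HA
    stable = np.logical_and(mask_1980s, mask_recent).sum() * PIXEL_AREA_HA
    return {"loss_ha": loss, "gain_ha": gain, "stable_ha": stable}

old = np.random.rand(500, 500) > 0.4                        # toy "1980s" forest mask
new = np.logical_and(old, np.random.rand(500, 500) > 0.2)   # toy degraded recent mask
print(forest_change(old, new))
```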

Keywords: catalogue of green area, GIS, historical cartography, cartography, remote sensing, Tbilisi

Procedia PDF Downloads 132
734 How to “Eat” without Actually Eating: Marking Metaphor with Spanish Se and Italian Si

Authors: Cinzia Russi, Chiyo Nishida

Abstract:

Using data from online corpora (Spanish CREA, Italian CORIS), this paper examines the relatively understudied use of Spanish se and Italian si exemplified in (1) and (2), respectively. (1) El rojo es … el que se come a los demás. ‘The red (bottle) is the one that outshines/*eats the rest.’(2) … ebbe anche la saggezza di mangiarsi tutto il suo patrimonio. ‘… he even had the wisdom to squander/*eat all his estate.’ In these sentences, se/si accompanies the consumption verb comer/mangiare ‘to eat’, without which the sentences would not be interpreted appropriately. This se/si cannot readily be attributed to any of the multiple functions so far identified in the literature: reflexive, ergative, middle/passive, inherent, benefactive, and complete consumptive. In particular, this paper argues against the feasibility of a recent construction-based analysis of sentences like (1) and (2), which situates se/si within a prototype-based network of meanings all deriving from the central meaning of 'COMPLETE CONSUMPTION' (e.g., Alice se comió toda la torta/Alicesi è mangiata tutta la torta ‘John ate the whole cake’). Clearly, the empirical adequacy of such an account is undermined by the fact that the events depicted in the se/si-sentences at issue do not always entail complete consumption because they may lack an INCREMENTAL THEME, the distinguishing property of complete consumption. Alternatively, it is proposed that the sentences under analysis represent instances of verbal METAPHORICAL EXTENSION: se/si represents an explicit marker of this cognitive process, which has independently developed from the complete consumptive se/si, and the meaning extension is captured by the general tenets of Conceptual Metaphor Theory (CMT). Two conceptual domains, Source (DS) and target (DT), are related by similarity, assigning an appropriate metaphorical interpretation to DT. The domains paired here are comer/mangiare (DS) and comerse/mangiarsi (DT). The eating event (DS) involves (a) the physical process of xEATER grinding yFOOD-STUFF into pieces and swallowing it; and (b) the aspect of xEATER savoring yFOOD-STUFF and being nurtured by it. In the physical act of eating, xEATER has dominance and exercises his force over yFOOD-STUFF. This general sense of dominance and force is mapped onto DT and is manifested in the ways exemplified in (1) and (2), and many others. According to CMT, two other properties are observed in each pair of DS & DT. First, DS tends to be more physical and concrete and DT more abstract, and systematic mappings are established between constituent elements in DS and those in DT: xEATER corresponds to the element that destroys and yFOOD-STUFF to the element that is destroyed in DT, as exemplified in (1) and (2). Though the metaphorical extension marker se/si appears by far most frequently with comer/mangiare in the corpora, similar systematic mappings are observed in several other verb pairs, for example, jugar/giocare ‘to play (games)’ and jugarse/giocarsi ‘to jeopardize/risk (life, reputation, etc.)’, perder/perdere ‘to lose (an object)’ and perderse/perdersi ‘to miss out on (an event)’, etc. Thus, this study provides evidence that languages may indeed formally mark metaphor using means available to them.

Keywords: complete consumption value, conceptual metaphor, Italian si/Spanish se, metaphorical extension

Procedia PDF Downloads 46
733 Implication of Fractal Kinetics and Diffusion Limited Reaction on Biomass Hydrolysis

Authors: Sibashish Baksi, Ujjaini Sarkar, Sudeshna Saha

Abstract:

In the present study, hydrolysis of Pinus roxburghi wood powder was carried out with Viscozyme, and kinetics of the hydrolysis has been investigated. Finely ground sawdust is submerged into 2% aqueous peroxide solution (pH=11.5) and pretreated through autoclaving, probe sonication, and alkaline peroxide pretreatment. Afterward, the pretreated material is subjected to hydrolysis. A chain of experiments was executed with delignified biomass (50 g/l) and varying enzyme concentrations (24.2–60.5 g/l). In the present study, 14.32 g/l of glucose, along with 7.35 g/l of xylose, have been recovered with a viscozyme concentration of 48.8 g/l and the same condition was treated as optimum condition. Additionally, thermal deactivation of viscozyme has been investigated and found to be gradually decreasing with escalated enzyme loading from 48.4 g/l (dissociation constant= 0.05 h⁻¹) to 60.5 g/l (dissociation constant= 0.02 h⁻¹). The hydrolysis reaction is a pseudo first-order reaction, and therefore, the rate of the hydrolysis can be expressed as a fractal-like kinetic equation that communicates between the product concentration and hydrolytic time t. It is seen that the value of rate constant (K) increases from 0.008 to 0.017 with augmented enzyme concentration from 24.2 g/l to 60.5 g/l. Greater value of K is associated with stronger enzyme binding capacity of the substrate mass. However, escalated concentration of supplied enzyme ensures improved interaction with more substrate molecules resulting in an enhanced de-polymerization of the polymeric sugar chains per unit time which eventually modifies the physiochemical structure of biomass. All fractal dimensions are in between 0 and 1. Lower the value of fractal dimension, more easily the biomass get hydrolyzed. It can be seen that with increased enzyme concentration from 24.2 g/l to 48.4 g/l, the values of fractal dimension go down from 0.1 to 0.044. This indicates that the presence of more enzyme molecules can more easily hydrolyze the substrate. However, an increased value has been observed with a further increment of enzyme concentration to 60.5g/l because of diffusional limitation. It is evident that the hydrolysis reaction system is a heterogeneous organization, and the product formation rate depends strongly on the enzyme diffusion resistances caused by the rate-limiting structures of the substrate-enzyme complex. Value of the rate constant increases from 1.061 to 2.610 with escalated enzyme concentration from 24.2 to 48.4 g/l. As the rate constant is proportional to Fick’s diffusion coefficient, it can be assumed that with a higher concentration of enzyme, a larger amount of enzyme mass dM diffuses into the substrate through the surface dF per unit time dt. Therefore, a higher rate constant value is associated with a faster diffusion of enzyme into the substrate. Regression analysis of time curves with various enzyme concentrations shows that diffusion resistant constant increases from 0.3 to 0.51 for the first two enzyme concentrations and again decreases with enzyme concentration of 60.5 g/l. During diffusion in a differential scale, the enzyme also experiences a greater resistance during diffusion of larger dM through dF in dt.
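One common fractal-like, pseudo-first-order form, P(t) = P_inf(1 - exp(-K t^(1-h))), can be fitted to a product time course as sketched below; the functional form and the glucose data points are illustrative assumptions, not the authors' exact equation or measurements.

```python
# Sketch of fitting one common fractal-like, pseudo-first-order form,
# P(t) = P_inf * (1 - exp(-K * t**(1 - h))), to a glucose time course,
# where K is the rate constant and h (0 <= h < 1) the fractal exponent
# discussed above. The data points are placeholders, not the measured
# hydrolysis curves.
import numpy as np
from scipy.optimize import curve_fit

def fractal_model(t, p_inf, K, h):
    return p_inf * (1.0 - np.exp(-K * np.power(t, 1.0 - h)))

t = np.array([1, 2, 4, 8, 16, 24, 48, 72], dtype=float)          # hours
glucose = np.array([1.9, 3.1, 4.9, 7.2, 9.8, 11.2, 13.4, 14.3])  # g/L, illustrative
popt, _ = curve_fit(fractal_model, t, glucose, p0=[15.0, 0.1, 0.1],
                    bounds=([0, 0, 0], [np.inf, np.inf, 1]))
p_inf, K, h = popt
print(f"P_inf = {p_inf:.1f} g/L, K = {K:.3f}, fractal exponent h = {h:.3f}")
```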

Keywords: viscozyme, glucose, fractal kinetics, thermal deactivation

Procedia PDF Downloads 108
732 Towards a Mandatory Frame of ADR in Divorce Cases: Key Elements from a Comparative Perspective for Belgium

Authors: Celine Jaspers

Abstract:

The Belgian legal system is slowly evolving to mandatory mediation to promote ADR. One of the reasons for this evolution is the lack of use of alternative methods in relation to their possible benefits. Especially in divorce cases, ADR can play a beneficial role in resolving disputes, since the emotional component is very much present. When children are involved, a solution provided by the parent may be more adapted to the child’s best interest than a court order. In the first part, the lack of use of voluntary ADR and the evolution toward mandatory ADR in Belgium will be indicated by sources of legislation, jurisprudence and social-scientific sources, with special attention to divorce cases. One of the reasons is lack of knowledge on ADR, despite the continuing efforts of the Belgian legislator to promote ADR. One of the last acts of ADR-promotion, was the implementation of an Act in 2018 which gives the judge the possibility to refer parties to mediation if at least one party wants to during the judicial procedure. This referral is subject to some conditions. The parties will be sent to a private mediator, recognized by the Federal Mediation Commission, to try to resolve their conflict. This means that at least one party can be mandated to try mediation (indicated as “semi-mandatory mediation”). The main goal is to establish the factors and elements that Belgium has to take into account in their further development of mandatory ADR, with consideration of the human rights perspective and the EU perspective. Furthermore it is also essential to detect some dangerous pitfalls other systems have encountered with their process design. Therefore, the second part, the comparative component, will discuss the existing framework in California, USA to establish the necessary elements, possible pitfalls and considerations the Belgian legislator can take into account when further developing the framework of mandatory ADR. The contrasting and functional method will be used to create key elements and possible pitfalls, to help Belgium improve its existing framework. The existing mandatory system in California has been in place since 1981 and is still up and running, and can thus provide valuable lessons and considerations for the Belgian system. Thirdly, the key elements from a human rights perspective and from a European Union perspective (e.g. the right to access to a judge, the right to privacy) will be discussed too, since the basic human rights and European legislation and jurisprudence play a significant part in Belgian legislation as well. The main sources for this part will be the international and European treaties, legislation, jurisprudence and soft law. In the last and concluding part, the paper will list the most important elements of a mandatory ADR-system design with special attention to the dangers of these elements (e.g. to include or exclude domestic violence cases in the mandatory ADR-framework and the consequences thereof), and with special attention for the necessary the international and European rights, prohibitions and guidelines.

Keywords: Belgium, divorce, framework, mandatory ADR

Procedia PDF Downloads 148
731 The Psychological Specification of Motivation of Managerial Activity

Authors: Laura Petrosyan

Abstract:

High and persistent working results are possible when people are interested in the results of their work. Work motivation may be presented as a complex psychological phenomenon that determines a person's behavior in the working process. Researchers point out that work motivation is displayed in three correlated conditions: interest in the outcomes of the work, satisfaction with the work, and the level of devotion of the employee. The solution of the problem of effective staff management depends on the development of workers' skills; moreover, the above-mentioned problem can be addressed by finding methods that induce employees to work effectively. Motivation for managerial activity arises not only during the working process but also before it starts. During education, the future manager obtains many professional skills. However, experience shows that professional skills alone are not enough for effective work. Presently, one of the global educational problems is the development of motivation for the profession. The psychological literature notes that motivation can be intrinsic or extrinsic. Extrinsic motivation is active only for a short time, whereas intrinsic motivation can remain active throughout the whole process of professional development. Hence, motivation for managerial activity might be developed during education. Future managers choose the profession under some impression of their personal qualities. Detecting future managers' motivation will influence the development of syllabuses; moreover, psychological methods could be employed for preparing motivated managers. The research was conducted in the Public Administration Academy of the RA. The aim of the research was to discover students' motivation for the profession. 102 master's students from the Public Administration Academy took part in the research. The following methods were used: a method for identifying a person's motivation to succeed (T. Elers) and a method for studying students' motivation (T. E. Ilyin). The first method, designed to explore a person's motivational orientation toward success, was introduced by Heckhausen; it reveals the level of motivation to succeed. In the second method, three scales are distinguished: i) acquisition of knowledge, ii) knowledge of the profession, iii) getting a diploma. The data obtained from these tests were quantitative. Analysis of the survey results shows that among the master's students the 'acquisition of knowledge' scale has high average scores, while the average scores for 'knowledge of the profession' and 'getting a diploma' are not high and, furthermore, are almost equal to each other. In the educational process, students acquire skills that are not synthesized with the profession they will practice. The results show that the specialists' real view of the profession is not yet formed.

Keywords: managerial activity, motivation, psychological complicated phenomena, working process, education the future manager

Procedia PDF Downloads 446
730 Bacterial Recovery of Copper Ores

Authors: Zh. Karaulova, D. Baizhigitov

Abstract:

At the Aktogay deposit, the oxidized ore section has been developed since 2015. By now, the reserves of easily enriched ore are decreasing, and a large amount of copper-poor, difficult-to-enrich ore has accumulated in the dumps of the KAZ Minerals Aktogay operation, which is unprofitable to process using traditional mining methods. Hence, another technology needs to be implemented, one that will significantly expand the raw material base of copper production in Kazakhstan and ensure the efficient use of natural resources. Heap and dump bacterial recovery are the most acceptable technologies for processing low-grade secondary copper sulfide ores. The test objects were the copper ores of the Aktogay deposit and the chemolithotrophic bacteria Leptospirillum ferrooxidans (L.f.), Acidithiobacillus caldus (A.c.) and Sulfobacillus acidophilus (S.a.), which were used as mixed cultures in the bacterial oxidation systems. They remain active in the 20-40 °C temperature range. These bacteria are the most extensively studied and widely used in sulfide mineral recovery technology. Biocatalytic acceleration is achieved because the bacteria oxidize iron sulfides to form ferrous sulfate, which is subsequently oxidized to ferric sulfate. The following results have been achieved at the initial stage: the goal was to grow and maintain the activity of the bacterial cultures under laboratory conditions. These bacteria grew best within the pH 1.2-1.8 range, with light stirring and in an aerated environment. The optimal growth temperature was 30-33 °C; the growth rate decreased by one half for each 4-5 °C fall in temperature below 30 °C. At best, the number of bacteria doubled every 24 hours. Typically, the maximum concentration of cells that can be grown in ferrous solution is about 10⁷/ml. A further step researched in this case was the adaptation of the microorganisms to the environment of certain metals, followed by mass production of the inoculum and its maintenance for further cultivation at factory scale. This was done by adding sulfide concentrate, allowing the bacteria to convert the ferrous sulfate as indicated by the Eh (>600 mV), then diluting to double the volume and adding concentrate to achieve the same metal level. This process was repeated until the desired metal level and volume were achieved. The final stage of bacterial recovery was the transportation and irrigation of the secondary sulfide copper ores of the oxidized ore section. In conclusion, the project was implemented at the Aktogay mine, since the bioleaching process is prolonged. In addition, the bacterial recovery method may compete well with existing non-biological methods of extracting metals from ores.

Keywords: bacterial recovery, copper ore, bioleaching, bacterial inoculum

Procedia PDF Downloads 68
729 Using Passive Cooling Strategies to Reduce Thermal Cooling Load for Coastal High-Rise Buildings of Jeddah, Saudi Arabia

Authors: Ahmad Zamzam

Abstract:

With the development of its economy in recent years, Saudi Arabia has maintained high economic growth, and its energy consumption has therefore increased dramatically. This economic growth is reflected in the expansion of high-rise construction: the Jeddah coastal strip (corniche) has many high-rise buildings planned to start in the next few years, and these projects require a massive amount of electricity that the old infrastructure was not planned to supply. This research studies the effect of the building envelope on thermal performance. It follows a parametric simulation methodology using Ecotect software to analyze the effect of building envelope design on the cooling energy load of an office high-rise building in Jeddah, Saudi Arabia, covering building geometrical form, massing treatments, orientation and glazing type. The research describes an integrated passive design approach to reduce the cooling requirement of a high-rise building through an improved building envelope design. Four simulation studies were carried out in Ecotect. The first simulation compares the thermal performance of five high-rise buildings, each representing a basic plan shape; all the buildings have the same plan area and the same floor height, and the goal is to find the best shape for thermal performance. The second simulation studies the effect of orientation on thermal performance by rotating the same building model to find the best and the worst angles for the building's thermal performance. The third simulation studies the effect of massing treatment on the total cooling load by comparing five models with different massing treatments but the same total built-up area. The last simulation studies the effect of glazing type by comparing the total cooling load of the same building with five different glass types, and it also examines the feasibility of these glass types by considering the glass cost. The results indicate that using a circular plan shape could reduce the thermal cooling load by 40%, and that using shading devices could reduce the cooling load by 5%. The study states that massing grooves, recesses or any treatment that increases the exposed outer surface is not preferred, as it decreases the building's thermal performance. The results also show that the best direction for glazing and openings from a thermal performance viewpoint in Jeddah is north, while the worst is east. The best orientation angle for openings regarding thermal performance in Jeddah is 15 deg west, and the worst is 250 deg west (110 deg east). Regarding the glazing type, compared to the air-filled double glazing reference case, double glazing with low-e coating and air fill would save 14% of the required thermal cooling load annually; argon fill and triple glazing would save 16% and 17% of the total thermal cooling load, respectively, but from a cost perspective, argon fill and triple glazing are not feasible.
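As a rough illustration of why glazing choice matters, the sketch below compares steady-state conduction gains Q = U·A·ΔT for four glazing options using typical catalogue U-values; these values are assumptions for illustration only, and the calculation ignores solar gains, which the Ecotect simulations also account for.

```python
# Back-of-the-envelope sketch of the glazing comparison: steady-state
# conduction gain Q = U * A * dT for a fixed glazed area and indoor/outdoor
# temperature difference. The U-values are typical catalogue figures assumed
# for illustration; they are not taken from the Ecotect simulations above.
GLAZED_AREA_M2 = 2000.0        # assumed curtain-wall area of one facade
DELTA_T_K = 12.0               # assumed outdoor-indoor temperature difference

u_values = {                   # W/(m2.K), assumed typical values
    "double, air fill": 2.8,
    "double, air + low-e": 1.8,
    "double, argon + low-e": 1.6,
    "triple": 1.4,
}

base_w = u_values["double, air fill"] * GLAZED_AREA_M2 * DELTA_T_K
for name, u in u_values.items():
    q_w = u * GLAZED_AREA_M2 * DELTA_T_K
    saving = 100.0 * (1.0 - q_w / base_w)
    print(f"{name:22s} conduction gain {q_w / 1000:6.1f} kW  ({saving:4.1f} % less than base)")
```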

Keywords: passive cooling, reduce thermal load, Jeddah, building shape, energy

Procedia PDF Downloads 124
728 Investigating the Effect of Plant Root Exudates and of Saponin on Polycyclic Aromatic Hydrocarbons Solubilization in Brownfield Contaminated Soils

Authors: Marie Davin, Marie-Laure Fauconnier, Gilles Colinet

Abstract:

In Wallonia, there are 6,000 estimated brownfields (rising to over 3.5 million in Europe) that require remediation. Polycyclic Aromatic Hydrocarbons (PAHs) are a class of recalcitrant carcinogenic/mutagenic organic compounds of major concern as they accumulate in the environment and represent 17% of all encountered pollutants. As an alternative to environmentally aggressive, expensive and often disruptive soil remediation strategies, a lot of research has been directed to developing techniques targeting organic pollutants. The following experiment, based on the observation that PAHs soil content decreases in the presence of plants, aimed at improving our understanding of the underlying mechanisms involved in phytoremediation. It focusses on plant root exudates and whether they improve PAHs solubilization, which would make them more available for bioremediation by soil microorganisms. The effect of saponin, a natural surfactant found in some plant roots such as members of the Fabaceae family, on PAHs solubilization was also investigated as part of the implementation of the experimental protocol. The experiments were conducted on soil collected from a brownfield in Saint-Ghislain (Belgium) and presenting weathered PAHs contamination. Samples of soil were extracted with different solutions containing either plant root exudates or commercial saponin. Extracted PAHs were determined in the different aqueous solutions using High-Performance Liquid Chromatography and Fluorimetric Detection (HPLC-FLD). Both root exudates of alfalfa (Medicago sativa L.) or red clover (Trifolium pratense L.) and commercial saponin were tested in different concentrations. Distilled water was used as a control. First of all, results show that PAHs are more extracted using saponin solutions than distilled water and that the amounts generally rise with the saponin concentration. However, the amount of each extracted compound diminishes as its molecular weight rises. Also, it appears that passed a certain surfactant concentration, PAHs are less extracted. This suggests that saponin might be investigated as a washing agent in polluted soil remediation techniques, either for ex-situ or in-situ treatments, as an alternative to synthetic surfactants. On the other hand, preliminary results on experiments using plant root exudates also show differences in PAHs solubilization compared to the control solution. Further results will allow discussion as to whether or not there are differences according to the exudates provenance and concentrations.

Keywords: brownfield, Medicago sativa, phytoremediation, polycyclic aromatic hydrocarbons, root exudates, saponin, solubilization, Trifolium pratense

Procedia PDF Downloads 248
727 Concentrated Whey Protein Drink with Orange Flavor: Protein Modification and Formulation

Authors: Shahram Naghizadeh Raeisi, Ali Alghooneh

Abstract:

The application of whey protein in drink industry to enhance the nutritional value of the products is important. Furthermore, the gelification of protein during thermal treatment and shelf life makes some limitations in its application. So, the main goal of this research is manufacturing of high concentrate whey protein orange drink with appropriate shelf life. In this way, whey protein was 5 to 30% hydrolyzed ( in 5 percent intervals at six stages), then thermal stability of samples with 10% concentration of protein was tested in acidic condition (T= 90 °C, pH=4.2, 5 minutes ) and neutral condition (T=120° C, pH:6.7, 20 minutes.) Furthermore, to study the shelf life of heat treated samples in 4 months at 4 and 24 °C, the time sweep rheological test were done. At neutral conditions, 5 to 20% hydrolyzed sample showed gelling during thermal treatment, whereas at acidic condition, was happened only in 5 to 10 percent hydrolyzed samples. This phenomenon could be related to the difference in hydrodynamic radius and zeta potential of samples with different level of hydrolyzation at acidic and neutral conditions. To study the gelification of heat resistant protein solutions during shelf life, for 4 months with 7 days intervals, the time sweep analysis were performed. Cross over was observed for all heat resistant neutral samples at both storage temperature, while in heat resistant acidic samples with degree of hydrolysis, 25 and 30 percentage at 4 and 20 °C, it was not seen. It could be concluded that the former sample was stable during heat treatment and 4 months storage, which made them a good choice for manufacturing high protein drinks. The Scheffe polynomial model and numerical optimization were employed for modeling and high protein orange drink formula optimization. Scheffe model significantly predicted the overal acceptance index (Pvalue<0.05) of sensorial analysis. The coefficient of determination (R2) of 0.94, the adjusted coefficient of determination (R2Adj) of 0.90, insignificance of the lack-of-fit test and F value of 64.21 showed the accuracy of the model. Moreover, the coefficient of variable (C.V) was 6.8% which suggested the replicability of the experimental data. The desirability function had been achieved to be 0.89, which indicates the high accuracy of optimization. The optimum formulation was found as following: Modified whey protein solution (65.30%), natural orange juice (33.50%), stevia sweetener (0.05%), orange peel oil (0.15%) and citric acid (1 %), respectively. Its worth mentioning that this study made an appropriate model for application of whey protein in drink industry without bitter flavor and gelification during heat treatment and shelf life.

Keywords: crossover, orange beverage, protein modification, optimization

Procedia PDF Downloads 59
726 An Improved Atmospheric Correction Method with Diurnal Temperature Cycle Model for MSG-SEVIRI TIR Data under Clear Sky Condition

Authors: Caixia Gao, Chuanrong Li, Lingli Tang, Lingling Ma, Yonggang Qian, Ning Wang

Abstract:

Knowledge of land surface temperature (LST) is of crucial important in energy balance studies and environment modeling. Satellite thermal infrared (TIR) imagery is the primary source for retrieving LST at the regional and global scales. Due to the combination of atmosphere and land surface of received radiance by TIR sensors, atmospheric effect correction has to be performed to remove the atmospheric transmittance and upwelling radiance. Spinning Enhanced Visible and Infrared Imager (SEVIRI) onboard Meteosat Second Generation (MSG) provides measurements every 15 minutes in 12 spectral channels covering from visible to infrared spectrum at fixed view angles with 3km pixel size at nadir, offering new and unique capabilities for LST, LSE measurements. However, due to its high temporal resolution, the atmosphere correction could not be performed with radiosonde profiles or reanalysis data since these profiles are not available at all SEVIRI TIR image acquisition times. To solve this problem, a two-part six-parameter semi-empirical diurnal temperature cycle (DTC) model has been applied to the temporal interpolation of ECMWF reanalysis data. Due to the fact that the DTC model is underdetermined with ECMWF data at four synoptic times (UTC times: 00:00, 06:00, 12:00, 18:00) in one day for each location, some approaches are adopted in this study. It is well known that the atmospheric transmittance and upwelling radiance has a relationship with water vapour content (WVC). With the aid of simulated data, the relationship could be determined under each viewing zenith angle for each SEVIRI TIR channel. Thus, the atmospheric transmittance and upwelling radiance are preliminary removed with the aid of instantaneous WVC, which is retrieved from the brightness temperature in the SEVIRI channels 5, 9 and 10, and a group of the brightness temperatures for surface leaving radiance (Tg) are acquired. Subsequently, a group of the six parameters of the DTC model is fitted with these Tg by a Levenberg-Marquardt least squares algorithm (denoted as DTC model 1). Although the retrieval error of WVC and the approximate relationships between WVC and atmospheric parameters would induce some uncertainties, this would not significantly affect the determination of the three parameters, td, ts and β (β is the angular frequency, td is the time where the Tg reaches its maximum, ts is the starting time of attenuation) in DTC model. Furthermore, due to the large fluctuation in temperature and the inaccuracy of the DTC model around sunrise, SEVIRI measurements from two hours before sunrise to two hours after sunrise are excluded. With the knowledge of td , ts, and β, a new DTC model (denoted as DTC model 2) is accurately fitted again with these Tg at UTC times: 05:57, 11:57, 17:57 and 23:57, which is atmospherically corrected with ECMWF data. And then a new group of the six parameters of the DTC model is generated and subsequently, the Tg at any given times are acquired. Finally, this method is applied to SEVIRI data in channel 9 successfully. The result shows that the proposed method could be performed reasonably without assumption and the Tg derived with the improved method is much more consistent with that from radiosonde measurements.
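A sketch of fitting a two-branch DTC model (cosine daytime branch, exponential nocturnal decay) with a Levenberg-Marquardt solver is given below; the parameterisation and the synthetic Tg series are illustrative assumptions and may differ from the authors' exact six-parameter formulation.

```python
# Illustrative fit of a two-branch, six-parameter DTC model (cosine daytime
# branch for t < ts, exponential decay toward the residual temperature after
# ts) with a Levenberg-Marquardt least-squares solver. The parameterisation
# may differ from the authors'; the brightness temperatures are synthetic
# placeholders standing in for the atmospherically corrected Tg series.
import numpy as np
from scipy.optimize import curve_fit

def dtc(t, T0, Ta, omega, tm, ts, k):
    """t in hours (UTC): cosine daytime branch, exponential nocturnal decay."""
    day = T0 + Ta * np.cos(np.pi / omega * (t - tm))
    T_s = T0 + Ta * np.cos(np.pi / omega * (ts - tm))   # value at the attenuation start
    night = T0 + (T_s - T0) * np.exp(-(t - ts) / k)
    return np.where(t < ts, day, night)

t_obs = np.arange(6.0, 30.0, 0.25)                      # one diurnal cycle, 15-min steps
true = dtc(t_obs, 295.0, 14.0, 12.0, 13.0, 18.5, 4.0)
tg = true + np.random.normal(0.0, 0.4, t_obs.size)      # synthetic noisy Tg

p0 = [290.0, 10.0, 12.0, 13.0, 18.0, 3.0]               # T0, Ta, omega, tm, ts, k
popt, _ = curve_fit(dtc, t_obs, tg, p0=p0, method="lm") # Levenberg-Marquardt
print(dict(zip(["T0", "Ta", "omega", "tm", "ts", "k"], np.round(popt, 2))))
```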

Keywords: atmosphere correction, diurnal temperature cycle model, land surface temperature, SEVIRI

Procedia PDF Downloads 266
725 Influencing Factors on Stability of Shale with Silt Layers at Slopes

Authors: A. K. M. Badrul Alam, Yoshiaki Fujii, Nahid Hasan Dipu, Shakil Ahmed Razo

Abstract:

Shale rockmasses often include silt layers, impacting slope stability in construction and mining. Analyzing their interaction is crucial for long-term stability. A study used an elastoplastic model, incorporating the stress transfer method and Coulomb's criterion, to assess a shale rock mass with silt layers. It computed stress distribution, assessed failure potential, and identified vulnerable regions where nodal forces were calculated for a comprehensive analysis. A shale rock mass ranging from 14.75 to 16.75 meters thick, with silt layers varying from 0.36 to 0.5 meters, was considered in the model. It examined four silt layer conditions: horizontal (SiHL), vertical (SiVL), inclined against slope (SiIincAGS), and along slope (SilincALO). Mechanical parameters like uniaxial compressive strength (UCS), tensile strength (TS), Young’s modulus (E), Poisson’s ratio, and density were adjusted for varied scenarios: UCS (0.5 to 5 MPa), TS (0.1 to 1 MPa), and E (6 to 60 MPa). In elastic analysis of shale rock masses, stress distributions vary based on layer properties. When shale and silt layers have the same elasticity modulus (E), stress concentrates at corners. If the silt layer has a lower E than shale, marginal changes in maximum stress (σmax) occur for SilHL. A decrease in σmax is evident at SilVL. Slight variations in σmax are observed for SilincAGS and SilincALO. In the elastoplastic analysis, the overall decrease of 20%, 40%, 60%, 80%, and 90% was considered. For SilHL:(i) Same E, UCS, and TS for silt layer and shale, UCS/TS ratio 5: strength decrease led to shear (S), tension then shear (T then S) failure; noticeable failure at 60% decrease, significant at 80%, collapse at 90%. (ii) Lower E for silt layer, same strength as shale: No significant differences. (iii) Lower E and UCS, silt layer strength 1/10: No significant differences. For SilVL: (i) Same E, UCS, and TS for silt layer and shale, UCS/TS ratio 5: Similar effects as SilHL. (ii) Lower E for silt layer, same strength as shale: Slip occurred. (iii) Lower E and UCS, silt layer strength 1/10: Bitension failure also observed with larger slip. For SilincAGS: (i) Same E, UCS, and TS for silt layer and shale, UCS/TS ratio 5: Effects similar to SilHL. (ii) Lower E for silt layer, same strength as shale: Slip occurred. (iii) Lower E and UCS, silt layer strength 1/10: Tension failure also observed with larger slip. For SilincALO: (i) Same E, UCS, and TS for silt layer and shale, UCS/TS ratio 5: Similar to SilHL with tension failure. (ii) Lower E for silt layer, same strength as shale: No significant differences; failure diverged. (iii) Lower E and UCS, silt layer strength 1/10: Bitension failure also observed with larger slip; failure diverged. Toppling failure was observed for lower E cases of SilVL and SilincAGS. The presence of silt interlayers in shale greatly impacts slope stability. Designing slopes requires careful consideration of both the silt and shale's mechanical properties. The temporal degradation of strength in these layers is a major concern. Thus, slope design must comprehensively analyze the immediate and long-term mechanical behavior of interlayer silt and shale to effectively mitigate instability.
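As an illustration of the element-level check behind the elastoplastic analysis, the sketch below applies Coulomb's criterion with a simple tension cut-off, the two failure modes reported above; the cohesion, friction angle and stresses are placeholder values, not parameters derived from the stated UCS/TS ranges.

```python
# Sketch of the element-level failure check behind the elastoplastic analysis:
# Coulomb's shear criterion with a simple tension cut-off, matching the shear
# and tension failure modes reported above. Cohesion, friction angle and the
# stress state are illustrative values, not back-calculated parameters.
import numpy as np

def failure_mode(sigma_n, tau, c, phi_deg, tensile_strength):
    """sigma_n, tau in MPa (compression positive); returns the predicted mode."""
    if sigma_n < -tensile_strength:                 # tension cut-off
        return "tension failure"
    shear_strength = c + max(sigma_n, 0.0) * np.tan(np.radians(phi_deg))
    return "shear failure" if abs(tau) > shear_strength else "elastic"

# illustrative check for a point in the silt layer
print(failure_mode(sigma_n=0.3, tau=0.45, c=0.2, phi_deg=30.0, tensile_strength=0.1))
```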

Keywords: shale rock masses, silt layers, slope stability, elasto-plastic model, temporal degradation

Procedia PDF Downloads 51
724 Clustering Locations of Textile and Garment Industries to Compare with the Future Industrial Cluster in Thailand

Authors: Kanogkan Leerojanaprapa

Abstract:

The textile and garment industry used to be a major exporting industry of Thailand. Owing to the loss of the nation's price competitiveness after the ending of the EU's GSP (Generalised Scheme of Preferences) and the ‘Nationwide Minimum Wage Policy’, under which Thailand’s employers must pay all employees at least 300 baht (about $10) a day, the supply chains of the Thai textile and garment industry are affected and need to be reformed. It is therefore a concern whether the Thai textile and garment industries will continue to exist. This is also a challenge for the government in deciding which industries should be promoted as the future industries of Thailand. Recently, the Thai government launched the Cluster-based Special Economic Development Zones Policy for promoting business clusters (effective September 16, 2015). It defines a cluster as the concentration of interconnected businesses and related institutions that operate within the same geographic areas; textiles and garments form one of the target industrial clusters, and 9 provinces are targeted (Bangkok, Kanchanaburi, Nakhon Pathom, Ratchaburi, Samut Sakhon, Chonburi, Chachoengsao, Prachinburi, and Sa Kaeo). The cluster zone is defined to link the west-east corridor connecting manufacturing sources in Cambodia and Myanmar to Bangkok, which is promoted as a design, sourcing, and trading hub. The Thai government will provide tax and non-tax incentives for targeted industries within the clusters and expects these businesses to locate where they can gain the most benefit, which will identify the future industrial cluster. This research shows the difference between the current cluster and the future cluster defined by the target provinces for textiles and garments. The current cluster is analysed from secondary data, as sketched below. Four characteristics, namely the numbers of plants in spinning, weaving and finishing of textiles; manufacture of made-up textile articles, except apparel; manufacture of knitted and crocheted fabrics; and manufacture of other textiles, not elsewhere classified, across 77 provinces in total, are clustered by K-means cluster analysis and hierarchical cluster analysis. In addition, the clusters are confirmed, and the variables that contribute the most to the cluster solution are identified, with an ANOVA test. The results of the analysis assign the 22 provinces in which textile or garment plants are located to 3 clusters. Cluster 1 has a large number of plants and comprises only Bangkok; cluster 2 has a moderate number of plants and comprises Samut Prakan, Samut Sakhon and Nakhon Pathom; and cluster 3 has a small number of plants and comprises the other 18 provinces. The same methodology can be applied to other industries in future studies.
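
The clustering and ANOVA workflow described above could look roughly like the sketch below; the plant counts, province subset and column names are hypothetical placeholders, not the study's data:

import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans, AgglomerativeClustering
from scipy.stats import f_oneway

# Hypothetical plant counts per province for the four textile/garment activities.
df = pd.DataFrame({
    'spinning_weaving_finishing': [420, 60, 55, 40, 8, 5, 3],
    'made_up_textiles':           [310, 45, 50, 30, 6, 4, 2],
    'knitted_crocheted':          [150, 25, 20, 15, 3, 2, 1],
    'other_textiles':             [200, 35, 30, 20, 5, 3, 2],
}, index=['Bangkok', 'Samut Prakan', 'Samut Sakhon', 'Nakhon Pathom',
          'Ratchaburi', 'Chonburi', 'Sa Kaeo'])

X = StandardScaler().fit_transform(df)
df['kmeans'] = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
df['hier'] = AgglomerativeClustering(n_clusters=3, linkage='ward').fit_predict(X)

# One-way ANOVA per variable: which plant counts differ most across the K-means clusters?
for col in df.columns[:4]:
    groups = [g[col].values for _, g in df.groupby('kmeans')]
    print(col, f_oneway(*groups))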

Keywords: ANOVA, hierarchical cluster analysis, industrial clusters, K-means cluster analysis, textile and garment industry

Procedia PDF Downloads 211
723 The Use of Flipped Classroom as a Teaching Method in a Professional Master's Program in Network, in Brazil

Authors: Carla Teixeira, Diana Azevedo, Jonatas Bessa, Maria Guilam

Abstract:

The flipped classroom is a blended learning modality that combines face-to-face activities and virtual self-learning activities, mediated by digital information and communication technologies, which reverses traditional teaching approaches and presupposes the previous study of contents by students. In the following face-to-face activities, the contents are discussed, producing active learning. This work aims to describe the systematization process of the use of flipped classrooms as a method to develop complementary national activities in PROFSAÚDE, a professional master's program in the area of public health, offered as a distance learning course, in a network, in Brazil. The complementary national activities were organized with the objective of strengthening and qualifying the students' learning process. The network gathers twenty-two public institutions of higher education in the country. Its national coordination conducted a survey to detect complementary educational needs, expected to improve the formative process and align important content for the program nationally. The activities were organized both asynchronously, by making study materials available in Google Classroom, and synchronously, in a telepresence format organized on virtual platforms to reach the largest number of students in the country. The asynchronous activities allowed each student to study at their own pace, and the synchronous activities were intended for deepening and reflecting on the themes. The national team identified some professors' areas of expertise; these professors were contacted for the production of audiovisual content such as video classes and podcasts, for guidance on supporting bibliographic materials, and also to conduct synchronous activities together with the technical team. The contents posted in the virtual classroom were organized into modules and made available before the synchronous meeting; these modules, in turn, contain “pills of experience” that correspond to reports of teachers' experiences in relation to the different themes. In addition, an activity was proposed, with questions aimed at exposing doubts about the contents and a learning challenge as a practical exercise. Synchronous activities are built with different invited teachers, based on the participants' discussions, and are the forum where teachers can answer students' questions, providing feedback on the learning process. At the end of each complementary activity, an evaluation questionnaire is made available. Analysis of the responses shows that this institutional network experience, as a pedagogical innovation, provides important tools to support teaching and research due to its potential for the participatory construction of learning, optimization of resources, democratization of knowledge, and sharing and strengthening of practical experiences across the network. One of its relevant aspects was the thematic diversity addressed through this method.

Keywords: active learning, flipped classroom, network education experience, pedagogic innovation

Procedia PDF Downloads 158
722 Study on Changes of Land Use impacting the Process of Urbanization, by Using Landsat Data in African Regions: A Case Study in Kigali, Rwanda

Authors: Delphine Mukaneza, Lin Qiao, Wang Pengxin, Li Yan, Chen Yingyi

Abstract:

Human activities make land use and land cover gradually change or transition. In this study, we examined the use of Landsat TM data to detect the land use change of Kigali between 1987 and 2009 using remote sensing techniques and data analysis with ENVI and the GIS software ArcGIS. Six categories of land use were distinguished: bare soil, built-up land, wetland, water, vegetation, and others. With remote sensing techniques, we analyzed land use data for 1987, 1999 and 2009; changed areas were identified, revealing a dynamic land use situation in Kigali city during the 22 years studied. Based on the relevant Landsat data, the research focused on land use change and the role of remote sensing in the process of urbanization. The results show a rapid increase in built-up land between 1987 and 1999 and a large decrease in vegetation caused by the rebuilding of the city after the 1994 genocide, while in the period from 1999 to 2009 there was a reduction in built-up land and vegetation after the authority of Kigali city established a Master Plan, under which all constructions that were not within the scope of the Master Plan were demolished. Through the expansion of its urban area, Rwanda's capital, Kigali City, is increasing the internal employment rate and attracting business investors and the service sector to improve the economy, which will increase population growth and provide a better life. The overall planning of the city of Kigali considers the environment, land use, infrastructure, cultural and socio-economic factors, economic development and population forecasts, urban development, and constraint specification. To achieve the above purpose, the Government has set out, for the overall planning of Kigali city, different stages of detailed design descriptions, strategies and action plans that will guide Kigali planners and members of the public in the future toward more detailed regional plans and practical measures. Thus, land use change significantly reflects human activity in Kigali, and it plays an important role when the country takes certain decisions. Another aspect to take into account is the natural situation of Kigali city. Agriculture in the region does not occupy a dominant position, and with population growth and socio-economic development, the construction area will gradually expand and speed up the process of urbanization. As a developing country, Rwanda's population continues to grow, while the rate of land utilization is low and urbanization remains low. As mentioned earlier, the 1994 genocide massacres, population growth and urbanization processes have been the factors driving the dramatic changes in land use. Further research will focus on the analysis of Rwanda's natural resources and the social and economic factors that could be the driving forces of land use change.
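
A minimal sketch of the change-detection step is given below: two classified rasters are cross-tabulated into a class transition matrix, from which per-class gains and losses follow; the random class maps only stand in for the actual Landsat classification results:

import numpy as np
import pandas as pd

classes = ['bare soil', 'built up', 'wetland', 'water', 'vegetation', 'others']

# Hypothetical classified rasters for two dates (class indices 0-5); in practice
# these would come from the ENVI/ArcGIS classification of the Landsat scenes.
rng = np.random.default_rng(0)
lc_1999 = rng.integers(0, 6, size=(100, 100))
lc_2009 = rng.integers(0, 6, size=(100, 100))

# Transition matrix: rows = 1999 class, columns = 2009 class, values = pixel counts.
change = pd.crosstab(pd.Series(lc_1999.ravel(), name='1999'),
                     pd.Series(lc_2009.ravel(), name='2009'))
change.index = change.columns = classes
print(change)

# Example: vegetation pixels converted to any other class between the two dates.
veg = classes.index('vegetation')
print('vegetation pixels converted:', change.iloc[veg].drop('vegetation').sum())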

Keywords: land use change, urbanization, Kigali City, Landsat

Procedia PDF Downloads 304
721 Transient Heat Transfer: Experimental Investigation near the Critical Point

Authors: Andreas Kohlhepp, Gerrit Schatte, Wieland Christoph, Spliethoff Hartmut

Abstract:

In recent years, research on the heat transfer phenomena of water and other working fluids near the critical point has experienced growing interest for power engineering applications. To match the highly volatile characteristics of renewable energies, conventional power plants need to shift towards flexible operation. This requires speeding up the load change dynamics of steam generators and their heating surfaces near the critical point. In dynamic load transients, both a high heat flux with an unfavorable ratio to the mass flux and a large difference between fluid and wall temperatures may cause problems. They may lead to deteriorated heat transfer (at supercritical pressures) or to dry-out or departure from nucleate boiling (at subcritical pressures), all cases leading to an excessive rise in temperature. For relevant technical applications, the heat transfer coefficients need to be predicted correctly in transient scenarios to prevent damage to the heated surfaces (membrane walls, tube bundles or fuel rods). In transient processes, the state-of-the-art method of calculating the heat transfer coefficients is to apply a multitude of different steady-state correlations to the momentarily existing local parameters at each time step. This approach does not necessarily reflect the different cases that may lead to a significant variation of the heat transfer coefficients and shows gaps in the individual ranges of validity. An algorithm was implemented to calculate the transient behavior of steam generators during load changes. It is used to assess existing correlations for transient heat transfer calculations. It is also desirable to validate the calculation using experimental data. By means of a new full-scale supercritical thermo-hydraulic test rig, experimental data are obtained to describe the transient phenomena under the dynamic boundary conditions mentioned above and to serve for the validation of transient steam generator calculations. Aiming to improve correlations for the prediction of the onset of deteriorated heat transfer in both stationary and transient cases, the test rig was specially designed for this task. It is a closed-loop design with a directly electrically heated evaporation tube; the total heating power of the evaporator tube and the preheater is 1 MW. To allow a wide range of parameters, including supercritical pressures, the maximum pressure rating is 380 bar. The measurements cover the most important extrinsic thermo-hydraulic parameters. Moreover, a high geometric resolution allows the local heat transfer coefficients and fluid enthalpies to be predicted accurately.
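
The quasi-steady approach discussed above can be sketched as re-evaluating a steady-state correlation at every time step of a load transient. The snippet below uses Dittus-Boelter as a stand-in correlation with assumed constant fluid properties (both are simplifying assumptions; near the critical point the properties vary strongly, which is exactly where such correlations show gaps):

import numpy as np

def htc_dittus_boelter(m_dot, d, mu, k, cp):
    # Single-phase heat transfer coefficient from a steady-state correlation,
    # used here only as a stand-in for the correlation package.
    re = 4.0 * m_dot / (np.pi * d * mu)   # Reynolds number in a circular tube
    pr = cp * mu / k                      # Prandtl number
    nu = 0.023 * re**0.8 * pr**0.4        # Nusselt number (fluid being heated)
    return nu * k / d                     # W/(m^2 K)

# Quasi-steady evaluation over a load transient (illustrative values).
d = 0.02                                    # tube inner diameter, m
time = np.linspace(0.0, 600.0, 61)          # s
m_dot = np.linspace(0.30, 0.15, time.size)  # decreasing mass flow, kg/s
mu, k, cp = 8.0e-5, 0.45, 6500.0            # assumed constant local fluid properties

h = [htc_dittus_boelter(m, d, mu, k, cp) for m in m_dot]
print(f'h(t=0) = {h[0]:.0f} W/m2K, h(t=end) = {h[-1]:.0f} W/m2K')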

Keywords: departure from nucleate boiling, deteriorated heat transfer, dryout, supercritical working fluid, transient operation of steam generators

Procedia PDF Downloads 217
720 Analysis of Tilting Cause of a Residential Building in Durres by the Use of CPTU Test

Authors: Neritan Shkodrani

Abstract:

On November 26, 2019, an earthquake hit the central western part of Albania. It was assessed as Mw 6.4. Its epicenter was located offshore north-western Durrës, about 7 km north of the city. In this paper, the consequences of settlements of very soft soils are discussed for the case of a residential building, referred to as the “K Building”, which suffered significant tilting after the earthquake. The “K Building” is an RC framed building with 12+1 (basement) stories and a floor area of 21,000 m2. The construction of the building was completed in 2012. The “K Building”, located in Durrës city, suffered severe non-structural damage during the November 26, 2019 Durrës earthquake sequence. During the on-site inspections immediately after the earthquake, the general condition of the building, the presence of observable settlements of the ground, and the crack situation in the structure were determined, and damage inspections were performed. It was significant to note that the “K Building” presented a tilting that was initially believed to be attributable partially to the failure of the columns of the ground floor and partially to liquefaction phenomena, but the building did not collapse. At first it was not clear whether the foundation had a bearing capacity failure or whether it failed because of soil liquefaction. Geotechnical soil investigations using the CPTU test were executed, and their data are used to evaluate the bearing capacity, the consolidation settlement of the mat foundation, and soil liquefaction, since these were believed to be the main reasons for the building's tilting. The geotechnical soil investigation consists of 5 (five) static cone penetration tests with pore pressure measurement (piezocone tests). They reached penetration depths of 20.0 m to 30.0 m and clearly show the presence of very soft and organic soils in the soil profile of the site. Geotechnical CPT-based analyses of bearing capacity, consolidation, and secondary settlement are applied, and the results are reported for each test. These results show very small values of allowable bearing capacity and very high values of consolidation and secondary settlement. The liquefaction analysis based on the data of the CPTU tests and the characteristics of the ground shaking of the mentioned earthquake has shown the possibility of liquefaction for some layers of the considered soil profile, but the estimated vertical settlements are within a small range and clearly show that the main reason for the building's tilting was not related to the consequences of liquefaction but was an existing settlement caused by the applied bearing pressure of this building. All the CPTU tests were carried out in August 2021, almost two years after the November 26, 2019 Durrës earthquake and after the building itself had been demolished. After the removal of the mat foundation in September 2021, it was possible to carry out CPTU tests even on the footprint of the existing building, which made it possible to observe the effects of the long-term applied foundation bearing pressure on the consolidation of the considered soil profile.
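
For the liquefaction part of such an assessment, a minimal sketch of the simplified cyclic stress ratio and factor of safety is shown below; the stresses, peak ground acceleration and CPT-derived cyclic resistance ratio are illustrative assumptions, not values from the Durrës site:

def cyclic_stress_ratio(a_max_g, sigma_v, sigma_v_eff, depth_m):
    # Simplified Seed-Idriss cyclic stress ratio with the Liao-Whitman
    # approximation for the stress reduction coefficient rd.
    rd = 1.0 - 0.00765 * depth_m if depth_m <= 9.15 else 1.174 - 0.0267 * depth_m
    return 0.65 * a_max_g * (sigma_v / sigma_v_eff) * rd

# Illustrative layer at 8 m depth: total/effective vertical stresses in kPa and
# an assumed peak ground acceleration of 0.2 g.
csr = cyclic_stress_ratio(a_max_g=0.20, sigma_v=150.0, sigma_v_eff=90.0, depth_m=8.0)
crr = 0.18   # cyclic resistance ratio taken from a CPT-based correlation (assumed value)
print(f'CSR = {csr:.3f}, FS = {crr / csr:.2f}')   # FS < 1 suggests liquefaction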

Keywords: bearing capacity, cone penetration test, consolidation settlement, secondary settlement, soil liquefaction, etc

Procedia PDF Downloads 94
719 Modeling of in 738 LC Alloy Mechanical Properties Based on Microstructural Evolution Simulations for Different Heat Treatment Conditions

Authors: M. Tarik Boyraz, M. Bilge Imer

Abstract:

Conventionally cast nickel-based superalloys, such as the commercial alloy IN 738 LC, are widely used in the manufacture of industrial gas turbine blades. With a carefully designed microstructure and the presence of alloying elements, the blades show improved mechanical properties at high operating temperatures and in corrosive environments. The aim of this work is to model and estimate these mechanical properties of the IN 738 LC alloy solely on the basis of simulations for projected heat treatment or service conditions. The microstructure of IN 738 LC (the size, fraction and frequency of the gamma prime (γ′) and carbide phases in the gamma (γ) matrix, and the grain size) needs to be optimized by the heat treatment process to improve the high-temperature mechanical properties. This process can be performed at different soaking temperatures, times and cooling rates. In this work, microstructural evolution studies were performed experimentally under various heat treatment conditions, and these findings were used as input for further simulation studies. The operation time, soaking temperature and cooling rate provided by the experimental heat treatment procedures were used as input for the microstructural simulation. The results of this simulation were compared with the size, fraction and frequency of the γ′ and carbide phases and the grain size provided by SEM (EDS module and mapping), EPMA (WDS module) and optical microscopy before and after heat treatment. After iterative comparison of the experimental findings and the simulations, an offset was determined to fit the measured and theoretical findings. Thereby, it was possible to estimate the final microstructure without any need to carry out the heat treatment experiment. The output of this heat-treatment-based microstructure simulation was used as input to estimate the yield stress and creep properties. The yield stress was calculated mainly as a function of the precipitation, solid solution and grain boundary strengthening contributions of the microstructure. The creep rate was calculated as a function of stress, temperature and microstructural factors such as dislocation density, precipitate size and inter-particle spacing of the precipitates. The estimated yield stress values were compared with the corresponding experimental hardness and tensile test values. The ability to determine the heat treatment conditions that best achieve the desired microstructural and mechanical properties was thus developed for IN 738 LC based entirely on simulations.
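
The additive structure of such a yield-stress estimate can be sketched as below; the Hall-Petch grain-boundary term and the sqrt(f*r) precipitate-shearing scaling, as well as every coefficient, are illustrative placeholders rather than the calibrated model of the study:

import numpy as np

def yield_stress(d_grain_um, f_prime, r_prime_nm,
                 sigma_0=150.0, sigma_ss=120.0, k_hp=750.0, c_ppt=28.0):
    # Additive yield-stress estimate (MPa) from simulated microstructure.
    # sigma_0: lattice friction stress, sigma_ss: solid-solution contribution,
    # k_hp: Hall-Petch coefficient (MPa*um^0.5), c_ppt: precipitation coefficient.
    grain_boundary = k_hp / np.sqrt(d_grain_um)            # grain size in micrometres
    precipitation = c_ppt * np.sqrt(f_prime * r_prime_nm)  # gamma-prime fraction and radius (nm)
    return sigma_0 + sigma_ss + grain_boundary + precipitation

# Microstructure as it might be predicted by the heat-treatment simulation (illustrative values).
print(f'{yield_stress(d_grain_um=2000.0, f_prime=0.40, r_prime_nm=450.0):.0f} MPa')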

Keywords: heat treatment, IN738LC, simulations, super-alloys

Procedia PDF Downloads 245
718 Security Issues in Long Term Evolution-Based Vehicle-To-Everything Communication Networks

Authors: Mujahid Muhammad, Paul Kearney, Adel Aneiba

Abstract:

The ability of vehicles to communicate with other vehicles (V2V), with the physical (V2I) and network (V2N) infrastructures, with pedestrians (V2P), etc. – collectively known as V2X (Vehicle to Everything) – will enable a broad and growing set of applications and services within the intelligent transport domain for improving road safety, alleviating traffic congestion and supporting autonomous driving. The telecommunication research and industry communities and standardization bodies (notably 3GPP) have finally approved, in Release 14, cellular communications connectivity to support V2X communication (known as LTE-V2X). The LTE-V2X system will combine simultaneous connectivity across existing LTE network infrastructures via the LTE-Uu interface with direct device-to-device (D2D) communications. In order for V2X services to function effectively, a robust security mechanism is needed to ensure legal and safe interaction among authenticated V2X entities in the LTE-based V2X architecture. The characteristics of vehicular networks, and the nature of most V2X applications, which involve human safety, make it essential to protect V2X messages from attacks that can result in catastrophically wrong decisions or actions, including ones affecting road safety. Attack vectors include impersonation, modification, masquerading, replay, man-in-the-middle (MiM) and Sybil attacks. In this paper, we focus our attention on LTE-based V2X security and access control mechanisms. The current LTE-A security framework provides its own access authentication scheme, the AKA protocol, for mutual authentication and other essential cryptographic operations between UEs and the network. V2N systems can leverage this protocol to achieve mutual authentication between vehicles and the mobile core network. However, this protocol faces technical challenges, such as high signaling overhead, lack of synchronization, handover delay and potential control-plane signaling overloads, as well as privacy preservation issues, and therefore cannot satisfy the security requirements of the majority of LTE-based V2X services. This paper examines these challenges and points to possible ways in which they can be addressed. One possible solution is the implementation of a distributed peer-to-peer LTE security mechanism based on the Bitcoin/Namecoin framework, to allow for security operations with minimal overhead cost, which is desirable for V2X services. The proposed architecture can ensure fast, secure and robust V2X services over an LTE network while meeting V2X security requirements.
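
Purely as a conceptual illustration of mutual authentication from a pre-shared long-term key (and explicitly not the 3GPP AKA protocol: there are no sequence numbers, AUTN tokens or key derivation hierarchy here), a minimal challenge-response exchange could look like this:

import hmac, hashlib, secrets

# Shared long-term key between the UE and the core network (illustrative only).
K = secrets.token_bytes(32)

def mac(key, *parts):
    return hmac.new(key, b'|'.join(parts), hashlib.sha256).digest()

rand_net = secrets.token_bytes(16)   # network -> UE: random challenge
rand_ue = secrets.token_bytes(16)    # UE -> network: its own challenge
res_ue = mac(K, b'UE', rand_net, rand_ue)                              # UE proves knowledge of K
assert hmac.compare_digest(res_ue, mac(K, b'UE', rand_net, rand_ue))   # network verifies
res_net = mac(K, b'NET', rand_ue)                                      # network proves knowledge of K
assert hmac.compare_digest(res_net, mac(K, b'NET', rand_ue))           # UE verifies
print('mutual authentication succeeded')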

Keywords: authentication, long term evolution, security, vehicle-to-everything

Procedia PDF Downloads 166
717 An Adaptable Semi-Numerical Anisotropic Hyperelastic Model for the Simulation of High Pressure Forming

Authors: Daniel Tscharnuter, Eliza Truszkiewicz, Gerald Pinter

Abstract:

High-quality surfaces of plastic parts can be achieved in a very cost-effective manner using in-mold processes, where e.g. scratch resistant or high gloss polymer films are pre-formed and subsequently receive their support structure by injection molding. The pre-forming may be done by high-pressure forming. In this process, a polymer sheet is heated and subsequently formed into the mold by pressurized air. Due to the heat transfer to the cooled mold, the polymer temperature drops below its glass transition temperature. This ensures that the deformed microstructure is retained after depressurizing, giving the sheet its final formed shape. The development of a forming process relies heavily on the experience of engineers and trial-and-error procedures. Repeated mold design and testing cycles are however both time- and cost-intensive. It is, therefore, desirable to study the process using reliable computer simulations. Through simulations, the construction of the mold and the effect of various process parameters, e.g. temperature levels, non-uniform heating or timing and magnitude of pressure, on the deformation of the polymer sheet can be analyzed. Detailed knowledge of the deformation is particularly important in the forming of polymer films with integrated electro-optical functions. Care must be taken in the placement of devices, sensors and electrical and optical paths, which are far more sensitive to deformation than the polymers. Reliable numerical prediction of the deformation of the polymer sheets requires sophisticated material models. Polymer films are often either transversely isotropic or orthotropic due to molecular orientations induced during manufacturing. The anisotropic behavior affects the resulting strain field in the deformed film. For example, parts of the same shape but different strain fields may be created by varying the orientation of the film with respect to the mold. The numerical simulation of the high-pressure forming of such films thus requires material models that can capture the nonlinear anisotropic mechanical behavior. There are numerous commercial polymer grades for the engineers to choose from when developing a new part. The effort required for comprehensive material characterization may be prohibitive, especially when several materials are candidates for a specific application. We, therefore, propose a class of models for compressible hyperelasticity, which may be determined from basic experimental data and which can capture key features of the mechanical response. Invariant-based hyperelastic models with a reduced number of invariants are formulated in a semi-numerical way, such that the models are determined from a single uniaxial tensile test for isotropic materials, or two tensile tests in the principal directions for transversely isotropic or orthotropic materials. The simulation of the high pressure forming of an orthotropic polymer film is finally done using an orthotropic formulation of the hyperelastic model.
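
To illustrate the idea of determining a hyperelastic model from a single uniaxial tensile test, the sketch below fits the simplest isotropic, incompressible neo-Hookean form to hypothetical stress-stretch data (the models proposed in the paper are compressible, possibly anisotropic and semi-numerical, so this is only the reduced isotropic case):

import numpy as np
from scipy.optimize import curve_fit

def neo_hookean_uniaxial(stretch, mu):
    # Nominal (first Piola-Kirchhoff) stress for an incompressible neo-Hookean
    # material in uniaxial tension: P = mu * (lambda - lambda**-2).
    return mu * (stretch - stretch**-2)

# Hypothetical uniaxial tensile data for a heated polymer film (stretch, stress in MPa).
stretch = np.array([1.0, 1.1, 1.2, 1.4, 1.6, 1.8, 2.0])
stress = np.array([0.0, 0.55, 1.05, 1.85, 2.55, 3.15, 3.70])

(mu_fit,), _ = curve_fit(neo_hookean_uniaxial, stretch, stress, p0=[1.0])
print(f'fitted shear modulus mu = {mu_fit:.2f} MPa')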

Keywords: hyperelastic, anisotropic, polymer film, thermoforming

Procedia PDF Downloads 614
716 Aframomum melegueta Improves Antioxidant Status of Type 2 Diabetes Rats Model

Authors: Aminu Mohammed, Shahidul Islam

Abstract:

Aframomum melegueta K. Schum, commonly known as Grains of Paradise, has been a popularly used spice in much of African food preparation. Available data have shown that the ethyl acetate fraction of the crude ethanolic extract exhibited α-amylase and α-glucosidase inhibitory actions, improved pancreatic β-cell damage and ameliorated insulin resistance in diabetic rats. Additionally, 6-gingerol, 6-shogaol, 6-paradol and oleanolic acid have been shown to be the compounds responsible for the antidiabetic action of A. melegueta. However, the detailed antioxidant potential of this spice in a diabetic animal model has not yet been reported. Thus, the present study investigates the effect of oral consumption of A. melegueta fruit on the in vivo antioxidant status of a type 2 diabetes (T2D) rat model. T2D was induced in rats by feeding a 10% fructose solution ad libitum for two weeks, followed by a single intraperitoneal injection of streptozotocin (40 mg/kg body weight (bw)). The animals were orally administered 150 (DAML) or 300 mg/kg bw (DAMH) of the fraction once daily for four weeks. Data were analyzed with a statistical software package (SPSS for Windows, version 22, IBM Corporation, NY, USA) using Tukey's HSD multiple range post-hoc test. Values were considered significantly different at p < 0.05. According to the data, after four weeks of intervention, the diabetic untreated animals showed significantly (p < 0.05) elevated blood glucose levels. The levels of thiobarbituric acid reactive substances (TBARS) increased, with a concomitant reduction of reduced glutathione (GSH) levels, in the serum and organs (liver, kidney, heart and pancreas) of the diabetic untreated animals. The activities of endogenous antioxidant enzymes (superoxide dismutase, catalase, glutathione peroxidase, and glutathione reductase) were greatly reduced in the serum and organs of the diabetic untreated animals compared to the normal animals. These alterations reverted to near-normal after treatment with A. melegueta fruit in the treated groups (DAML and DAMH) within the study period, especially at the dose of 300 mg/kg bw. This potent antioxidant action may partly be attributed to the presence of 6-gingerol, 6-shogaol and 6-paradol, which are known to possess antioxidant action. The results of our study show that A. melegueta intake improved the antioxidant status of T2D rats and could therefore be used to ameliorate diabetes-induced oxidative damage.
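
The post-hoc comparison step can be reproduced outside SPSS with a Tukey HSD test, for example via statsmodels; the group labels follow the study design, but all values below are hypothetical placeholders:

import numpy as np
import pandas as pd
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical serum GSH values for the four groups (illustrative only).
rng = np.random.default_rng(1)
means = {'normal': 22.0, 'diabetic untreated': 11.0, 'DAML': 16.0, 'DAMH': 20.0}
data = pd.DataFrame({
    'group': np.repeat(list(means), 8),
    'gsh': np.concatenate([rng.normal(m, 1.5, 8) for m in means.values()]),
})

# Tukey's HSD multiple comparison at p < 0.05, mirroring the study's post-hoc test.
print(pairwise_tukeyhsd(endog=data['gsh'], groups=data['group'], alpha=0.05))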

Keywords: Aframomum melegueta, antioxidant, ethyl acetate extract, type 2 diabetes

Procedia PDF Downloads 295
715 Improving the Crashworthiness Characteristics of Long Steel Circular Tubes Subjected to Axial Compression by Inserting a Helical Spring

Authors: Mehdi Tajdari, Farzad Mokhtarnejad, Fatemeh Moradi, Mehdi Najafizadeh

Abstract:

Nowadays, energy-absorbing devices are widely used in all vehicles and moving parts such as railway coaches, aircraft, ships and lifts. The aim is to protect these structures from serious damage when subjected to impact loads, or to minimize human injuries when collisions occur in transportation systems. These energy-absorbing devices can dissipate kinetic energy in a wide variety of ways, such as friction, fracture, plastic bending, crushing, cyclic plastic deformation and metal cutting. On the other hand, various structures may be used as collapsible energy absorbers. Metallic cylindrical tubes have attracted much attention due to their high stiffness and strength combined with low weight and ease of manufacturing. As a matter of fact, favorable crashworthiness characteristics for energy dissipation purposes can be achieved from the axial collapse of tubes when they crush progressively in symmetric modes. However, experimental and theoretical results have shown that, depending on various parameters such as tube geometry, material properties of the tube, and boundary and loading conditions, circular tubes buckle in different modes of deformation, namely diamond and Euler collapsing modes. It is shown that when the tube length is greater than the critical length, the tube deforms in an overall Euler buckling mode, which is an inefficient mode of energy absorption and needs to be avoided in crashworthiness applications. This study develops a new method with the aim of improving the energy absorption characteristics of long steel circular tubes. Inserting a helical spring into the tubes is proved experimentally to be an efficient solution. In fact, when a long tube is subjected to an axial compression load, the spring prevents the undesirable Euler or diamond collapsing modes. This is because the spring reinforces the internal wall of the tube and causes symmetric deformation in the tube. In this research, three specimens were prepared and three tests were performed. The dimensions of the tubes were selected so that buckling occurs under axial compression load. In the second and third tests, a spring was inserted into the tubes and they were subjected to axial compression under quasi-static and impact loading, respectively. The results showed that in the second and third tests buckling did not occur and the tubes deformed in symmetric modes, which are desirable for energy absorption.
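
The critical-length idea referred to above can be sketched by comparing the Euler buckling load of the tube with its mean progressive-crushing load; the tube dimensions and the crush load below are hypothetical, not the tested specimens:

import numpy as np

def euler_critical_load(E, d_out, t, L, K=1.0):
    # Euler buckling load (N) of a circular tube in axial compression:
    # P_cr = pi^2 * E * I / (K*L)^2, with I for a hollow circular section.
    d_in = d_out - 2.0 * t
    I = np.pi / 64.0 * (d_out**4 - d_in**4)   # second moment of area, m^4
    return np.pi**2 * E * I / (K * L)**2

# Global Euler buckling takes over once P_cr drops below the mean progressive-
# crushing load, which defines the critical tube length (illustrative values).
E, d_out, t = 210e9, 0.05, 0.0015   # Pa, m, m
mean_crush_load = 80e3              # N, assumed mean progressive crushing load
for L in (0.5, 1.0, 1.5, 2.0):
    p_cr = euler_critical_load(E, d_out, t, L)
    mode = 'progressive crushing' if p_cr > mean_crush_load else 'Euler buckling'
    print(f'L = {L:.1f} m: P_cr = {p_cr / 1e3:6.0f} kN -> {mode}')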

Keywords: energy absorption, circular tubes, collapsing deformation, crashworthiness

Procedia PDF Downloads 337
714 Soybean Oil Based Phase Change Material for Thermal Energy Storage

Authors: Emre Basturk, Memet Vezir Kahraman

Abstract:

In many developing countries, with the rapid economic improvements, energy shortages and environmental issues have become serious problems. Therefore, it has become a very critical issue to improve energy usage efficiency and also to protect the environment. Thermal energy storage is an essential approach to matching thermal energy demand and supply. Thermal energy can be stored by heating, cooling or melting a material and then becomes accessible again when the process is reversed. Thermal energy storage techniques are generally sorted into latent heat or sensible heat storage technology segments. Among these methods, latent heat storage is the most effective method of collecting thermal energy. Latent heat thermal energy storage depends on the storage material absorbing or releasing heat as it undergoes a solid-to-liquid, solid-to-solid or liquid-to-gas phase change, or vice versa. Phase change materials (PCMs) are promising materials for latent heat storage applications due to their capacity to store a high latent heat per unit volume through a phase change at an almost constant temperature. PCMs are utilized to absorb, collect and discharge thermal energy during the cycle of melting and freezing, converting from one phase to another. PCMs can generally be arranged into three classes: organic materials, salt hydrates and eutectics. Many kinds of organic and inorganic PCMs and their blends have been examined as latent heat storage materials. Organic PCMs are rather expensive, have an average latent heat storage per unit volume and have a low density. Most organic PCMs are combustible in nature and have a wide range of melting points. Organic PCMs can be categorized into two major categories: non-paraffinic and paraffin materials. Paraffin materials have been used extensively due to their high latent heat and favorable thermal characteristics, such as minimal supercooling, a varying phase change temperature, low vapor pressure while melting, good chemical and thermal stability, and self-nucleating behavior. Ultraviolet (UV)-curing technology has been widely used because it has many advantages, such as low energy consumption, high speed, high chemical stability, room-temperature operation, low processing costs and environmental friendliness. For many years, PCMs have been used in heating and cooling industrial applications including textiles, refrigerators, construction, transportation packaging for temperature-sensitive products, a few solar-energy-based systems, and biomedical and electronic materials. In this study, UV-curable, fatty alcohol-containing, soybean oil-based phase change materials (PCMs) were obtained and characterized. The phase transition behaviors and thermal stability of the prepared UV-cured bio-based PCMs were analyzed by differential scanning calorimetry (DSC) and thermogravimetric analysis (TGA). The phase change enthalpy of the heating process is measured between 30 and 68 J/g, and the phase change enthalpy of the freezing process is found between 18 and 70 J/g. The decomposition of the UV-cured PCMs started at 260 ºC and reached a maximum at 430 ºC.
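
Using the measured melting enthalpies, the latent heat stored by a given mass of the material over one melting cycle follows directly from Q = m * delta_h; the 5 kg module mass below is an illustrative assumption:

def stored_latent_heat(mass_kg, delta_h_j_per_g):
    # Latent heat stored over one melting cycle, in kJ: Q = m * delta_h.
    return mass_kg * 1000.0 * delta_h_j_per_g / 1000.0

# Measured melting-enthalpy range of the UV-cured bio-based PCMs: 30 to 68 J/g.
for dh in (30.0, 68.0):
    print(f'delta_h = {dh:.0f} J/g -> {stored_latent_heat(5.0, dh):.0f} kJ stored in a 5 kg module')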

Keywords: fatty alcohol, phase change material, thermal energy storage, UV curing

Procedia PDF Downloads 376
713 An Informative Marketing Platform: Methodology and Architecture

Authors: Martina Marinelli, Samanta Vellante, Francesco Pilotti, Daniele Di Valerio, Gaetanino Paolone

Abstract:

Any development in web marketing technology requires changes in information engineering to identify instruments and techniques suitable for the production of software applications for informative marketing. Moreover, for large web solutions, designing an interface that enables human interactions is a complex process that must bridge between informative marketing requirements and the developed solution. A user-friendly interface in web marketing applications is crucial for a successful business. The paper introduces mkInfo - a software platform that implements informative marketing. Informative marketing is a new interpretation of marketing which places the information at the center of every marketing action. The creative team includes software engineering researchers who have recently authored an article on automatic code generation. The authors have created the mkInfo software platform to generate informative marketing web applications. For each web application, it is possible to automatically implement an opt-in page, a landing page, a sales page, and a thank-you page: one only needs to insert the content. mkInfo implements an autoresponder to send mail according to a predetermined schedule. The mkInfo platform also includes e-commerce for a product or service. The stakeholder can access any opt-in page and get basic information about a product or service. If he wants to know more, he will need to provide an e-mail address to access a landing page that will generate an e-mail sequence. It will provide him with complete information about the product or the service. From this point on, the stakeholder becomes a user and is now able to purchase the product or related services through the mkInfo platform. This paper suggests a possible definition for Informative Marketing, illustrates its basic principles, and finally details the mkInfo platform that implements it. This paper also offers some Informative Marketing models, which are implemented in the mkInfo platform. Informative marketing can be applied to products or services. It is necessary to realize a web application for each product or service. The mkInfo platform enables the product or the service producer to send information concerning a specific product or service to all stakeholders. In conclusion, the technical contributions of this paper are: a different interpretation of marketing based on information; a modular architecture for web applications, particularly for one with standard features such as information storage, exchange, and delivery; multiple models to implement informative marketing; and a software platform enabling the implementation of such models in a web application. Future research aims to enable stakeholders to provide information about a product or a service so that the information gathered about a product or a service includes both the producer’s and the stakeholders' points of view. The purpose is to create an all-inclusive management system of the knowledge regarding a specific product or service: a system that includes everything about the product or service and is able to address even unexpected questions.
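
A possible data model for the scheduled autoresponder sequence described above is sketched below; the field names, timings and subjects are illustrative placeholders, not mkInfo's actual implementation:

from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class ScheduledMail:
    subject: str
    days_after_opt_in: int

# Hypothetical e-mail sequence triggered when a stakeholder opts in.
SEQUENCE = [
    ScheduledMail('Welcome: basic information about the product', 0),
    ScheduledMail('Complete information about the product or service', 2),
    ScheduledMail('Offer: purchase through the sales page', 5),
]

def due_mails(opt_in_time: datetime, now: datetime):
    # Return the mails of the sequence that should have been sent by 'now'.
    return [m for m in SEQUENCE
            if now >= opt_in_time + timedelta(days=m.days_after_opt_in)]

opt_in = datetime(2024, 1, 1)
for mail in due_mails(opt_in, opt_in + timedelta(days=3)):
    print(mail.subject)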

Keywords: informative marketing, opt in page, software platform, web application

Procedia PDF Downloads 124