Search results for: original introject
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1469

479 Coping with the Stress and Negative Emotions of Care-Giving by Using Techniques from Seneca, Epictetus, and Marcus Aurelius

Authors: Arsalan Memon

Abstract:

There are many challenges that a caregiver faces in everyday life. One such challenge is coping with the stress and negative emotions of caregiving. The Stoics (i.e., Lucius Annaeus Seneca [4 B.C.E.-65 C.E.], Epictetus [50-135 C.E.], and Marcus Aurelius [121-180 C.E.]) provided coping techniques that are useful for dealing with stress and negative emotions. This paper lists and explains some of the fundamental coping techniques provided by the Stoics. For instance, some Stoic coping techniques are as follows (the list is far from exhaustive): a) mindfulness: to the best of one's ability, constantly being aware of one's thoughts, habits, desires, norms, memories, likes/dislikes, beliefs, values, and of everything outside oneself in the world; b) constantly adjusting one's expectations in accordance with reality; c) memento mori: constantly reminding oneself that death is inevitable and that death is not to be seen as evil; and d) praemeditatio malorum: constantly contemplating in advance the loss, damage, or ceasing to be of everything that is dear to one, and detaching oneself from it, so that the least amount of suffering follows when such losses occur. All coping techniques are extracted from the following original texts by the Stoics: Seneca's Letters to Lucilius, Epictetus' Discourses and the Encheiridion, and Marcus Aurelius' Meditations. One major finding is that the usefulness of each Stoic coping technique can be empirically tested by anyone, in the sense of applying it to one's own life, especially when facing real-life challenges. Another major finding is that all of the Stoic coping techniques are predicated upon, and follow from, one fundamental principle: constantly differentiate what is and what is not in one's control. Having differentiated the two, one should constantly habituate oneself to not attempting to control things that are beyond one's control.
For example, the following things are beyond one’s control (all things being equal): death, certain illnesses, being born in a particular socio-economic family, etc. The conclusion is that if one habituates oneself by practicing to the best of one’s ability both the fundamental Stoic principle and the Stoic coping techniques, then such a habitual practice can eventually decrease the stress and negative emotions that one experiences by being a caregiver.

Keywords: care-giving, coping techniques, negative emotions, stoicism, stress

Procedia PDF Downloads 132
478 The Work Book Tool, a Lifelong Chronicle: Part of the "Designprogrammet" at the Design School of the University in Kalmar, Sweden

Authors: Henriette Jarild-Koblanck, Monica Moro

Abstract:

The research has been implemented for several years at Kalmar University (now LNU, Linnaeus University) inside the Design Program (Designprogrammet). The Work Book tool was created within the framework of the Bologna Declaration. The project concerns primarily pedagogy and design methodology, focusing on how we evaluate artistic work processes and projects and on how we can develop the preconditions for cross-disciplinary work. The original idea of the Work Book springs from the steady habit, kept since childhood, of the Swedish researcher and now retired full professor and dean Henriette Koblanck of putting images, things, and colours in a notebook and writing down impressions and reflections. Starting from this preliminary idea of making use of a work book, in a form freely chosen by the user, she began to develop the Design Program (Designprogrammet) applied at Kalmar University, calling on a number of professionals to collaborate, among them Monica Moro, an Italian designer, researcher, and teacher in the field of colour and shape. The educational intention is that the Work Book should become a tool that is both an inspiration for thinking and intuitive creating and a personal support for rational and technical thinking. The students were to use the Work Book not only to document visually and graphically their results from investigations, experiments, and thoughts, but also as a tool to present their work to others: students, tutors and teachers, or other stakeholders with whom they discussed the proceedings. To help the students, a number of evaluation matrices, based on the Bologna Declaration, were developed to assess the projects under elaboration.
In conclusion, the feedback from the students is excellent; many still use the Work Book as a professional tool because, in their words, they consider it a rather accurate representation of their working process, and indeed of themselves, so much so that many of them have used it as a portfolio when applying for jobs.

Keywords: academic program, art, assessment of student’s progress, Bologna Declaration, design, learning, self-assessment

Procedia PDF Downloads 330
477 Design and Development of Engine Valve Train Wear Test Rig for the Assessment of Valve Train Tribochemistry

Authors: V. Manjunath, C. V. Chandrashekara

Abstract:

Environmental authorities call for the use of lubricants with less impact on nature in terms of exhaust emissions, while engine users demand more mileage per litre of fuel without any compromise on engine durability. From this viewpoint, engine manufacturers require the optimum combination of materials and lubricant additive package to minimize friction and wear in engine components such as the piston, crankshaft, and valve train. Demands are also placed on engines to operate at higher speeds, loads, and temperatures, and with extended replacement intervals for engine oil. Besides, it is necessary to accurately predict the lubricant life or the replacement interval to prevent failure of the lubricant and valve-train components. Experimental tribological evaluation of new engine oils requires a large amount of time and energy. Hence, a low-cost bench test is necessary for industries and original equipment manufacturers (OEMs) to study the performance of lubricants. The present work outlines the procedure for the design and development of a valve train wear rig (MCR) to simulate ASTM D-6891 and to develop a new engine test for the Indian automobile sector to evaluate lubricants for the Indian market. In order to improve the lubrication between the cam and follower of an internal combustion engine, the influence of materials, oil viscosity, and additives on the friction and wear characteristics is examined with the test rig by increasing the contact load at two different revolution speeds. The experimentation makes the following results evident: temperature, torque, speed, and wear plots are used to validate the data obtained from the newly developed multi-cam rig (MCR) with a follower running against a cast iron camshaft; camshaft lobe wear is measured at seven different locations on the cam profile; and the tribofilm formed using 5W-30 oil is evaluated and correlated with the standard test results.

Keywords: ASTM D-6891, multi-cam rig (MCR), 5W-30, cam profile

Procedia PDF Downloads 170
476 Homogenization of Cocoa Beans Fermentation to Upgrade Quality Using an Original Improved Fermenter

Authors: Aka S. Koffi, N’Goran Yao, Philippe Bastide, Denis Bruneau, Diby Kadjo

Abstract:

Cocoa beans (Theobroma cacao L.) are the main component of chocolate manufacturing, and the beans must first be correctly fermented. The traditional process for the first fermentation (lactic fermentation) often consists of confining the cacao beans using banana leaves or a fermentation basket, both of which lead to poor thermal insulation of the product and to an inability to mix it. A box fermenter reduces this heat loss by using wood of large thickness (e > 3 cm), but mixing to homogenize the product is still hard to perform, and automatic fermenters are not cost-effective for most producers. The heat (T > 45 °C) and acidity produced during fermentation by the microbial activity of yeasts and bacteria enable the emergence of the potential flavor and taste of the future chocolate. In this study, a cylindro-rotative fermenter (FCR-V1) was built, with coconut fibers used in its structure to confine heat. An axis of rotation (360°) was integrated to facilitate the turning and homogenization of the beans in the fermenter. This axis permits the fermenter to be placed in a vertical position during the anaerobic alcoholic phase of fermentation and horizontally during the acetic phase, to take advantage of the mid-height filling. To allow air flow during turning in the acetic phase, two woven rattan grids were made, one for the top and one for the bottom of the fermenter; to restrict air flow during the anaerobic phase, an airtight cover can be placed over each grid. The efficiency of turning by this kind of rotation, coupled with the homogenization of temperature afforded by the horizontal position in the acetic phase, contributes to a good proportion of well-fermented beans (83.23%). In addition, the beans' pH values ranged between 4.5 and 5.5, values that are ideal for the enzymatic activity that produces the aromatic compounds inside the beans.
The regularity of the mass loss throughout fermentation makes it possible to predict the drying surface area corresponding to the quantity being fermented.

Keywords: cocoa fermentation, fermenter, microbial activity, temperature, turning

Procedia PDF Downloads 256
475 Size Optimization of Microfluidic Polymerase Chain Reaction Devices Using COMSOL

Authors: Foteini Zagklavara, Peter Jimack, Nikil Kapur, Ozz Querin, Harvey Thompson

Abstract:

The invention and development of Polymerase Chain Reaction (PCR) technology have revolutionised molecular biology and molecular diagnostics, and there is an urgent need to optimise the performance of PCR devices while reducing their total construction and operation costs. The present study proposes a CFD-enabled optimisation methodology for continuous-flow (CF) PCR devices with a serpentine-channel structure, which enables the trade-offs between the competing objectives of DNA amplification efficiency and pressure drop to be explored. This is achieved by using a surrogate-enabled optimisation approach that accounts for the geometrical features of a CF μPCR device, performing a series of simulations at a relatively small number of Design of Experiments (DoE) points with COMSOL Multiphysics 5.4. The values of the objectives are extracted from the CFD solutions, and response surfaces are created using polyharmonic splines and neural networks. After creating the respective response surfaces, a genetic algorithm and a multilevel coordinate search optimisation function are used to locate the optimum design parameters. Both optimisation methods produced similar results for both the neural network and the polyharmonic spline response surfaces. The results indicate that the DNA amplification efficiency can be improved by ∼2% in one PCR cycle by doubling the width of the microchannel to 400 μm while maintaining the height of the original design (50 μm). Moreover, the increase in the width of the serpentine microchannel is combined with a decrease in its total length in order to obtain the same residence times in all the simulations, resulting in a smaller total substrate volume (a 32.94% decrease). A multi-objective optimisation is also performed with the use of a Pareto front plot. Such knowledge will enable designers to maximise the amount of DNA amplified or to minimise the time taken for thermal cycling in such devices.
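The surrogate workflow described above (DoE sampling, a polyharmonic-spline response surface, then a global search) can be sketched in plain NumPy. Everything below is illustrative: the design variables, their bounds, and the smooth stand-in "response" are assumptions, and a dense grid search stands in for the genetic algorithm and multilevel coordinate search used on the real response surfaces.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical DoE over channel width and height (um); 30 sample points
doe = rng.uniform([100.0, 25.0], [400.0, 75.0], size=(30, 2))

def response(x):
    # Smooth stand-in for the CFD-extracted amplification efficiency,
    # peaking at a width of 350 um and a height of 50 um (made up).
    w, h = x[..., 0], x[..., 1]
    return 1.0 - (w - 350.0) ** 2 / 1e5 - (h - 50.0) ** 2 / 1e3

def tps_surface(points, values):
    """Fit a thin-plate ('polyharmonic') spline surface, phi(r) = r^2 log r."""
    def kernel(d):
        k = np.zeros_like(d)
        m = d > 0
        k[m] = d[m] ** 2 * np.log(d[m])
        return k
    K = kernel(np.linalg.norm(points[:, None] - points[None], axis=-1))
    w = np.linalg.solve(K + 1e-9 * np.eye(len(points)), values)
    return lambda x: kernel(np.linalg.norm(x[:, None] - points[None], axis=-1)) @ w

surface = tps_surface(doe, response(doe))

# A dense grid search on the cheap surrogate stands in for the GA /
# multilevel coordinate search applied to the real response surfaces.
grid = np.stack(np.meshgrid(np.linspace(100, 400, 61),
                            np.linspace(25, 75, 51)), axis=-1).reshape(-1, 2)
best = grid[np.argmax(surface(grid))]
```

The same pattern carries over to the real objectives: `response` would be replaced by the efficiency and pressure-drop values extracted from the COMSOL runs at the DoE points.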

Keywords: PCR, optimisation, microfluidics, COMSOL

Procedia PDF Downloads 155
474 Study on Eco-Feedback of Thermal Comfort and Cost Efficiency for Low Energy Residence

Authors: Y. Jin, N. Zhang, X. Luo, W. Zhang

Abstract:

China, which adds 0.5-0.6 billion square metres of urban residence every year, has brought in enormous energy consumption through HVAC facilities and other appliances. In this regard, governments and researchers are encouraging the use of renewable energy, such as solar and geothermal energy, in houses. However, the high cost of equipment and low energy-conversion efficiency make such systems barely acceptable to residents. So what is the equilibrium point of eco-feedback at which economic benefit and thermal comfort are both reached? That is the main question to be answered. In this paper, the object of study is a house with on-site solar PV and solar heating that has been evaluated as a low-energy building. Since the HVAC system is considered the main energy-consuming equipment, the residence was fitted with a 24-hour monitoring system measuring temperature, wind velocity, and energy in-out values, with no HVAC system running, for one month in summer and one in winter. The periods of thermal comfort are analyzed and confirmed; the air conditioner is then operated during the thermally uncomfortable periods for the following one-month summer and winter periods, and the same data are recorded to calculate the average monthly energy consumption required for whole-day thermal comfort. Finally, two analyses are performed: 1) the building thermal simulation carried out by computer at the design stage is contrasted with the temperatures actually measured after construction; and 2) the cost of the renewable energy facilities and the power consumption are converted to a cost-efficiency rate to assess the feasibility of renewable energy input for residences. The results of the experiment showed that a certain deviation exists between the measured and simulated data for human thermal comfort, especially in summer. Moreover, the cost burden is high for a house in the target city of Guilin, with at least 11 years needed to cover the costs.
The conclusion is that the eco-feedback of a low-energy residence is never only a consideration of its net energy value; cost efficiency is also a critical factor in making renewable energy acceptable to the public.
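The cost-covering horizon mentioned above is a simple payback-period calculation. The figures below are assumed purely for illustration; only the roughly 11-year order of magnitude comes from the abstract.

```python
# Simple payback-period estimate for a renewable-energy retrofit.
# Both figures are assumed for illustration; only the ~11-year horizon
# comes from the study.
capital_cost = 110_000.0     # equipment + installation cost (assumed units)
annual_saving = 10_000.0     # yearly energy cost avoided (assumed units)

payback_years = capital_cost / annual_saving   # 11.0 years under these assumptions
```

A payback period this long relative to equipment lifetime is exactly the cost-efficiency obstacle to public acceptance that the conclusion identifies.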

Keywords: cost efficiency, eco-feedback, low energy residence, thermal comfort

Procedia PDF Downloads 251
473 Effect of Modulation Factors on Tomotherapy Plans and Their Quality Analysis

Authors: Asawari Alok Pawaskar

Abstract:

This study was aimed at investigating the discrepancies observed for helical tomotherapy plans during quality assurance (QA) performed with the IBA Matrix. A selection of tomotherapy plans that initially failed the Matrix QA process was chosen for this investigation; these plans failed the fluence analysis as assessed using gamma criteria (3%, 3 mm). Each of these plans was modified (keeping the planning constraints the same), and its beamlets were rebatched and reoptimized. By increasing and decreasing the modulation factor, the fluence in a circumferential plane, as measured with a diode array, was assessed. A subset of these plans was investigated using varied pitch values. The factors examined for each plan were point doses, fluences, leaf opening times, planned leaf sinograms, and uniformity indices. To ensure that the treatment constraints remained the same, the dose-volume histograms (DVHs) of all the modulated plans were compared to those of the original plan. It was observed that a large increase in the modulation factor did not significantly improve DVH uniformity but reduced the gamma analysis pass rate. It also increased the treatment delivery time by slowing down the gantry rotation speed, which in turn increases the ratio of maximum to mean non-zero leaf open time. Increasing and decreasing the pitch value did not substantially change the treatment time, but the delivery accuracy was adversely affected; this may be due to many other factors, such as the complexity of the treatment plan and the site. Patient sites included in this study were head and neck, breast, and abdomen. The impact of leaf timing inaccuracies on plans was greater at higher modulation factors, while point-dose measurements were less susceptible to changes in pitch and modulation factor. Choosing the initial modulation factor for the optimizer such that the TPS-generated 'actual' modulation factor fell within the range of 1.4 to 2.5 resulted in an improved deliverable plan.
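The gamma criterion (3%, 3 mm) used in the fluence analysis combines a dose-difference test with a distance-to-agreement test. A minimal one-dimensional sketch, assuming global dose normalisation (the clinical implementation on the QA device is considerably more involved):

```python
import numpy as np

def gamma_pass_rate(ref_dose, meas_dose, positions_mm, dose_tol=0.03, dist_mm=3.0):
    """1D gamma analysis sketch with the 3%/3 mm criterion.

    For each measured point, gamma is the minimum over reference points of
    sqrt((dx/dist_mm)^2 + (ddose/(dose_tol*max_ref))^2); the point passes
    if gamma <= 1. Returns the pass rate as a percentage.
    """
    ref = np.asarray(ref_dose, float)
    meas = np.asarray(meas_dose, float)
    pos = np.asarray(positions_mm, float)
    dd = dose_tol * ref.max()            # global dose normalisation
    gammas = np.empty(len(meas))
    for i in range(len(meas)):
        g2 = ((pos - pos[i]) / dist_mm) ** 2 + ((ref - meas[i]) / dd) ** 2
        gammas[i] = np.sqrt(g2.min())
    return 100.0 * np.mean(gammas <= 1.0)
```

With identical reference and measured profiles every point trivially passes; scaling the measured dose up pushes high-dose points past both tolerances and lowers the pass rate, which is the behaviour seen when over-modulated plans fail QA.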

Keywords: dose volume histogram, modulation factor, IBA matrix, tomotherapy

Procedia PDF Downloads 171
472 Simulation Study on Effects of Surfactant Properties on Surfactant Enhanced Oil Recovery from Fractured Reservoirs

Authors: Xiaoqian Cheng, Jon Kleppe, Ole Torsaeter

Abstract:

One objective of this work is to analyze the effects of surfactant properties (viscosity, concentration, and adsorption) on surfactant enhanced oil recovery at the laboratory scale. The other objective is to obtain functional relationships between the surfactant properties and the ultimate oil recovery and oil recovery rate. A core is cut into two parts through the middle to imitate a matrix with a horizontal fracture, with an injector and a producer at the left and right sides of the fracture, respectively. The middle slice of the core is used as the model in this paper; its size is 4 cm x 0.1 cm x 4.1 cm, and the aperture of the fracture in the middle is 0.1 cm. The properties of the matrix, brine, and oil in the base case are from the Ekofisk field, the surfactant properties are taken from the literature, and Eclipse is used as the simulator. The results are as follows. 1) The viscosity of the surfactant solution has a positive linear relationship with the surfactant oil recovery time, and the relationship between viscosity and oil production rate is an inverse function; the viscosity of the surfactant solution has no obvious effect on the ultimate oil recovery. Since most surfactants have little effect on the viscosity of brine, the viscosity of the surfactant solution is not a key parameter in surfactant screening for surfactant flooding in fractured reservoirs. 2) Increasing the surfactant concentration results in a decrease in the oil recovery rate and an increase in the ultimate oil recovery; however, no simple functions were found to describe these relationships, and an economic study should be conducted given the prices of surfactant and oil. 3) In the study of surfactant adsorption, it is assumed in all cases that the matrix wettability changes to water-wet when the surfactant adsorption reaches its maximum, and the ratio of surfactant adsorption to surfactant concentration (Cads/Csurf) is used to estimate the functional relationship.
The results show that the relationship between the ultimate oil recovery and Cads/Csurf is a logarithmic function, while the oil production rate has a positive linear relationship with exp(Cads/Csurf). The work here can be used as a reference for surfactant screening in surfactant enhanced oil recovery from fractured reservoirs, and the functional relationships between the surfactant properties and the oil recovery rate and ultimate oil recovery can help improve upscaling methods.
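Both reported functional forms reduce to ordinary linear least-squares fits after a change of variable. A minimal sketch on synthetic numbers (not the study's simulation data):

```python
import numpy as np

# Hypothetical Cads/Csurf ratios and synthetic responses (illustration only,
# not the study's simulation results)
r = np.array([0.1, 0.2, 0.4, 0.6, 0.8, 1.0])
recovery = 0.55 + 0.08 * np.log(r)    # ultimate recovery ~ a*ln(r) + b
rate = 1.2 + 0.5 * np.exp(r)          # production rate ~ c*exp(r) + d

# After transforming the abscissa, both relationships are straight lines
a, b = np.polyfit(np.log(r), recovery, 1)   # slope, intercept of the log law
c, d = np.polyfit(np.exp(r), rate, 1)       # slope, intercept of the exp law
```

In practice the (a, b, c, d) coefficients would come from fitting the simulated recovery curves at each adsorption level; the transformed-variable trick is what makes the reported log and exp laws easy to calibrate and reuse in upscaling.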

Keywords: fractured reservoirs, surfactant adsorption, surfactant concentration, surfactant EOR, surfactant viscosity

Procedia PDF Downloads 167
471 Waste Derived from Refinery and Petrochemical Plants Activities: Processing of Oil Sludge through Thermal Desorption

Authors: Anna Bohers, Emília Hroncová, Juraj Ladomerský

Abstract:

Oil sludge, whose main characteristic is high acidity, is a waste product generated by the operation of refinery and petrochemical plants. One such former refinery and petrochemical plant, Petrochema Dubová, is located in Slovakia. Its activity was to process crude oil through sulfonation and adsorption technology for the production of lubricating and special oils, synthetic detergents, and special white oils for cosmetic and medical purposes. Seventy years ago, the period when this historical acid sludge burden was created, production took precedence over environmental awareness. That is the reason why, as in many countries, a historical environmental burden is still present in Slovakia: 229,211 m³ of oil sludge in the middle of the Nízke Tatry national park. None of the treatment methods tried, biological or otherwise, proved suitable for processing or recovery, for several reasons: among them, the strong aggressivity of the material and the difficulty of handling it because of its sludgy, liquid state. Incineration was also tested as a potential solution, but it was not proven suitable, as the concentration of SO2 in the combustion gases was too high and could not be decreased below the acceptable value of 2000 mg·mₙ⁻³. That is why the operation of the incineration plant was terminated, and the acid sludge landfills remain to this day. The objective of this paper is to present a new possibility for processing and valorizing this acid sludge waste. The processing of the oil sludge was performed through an effective separation, thermal desorption, through which it is possible to split the sludgy material into the matrix (soil, sediments) and the organic contaminants.
In order to boost the efficiency of processing the acid sludge through thermal desorption, the work will present the possible application of an original technology, the Method of Blowing Decomposition, for recovering the organic matter as technological lubricating oil.

Keywords: hazardous waste, oil sludge, remediation, thermal desorption

Procedia PDF Downloads 193
470 Impact of Treatment of Fragility Fractures Due to Osteoporosis as an Economic Burden Worldwide: A Systematic Review

Authors: Fabiha Tanzeem

Abstract:

BACKGROUND: Osteoporosis is a skeletal disease associated with a reduction in bone mass, deterioration of bone microstructure, and degradation of bone tissue. Fragility fracture is the most significant complication of osteoporosis, and the increasing prevalence of fragility fractures presents a growing burden on the global economy. There is a rapidly evolving need to improve awareness of the costs associated with these fractures and to review current policies and practices for the prevention and management of the disease. This systematic review identifies and describes the direct and indirect costs associated with osteoporotic fragility fractures from a global perspective, drawing on the included studies. The review also assesses whether the costs required to treat fragility fractures due to osteoporosis impose an economic burden on the global healthcare system. METHODS: Four major databases, PubMed, the Cochrane Library, Embase, and Google Scholar, were systematically searched for studies in the English language on the direct and indirect costs of osteoporotic fragility fractures, covering articles published between 1990 and July 2020. RESULTS: The original search yielded 1166 papers; from these, 27 articles were selected for this review according to the inclusion and exclusion criteria. In the 27 studies, the highest direct costs were associated with the treatment of pelvic fractures, with the majority of the expenditure due to hospitalization and surgical treatment. Most of the articles were from developed countries. CONCLUSION: This review indicates the significance of the global economic burden of osteoporosis, although more research needs to be done in developing countries. Direct costs were the main expenditure reported for the treatment of fragility fractures in this review.
The healthcare costs incurred globally can be significantly reduced by implementing measures to prevent the disease effectively. Raising awareness among children and adults by improving the quality of the information available, and standardising policies and the planning of services, requires further research.

Keywords: systematic review, osteoporosis, cost of illness

Procedia PDF Downloads 162
469 Non-State Actors and Their Liabilities in International Armed Conflicts

Authors: Shivam Dwivedi, Saumya Kapoor

Abstract:

The Israeli Supreme Court in Public Committee against Torture in Israel v. Government of Israel observed the presence of non-state actors in cross-border terrorist activities, thereby making the role of non-state actors in terrorism the center of discussion under the scope of International Humanitarian Law. Non-state actors and their role in a conflict have also been addressed in the Tadić case, decided by the International Criminal Tribunal for the former Yugoslavia. However, there still are lacunae in International Humanitarian Law when it comes to determining the nature of a conflict, especially when non-state groups act within the ambit of various states, for example, the Taliban in Afghanistan or the groups operating in Ukraine and Georgia. Thus, the objective of this paper is to examine the ways by which non-state actors, particularly terrorist organizations, could be brought under the ambit of Additional Protocol I. Additional Protocol I is a 1977 amendment protocol to the Geneva Conventions relating to the protection of victims of international conflicts, which outlaws indiscriminate attacks on civilian populations, forbids the conscription of children, and preserves various other human rights during war; in general, it reaffirms the provisions of the original four Geneva Conventions. Since the provisions of Additional Protocol I apply only to cases pertaining to International Armed Conflicts, the answer to the problem should lie in including the scope for 'transnational armed conflict' in the already existing definition of 'International Armed Conflict' within Common Article 2 of the Geneva Conventions. This would broaden the applicability of the provisions in cases of non-state groups and render an international character to the conflict.
Also, whether non-state groups are operating on behalf of a state should be determined by the 'effective control' test laid down by the International Court of Justice in the Nicaragua case, rather than by the 'overall control' test of the Tadić case decided by the International Criminal Tribunal for the former Yugoslavia, in order to provide a comprehensive system for dealing with such groups. The result of the above proposal, therefore, would be to enhance the scope of application of International Humanitarian Law to non-state groups and individuals.

Keywords: Geneva Conventions, International Armed Conflict, International Humanitarian Law, non-state actors

Procedia PDF Downloads 373
468 Assessing an Instrument Usability: Response Interpolation and Scale Sensitivity

Authors: Betsy Ng, Seng Chee Tan, Choon Lang Quek, Peter Looker, Jaime Koh

Abstract:

The purpose of the present study was to determine which scale rating stands out for an instrument designed to assess student perceptions of various learning environments, namely face-to-face, online, and blended. The original instrument used 5-point Likert items (1 = strongly disagree and 5 = strongly agree); alternate versions were created with a 6-point Likert scale and a bar-scale rating. Participants, undergraduates at a local university, were involved in usability testing of the instrument in an electronic setting. They were presented with the 5-point, 6-point, and percentage-bar (100-point) scale ratings in response to their perceptions of learning environments. The 5-point and 6-point Likert scales were presented as radio-button controls for each number, while the percentage-bar scale was presented as a sliding selection. Among these responses, the 6-point Likert scale emerged as the best overall. When participants were confronted with the 5-point items, they chose either 3 or 4, suggesting that data loss could occur due to the insensitivity of the instrument. This insensitivity could be due to the discrete options, as evidenced by response interpolation. To avoid the constraint of discrete options, the percentage-bar scale rating was tested, but the participants' responses were not well interpolated. The bar scale might have allowed a variety of responses without the constraint of a set of categorical options, but it seemed to reflect a lack of perceived and objective accuracy. The 6-point Likert scale was more likely to reflect a respondent's perceived and objective accuracy, as well as higher sensitivity. This finding supports the conclusion that 6-point Likert items provide a more accurate measure of the participant's evaluation, whereas the 5-point and bar-scale ratings might not accurately measure participants' responses.
This study highlighted the importance of the respondent's perception of accuracy, the respondent's true evaluation, and the scale's ease of use. Implications and limitations of the study were also discussed.

Keywords: usability, interpolation, sensitivity, Likert scales, accuracy

Procedia PDF Downloads 403
467 Adolescent Obesity Leading to Adulthood Cardiovascular Diseases among Punjabi Population

Authors: Manpreet Kaur, Badaruddoza, Sandeep Kaur Brar

Abstract:

The increasing prevalence of adolescent obesity is one of the major causes of hypertension in adulthood. Various statistical methods have been applied to examine the performance of anthropometric indices in identifying an adverse cardiovascular risk profile. The present work was undertaken to determine the significant traditional risk factors through principal component factor analysis (PCFA) among population-based Punjabi adolescents aged 10-18 years. Data were collected from adolescent children at different schools situated in urban areas of Punjab, India. PCFA was applied to extract orthogonal components from the anthropometric and physiometric variables, with the associations between traits and components explained by the factor loadings. The PCFA extracted four factors, which explained 84.21%, 84.06%, and 83.15% of the total variance of the 14 original quantitative traits among boys, girls, and the combined subjects, respectively. Factor 1 has high loadings on the traits that reflect adiposity, such as waist circumference, BMI, and skinfolds, in both sexes. Waist circumference and body mass index are indicators of abdominal obesity, which increases the risk of cardiovascular disease, and the loadings of these two traits were highest among adolescent girls (WC = 0.924; BMI = 0.905); factor 1 is therefore a strong indicator of atherosclerosis risk in adolescents. Factor 2 is predominantly loaded with blood pressures and related traits (SBP, DBP, MBP, and pulse rate), reflecting the risk of essential hypertension in adolescent girls and the combined subjects, whereas in boys factor 2 is loaded with obesity-related traits (weight and hip circumference). Comparably, factor 3 is loaded with blood pressures in boys and with height and WHR in girls, while factor 4 carries a high loading of pulse pressure among boys, girls, and the combined group of adolescents.
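The extraction of orthogonal components and their loadings can be sketched with an eigendecomposition of the correlation matrix. The data below are synthetic stand-ins for the anthropometric and physiometric traits; the variable names are illustrative only, not the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Two synthetic latent dimensions standing in for "adiposity" and
# "blood pressure"; the four observed traits are illustrative only.
adiposity = rng.normal(size=n)
pressure = rng.normal(size=n)
X = np.column_stack([
    adiposity + 0.1 * rng.normal(size=n),  # e.g. BMI
    adiposity + 0.1 * rng.normal(size=n),  # e.g. waist circumference
    pressure + 0.1 * rng.normal(size=n),   # e.g. systolic BP
    pressure + 0.1 * rng.normal(size=n),   # e.g. diastolic BP
])

# Principal component factor analysis on the correlation matrix
R = np.corrcoef(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]          # sort factors by variance explained
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

loadings = eigvecs * np.sqrt(eigvals)        # trait-factor loadings
explained = 100.0 * eigvals / eigvals.sum()  # % of total variance per factor
```

Traits with the largest absolute loadings on the first retained factor identify its cluster, mirroring how the WC and BMI loadings above mark factor 1 as the adiposity factor.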

Keywords: adolescent obesity, cvd, hypertension, punjabi population

Procedia PDF Downloads 368
466 Argumentation Frameworks and Theories of Judging

Authors: Sonia Anand Knowlton

Abstract:

With the rise of artificial intelligence, computer science is becoming increasingly integrated into virtually every area of life, and the law is no exception. Through argumentation frameworks (AFs), computer scientists have used abstract algebra to structure the legal reasoning process in a way that allows conclusions to be drawn from a formalized system of arguments. In AFs, arguments compete against each other for logical success and are related to one another through the binary attack relation. The prevailing arguments make up the preferred extension of the given argumentation framework, telling us what set of arguments must be accepted from a logical standpoint. There have been several developments of AFs since their original conception in the early 1990s, in efforts to make them more aligned with the human reasoning process. Generally, these developments have sought to add nuance to the factors that influence the logical success of competing arguments (e.g., giving an argument more logical strength based on the underlying value it promotes). The most cogent development was the Extended Argumentation Framework (EAF), in which attacks can themselves be attacked by other arguments, and the promotion of different competing values can be formalized within the system. This article applies the logical structure of EAFs to current theoretical understandings of judicial reasoning, contributing simultaneously to theories of judging and to the evolution of AFs. The argument is that the main limitation of EAFs, when applied to judicial reasoning, is that they require judges themselves to assign values to different arguments and then lexically order those values to determine the given framework's preferred extension. Drawing on John Rawls' Theory of Justice, the examination that follows asks whether values are lexically orderable and commensurable to this extent.
The analysis that follows then suggests a potential extension of the EAF system with an approach that formalizes different “planes of attack” for competing arguments that promote lexically ordered values. This article concludes with a summary of how these insights contribute to theories of judging and of legal reasoning more broadly, specifically in indeterminate cases where judges must turn to value-based approaches.
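The basic (non-extended) AF semantics described above can be illustrated with a brute-force sketch (a toy framework invented for illustration; the article's EAFs additionally allow attacks on attacks and value orderings, which this sketch does not model):

```python
from itertools import combinations

# Toy argumentation framework: a attacks b, b attacks c.
# Intuitively, a defeats b, which reinstates c, so {a, c} should prevail.
args = {"a", "b", "c"}
attacks = {("a", "b"), ("b", "c")}

def conflict_free(S):
    # No member of S attacks another member of S.
    return not any((x, y) in attacks for x in S for y in S)

def defends(S, arg):
    # Every attacker of `arg` is itself attacked by some member of S.
    attackers = [x for (x, y) in attacks if y == arg]
    return all(any((s, x) in attacks for s in S) for x in attackers)

def admissible(S):
    return conflict_free(S) and all(defends(S, a) for a in S)

# Preferred extensions: maximal (by set inclusion) admissible sets.
subsets = [set(c) for r in range(len(args) + 1) for c in combinations(sorted(args), r)]
adm = [S for S in subsets if admissible(S)]
preferred = [S for S in adm if not any(S < T for T in adm)]
print(preferred)  # [{'a', 'c'}]
```

The brute-force enumeration is exponential in the number of arguments, which is acceptable only for illustration; practical AF solvers use dedicated algorithms.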

Keywords: computer science, mathematics, law, legal theory, judging

Procedia PDF Downloads 57
465 Experimental Study Analysis of Flow over Pickup Truck’s Cargo Area Using Bed Covers

Authors: Jonathan Rodriguez, Dominga Guerrero, Surupa Shaw

Abstract:

Automobiles are modeled in various forms, and they interact with air when in motion. Aerodynamics is the study of such interactions, where solid bodies affect the way air moves around them. The shape of a solid body affects the ease with which it moves against the flow of air, so any additional freightage, or load, impacts its aerodynamics. It is important to transport people and cargo safely; despite various safety measures, there are a large number of vehicle-related accidents. This study explores the effects an automobile experiences with added cargo and covers. The addition of these items changes the original vehicle shape and the approved design for safe driving. This paper showcases the effects of the changed vehicle shape and design via experimental testing conducted on a physical 1:27 scale model and a CAD model of an F-150 pickup truck, the most common pickup truck in the United States, carrying differently shaped loads and weights while traveling at a constant speed. The additional freightage produces unwanted drag or lift, resulting in lower fuel efficiency and unsafe driving conditions. This study employs an adjustable external shell on the F-150 pickup truck to create a controlled aerodynamic geometry that combats the detrimental effects of additional freightage. The results utilize colored powder, which acts as a visual medium for the interaction of air with the vehicle, to highlight the impact of the additional freight on the automobile's external shell. This is complemented by simulation models, built in the Altair CFD software, of twelve cases concerning the effects of an added load on an F-150 pickup truck. This paper is an attempt toward standardizing the geometric design of the external shell, given the uniqueness of every load and its placement on the vehicle, while providing real-time data to be compared with simulation results from the existing literature.

Keywords: aerodynamics, CFD, freightage, pickup cover

Procedia PDF Downloads 160
464 Algae Growth and Biofilm Control by Ultrasonic Technology

Authors: Vojtech Stejskal, Hana Skalova, Petr Kvapil, George Hutchinson

Abstract:

Algae growth has been an important issue in the water management of water treatment plants, ponds and lakes, swimming pools, aquaculture and fish farms, gardens, and golf courses for decades. There are solutions based on chemical or biological principles. Apart from these traditional principles for inhibiting algae growth and biofilm production, there are also physical methods that are very competitive with the traditional ones. Ultrasonic technology is one of these alternatives. An ultrasonic emitter is able to eliminate the biofilm that acts as a host and attachment point for algae and is the original reason for algae growth. The ultrasound waves prevent the majority of bacteria in planktonic form from becoming strongly attached sessile bacteria that create a welcoming layer for biofilm production. Biofilm formation is very fast: in serene water it takes between 30 minutes and 4 hours, depending on temperature and other parameters. The ultrasonic device does not kill bacteria. Ultrasound waves pass through the bacteria, which retract as if they were in very turbulent water, even though the water is visually completely serene. Under these conditions, the bacteria do not excrete the polysaccharide glue they use to attach to the surface of the pool or pond where the ultrasonic technology is used. Ultrasonic waves thus decrease the production of biofilm on surfaces in the treated area. If the inner surfaces of a pond or basin are already clean at the start of the application of ultrasonic technology, biofilm production is almost completely inhibited. This paper describes two different pilot applications, one in the Czech Republic and one in the United States of America, where the ultrasonic technology used (AlgaeControl) originates. At both sites, the Mezzo Ultrasonic Algae Control System was used, with very positive results not only on biofilm production but also on algae growth in the surrounding area. The technology has been successfully tested in two different environments; the poster describes the differences between them and their influence on the efficiency of the ultrasonic technology application. Conclusions and lessons learned can possibly be applied to other sites within Europe or even further afield.

Keywords: algae growth, biofilm production, ultrasonic solution, ultrasound

Procedia PDF Downloads 259
463 Coupled Space and Time Homogenization of Viscoelastic-Viscoplastic Composites

Authors: Sarra Haouala, Issam Doghri

Abstract:

In this work, a multiscale computational strategy is proposed for the analysis of structures which are described at a refined level both in space and in time. The proposal is applied to two-phase viscoelastic-viscoplastic (VE-VP) reinforced thermoplastics subjected to large numbers of cycles. The main aim is to predict the effective long-time response while reducing the computational cost considerably. The proposed computational framework is a combination of the mean-field space homogenization based on the generalized incrementally affine formulation for VE-VP composites, and the asymptotic time homogenization approach for coupled isotropic VE-VP homogeneous solids under large numbers of cycles. The time homogenization method is based on the definition of micro- and macro-chronological time scales, and on asymptotic expansions of the unknown variables. First, the original anisotropic VE-VP initial-boundary value problem of the composite material is decomposed into coupled micro-chronological (fast time scale) and macro-chronological (slow time scale) problems. The former is purely VE and solved once for each macro time step, whereas the latter problem is nonlinear and solved iteratively using fully implicit time integration. Second, mean-field space homogenization is used for both micro- and macro-chronological problems to determine the micro- and macro-chronological effective behavior of the composite material. The response of the matrix material is VE-VP with J2 flow theory, assuming small strains. The formulation exploits the return-mapping algorithm for the J2 model, with its two steps: viscoelastic prediction and plastic correction. The proposal is implemented for an extended Mori-Tanaka scheme, and verified against finite element simulations of representative volume elements, for a number of polymer composite materials subjected to large numbers of cycles.
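The two-time-scale expansion underlying the asymptotic time homogenization can be sketched in its generic form (the symbols below are a standard notation assumed here, not taken from the article): a small parameter ξ relates the fast loading period to the slow observation time, a micro-chronological time τ = t/ξ is introduced, and every unknown field is expanded asymptotically.

```latex
% Fast (micro-chronological) time scale, with 0 < \xi \ll 1:
\tau = \frac{t}{\xi}, \qquad
u^{\xi}(t) = u_0(t,\tau) + \xi\, u_1(t,\tau) + \xi^2 u_2(t,\tau) + \cdots
% Total time derivatives then split into slow and fast contributions:
\frac{\mathrm{d}}{\mathrm{d}t}
  = \frac{\partial}{\partial t} + \frac{1}{\xi}\,\frac{\partial}{\partial \tau}
```

Collecting terms order by order in ξ is what yields the coupled macro-chronological (slow) and micro-chronological (fast) problems described in the abstract.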

Keywords: asymptotic expansions, cyclic loadings, inclusion-reinforced thermoplastics, mean-field homogenization, time homogenization

Procedia PDF Downloads 363
462 Development of Green Cement, Based on Partial Replacement of Clinker with Limestone Powder

Authors: Yaniv Knop, Alva Peled

Abstract:

Over the past few years, there has been growing interest in the development of Portland Composite Cement by partial replacement of the clinker with mineral additives. The motivations to reduce the clinker content are threefold: (1) ecological - lower emission of CO2 to the atmosphere; (2) economical - cost reduction; and (3) scientific/technological - improvement of performance. Among the mineral additives being used and investigated, limestone is one of the most attractive, as it is considered natural, available, and low cost. The goal of the research is to develop a green cement by partial replacement of the clinker with limestone powder while improving the performance of the cement paste. This work studied blended cements with three limestone powder particle diameters: smaller than, larger than, and similar in size to the clinker particle. Blended cements with limestone of a single particle size distribution and with limestone combining several particle sizes were studied and compared in terms of hydration rate, hydration degree, and water demand to achieve normal consistency. The performance of these systems was also compared with that of the original cement (without added limestone). It was found that the ability to replace an active material with an inert additive, while achieving improved performance, can be obtained by increasing the packing density of the cement-based particles. This may be achieved by replacing the clinker with limestone powders having a combination of several different particle size distributions. Mathematical and physical models were developed to simulate the setting history from initial to final setting time and to predict the packing density of blended cement with limestone of different sizes and various contents. Besides the effect of limestone, as an inert additive, on the packing density of the blended cement, the influence of the limestone particle size on three different chemical reactions was studied: hydration of the cement, carbonation of the calcium hydroxide, and the reactivity of the limestone with the hydration reaction products. The main results and developments will be presented.

Keywords: packing density, hydration degree, limestone, blended cement

Procedia PDF Downloads 280
461 Surface Tension and Bulk Density of Ammonium Nitrate Solutions: A Molecular Dynamics Study

Authors: Sara Mosallanejad, Bogdan Z. Dlugogorski, Jeff Gore, Mohammednoor Altarawneh

Abstract:

Ammonium nitrate (NH₄NO₃, AN) is commonly used as the main component of AN emulsion and fuel oil (ANFO) explosives, which are used extensively in civilian and mining operations for underground development and tunneling applications. The emulsion formulation and the wettability of AN prills, which affect the physical stability and detonation of ANFO, depend strongly on the surface tension, density, and viscosity of the liquid used. Therefore, for engineering applications of this material, the determination of the density and surface tension of concentrated aqueous AN solutions is essential. The molecular dynamics (MD) simulation method has been used to investigate the density and surface tension of highly concentrated ammonium nitrate solutions, up to the solubility limit of AN in water. The simulations were carried out with non-polarisable models for water and ions, and the electronic continuum correction (ECC) model applies polarisation implicitly to the non-polarisable model by scaling the charges of the ions. The calculated densities and surface tensions of the solutions were compared to available experimental values. Our MD simulations show that the non-polarisable model with full-charge ions overestimates the experimental results, while the reduced-charge model for the ions fits the experimental data very well. With the non-polarisable force fields, ions in the solutions are repelled from the interface. However, when the charges of the ions in the original model are scaled in line with the scaling factor of the ECC model, the ions create a double ionic layer near the interface: anions migrate toward the interface while cations stay in the bulk of the solution. Similar ion orientations near the interface were observed when polarisable models were used in the simulations.
In conclusion, applying the ECC model to the non-polarisable force field yields the density and surface tension of the AN solutions with high accuracy in comparison to the experimental measurements.
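The ECC charge scaling mentioned above can be made concrete with a short sketch (assuming the commonly used prescription of scaling ionic charges by 1/√ε_el, with the electronic dielectric constant of water ε_el ≈ 1.78; the article does not state its exact factor, so the numbers here are illustrative):

```python
import math

# Electronic (high-frequency) dielectric constant of water; the ECC
# prescription scales ionic charges by 1/sqrt(eps_el) to account for
# electronic polarisation implicitly (value assumed for illustration).
eps_el = 1.78
scale = 1.0 / math.sqrt(eps_el)   # ~0.75

full_charges = {"NH4+": +1.0, "NO3-": -1.0}   # formal charges, in units of e
ecc_charges = {ion: q * scale for ion, q in full_charges.items()}

print(f"scaling factor = {scale:.3f}")
for ion, q in ecc_charges.items():
    print(f"{ion}: {q:+.3f} e")
```

With this factor, a formally monovalent ion carries an effective charge of roughly ±0.75 e in the reduced-charge simulations.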

Keywords: ammonium nitrate, electronic continuum correction, non-polarisable force field, surface tension

Procedia PDF Downloads 223
460 Exploring Bidirectional Encoder Representations from the Transformers’ Capabilities to Detect English Preposition Errors

Authors: Dylan Elliott, Katya Pertsova

Abstract:

Preposition errors are some of the most common errors made by L2 speakers. In addition, improving error correction and detection methods remains an open issue in the realm of Natural Language Processing (NLP). This research investigates whether the bidirectional encoder representations from transformers model (BERT) has the potential to correct preposition errors accurately enough to be useful in error correction software. This research finds that BERT performs strongly when the scope of its error correction is limited to preposition choice. The researchers used an open-source BERT model and over three hundred thousand edited sentences from Wikipedia, tagged for part of speech, in which only a preposition edit had occurred. To test BERT's ability to detect errors, a technique known as multi-level masking was used to generate suggestions based on sentence context for every prepositional environment in the test data. These suggestions were compared with the original errors in the data and their known corrections to evaluate BERT's performance. The suggestions were further analyzed to determine whether BERT more often agreed with the judgments of the Wikipedia editors. Both the untrained and fine-tuned models were compared. Fine-tuning led to a greater rate of error detection, which significantly improved recall but lowered precision due to an increase in false positives, or falsely flagged errors. However, in most cases, these false positives were not errors in preposition usage but merely cases where more than one preposition was possible. Furthermore, when BERT correctly identified an error, the model largely agreed with the Wikipedia editors, suggesting that BERT's ability to detect misused prepositions is better than previously believed. To evaluate to what extent BERT's false positives were grammatical suggestions, we plan to conduct a further crowd-sourcing study to test the grammaticality of BERT's suggested sentence corrections against native speakers' judgments.
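The recall/precision trade-off reported for the fine-tuned model can be illustrated with a minimal scoring sketch (hypothetical data; this is the standard computation, not the researchers' evaluation code):

```python
# Hypothetical detection outcomes: each pair is
# (model flagged a preposition error, gold label says it is an error).
results = [
    (True, True), (True, False), (False, True),
    (True, True), (False, False), (True, True),
]

tp = sum(1 for flagged, gold in results if flagged and gold)       # true positives
fp = sum(1 for flagged, gold in results if flagged and not gold)   # false alarms
fn = sum(1 for flagged, gold in results if not flagged and gold)   # missed errors

precision = tp / (tp + fp)   # more false positives -> lower precision
recall = tp / (tp + fn)      # more missed errors  -> lower recall
print(f"precision={precision:.2f} recall={recall:.2f}")
```

Fine-tuning, as described above, trades some precision (more false alarms) for higher recall (fewer missed errors) under exactly these two formulas.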

Keywords: BERT, grammatical error correction, preposition error detection, prepositions

Procedia PDF Downloads 137
459 Flow Reproduction Using Vortex Particle Methods for Wake Buffeting Analysis of Bluff Structures

Authors: Samir Chawdhury, Guido Morgenthal

Abstract:

The paper presents a novel extension of Vortex Particle Methods (VPM) in which the study aims to reproduce a template simulation of the complex flow field generated by impulsively started flow past an upstream bluff body at a certain Reynolds number Re. Vibration of a structural system under upstream wake flow is often considered its governing design criterion; therefore, particular attention is given in this study to the reproduction of the wake flow simulation. The basic methodology for the flow reproduction requires downstream velocity sampling from the template flow simulation. Therefore, at particular distances from the upstream section, the instantaneous velocity components are sampled using a series of square sampling cells arranged vertically, where each cell contains four velocity sampling points at its corners. Since the grid-free Lagrangian VPM algorithm discretises vorticity on particle elements, the method requires transforming the velocity components into vortex circulations and, finally, reproducing the template flow field by seeding these vortex circulations, or particles, into a free-stream flow. It is noteworthy that the vortex particles have to be released into the free stream at exactly the same rate as the velocity sampling. Studies have been carried out, specifically, on different sampling rates and velocity sampling positions to find their effects on flow reproduction quality. The quality assessments are mainly done, using a downstream flow-monitoring profile, by comparing the characteristic wind flow profiles using several statistical turbulence measures. Additionally, the comparisons are performed using velocity time histories, snapshots of the flow fields, and the vibration of a downstream bluff section by performing wake buffeting analyses of the section under the original and reproduced wake flows. A convergence study is performed to validate the method. The study also describes how flow reproductions can be achieved with less computational effort.
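The transformation of sampled corner velocities into a vortex circulation can be sketched as a discrete line integral of velocity around one square sampling cell (an illustrative reconstruction under stated assumptions, not the authors' exact implementation; the function name and corner ordering are invented here):

```python
import numpy as np

def cell_circulation(corner_velocities, h):
    """Trapezoidal line integral of velocity around a square sampling cell.

    corner_velocities: 4x2 array of (ux, uy) at bottom-left, bottom-right,
    top-right, top-left (counter-clockwise); h: cell side length.
    Returns Gamma = closed integral of u . dl, the circulation assigned
    to a vortex particle seeded at the cell.
    """
    bl, br, tr, tl = corner_velocities
    gamma = 0.0
    gamma += 0.5 * (bl[0] + br[0]) * h   # bottom edge, +x direction
    gamma += 0.5 * (br[1] + tr[1]) * h   # right edge,  +y direction
    gamma -= 0.5 * (tr[0] + tl[0]) * h   # top edge,    -x direction
    gamma -= 0.5 * (tl[1] + bl[1]) * h   # left edge,   -y direction
    return gamma

# Check against rigid-body rotation u = (-omega*y, omega*x),
# whose uniform vorticity 2*omega gives Gamma = 2*omega*h**2 exactly.
omega, h = 1.0, 0.1
pts = np.array([[0, 0], [h, 0], [h, h], [0, h]], dtype=float)
vel = np.column_stack([-omega * pts[:, 1], omega * pts[:, 0]])
print(cell_circulation(vel, h))  # 0.02
```

By Stokes' theorem, this circulation equals the vorticity flux through the cell, which is exactly the quantity a Lagrangian vortex particle carries.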

Keywords: vortex particle method, wake flow, flow reproduction, wake buffeting analysis

Procedia PDF Downloads 307
458 Radar Track-based Classification of Birds and UAVs

Authors: Altilio Rosa, Chirico Francesco, Foglia Goffredo

Abstract:

In recent years, the number of Unmanned Aerial Vehicles (UAVs) has increased significantly. The rapid development of commercial and recreational drones makes them an important part of our society. Despite the growing list of their applications, these vehicles pose a huge threat to civil and military installations: detection, classification, and neutralization of such flying objects have become an urgent need. Radar is an effective remote sensing tool for detecting and tracking flying objects, but scenarios characterized by a large number of tracks related to flying birds make the drone detection task especially challenging: the operator's PPI is cluttered with a huge number of potential threats, and the reaction time can be severely affected. Flying birds show velocities, radar cross-sections and, in general, characteristics similar to those of UAVs. Building on the absence of any single feature able to distinguish UAVs from birds, this paper uses a multiple-feature approach in which an original feature selection technique is developed to feed binary classifiers trained to distinguish birds and UAVs. Radar tracks acquired in the field from different UAVs and birds performing various trajectories were used to extract specifically designed target movement-related features based on velocity, trajectory, and signal strength. An optimization strategy based on a genetic algorithm is also introduced to select the optimal subset of features and to estimate the performance of several classification algorithms (neural network, SVM, logistic regression, etc.), both in terms of the number of selected features and the misclassification error. Results show that the proposed methods are able to reduce the dimension of the data space and to remove almost all non-drone false targets with suitable classification accuracy (higher than 95%).
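The genetic-algorithm feature selection step can be sketched as follows (a toy surrogate fitness stands in for the article's classifier-based misclassification error; all names, numbers, and the informative-feature set are illustrative):

```python
import random

random.seed(0)
N_FEATURES = 10
INFORMATIVE = {0, 3, 7}   # hypothetical "useful" features

def fitness(mask):
    # Reward selecting informative features, penalise subset size
    # (a toy surrogate for cross-validated classifier accuracy).
    hits = sum(1 for i in INFORMATIVE if mask[i])
    return hits - 0.1 * sum(mask)

def mutate(mask, rate=0.1):
    # Flip each bit with probability `rate`.
    return [b ^ (random.random() < rate) for b in mask]

def crossover(a, b):
    cut = random.randrange(1, N_FEATURES)
    return a[:cut] + b[cut:]

# Population of binary masks: 1 = feature selected.
pop = [[random.randint(0, 1) for _ in range(N_FEATURES)] for _ in range(20)]
for _ in range(50):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                      # elitist selection
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(10)]

best = max(pop, key=fitness)
print([i for i, b in enumerate(best) if b])  # should converge toward {0, 3, 7}
```

In the article's setting, the fitness would instead wrap a trained classifier, so each generation trades off the number of selected features against the measured misclassification error.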

Keywords: birds, classification, machine learning, UAVs

Procedia PDF Downloads 212
457 Development of Intake System for Improvement of Performance of Compressed Natural Gas Spark Ignition Engine

Authors: Mardani Ali Serah, Yuriadi Kusuma, Chandrasa Soekardi

Abstract:

An improved flow strategy was implemented in the intake system of the engine to produce better Compressed Natural Gas (CNG) engine performance. Three components were studied, designed, simulated, developed, tested, and validated in this research: the mixer, the swirl device, and the fuel cooler device. The three components were installed to produce a pressurised turbulent flow with a higher fuel volume in the intake system, which is the ideal condition for a CNG-fuelled engine. A combination of experimental work and simulation techniques was carried out. The work included the design and fabrication of the engine test rig, the CNG fuel cooling system, and the fitting of instrumentation and measurement systems for performance testing in both gasoline and CNG modes. The simulation work was utilised to design an appropriate mixer and swirl device. A flow test rig, known as the steady state flow rig (SSFR), was constructed to validate the simulation results. The effect of these components on CNG engine performance was then investigated. A venturi-inlet-holes mixer with three variables was studied: the number of inlet holes (8, 12, and 16), the inlet angles (30°, 40°, 50°, and 60°), and the outlet angles (20°, 30°, 40°, and 50°). The swirl device, with the number of revolutions and the plane angle as variables, was also studied. A CNG fuel cooling system with the ability to control the water flow rate and the coolant temperature was installed. In this study it was found that the mixer and swirl device improved the swirl ratio and the pressure condition inside the intake manifold. The installation of the mixer, the swirl device, and the CNG fuel cooling system successfully increased CNG engine performance by 5.5%, 5%, and 3%, respectively, compared with the existing operating condition. The overall results prove that this mixer and swirl device method has high potential for increasing CNG engine performance. The overall improvement in engine power and torque was about 11% and 13%, respectively, compared to the original mixer.

Keywords: intake system, Compressed Natural Gas, volumetric efficiency, engine performance

Procedia PDF Downloads 335
456 A Comparative Study of Simple and Pre-polymerized Fe Coagulants for Surface Water Treatment

Authors: Petros Gkotsis, Giorgos Stratidis, Manassis Mitrakas, Anastasios Zouboulis

Abstract:

This study investigates the use of original and pre-polymerized iron (Fe) reagents, compared to the commonly applied polyaluminum chloride (PACl) coagulant, for surface water treatment. The coagulants included ferric chloride (FeCl₃) and ferric sulfate (Fe₂(SO₄)₃), as well as their pre-polymerized Fe counterparts, polyferric sulfate (PFS) and polyferric chloride (PFCl). The efficiency of the coagulants was evaluated by the removal of natural organic matter (NOM) and suspended solids (SS), which were determined in terms of the reduction of UV absorbance at 254 nm and turbidity, respectively. The residual metal concentrations (Fe and Al) were also measured. Coagulants were added at five concentrations (1, 2, 3, 4, and 5 mg/L) and three pH values (7.0, 7.3, and 7.6). Experiments were conducted in a jar-test device with two types of synthetic surface water (i.e., of high and low organic strength), which consisted of humic acid (HA) and kaolin at different concentrations (5 mg/L and 50 mg/L). After the coagulation/flocculation process, the clean water was separated with filters of 0.45 μm pore size. Filtration was also conducted before the addition of coagulants in order to compare the 'net' effect of the coagulation/flocculation process on the examined parameters (UV absorbance at 254 nm, turbidity, and residual metal concentration). Results showed that the use of PACl resulted in the highest removal of humics for both types of surface water. For the surface water of high organic strength (humic acid-kaolin, 50 mg/L-50 mg/L), the highest removal of humics was observed at the highest coagulant dosage of 5 mg/L and at pH=7. On the contrary, turbidity was not significantly affected by the coagulant dosage. However, the use of PACl decreased turbidity the most, especially for the surface water of high organic strength. As expected, applying coagulation/flocculation prior to filtration improved NOM removal but only slightly affected turbidity.
Finally, the residual Fe concentration (0.01-0.1 mg/L) was much lower than the residual Al concentration (0.1-0.25 mg/L).

Keywords: coagulation/flocculation, iron and aluminum coagulants, metal salts, pre-polymerized coagulants, surface water treatment

Procedia PDF Downloads 147
455 Organic Carbon Pools Fractionation of Lacustrine Sediment with a Stepwise Chemical Procedure

Authors: Xiaoqing Liu, Kurt Friese, Karsten Rinke

Abstract:

Lacustrine sediment archives rich paleoenvironmental information about a lake and its surrounding environment. Additionally, modern sediment is used as an effective medium for lake monitoring. Organic carbon (OC) in sediment is a heterogeneous mixture with varying turnover times and qualities, resulting from the different biogeochemical processes involved in the deposition of organic material. Therefore, the isolation of different carbon pools is important for research on lacustrine conditions. However, the numerous available fractionation procedures can hardly yield homogeneous carbon pools in terms of stability and age. In this work, a multi-step fractionation protocol was adopted that treated sediment with hot water, HCl, H2O2, and Na2S2O8 in sequence; the treated sediment from each step was analyzed for isotopic and structural composition with an Isotope Ratio Mass Spectrometer coupled with an elemental analyzer (IRMS-EA) and solid-state 13C Nuclear Magnetic Resonance (NMR), respectively. The sequential extractions with hot water, HCl, and H2O2 yielded a more homogeneous, C3-plant-originating OC fraction, characterized by an atomic C/N ratio shift from 12.0 to 20.8 and by 13C and 15N isotopic signatures that were 0.9‰ and 1.9‰ more depleted than those of the original bulk sediment, respectively. Additionally, the H2O2-resistant residue was dominated by stable components such as lignins, waxes, cutans, tannins, steroids, aliphatic proteins, and complex carbohydrates. In the acid hydrolysis step, 6M HCl was much more effective than 1M HCl at isolating a sedimentary OC fraction with a higher degree of homogeneity. Owing to its extremely high removal rate of organic matter, the Na2S2O8 oxidation step is only suggested if the isolation of the most refractory OC pool is mandatory. We conclude that this multi-step chemical fractionation procedure is effective at isolating more homogeneous OC pools in terms of stability and functional structure, and it can be used as a promising method for the OC pool fractionation of sediment or soil in future lake research.

Keywords: 13C-CPMAS-NMR, 13C signature, lake sediment, OC fractionation

Procedia PDF Downloads 297
454 Verification and Validation of Simulated Process Models of KALBR-SIM Training Simulator

Authors: T. Jayanthi, K. Velusamy, H. Seetha, S. A. V. Satya Murty

Abstract:

Verification and validation of simulated process models is the most important phase of the simulator life cycle. Evaluation of simulated process models based on verification and validation techniques checks the closeness of each component model (in a simulated network) to the real system/process with respect to dynamic behaviour under steady state and transient conditions. The verification and validation process helps qualify the process simulator for its intended purpose, whether that is providing comprehensive training or design verification. In general, model verification is carried out by comparing simulated component characteristics with the original requirements to ensure that each step in the model development process completely incorporates all the design requirements. Validation testing is performed by comparing the simulated process parameters to the actual plant process parameters, either in standalone mode or in integrated mode. A full-scope replica operator training simulator for the Prototype Fast Breeder Reactor (PFBR), named KALBR-SIM (Kalpakkam Breeder Reactor Simulator), has been developed at IGCAR, Kalpakkam, India; the main participants are engineers/experts belonging to the modeling team, the process design team, and the instrumentation and control design team. This paper discusses the verification and validation process in general, the evaluation procedure adopted for the PFBR operator training simulator, the methodology followed for verifying the models, and the reference documents and standards used. It details the importance of internal validation by design experts, subsequent validation by an external agency consisting of experts from various fields, model improvement by tuning based on the experts' comments, final qualification of the simulator for its intended purpose, and the difficulties faced while coordinating the various activities.

Keywords: Verification and Validation (V&V), Prototype Fast Breeder Reactor (PFBR), Kalpakkam Breeder Reactor Simulator (KALBR-SIM), steady state, transient state

Procedia PDF Downloads 253
453 Techniques for Seismic Strengthening of Historical Monuments from Diagnosis to Implementation

Authors: Mircan Kaya

Abstract:

A multi-disciplinary approach is required in any intervention project for historical monuments. Heritage structures are peculiar, owing to the complexity of their geometry and the variable, unpredictable characteristics of the original materials used in their creation. Their histories are often complex, and they require correct diagnoses to decide on the techniques of intervention. This approach should combine not only technical aspects but also historical research that may help discover phenomena involving structural issues and build knowledge of the structure: its concept, method of construction, previous interventions, damage process, and current state. In addition to traditional techniques such as bed joint reinforcement, the repair, strengthening, and restoration of historical buildings may require several modern methods that can be described as innovative techniques, such as pre-stressing and post-tensioning, the use of shape memory alloy devices and shock transmission units, shoring, drilling, and the use of stainless steel or titanium. Regardless of the method incorporated in the strengthening process, whether traditional or innovative, it is crucial to recognize that structural strengthening is the process of upgrading the structural system of an existing building with the aim of improving its performance under existing and additional loads, such as seismic loads. This process is much more complex than dealing with a new construction, because several unknown factors are associated with the structural system: material properties, load paths, previous interventions, and existing reinforcement are especially important matters to consider. There are several examples of seismic strengthening with traditional and innovative techniques around the world, which are discussed in this paper in detail, including their pros and cons. Ultimately, however, the main idea underlying the philosophy of a successful intervention, with the most appropriate techniques for strengthening a historic monument, should be decided by a proper assessment of the specific needs of the building.

Keywords: bed joint reinforcement, historical monuments, post-tensioning, pre-stressing, seismic strengthening, shape memory alloy devices, shock transmitters, tie rods

Procedia PDF Downloads 258
452 Investigation of Oscillation Mechanism of a Large-scale Solar Photovoltaic and Wind Hybrid Power Plant

Authors: Ting Kai Chia, Ruifeng Yan, Feifei Bai, Tapan Saha

Abstract:

This research presents a real-world power system oscillation incident in 2022 that originated from a hybrid solar photovoltaic (PV) and wind renewable energy farm in Australia with a rated capacity of approximately 300 MW. The voltage and reactive power outputs recorded at the point of common coupling (PCC) oscillated in the sub-synchronous frequency region, and the oscillation was sustained for approximately five hours in the network. The reactive power oscillation gradually increased over time and reached a recorded maximum of approximately 250 MVar peak-to-peak (from inductive to capacitive). The network service provider was not able to quickly identify the location of the oscillation source because the issue was widespread across the network. After the incident, the original equipment manufacturer (OEM) concluded that the oscillation problem was caused by the incorrect setting recovery of the hybrid power plant controller (HPPC) in the voltage and reactive power control loop after a loss-of-communication event. The voltage controller normally outputs a reactive power (Q) reference value to the Q controller, which controls the Q dispatch setpoint of the PV and wind plants in the hybrid farm. Meanwhile, a feed-forward (FF) configuration is used to bypass the Q controller in case of a loss of communication. Further study found that the FF control mode was still engaged when communication was re-established, which ultimately resulted in the oscillation event. However, there was no detailed explanation of why the FF control mode can cause instability in the hybrid farm, and the event was not duplicated in simulation to analyze the root cause of the oscillation. Therefore, this research aims to model and replicate the oscillation event in a simulation environment and to investigate the underlying behavior of the HPPC and the consequent oscillation mechanism during the incident.
The outcome of this research will provide significant benefits to the safe operation of large-scale renewable energy generators and power networks.
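The mode-selection fault described in the abstract can be illustrated with a minimal discrete-time sketch. All gains, sensitivities, and timings below are illustrative assumptions, not the OEM model or the actual plant parameters; the point is only that the FF bypass removes the inner Q controller's smoothing, leaving a delayed high-gain voltage loop that swings with growing amplitude.

```python
def simulate_hppc(steps=60, ff_stuck_after_restore=False):
    KV = 30.0     # voltage-controller gain, MVar per p.u. voltage error (assumed)
    SENS = 0.05   # grid sensitivity, p.u. voltage rise per MVar dispatched (assumed)
    ALPHA = 0.2   # inner Q-controller smoothing factor (assumed)
    V_SET, V0 = 1.00, 0.98   # voltage setpoint and no-dispatch voltage, p.u.
    q_cmd = q_out = 0.0
    comms_ok = False          # start inside the loss-of-communication event
    history = []
    for k in range(steps):
        if k == 5:
            comms_ok = True   # communication is re-established
        v_meas = V0 + SENS * q_out            # voltage responds to last dispatch
        q_ref = KV * (V_SET - v_meas)         # outer voltage-controller output
        if (not comms_ok) or ff_stuck_after_restore:
            q_cmd = q_ref                     # FF bypass: Q reference goes straight out
        else:
            q_cmd += ALPHA * (q_ref - q_cmd)  # inner Q loop smooths the reference
        q_out = q_cmd
        history.append(q_out)
    return history

healthy = simulate_hppc(ff_stuck_after_restore=False)  # settles near 0.24 MVar
faulty = simulate_hppc(ff_stuck_after_restore=True)    # sign-alternating, growing swings
```

When the FF flag clears on restore, the smoothed loop has an effective per-step gain below one and converges; when it stays engaged, the one-step measurement delay plus the full outer-loop gain gives a loop gain above one, so the dispatch alternates with growing amplitude, qualitatively matching a sustained reactive power oscillation.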

Keywords: PV, oscillation, modelling, wind

Procedia PDF Downloads 31
451 Analytical Characterization of TiO2-Based Nanocoatings for the Protection and Preservation of Architectural Calcareous Stone Monuments

Authors: Sayed M. Ahmed, Sawsan S. Darwish, Mahmoud A. Adam, Nagib A. Elmarzugi, Mohammad A. Al-Dosari, Nadia A. Al-Mouallimi

Abstract:

Historical stone surfaces and architectural heritage, especially those located in open areas, may undergo unwanted changes due to exposure to many physical and chemical deterioration factors; air pollution, soluble salts, RH/temperature fluctuations, and biodeterioration are the main causes of decay of stone building materials. The development and application of self-cleaning treatments on historical and architectural stone surfaces could be a significant improvement in the conservation, protection, and maintenance of cultural heritage. Nanometric titanium dioxide has become a promising photocatalytic material owing to its ability to catalyze the complete degradation of many organic contaminants; it represents an appealing way to create self-cleaning surfaces, thus limiting maintenance costs and promoting the degradation of polluting agents. The obtained nano-TiO2 coatings were applied on travertine (marble and limestone are often used in historical and monumental buildings). The efficacy of the treatments was evaluated after coating and after artificial thermal aging, through capillary water absorption measurements and ultraviolet-light exposure, to assess the photo-induced and hydrophobic effects of the coated surface, while the surface morphology before and after treatment was examined by scanning electron microscopy (SEM). The molecular changes occurring in treated samples were studied by FTIR-ATR spectroscopy, and colorimetric measurements were performed to evaluate the optical appearance. Taken together, the results show that coating with TiO2 nanoparticles is an innovative method that enhanced the durability of stone surfaces toward UV aging, improved their resistance to relative humidity and temperature, produced well-evident self-cleaning photo-induced effects, and caused no alteration of the original features.
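The capillary water absorption test mentioned above is conventionally reduced to a single absorption coefficient: the slope of cumulative water uptake per unit area against the square root of time. A minimal sketch of that calculation follows; the readings and sample area are invented for illustration, not the paper's measurements.

```python
def absorption_coefficient(times_s, mass_g, area_cm2):
    """Least-squares slope of uptake (g/cm^2) versus sqrt(time) (s^0.5)."""
    xs = [t ** 0.5 for t in times_s]
    ys = [m / area_cm2 for m in mass_g]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Hypothetical readings: an untreated stone absorbs faster than a coated one.
t = [60, 240, 540, 960]                 # seconds of capillary contact
untreated = [0.30, 0.60, 0.90, 1.20]    # grams of water absorbed
coated = [0.10, 0.20, 0.30, 0.40]
ac_untreated = absorption_coefficient(t, untreated, area_cm2=25.0)
ac_coated = absorption_coefficient(t, coated, area_cm2=25.0)
```

A lower coefficient after treatment is the quantitative signature of the hydrophobic effect the abstract reports.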

Keywords: architectural calcareous stone monuments, coating, photocatalysis TiO2, self-cleaning, thermal aging

Procedia PDF Downloads 250
450 Improving Paper Mechanical Properties and Printing Quality by Using Carboxymethyl Cellulose as a Strength Agent

Authors: G. N. Simonian, R. F. Basalah, F. T. Abd El Halim, F. F. Abd El Latif, A. M. Adel, A. M. El Shafey

Abstract:

Carboxymethyl cellulose (CMC) is an anionic, water-soluble polymer that has been introduced into paper coating as a strength agent. One of the main objectives of this research is to investigate the influence of CMC concentration on improving the strength properties of paper fiber. In this work, we coated Xerox paper sheets with carboxymethyl cellulose solutions of different concentrations (0.1, 0.5, 1, 1.5, 2, and 3% w/v). The mechanical properties, breaking length and tearing resistance (tear factor), were measured for the treated and untreated paper specimens, and the polymer retained in the coated paper samples was also calculated. As the concentration of the CMC coating increases, the breaking length and tear factor increase as well. It can be concluded that CMC improves the mechanical properties of paper sheets, resulting in increased paper stability. The present research also aimed to study the effects on the vessel element structure and vessel picking tendency of the coated paper sheets. In addition to the improved strength properties of the treated sheets, a significant decrease in vessel picking tendency was expected: whereas refining of the original (untreated) paper sheets improved mainly the bonding ability of the fibers, CMC effectively enhanced the bonding of the vessels as well. Moreover, film structures were formed in the fibrillated areas of the coated paper specimens and were concluded to reinforce bonding within the sheet. Fragmentation of vessel elements through CMC modification was also found to be important, decreasing the picking tendency, which is reflected in good printability. Scanning electron microscope (SEM) images are presented to explain the improved bonding ability of vessels and fibers after CMC modification. Finally, CMC modification enhances paper mechanical properties and print quality.
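The retained-polymer figure mentioned above is commonly expressed as coating pickup relative to the base sheet weight. A minimal sketch of that bookkeeping follows; the sheet weights are hypothetical, and only the standard mass-gain formula is assumed, not the paper's actual data.

```python
def retained_polymer_percent(base_weight_g, coated_weight_g):
    """Polymer pickup as a percentage of the base sheet weight."""
    pickup = coated_weight_g - base_weight_g
    return 100.0 * pickup / base_weight_g

# Hypothetical dry weights for one sheet at each CMC concentration (% w/v).
base = 4.00  # grams, uncoated sheet
coated = {0.1: 4.02, 0.5: 4.06, 1.0: 4.11, 1.5: 4.15, 2.0: 4.19, 3.0: 4.26}
retention = {c: retained_polymer_percent(base, w) for c, w in coated.items()}
```

Plotting retention against the measured breaking length and tear factor is the direct way to show the concentration-strength trend the abstract describes.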

Keywords: carboxymethyl cellulose (CMC), breaking length, tear factor, vessel picking, printing, concentration

Procedia PDF Downloads 419