Search results for: linear direct drive
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7059

1089 Effects of Fe Addition and Process Parameters on the Wear and Corrosion Characteristics of Icosahedral Al-Cu-Fe Coatings on Ti-6Al-4V Alloy

Authors: Olawale S. Fatoba, Stephen A. Akinlabi, Esther T. Akinlabi, Rezvan Gharehbaghi

Abstract:

The performance required of material surfaces in wear and corrosion environments cannot be achieved by conventional surface modifications and coatings. Therefore, different industrial sectors need an alternative technique for enhanced surface properties. Titanium and its alloys possess poor tribological properties, which limit their use in certain industries. This paper focuses on the effect of hybrid Al-Cu-Fe coatings on a grade five titanium alloy using the laser metal deposition (LMD) process. Icosahedral Al-Cu-Fe quasicrystals are a relatively new class of materials which exhibit unusual atomic structure and useful physical and chemical properties. A 3 kW continuous-wave ytterbium laser system (YLS), attached to a KUKA robot that controls the movement of the cladding process, was utilized for the fabrication of the coatings. The titanium cladded surfaces were investigated for their hardness, corrosion, and tribological behaviour at different laser processing conditions. The samples were cut into corrosion coupons and immersed in 3.65% NaCl solution at 28 °C for examination by Electrochemical Impedance Spectroscopy (EIS) and Linear Polarization (LP) techniques. The cross-sectional view of the samples was analysed. It was found that the geometrical properties of the deposits, such as width, height, and the Heat Affected Zone (HAZ) of each sample, remarkably increased with increasing laser power due to the laser-material interaction. Higher amounts of aluminum and titanium were observed in the formation of the composite. The indentation testing reveals that for both scanning speeds of 0.8 m/min and 1 m/min, the mean hardness value decreases with increasing laser power. The low coefficient of friction, excellent wear resistance, and high microhardness were attributed to the formation of hard intermetallic compounds (TiCu, Ti2Cu, Ti3Al, Al3Ti) produced through in situ metallurgical reactions during the LMD process.
The load-bearing capability of the substrate was improved due to the excellent wear resistance of the coatings. The cladded layer showed a uniform, crack-free surface due to optimized laser process parameters, which led to the refinement of the coatings.

Keywords: Al-Cu-Fe coating, corrosion, intermetallics, laser metal deposition, Ti-6Al-4V alloy, wear resistance

Procedia PDF Downloads 161
1088 Supplier Carbon Footprint Methodology Development for Automotive Original Equipment Manufacturers

Authors: Nur A. Özdemir, Sude Erkin, Hatice K. Güney, Cemre S. Atılgan, Enes Huylu, Hüseyin Y. Altıntaş, Aysemin Top, Özak Durmuş

Abstract:

Carbon emissions produced during a product’s life cycle, from the extraction of raw materials up to waste disposal and market consumption activities, are major contributors to global warming. In light of the science-based targets (SBT) leading the way to a zero-carbon economy for the sustainable growth of companies, carbon footprint reporting of purchased goods has become critical for identifying hotspots and best practices for emission reduction opportunities. In line with Ford Otosan's corporate sustainability strategy, research was conducted to evaluate the carbon footprint of purchased products in accordance with Scope 3 of the Greenhouse Gas Protocol (GHG). The purpose of this paper is to develop a systematic and transparent methodology to calculate the carbon footprint of products produced by automotive OEMs (Original Equipment Manufacturers) within the context of automobile supply chain management. To begin with, primary material data were collected through IMDS (International Material Data System) for the company’s three distinct vehicle types: Light Commercial Vehicle (Courier), Medium Commercial Vehicle (Transit and Transit Custom), and Heavy Commercial Vehicle (F-MAX). The obtained material data were classified as metals, plastics, liquids, electronics, and others to gain insight into the overall material distribution of the produced vehicles, and matched to the SimaPro Ecoinvent 3 database, one of the most extensive databases for modelling material data related to the product life cycle. Product life cycle analysis was carried out within the framework of the ISO 14040–14044 standards by addressing their requirements and procedures. A comprehensive literature review and cooperation with suppliers were undertaken to identify the production methods of parts used in vehicles and to find out the amount of scrap generated during part production.
Cumulative weight and material information, together with the related production processes of the components, were compiled and multiplied by current sales figures. The results of the study provide a key model of the carbon footprint of products and processes, based on a scientific approach to drive sustainable growth by setting straightforward, science-based emission reduction targets. Hence, this study aims to identify the hotspots and, correspondingly, provide broad ideas about how to integrate carbon footprint estimates into the company's supply chain management by defining convenient actions in line with climate science. According to the emission values arising from the production phase, including raw material extraction and material processing, for the Ford Otosan vehicles considered in this study, GHG emissions from the production of metals used for HCV, MCV, and LCV account for more than half of the carbon footprint of the vehicle's production. Correspondingly, aluminum and steel have the largest share among all material types, and achieving carbon neutrality in the steel and aluminum industry is of great significance to the world, which will also have an immense impact on the automobile industry. A strategic product sustainability plan, which includes the use of secondary materials, conversion to green energy, and low-energy process design, is required to reduce emissions from steel, aluminum, and plastics due to the projected increase in total volume by 2030.
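The core calculation this methodology describes, material mass per vehicle multiplied by a cradle-to-gate emission factor and then scaled by sales volume, can be sketched as follows. All masses, emission factors, and the sales figure are invented placeholders, not Ford Otosan or Ecoinvent values.

```python
# Minimal sketch of the Scope 3 material-footprint calculation.
# All numbers below are illustrative placeholders.
materials = {
    "steel":    {"kg_per_vehicle": 900.0, "ef_kgco2e_per_kg": 2.1},
    "aluminum": {"kg_per_vehicle": 150.0, "ef_kgco2e_per_kg": 8.5},
    "plastics": {"kg_per_vehicle": 200.0, "ef_kgco2e_per_kg": 3.0},
}
sales_volume = 10_000  # vehicles sold

# Cradle-to-gate footprint per vehicle: sum of mass x emission factor
per_vehicle_kgco2e = sum(
    m["kg_per_vehicle"] * m["ef_kgco2e_per_kg"] for m in materials.values()
)

# Fleet-level footprint, in tonnes CO2e
fleet_total_tco2e = per_vehicle_kgco2e * sales_volume / 1000.0

# Share of the per-vehicle footprint attributable to metals
share_of_metals = (
    materials["steel"]["kg_per_vehicle"] * materials["steel"]["ef_kgco2e_per_kg"]
    + materials["aluminum"]["kg_per_vehicle"] * materials["aluminum"]["ef_kgco2e_per_kg"]
) / per_vehicle_kgco2e
```

Even with these toy numbers, metals dominate the per-vehicle total, mirroring the finding that steel and aluminum account for more than half of the production footprint.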

Keywords: automotive, carbon footprint, IMDS, scope 3, SimaPro, sustainability

Procedia PDF Downloads 87
1087 Impact of Climate Change and Anthropogenic Effect on Hilsa Fishery Management in South-East Asia: Urgent Need for Trans-Boundary Policy

Authors: Dewan Ali Ahsan

Abstract:

Hilsa (Tenualosa ilisha) is one of the most important anadromous fish species of the trans-boundary ecosystem of Bangladesh, India, and Myanmar. Hilsa is not only an economically important species, especially for Bangladesh and India, but also an integral part of the culture of both countries. In Bangladesh, this flagship species alone contributes 10.82% of the country's total fish production, and about 75% of the world's total hilsa catch comes from Bangladesh. As an anadromous fish, hilsa migrates from the Bay of Bengal to rivers for spawning, nursing, and growing, and for all of these purposes it needs freshwater. Ripe broods prefer turbid, fast-flowing freshwater for spawning, but the young prefer clear, slow-flowing freshwater. Climate change (salinity intrusion, sea level rise, temperature rise, altered freshwater flow), unplanned developmental activities, and other anthropogenic activities are together severely damaging the hilsa stock and its habitats. Climate change and human interference are thus predicted to have a range of direct and indirect impacts on the marine and freshwater hilsa fishery, with implications for fisheries-dependent economies, coastal communities, and fisherfolk. The present study identified salinity intrusion, siltation of the river bed, decreased water flow from upstream, fragmentation of rivers in the dry season, overexploitation, and the use of small-mesh nets as the major factors affecting the upstream migration of hilsa and its sustainable management. It was also noted that the Bangladesh government has taken some action for hilsa management. The government is trying to increase hilsa production not only by conserving jatka (juvenile hilsa) but also by protecting brood hilsa during the breeding season through a seasonal ban on fishing, restricted mesh sizes, etc. Unfortunately, no such management plans are available for the Indian and Myanmar territories.
As hilsa is a highly migratory trans-boundary fish in the Bay of Bengal, and all three countries share the same stock, it is essential to adopt a joint management policy (by Bangladesh, India, and Myanmar) for the sustainable management of the hilsa stock.

Keywords: hilsa, climate change, south-east Asia, fishery management

Procedia PDF Downloads 484
1086 Science and Mathematics Instructional Strategies, Teaching Performance and Academic Achievement in Selected Secondary Schools in Upland

Authors: Maria Belen C. Costa, Liza C. Costa

Abstract:

Teachers have an important influence on students’ academic achievement. They play a crucial role in educational attainment because they stand at the interface of the transmission of knowledge, values, and skills in the learning process through the instructional strategies they employ in the classroom. The level of achievement of students in school depends on the degree of effectiveness of the instructional strategies used by the teacher. Thus, this study was conceptualized and conducted to examine the instructional strategies preferred and used by Science and Mathematics teachers and the impact of those strategies on their teaching performance and on students’ academic achievement in Science and Mathematics. The participants comprised 61 teachers, chosen through total enumeration, and 610 students, selected using a two-stage random sampling technique. A descriptive correlational design was used, with a self-made questionnaire as the main data-gathering tool. Relationships among variables were tested and analyzed using the Spearman rank correlation coefficient and the Wilcoxon signed-rank test. The teacher participants mainly belonged to the ‘young’ age group (35 years and below), and most were females who were ‘very much experienced’ (16 years and above) in teaching. Teaching performance was found to be ‘very satisfactory’, while academic achievement in Science and Mathematics was found to be ‘satisfactory’. The demographic profile and teaching performance of the teacher participants were found to be ‘not significant’ to their instructional strategy preferences. The results implied that the age, sex, level of education, and length of service of the teachers do not affect their preference for a particular instructional strategy. However, the teacher participants’ extent of use of the different instructional strategies was found to be ‘significant’ to their teaching performance.
The instructional strategies being used by the teachers were found to have a direct effect on their teaching performance. The academic achievement of the student participants was found to be ‘significant’ to the teacher participants’ instructional strategy preferences. The teachers' preference for instructional strategies had a significant effect on the students’ academic performance. On the other hand, the teacher participants’ extent of use of instructional strategies was shown to be ‘not significant’ to the academic achievement of students in Science and Mathematics. The instructional strategy being used by the teachers did not affect the level of performance of students in Science and Mathematics. The results of the study revealed that there was a significant difference between the teacher participants’ preference of instructional strategy and the student participants’ instructional strategy preference, as well as between the teacher participants’ extent of use and the student participants’ perceived level of use of the different instructional strategies. The findings revealed a discrepancy between the teaching strategy preferences of students and the strategies implemented by teachers.
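The two tests named in this abstract can be run on paired data with standard statistical tools. A minimal sketch with synthetic scores (invented for illustration, not the study's data) might look like:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic paired observations for 61 teachers: extent of use of a
# strategy (1-5 scale) and a performance rating -- invented values.
extent_of_use = rng.integers(1, 6, size=61).astype(float)
performance = extent_of_use + rng.normal(0.0, 1.0, size=61)

# Spearman rank correlation: monotonic association between the two measures
rho, p_rho = stats.spearmanr(extent_of_use, performance)

# Wilcoxon signed-rank test: systematic difference between the paired scores
w_stat, p_w = stats.wilcoxon(extent_of_use, performance)
```

Spearman is appropriate here because both measures are ordinal ratings; the Wilcoxon signed-rank test is its natural paired, non-parametric companion.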

Keywords: academic achievement, extent of use, instructional strategy, preferences

Procedia PDF Downloads 291
1085 Positive Disruption: Towards a Definition of Artist-in-Residence Impact on Organisational Creativity

Authors: Denise Bianco

Abstract:

Several studies on innovation and creativity in organisations emphasise the need to expand horizons and take on alternative and unexpected views to produce something new. This paper theorises the potential impact artists can have as creative catalysts, working embedded in non-artistic organisations. It begins from an understanding that in today's ever-changing scenario, organisations are increasingly seeking to open up new creative thinking through deviant behaviours to produce innovation, and that art residencies need to be critically revised in this specific context in light of their disruptive potential. On the one hand, this paper builds upon recent contributions on workplace creativity and the related concepts of deviance and disruption. Research suggests that creativity is likely to be lower in work contexts where utter conformity is a cardinal value and higher in work contexts that show some tolerance for uncertainty and deviance. On the other hand, this paper draws attention to the Artist-in-Residence as a vehicle for epistemic friction between divergent and convergent thinking, which allows the creation of unparalleled ways of knowing in the dailiness of situated and contextualised social processes. In order to do so, this contribution brings together insights from the most relevant theories on organisational creativity, unconventional agile methods such as Art Thinking, and direct insights from ethnographic fieldwork in the context of embedded art residencies within work organisations to propose a redefinition of the Artist-in-Residence and their potential impact on organisational creativity. The result is a re-definition of the embedded Artist-in-Residence in organisational settings from a more comprehensive, multi-disciplinary, and relational perspective that builds on three focal points. First, the notion that organisational creativity is a dynamic and synergistic process throughout which an idea is framed by recurrent activities subjected to multiple influences.
Second, the definition of embedded Artist-in-Residence as an assemblage of dynamic, productive relations and unexpected possibilities for new networks of relationality that encourage the recombination of knowledge. Third, and most importantly, the acknowledgment that embedded residencies are, at the very essence, bi-cultural knowledge contexts where creativity flourishes as the result of open-to-change processes that are highly relational, constantly negotiated, and contextualised in time and space.

Keywords: artist-in-residence, convergent and divergent thinking, creativity, creative friction, deviance and creativity

Procedia PDF Downloads 78
1084 Literary Theatre and Embodied Theatre: A Practice-Based Research in Exploring the Authorship of a Performance

Authors: Rahul Bishnoi

Abstract:

Theatre, as Anne Ubersfeld calls it, is a paradox. At once, it is both a literary work and a physical representation. Theatre as a text is eternal, reproducible, and identical, while as a performance, theatre is momentary and never identical to the previous performances. In this dual existence of theatre, who is the author? Is the author the playwright who writes the dramatic text, the director who orchestrates the performance, or the actor who embodies the text? From the poststructuralist lens of Barthes, the author is dead. Barthes’ argument of discrete temporality, i.e. that the author is the before and the text is the after, does not hold true for theatre. A published literary work is written, edited, printed, distributed, and then consumed by the reader. On the other hand, theatrical production is immediate; an actor performs and the audience witnesses it instantaneously. Time, so to speak, no longer separates the author, the text, and the reader. The question of authorship gets further complicated in Augusto Boal’s “Theatre of the Oppressed” movement, where the audience is a direct participant in the performance, like the actors. In this research, through an experimental performance, the duality of theatre is explored alongside the authorship discourse, and the conventional definition of authorship is subjected to additional complexity by erasing the distinction between actor and audience. The design/methodology of the experimental performance is as follows: the audience will be asked to produce a text under an anonymous virtual alias. The text, as it is being produced, will be read and performed by the actor. The audience, who are also collectively “authoring” the text, will watch this performance and write further until everyone has contributed one input each. The cycle of writing, reading, performing, witnessing, and writing will continue until the end.
The intention is to create a dynamic system of writing/reading with the embodiment of the text through the actor. The actor gives up to the audience the power to write the spoken word, stage instructions, and direction, while still keeping the agency to interpret that input and perform it in the chosen manner. This rapid conversation between the actor and the audience also creates a conversion of authorship. The main conclusion of this study is a perspective on the dynamic nature of theatrical authorship, containing a critical enquiry into the collaboratively produced text, the individually performed act, and the collectively witnessed event. Using practice as a methodology, this paper contests the poststructuralist notion of the author as merely a ‘scriptor’ and breaks it down further by involving the audience in the authorship as well.

Keywords: practice based research, performance studies, post-humanism, Avant-garde art, theatre

Procedia PDF Downloads 82
1083 Analysis of Extreme Rainfall Trends in Central Italy

Authors: Renato Morbidelli, Carla Saltalippi, Alessia Flammini, Marco Cifrodelli, Corrado Corradini

Abstract:

The trend in the magnitude and frequency of extreme rainfalls seems to differ depending on the investigated area of the world. In this work, the impact of climate change on extreme rainfalls in Umbria, an inland region of central Italy, is examined using data recorded during the period 1921-2015 by 10 representative rain gauge stations. The study area is characterized by a complex orography, with altitudes ranging from 200 to more than 2000 m a.s.l. The climate differs considerably from zone to zone, with mean annual rainfall ranging from 650 to 1450 mm and mean annual air temperature from 3.3 to 14.2 °C. Over the past 15 years, this region has been affected by four significant droughts as well as by six dangerous flood events, all with very large economic impact. A least-squares linear trend analysis of annual maxima over 60 time series, selected considering 6 different durations (1 h, 3 h, 6 h, 12 h, 24 h, 48 h), showed about 50% positive and 50% negative cases. For the same time series, the non-parametric Mann-Kendall test at a significance level of 0.05 evidenced only 3% of cases characterized by a significant negative trend and no positive case. Further investigation also demonstrated that the variance and covariance of each time series can be considered almost stationary. Therefore, the analysis of the magnitude of extreme rainfalls indicates that no evident trend in these values exists in the Umbria region. However, the frequency of rainfall events with particularly high rainfall depths occurring during a fixed period also has to be considered. For all selected stations, the 2-day rainfall events exceeding 50 mm were counted for each year, from the first monitored year to the end of 2015. This analysis also did not show predominant trends.
Specifically, for all selected rain gauge stations the annual number of 2-day rainfall events exceeding the threshold value (50 mm) was slowly decreasing in time, while the annual cumulated rainfall depths corresponding to the same events showed trends that were not statistically significant. Overall, using an extensive available dataset and adopting simple methods, no influence of climate change on heavy rainfalls in the Umbria region is detected.
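The Mann-Kendall procedure applied to the annual maxima can be reproduced in a few lines. The sketch below implements the basic statistic without the tie correction and runs it on a made-up series rather than the Umbrian records.

```python
import numpy as np

def mann_kendall(x):
    """Basic Mann-Kendall trend test on a 1-D series (no tie correction).
    Returns the S statistic, the standardized Z score, and a verdict at
    the two-sided 0.05 level (critical value 1.96)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # S: sum of signs of all later-minus-earlier pairwise differences
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    # Continuity-corrected standardization of S
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    if z > 1.96:
        verdict = "increasing"
    elif z < -1.96:
        verdict = "decreasing"
    else:
        verdict = "no trend"
    return s, z, verdict

# A steadily rising annual-maximum series is flagged as a significant trend
s, z, verdict = mann_kendall(np.arange(30.0))
```

For a strictly increasing 30-value series, every pairwise difference is positive, so S equals the number of pairs, 30·29/2 = 435, and the test reports an increasing trend.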

Keywords: climate changes, rainfall extremes, rainfall magnitude and frequency, central Italy

Procedia PDF Downloads 217
1082 A Case Study of the Saudi Arabian Investment Regime

Authors: Atif Alenezi

Abstract:

The low global oil price poses economic challenges for Saudi Arabia, as oil revenues still make up a great percentage of its Gross Domestic Product (GDP). At the end of 2014, the Consultative Assembly considered a report from the Committee on Economic Affairs and Energy which highlighted that the economy had not been successfully diversified. There thus exist ample reasons for modernising the Foreign Direct Investment (FDI) regime, primarily to achieve and maintain prosperity and facilitate peace in the region. This paper therefore aims at identifying specific problems with the existing FDI regime in Saudi Arabia and, subsequently, some solutions to those problems. Saudi Arabia adopted its first specific legislation in 1956, which imposed significant restrictions on foreign ownership. Since then, Saudi Arabia has modernised its FDI framework with the passing of the Foreign Capital Investment Act 1979, the Foreign Investment Law 2000 and its accompanying Executive Rules 2000, and the recently adopted Implementing Regulations 2014. Nonetheless, the legislative provisions contain various gaps, and the failure to address these gaps creates risks and uncertainty for investors. For instance, the important topic of mergers and acquisitions has not been addressed in the Foreign Investment Law 2000. The circumstances in which expropriation can be considered to be in the public interest have not been defined. Moreover, Saudi Arabia has not entered into many bilateral investment treaties (BITs). This has an effect on the investment climate, as foreign investors are not afforded typical rights. An analysis of the BITs which have been entered into reveals that the national treatment standard and stabilisation, umbrella, or renegotiation provisions have not been included. This is problematic, since the 2000 Act does not spell out the applicable standard in accordance with which foreign investors should be treated.
Moreover, the most-favoured-nation (MFN) and fair and equitable treatment (FET) standards have not been put on a statutory footing. Whilst the Arbitration Act 2012 permits investment disputes to be internationalised, restrictions have been retained. The effectiveness of international arbitration is further undermined because Saudi Arabia does not enforce non-domestic arbitral awards which contravene public policy. Furthermore, its reservation to the Convention on the Settlement of Investment Disputes allows Saudi Arabia to exclude petroleum and sovereign disputes. Interviews with foreign investors who operate in Saudi Arabia highlight additional issues. Saudi Arabia ought not to delay far-reaching structural reforms.

Keywords: FDI, Saudi, BITs, law

Procedia PDF Downloads 390
1081 The Reality of Food Scarcity in Madhya Pradesh: Is It a Glimpse or Not?

Authors: Kalyan Sundar Som, Ghanshyam Prasad Jhariya

Abstract:

Population growth is an important and pervasive phenomenon in the world. Human survival depends upon many daily needs, and food is one of them. Population factors play a decisive role in the human endeavour to attain food. Nutrition and health status compose an integral part of human development and the progress of a society; therefore, neglect of any one of these components may lead to a deterioration in the quality of life. Food is also intimately related to economic growth and social progress, as well as to political stability and peace. Food security refers to the availability of food and access to it, and can be observed from the global to the local level. Food scarcity has emerged as a matter of great concern all over the world due to uncontrolled and unregulated population growth. This study therefore tries to find out the deficit or surplus of food availability relative to the total population in the study area. It also ascertains population pressure, the demand and supply of foodstuffs, and the demarcation of insecure areas. The database of the study includes government-published data on agricultural production, yield, and cropped area for 2005-06 to 2011-12, available from the Commissioner of Land Records, Madhya Pradesh, Gwalior, as well as population data from the Census of India. The measurement of food-secure or food-insecure regions is based on the net food available in terms of caloric value minus the consumption of the weighted total population. This approach has been adopted because the direct estimation of production and consumption is the only reliable way to ascertain food security in a unit area and to compare one area with another (Noor Mohammad, Dec. 2002). In 2005-06, 57.78 percent of districts had insufficient food in terms of their population. On the other hand, five years later only 22% of districts were deficit in terms of food availability, with Burhanpur the most deficit district (56 percent).
Meanwhile, 20% of districts in the state were highly surplus, with Harda and Hoshangabad being very high surplus districts (5 times and 3.95 times, respectively) in terms of food availability (2011). This drastic change (agricultural transformation) happened due to effective government intervention in the agricultural sector.
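The caloric-value balance described above reduces to comparing the calories available from net production with the calories required by the population. A toy sketch follows; the caloric value per kilogram and the daily requirement are assumed illustrative constants, not the study's figures.

```python
# Assumed conversion constants -- illustrative, not the study's values
KCAL_PER_KG_FOODGRAIN = 3400.0      # caloric value of foodgrains, kcal/kg
KCAL_PER_PERSON_PER_DAY = 2200.0    # per-capita daily requirement, kcal

def food_balance(net_production_tonnes, population):
    """Classify a district as surplus or deficit by the caloric-value method."""
    available = net_production_tonnes * 1000.0 * KCAL_PER_KG_FOODGRAIN  # kcal/year
    required = population * KCAL_PER_PERSON_PER_DAY * 365.0             # kcal/year
    ratio = available / required
    return ("surplus" if ratio >= 1.0 else "deficit"), ratio

# A hypothetical district producing 500,000 t for 1,000,000 people
status, ratio = food_balance(500_000, 1_000_000)
```

The ratio expresses how many times over the district can feed its population, which is how a "5 times surplus" district like Harda would be identified.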

Keywords: agriculture transformation, caloric value method, deficit or surplus region, population pressure

Procedia PDF Downloads 419
1080 Characterising Performative Technological Innovation: Developing a Strategic Framework That Incorporates the Social Mechanisms That Promote Change within a Technological Environment

Authors: Joan Edwards, J. Lawlor

Abstract:

Technological innovation is frequently defined in terms of bringing a new invention to market through a relatively straightforward process of diffusion. In reality, this process is complex and non-linear in nature, and includes social and cognitive factors that influence the development of an emerging technology and its related market or environment. As recent studies contend that technological trajectories are part of technological paradigms, which arise from the expectations and desires of industry agents and result in co-evolution, it may be realised that social factors play a major role in the development of a technology. It is conjectured that collective social behaviour is fuelled by individual motivations and expectations, which inform the possibilities and uses for a new technology. The individual outlook highlights the issues present at the micro level of developing a technology. Accordingly, this may be zoomed out to realise how these embedded social structures influence activities and expectations at a macro level and can ultimately strategically shape the development and use of a technology. These social factors rely on communication to foster the innovation process. As innovation may be defined as the implementation of inventions, technological change results from the complex interactions and feedback occurring within an extended environment. The framework presented in this paper recognises that social mechanisms provide the basis for an iterative dialogue between an innovator, a new technology, and an environment, within which social and cognitive ‘identity-shaping’ elements of the innovation process occur. Identity-shaping characteristics indicate that an emerging technology has a performative nature that transforms, alters, and ultimately configures the environment it joins. This identity-shaping quality is termed ‘performative’.
This paper examines how technologies evolve within a socio-technological sphere and how 'performativity' facilitates the process. A framework is proposed that incorporates the performative elements, identified as feedback, iteration, routine, expectations, and motivations. Additionally, the concept of affordances is employed to determine how the roles of the innovator and the technology change over time, constituting a more conducive environment for successful innovation.

Keywords: affordances, framework, performativity, strategic innovation

Procedia PDF Downloads 192
1079 Effect of Immunocastration Vaccine Administration at Different Doses on Performance of Feedlot Holstein Bulls

Authors: M. Bolacali

Abstract:

The aim of the study is to determine the effect of immunocastration vaccine administration at different doses on the fattening performance of feedlot Holstein bulls. Bopriva® is a vaccine that stimulates the animal's own immune system to produce specific antibodies against gonadotropin-releasing factor (GnRF). Ninety-four Holstein male calves (309.5 ± 2.58 kg live weight, 267 d old) were assigned to the four treatments. In the control group, 1 mL of 0.9% saline solution was injected subcutaneously into intact bulls on the 1st and 60th days of the feedlot period as a placebo. On the same days, Bopriva® was injected subcutaneously at doses of 1 mL and 1 mL for the Trial-1 group, 1.5 mL and 1.5 mL for the Trial-2 group, and 1.5 mL and 1 mL for the Trial-3 group. The study was conducted in a private establishment in the Sirvan district of Siirt province and lasted 180 days. The animals were weighed at the beginning of fattening and at 30-day intervals to determine their live weights at various periods. The statistical analysis of the normally distributed data of the treatment groups was carried out with the general linear model procedure of the SPSS software. The initial live weights in the Control, Trial-1, Trial-2, and Trial-3 groups were, respectively, 309.21, 306.62, 312.11, and 315.39 kg; the final live weights were, respectively, 560.88, 536.67, 548.56, and 548.25 kg; the daily live weight gains during the trial were, respectively, 1.40, 1.28, 1.31, and 1.29 kg/day; and the cold carcass yields were, respectively, 51.59%, 50.32%, 50.85%, and 50.77%. Immunocastration vaccine administration at different doses did not affect the live weights and cold carcass yields of Holstein male calves reared under intensive conditions (P > 0.05). However, it was found to reduce fattening performance between days 61-120 (P < 0.05) and days 1-180 (P < 0.01).
In addition, it was determined that the best performance among the vaccine-treated groups occurred in the group administered 1.5 mL of vaccine on the 1st and 60th study days. In animals, castration is used to control fertility and aggressive and sexual behaviours. Since physical castration induces stress, active immunization against GnRF maintains performance while maximizing welfare in bulls, improves carcass and meat quality, and controls unwanted sexual and aggressive behaviour. Considering these features, it may be suggested that immunocastration with Bopriva® can be administered as a 1.5 mL dose on the 1st and 60th days of the fattening period in Holstein bulls.
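The group comparison performed here with the general linear model procedure of SPSS can be approximated by a one-way ANOVA. In the sketch below the group means mimic the reported daily gains (1.40, 1.28, 1.31, 1.29 kg/day), but the individual records and the within-group spread are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic daily live-weight gains (kg/day), ~24 bulls per group;
# means echo the reported values, individual records are invented.
group_means = (1.40, 1.28, 1.31, 1.29)   # Control, Trial-1, Trial-2, Trial-3
groups = [rng.normal(mu, 0.12, size=24) for mu in group_means]

# One-way ANOVA across the four treatment groups
f_stat, p_value = stats.f_oneway(*groups)
```

A p-value below 0.05 in such a comparison would correspond to the significant dose effect on fattening performance reported for days 61-120.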

Keywords: anti-GnRF, fattening, growth, immunocastration

Procedia PDF Downloads 170
1078 Conflation Methodology Applied to Flood Recovery

Authors: Eva L. Suarez, Daniel E. Meeroff, Yan Yong

Abstract:

Current flooding risk modeling focuses on resilience, defined as the probability of recovery from a severe flooding event. However, the long-term damage to property and well-being caused by nuisance flooding, and its long-term effects on communities, are not typically included in risk assessments. An approach was developed to address the probability of recovering from a severe flooding event combined with the probability of community performance during a nuisance event. A consolidated model, namely the conflation flooding recovery (CFR) model, evaluates risk-coping mitigation strategies for communities based on the recovery time from catastrophic events, such as hurricanes or extreme surges, and from everyday nuisance flooding events. The CFR model assesses the variation contribution of each independent input and generates a weighted output that favors the distribution with minimum variation. This approach is especially useful if the input distributions have dissimilar variances. The CFR is defined as a single distribution resulting from the product of the individual probability density functions. The resulting conflated distribution resides between the parent distributions, and it infers the recovery time required by a community to return to basic functions, such as power, utilities, transportation, and civil order, after a flooding event. The CFR model is more accurate than averaging individual observations before calculating the mean and variance, or than averaging the probabilities evaluated at the input values, which assigns the same weighted variation to each input distribution. The main disadvantage of these traditional methods is that the resulting measure of central tendency is exactly equal to the average of the input distributions' means, without the additional information provided by each individual distribution's variance.
When dealing with exponential distributions, such as resilience from severe flooding events and from nuisance flooding events, conflation results are equivalent to the weighted least squares method or best linear unbiased estimation. The combination of severe flooding risk with nuisance flooding improves flood risk management for highly populated coastal communities, such as in South Florida, USA, and provides a method to estimate community flood recovery time more accurately from two different sources, severe flooding events and nuisance flooding events.
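The product-of-densities definition above can be illustrated numerically for two exponential recovery-time distributions; the rates below are hypothetical examples, not values fitted in the study:

```python
import numpy as np

lam_severe, lam_nuisance = 0.5, 2.0   # hypothetical recovery rates (per week)
x = np.linspace(0.0, 20.0, 20001)
dx = x[1] - x[0]
f_severe = lam_severe * np.exp(-lam_severe * x)
f_nuisance = lam_nuisance * np.exp(-lam_nuisance * x)

# Conflation: normalised product of the parent probability density functions
product = f_severe * f_nuisance
conflated = product / (product.sum() * dx)

# For exponentials the conflation is again exponential with rate lam1 + lam2,
# so the conflated mean recovery time is analytically 1/(0.5 + 2.0) = 0.4 weeks
mean_recovery = (x * conflated).sum() * dx
print(round(float(mean_recovery), 3))
```

The numerical mean agrees with the closed-form value, illustrating why, for exponential inputs, conflation reproduces the behaviour of the weighted least squares estimator mentioned above.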

Keywords: community resilience, conflation, flood risk, nuisance flooding

Procedia PDF Downloads 75
1077 Learning Mathematics Online: Characterizing the Contribution of Online Learning Environment’s Components to the Development of Mathematical Knowledge and Learning Skills

Authors: Atara Shriki, Ilana Lavy

Abstract:

Teaching an online course dealing with the history of mathematics for the first time, we struggled with questions related to the design of a proper learning environment (LE). Thirteen high school mathematics teachers, M.Ed. students, attended the course. The teachers were engaged in independent reading of mathematical texts, a task recognized as complex due to the unique characteristics of such texts. In order to support the learning processes and develop skills that are essential for succeeding in online learning (e.g. self-regulated learning skills, meta-cognitive skills, reflective ability, and self-assessment skills), the LE comprised three components aimed at "scaffolding" the learning: (1) online "self-feedback" questionnaires that included drill-and-practice questions; after responding to the questions, the online system provided a grade and the teachers were allowed to correct their answers; (2) open-ended questions aimed at stimulating critical thinking about the mathematical contents; (3) reflective questionnaires designed to assist the teachers in steering their learning. Using a mixed-method methodology, an inquiry study examined the learning processes, the learners' difficulties in reading the mathematical texts, and the unique contribution of each component of the LE to the teachers' ability to comprehend the mathematical contents and to the development of their learning skills. The results indicate that the teachers found the online feedback most helpful in developing self-regulated learning skills and the ability to reflect on deficiencies in knowledge. Lacking previous experience in expressing opinions on mathematical ideas, the teachers had trouble responding to the open-ended questions; however, they perceived this assignment as nurturing cognitive and meta-cognitive skills. The teachers also attested that the reflective questionnaires were useful for steering their learning. Although in general the teachers found the LE supportive, most of them indicated the need to strengthen instructor-learner and learner-learner interactions. They suggested creating an online forum to enable them to receive direct feedback from the instructor, share ideas with other learners, and consult with them about solutions. Apparently, within an online LE, supporting learning merely with respect to cognitive aspects is not sufficient. Learners also need emotional support and a sense of social presence.

Keywords: cognitive and meta-cognitive skills, independent reading of mathematical texts, online learning environment, self-regulated learning skills

Procedia PDF Downloads 602
1076 Tunnel Convergence Monitoring by Distributed Fiber Optics Embedded into Concrete

Authors: R. Farhoud, G. Hermand, S. Delepine-Lesoille

Abstract:

The future French radioactive waste disposal facility, named Cigeo, is designed to store intermediate-level and high-level long-lived French radioactive waste. Intermediate-level waste cells are tunnel-like, about 400 m long with a 65 m² section, equipped with several concrete layers, which can be grouted in situ or composed of pre-grouted tunnel elements. The operating space inside the cells, needed to insert or remove waste containers, should be monitored for several decades without any maintenance. To provide the required information, a design was developed and tested in situ in Andra's underground laboratory (URL), 500 m below the surface. Based on distributed optical fiber sensors (OFS), with backscattered Brillouin interrogation for strain and Raman interrogation for temperature, the design consists of two loops of OFS, at two different radii, around the monitored section (orthoradial strains) and longitudinally. Strains measured by the distributed OFS cables were compared to classical vibrating wire extensometers (VWE) and platinum probes (Pt). The OFS cables comprised two cables sensitive to strains and temperatures and one sensitive only to temperatures. All cables were connected, between the sensitive part and the instruments, to hybrid cables to reduce cost. The connection was made according to two techniques: splicing fibers in situ after installation, or preparing each fiber with a connector and simply plugging them together in situ. Another challenge was installing the OFS cables without interruption along a tunnel built in several parts. The first success was the survival rate of the sensors after installation and the quality of the measurements. Indeed, 100% of the OFS cables intended for long-term monitoring survived installation. A few new configurations were tested with relative success. The measurements obtained were very promising. Indeed, after three years of data, no difference was observed between the OFS cables and connection methods, and the strains fitted well with the VWE and Pt measurements at the same locations. Data from the Brillouin instrument, which is sensitive to both strains and temperatures, were compensated with data provided by the Raman instrument, sensitive only to temperature, in a separate fiber. These results provide confidence for the next steps of the qualification process, which consist of testing several data treatment approaches for direct analyses.
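The temperature compensation described here, subtracting the Raman-measured temperature contribution from the Brillouin frequency shift, can be sketched as follows; the coefficients are typical textbook values for standard single-mode fibre and are assumptions, not the calibrated values used at the URL:

```python
# Hypothetical sensitivities for standard single-mode fibre; in practice the
# installed cables would be calibrated individually.
C_EPS = 0.05  # Brillouin strain sensitivity, MHz per microstrain (assumed)
C_T = 1.0     # Brillouin temperature sensitivity, MHz per kelvin (assumed)

def compensated_strain(delta_nu_mhz, delta_t_kelvin):
    """Strain (microstrain) from a Brillouin frequency shift, after removing
    the temperature contribution measured independently by Raman sensing."""
    return (delta_nu_mhz - C_T * delta_t_kelvin) / C_EPS

# A 60 MHz shift, of which 10 K of warming explains 10 MHz,
# leaves about 1000 microstrain of mechanical strain
print(compensated_strain(60.0, 10.0))
```

This is the reason a dedicated temperature-only fiber is routed alongside the strain-sensing cables: without the Raman channel, the Brillouin shift conflates the two effects.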

Keywords: monitoring, fiber optic, sensor, data treatment

Procedia PDF Downloads 112
1075 Activated Carbon Content Influence in Mineral Barrier Performance

Authors: Raul Guerrero, Sandro Machado, Miriam Carvalho

Abstract:

Soil and aquifer pollution caused by hydrocarbon liquid spills is induced by misguided operational practices and inefficient safety guidelines. According to the Brazilian Environmental Institute (IBAMA), during 2013 alone, over 472.13 m3 of diesel oil leaked into the environment nationwide, considering reported cases only. In view of this, there is an indisputable need to adopt appropriate environmental safeguards, especially in those areas intended for the production, treatment, transportation and storage of hydrocarbon fluids. According to the Brazilian standard ABNT-NBR 7505-1:2000, compacted soil or mineral barriers used in structural contingency levees, such as those around storage tanks, are required to present a maximum water permeability coefficient, k, of 1x10-6 cm/s. However, as discussed by several authors, water cannot be adopted as the reference fluid to determine the site's containment performance against organic fluids, mainly due to the great discrepancy in polarity (dielectric constant) between water and most organic fluids. Previous studies within this same research group proposed an optimal range of values for the soil's index properties for mineral barrier composition focused on organic fluid containment. Unfortunately, in some circumstances it is not possible to find a soil with the required geotechnical characteristics near the containment site, increasing prevention and construction costs as well as environmental risks. For these specific cases, the use of an organic product or material as an additive to enhance mineral-barrier containment performance may be an attractive geotechnical solution. This paper evaluates the effect of activated carbon (AC) additions to a clayey soil on hydrocarbon fluid permeability. Variables such as compaction energy, carbon texture and addition content (0%, 10% and 20%) were analyzed through laboratory falling-head permeability tests using distilled water and commercial diesel as percolating fluids. The results showed that the AC with the smaller particle size significantly reduced k values against diesel, indicating a direct relationship between the particle-size reduction (surface area increase) of the organic product and organic fluid containment.
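Since permeability is measured here via falling-head tests, the standard reduction of the test data can be sketched; the geometry and heads below are illustrative assumptions, not measurements from this study:

```python
import math

def falling_head_k(a, A, L, t, h1, h2):
    """Permeability coefficient k (cm/s) from a falling-head test.
    a: standpipe cross-section (cm^2), A: sample cross-section (cm^2),
    L: sample length (cm), t: elapsed time (s), h1, h2: initial/final heads (cm)."""
    return (a * L) / (A * t) * math.log(h1 / h2)

# Illustrative geometry and heads only (not data from this study)
k = falling_head_k(a=0.5, A=80.0, L=10.0, t=3600.0, h1=100.0, h2=60.0)
print(f"k = {k:.2e} cm/s")
```

The computed k is then compared against the 1x10-6 cm/s threshold of ABNT-NBR 7505-1:2000, once for water and once for diesel as the percolating fluid.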

Keywords: activated carbon, clayey soils, permeability, surface area

Procedia PDF Downloads 241
1074 Modelling Distress Sale in Agriculture: Evidence from Maharashtra, India

Authors: Disha Bhanot, Vinish Kathuria

Abstract:

This study focuses on the issue of distress sale in the horticulture sector in India, which faces unique challenges given the perishable nature of horticulture crops, seasonal production, and the paucity of post-harvest produce management links. Distress sale, from a farmer's perspective, may be defined as the urgent sale of normal or distressed goods at deeply discounted prices (well below the cost of production), usually characterized by unfavorable conditions for the seller (farmer). Small and marginal farmers, often involved in subsistence farming, stand to lose substantially if they receive prices lower than expected (typically framed in relation to the cost of production). Distress sale heightens the price uncertainty of produce, leading to substantial income loss; and with rising input costs of farming, the high variability in harvest price severely affects farmers' profit margins, thereby affecting their survival. The objective of this study is to model the occurrence of distress sale by tomato cultivators in the Indian state of Maharashtra, against the background of differential access to a set of factors such as capital, irrigation facilities, warehousing, storage and processing facilities, and institutional arrangements for procurement. Data are being collected through a primary survey of over 200 farmers in key tomato-growing areas of Maharashtra, seeking information on the above factors in addition to the cost of cultivation, selling price, time gap between harvesting and selling, and the role of middlemen in selling, besides other socio-economic variables. Farmers selling their produce far below the cost of production would indicate an occurrence of distress sale. The occurrence of distress sale would then be modelled as a function of farm, household and institutional characteristics. A Heckman two-stage model would be applied to estimate the probability of a farmer falling into distress sale, as well as to ascertain how the extent of distress sale varies in the presence or absence of various factors. The findings of the study would recommend suitable interventions and promote strategies that would help farmers better manage price uncertainties, avoid distress sale and increase profit margins, with direct implications for poverty.
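A Heckman two-stage estimation of the kind proposed can be sketched on synthetic data; the covariates, coefficients and variable names below are illustrative assumptions, not the study's survey variables:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 1000
# Hypothetical covariates: intercept, farm size (ha), storage access (0/1),
# distance to market (km)
X = np.column_stack([np.ones(n), rng.normal(2, 1, n),
                     rng.integers(0, 2, n), rng.normal(10, 3, n)])

# Stage-1 truth: who sells in distress (latent probit, assumed coefficients)
z = X @ np.array([0.5, -0.3, -0.8, 0.05]) + rng.normal(size=n)
d = (z > 0).astype(float)
# Outcome (price discount), observed for distress sellers only
y = X @ np.array([0.4, -0.05, 0.0, 0.02]) + rng.normal(scale=0.1, size=n)

# Stage 1: probit maximum likelihood on the full sample
def negll(b):
    p = np.clip(norm.cdf(X @ b), 1e-10, 1 - 1e-10)
    return -(d * np.log(p) + (1 - d) * np.log(1 - p)).sum()

beta = minimize(negll, np.zeros(4)).x
mills = norm.pdf(X @ beta) / norm.cdf(X @ beta)   # inverse Mills ratio

# Stage 2: least squares on the selected subsample, adding the Mills ratio
# to correct for selection into distress sale
sel = d == 1
X2 = np.column_stack([X[sel], mills[sel]])
coef, *_ = np.linalg.lstsq(X2, y[sel], rcond=None)
print(coef.round(3))
```

The Mills-ratio term is what distinguishes this from a naive regression on distress sellers only; a significant coefficient on it would indicate selection bias in who ends up selling in distress.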

Keywords: distress sale, horticulture, income loss, India, price uncertainty

Procedia PDF Downloads 219
1073 Ethical Issues in AI: Analyzing the Gap Between Theory and Practice - A Case Study of AI and Robotics Researchers

Authors: Sylvie Michel, Emmanuelle Gagnou, Joanne Hamet

Abstract:

New major ethical dilemmas are posed by artificial intelligence. This article identifies a gap between the ethical questions that AI/robotics researchers grapple with in their research practice and those identified by the literature. The objective is to understand which ethical dilemmas are identified by, or concern, AI researchers, in order to compare them with the existing literature. This will make it possible to conduct training and awareness initiatives for AI researchers, encouraging them to consider these questions during the development of AI. Qualitative analyses were conducted based on direct observation, over several months, of an AI/robotics research team focused on collaborative robotics. Subsequently, semi-structured interviews were conducted with 16 members of the team. The entire process took place during the first semester of 2023. The observations were analyzed using an analytical framework, and the interviews were thematically analyzed using the NVivo software. While the literature identifies three primary ethical concerns regarding AI (transparency, bias, and responsibility), the results first demonstrate that AI researchers are primarily concerned with the publication and valorization of their work, with their initial ethical concerns revolving around this matter. Questions arise regarding the extent to which publications should be "marketed" and the usefulness of some publications. Research ethics are a central consideration for these teams. Second, the researchers studied adopt a consequentialist ethics (though not explicitly formulated as such): they ponder the consequences of their developments in terms of safety (for humans in relation to robots/AI), worker autonomy in relation to the robot, and the role of work in society (can robots take over jobs?). Lastly, the results indicate that the ethical dilemmas highlighted in the literature (responsibility, transparency, bias) do not explicitly appear in AI/robotics research practice. AI/robotics researchers raise specific and pragmatic ethical questions, concerning publications first and consequentialist considerations afterward. The results demonstrate that these concerns are distant from the existing literature; however, the dilemmas highlighted in the literature also deserve to be explicitly contemplated by researchers. This article proposes that the journals these researchers target should mandate ethical reflection for all submitted works. Furthermore, the results suggest offering awareness programs in the form of short educational sessions for researchers.

Keywords: ethics, artificial intelligence, research, robotics

Procedia PDF Downloads 55
1072 Populism and National Unity: A Discourse Analysis of Poverty Eradication Strategies of Three Malaysian Prime Ministers

Authors: Khairil Ahmad, Jenny Gryzelius, Mohd Helmi Mohd Sobri

Abstract:

With the waning support for centrist 'third-way' politics across the Western world, there has been an increase in political parties and individual candidates relying on populist political discourse and rhetoric in order to capitalize on the sense of frustration apparent within the electorate. What is of note is the divergence in the discourses employed. On the one hand, there is a growing wave of populist right-wing parties and politicians employing a mixture of economic populism and divisive nationalistic ideals such as restricted immigration, for example the UK's UKIP and Donald Trump in the US. On the other hand, there are resurgent, often grassroots-led, left-wing movements and politicians, such as Podemos in Spain and Jeremy Corbyn in the UK, focusing on anti-austerity measures and inclusive policies. In general, the label of populism is often applied in a pejorative way, despite the success of populist left-wing governments across Latin America in recent times, especially in terms of reducing poverty. Nonetheless, scholars such as Ernesto Laclau have sought to rethink populism as a social scientific concept that is essential in helping us make sense of contemporary political articulations. Using Laclau's framework, this paper analyzes poverty reduction policies in their different iterations across the tenures of three Prime Ministers of Malaysia. The first is Abdul Razak Hussein's New Economic Policy, which focused on uplifting the economic position of Malaysia's majority Malay population. The second is Mahathir Mohamad's state-led neo-liberalization of the Malaysian economy, which focused on the creation of a core group of crony elites to spearhead economic development. The third is current Prime Minister Najib Razak's targeted poverty eradication strategy, a focused program which provides benefits, such as direct cash transfers, straight to recipients. The paper employs a discursive approach to trace elements of populism in these cases and to highlight how their strategies are articulated in ways that appeal to particular visions of national unity.

Keywords: discourse analysis, Malaysia, populism, poverty eradication

Procedia PDF Downloads 299
1071 Developing Environmental Engineering Alternatives for Deep Desulphurization of Transportation Fuels

Authors: Nalinee B. Suryawanshi, Vinay M. Bhandari, Laxmi Gayatri Sorokhaibam, Vivek V. Ranade

Abstract:

Deep desulphurization of transportation fuels is a major environmental concern all over the world, and recently prescribed norms require sulphur concentrations below 10 ppm in fuels such as diesel and gasoline. The existing technologies, largely based on catalytic processes such as hydrodesulphurization and oxidation, require newer catalysts and entail a high cost of deep desulphurization, whereas adsorption-based processes are limited by their lower sulphur removal capacity. The present work is an attempt to provide an alternative to the existing methodologies using a newer non-catalytic process based on hydrodynamic cavitation. The developed process requires appropriately combining organic and aqueous phases under ambient conditions and passing the mixture through a cavitating device such as an orifice, a venturi or a vortex diode. The implosion of vapour cavities formed in the cavitating device generates oxidizing species in situ, which react with the sulphur moiety, resulting in the removal of sulphur from the organic phase. In this work, an orifice was used as the cavitating device, and deep desulphurization was demonstrated for the removal of thiophene as a model sulphur compound from synthetic fuels of n-octane, toluene and n-octanol. The effects of sulphur concentration (up to 300 ppm), the nature of the organic phase and pressure drop (0.5 to 10 bar) are discussed. A very high removal of sulphur content, of more than 90%, was demonstrated. The process is easy to operate, works essentially at ambient conditions, and the ratio of aqueous to organic phase can be easily adjusted to maximise sulphur removal. Experimental studies were also carried out using commercial diesel as the solvent, and the results substantiate similarly high sulphur removal. A comparison of the two cavitating devices, one with a linear flow and one using vortex flow for effecting pressure drop and cavitation, indicates similar trends in terms of sulphur removal behaviour. The developed process is expected to provide an attractive environmental engineering alternative for deep desulphurization of transportation fuels.

Keywords: cavitation, petroleum, separation, sulphur removal

Procedia PDF Downloads 354
1070 Evaluating the Ability to Cycle in Cities Using Geographic Information Systems Tools: The Case Study of Greek Modern Cities

Authors: Christos Karolemeas, Avgi Vassi, Georgia Christodoulopoulou

Abstract:

Although over the past decades planning a cycle network has become an inseparable part of all transportation plans, there is still much room for improvement in the way planning is carried out, in order to create safe and direct cycling networks that incorporate the parameters that positively influence one's decision to cycle. The aim of this article is to study, evaluate and visualize the bikeability of cities. The term is often used to mean 'the ability of a person to bike'; this study, however, adopts the sense of bikeability as 'the ability of the urban landscape to be biked'. The methodology included assessing cities' accessibility by cycling, based on the international literature and corresponding walkability methods, and the creation of a 'bikeability index'. Initially, a literature review was conducted to identify the factors that positively affect the use of bicycle infrastructure. Those factors were used to create the spatial index and quantitatively compare the city networks. Finally, the bikeability index was applied in two case studies: two Greek municipalities that, although similar in terms of land uses, population density and traffic congestion, are totally different in terms of geomorphology. The factors suggested by the international literature were (a) safety, (b) directness, (c) comfort and (d) the quality of the urban environment. Those factors were quantified through the following parameters: slope, junction density, traffic density, traffic speed, natural environment, built environment, activities coverage, centrality and accessibility to public transport stations. Each road section was graded for the above-mentioned parameters, and the overall grade shows its level of bicycle accessibility (low, medium, high). Each parameter, as well as the overall accessibility levels, was analyzed and visualized through Geographic Information Systems. This paper presents the bikeability index, its results, the problems that arose, and the conclusions from its implementation through a Strengths-Weaknesses-Opportunities-Threats (SWOT) analysis. The purpose of this index is to make it easy for researchers, practitioners, politicians and stakeholders to quantify, visualize and understand which parts of the urban fabric are suitable for cycling.
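A composite index of this kind reduces, per road section, to a weighted sum of normalised parameter grades; the weights, parameter values and class breakpoints below are illustrative assumptions, not the calibration used in the paper:

```python
import numpy as np

# Three hypothetical road sections, each parameter pre-normalised to [0, 1]
# (1 = favourable for cycling). Only a subset of the paper's parameters is shown.
params = {
    "slope":               np.array([0.90, 0.40, 0.70]),
    "junction_density":    np.array([0.80, 0.50, 0.60]),
    "traffic_speed":       np.array([0.70, 0.30, 0.95]),
    "natural_environment": np.array([0.60, 0.80, 0.50]),
}
# Illustrative weights summing to 1 (assumed, not the paper's values)
weights = {"slope": 0.3, "junction_density": 0.2,
           "traffic_speed": 0.3, "natural_environment": 0.2}

score = sum(w * params[name] for name, w in weights.items())
# Grade each section into the three accessibility levels used in the paper
levels = np.select([score < 0.5, score < 0.7], ["low", "medium"], default="high")
print(list(zip(score.round(3), levels)))
```

In a GIS workflow, `score` would be computed per road-segment attribute row and joined back to the network geometry for visualization.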

Keywords: accessibility, cycling, green spaces, spatial data, urban environment

Procedia PDF Downloads 94
1069 Estimating the Ladder Angle and the Camera Position From a 2D Photograph Based on Applications of Projective Geometry and Matrix Analysis

Authors: Inigo Beckett

Abstract:

In forensic investigations, it is often the case that the most potentially useful recorded evidence derives from coincidental imagery, recorded immediately before or during an incident, and that during the incident (e.g. a 'failure' or fire event) the evidence is changed or destroyed. To an image analysis expert involved in photogrammetric analysis for civil or criminal proceedings, traditional computer vision methods involving calibrated cameras are often not appropriate, because image metadata cannot be relied upon. This paper presents an approach to resolving this problem, considering in particular, by way of a case study, the angle of a simple ladder shown in a photograph. The UK Health and Safety Executive (HSE) guidance document published in 2014 (INDG455) advises that a leaning ladder should be erected at 75 degrees to the horizontal. Personal injury cases can arise in the construction industry because a ladder is too steep or too shallow, and ad-hoc photographs of such ladders in their incident position provide a basis for analysis of their angle. This paper presents a direct approach for ascertaining the position of the camera and the angle of the ladder simultaneously from the photograph(s), by way of a workflow that encompasses a novel application of projective geometry and matrix analysis. Mathematical analysis shows that, for a given pixel ratio of directly measured collinear points (i.e. features that lie on the same line segment) in the 2D digital photograph with respect to a given viewing point, the 3D camera position can be constrained to the surface of a sphere in the scene. Depending on what is known about the ladder, a further independent constraint can be enforced on the possible camera positions, narrowing them down even further. Experiments were conducted using synthetic and real-world data. The synthetic data modeled a ladder on a horizontally flat plane resting against a vertical wall. The real-world data were captured using an Apple iPhone 13 Pro together with 3D laser scan survey data, whereby a ladder was placed at a known location and angle to the vertical axis. For each case, camera positions and ladder angles were calculated using this method and cross-compared against their respective 'true' values.

Keywords: image analysis, projective geometry, homography, photogrammetry, ladders, forensics, mathematical modeling, planar geometry, matrix analysis, collinearity, cameras, photographs

Procedia PDF Downloads 27
1068 Automatic Differential Diagnosis of Melanocytic Skin Tumours Using Ultrasound and Spectrophotometric Data

Authors: Kristina Sakalauskiene, Renaldas Raisutis, Gintare Linkeviciute, Skaidra Valiukeviciene

Abstract:

Cutaneous melanoma is a melanocytic skin tumour (MST) that has a very poor prognosis, as it is highly resistant to treatment and tends to metastasize. The thickness of a melanoma is one of the most important biomarkers for the stage of the disease, prognosis and surgery planning. In this study, we hypothesized that automatic analysis of spectrophotometric images and high-frequency ultrasonic 2D data can improve the differential diagnosis of cutaneous melanoma and provide additional information about tumour penetration depth. This paper presents a novel complex automatic system for non-invasive melanocytic skin tumour differential diagnosis and penetration depth evaluation. The system is composed of region-of-interest segmentation in spectrophotometric images and high-frequency ultrasound data, quantitative parameter evaluation, informative feature extraction and classification with a linear regression classifier. The segmentation of the melanocytic skin tumour region in the ultrasound image is based on parametric integrated backscattering coefficient calculation. The segmentation of the optical image is based on Otsu thresholding. In total, 29 quantitative tissue characterization parameters were evaluated using the ultrasound data (11 acoustical, 4 shape and 15 textural parameters), along with 55 quantitative features of the dermatoscopic and spectrophotometric images (using total melanin, dermal melanin, blood and collagen SIAgraphs acquired with the SIAscope spectrophotometric imaging device). In total, 102 melanocytic skin lesions (including 43 cutaneous melanomas) were examined using the SIAscope and an ultrasound system with a 22 MHz center frequency single-element transducer. The diagnosis and Breslow thickness (pT) of each MST were evaluated during routine histological examination after excision and used as references. The results of this study show that automatic analysis of spectrophotometric and high-frequency ultrasound data can improve the non-invasive classification accuracy of early-stage cutaneous melanoma and provide supplementary information about tumour penetration depth.
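The optical-image segmentation relies on Otsu thresholding; a minimal self-contained version (implemented directly in NumPy rather than with any particular imaging library, and run on synthetic bimodal intensities, not SIAscope data) looks like:

```python
import numpy as np

def otsu_threshold(image, nbins=256):
    """Return the intensity threshold that maximises between-class variance."""
    hist, edges = np.histogram(image.ravel(), bins=nbins)
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(hist)                       # pixels at or below each bin
    w1 = w0[-1] - w0                           # pixels above each bin
    cum_mean = np.cumsum(hist * centers)
    m0 = cum_mean / np.maximum(w0, 1)          # "background" class mean
    m1 = (cum_mean[-1] - cum_mean) / np.maximum(w1, 1)  # "foreground" class mean
    sigma_b = w0[:-1] * w1[:-1] * (m0[:-1] - m1[:-1]) ** 2
    return centers[np.argmax(sigma_b)]

# Synthetic bimodal "skin vs. lesion" intensities, not real imaging data
rng = np.random.default_rng(1)
img = np.concatenate([rng.normal(0.2, 0.05, 500), rng.normal(0.8, 0.05, 500)])
t = otsu_threshold(img)
mask = img > t   # segmented "lesion" region
print(round(float(t), 2))
```

For a clearly bimodal intensity histogram such as this one, the threshold lands between the two modes, which is exactly the property exploited when separating lesion from surrounding skin.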

Keywords: cutaneous melanoma, differential diagnosis, high-frequency ultrasound, melanocytic skin tumours, spectrophotometric imaging

Procedia PDF Downloads 254
1067 Microscale Observations of Gas Cell Wall Rupture in Bread Dough during Baking and Comparison with 2D/3D Finite Element Simulations of Stress Concentration

Authors: Kossigan Bernard Dedey, David Grenier, Tiphaine Lucas

Abstract:

Bread dough is often described as a dispersion of gas cells in a continuous gluten/starch matrix. The final bread crumb structure is strongly related to gas cell wall (GCW) rupture during baking. At the end of proofing and during baking, part of the thinnest GCWs between expanding gas cells is reduced to a gluten film about the size of a starch granule. When such a size is reached, gluten and starch granules must be considered as interacting phases in order to account for heterogeneities and appropriately describe GCW rupture. Among the experimental investigations carried out to assess GCW rupture, no work has observed GCW rupture under baking conditions at the GCW scale. In addition, attempts to understand GCW rupture numerically are usually not performed at the GCW scale and often treat GCWs as continuous. The most relevant paper accounting for heterogeneities dealt with gluten/starch interactions and their impact on the mechanical behavior of dough films; however, stress concentration in the GCW was not discussed. In this study, both experimental and numerical approaches were used to better understand GCW rupture in bread dough during baking. Experimentally, a macroscope placed in front of a two-chamber device was used to observe the rupture of a real GCW of 200 micrometers in thickness. Special attention was paid to mimicking baking conditions as far as possible (temperature, gas pressure and moisture). Various pressure differences between the two sides of the GCW were applied, and different modes of fracture initiation and propagation in GCWs were observed. Numerically, the impact of gluten/starch interactions (cohesion or non-cohesion) and of the ratio of rheological moduli on the mechanical behavior of a GCW under unidirectional extension was assessed in 2D and 3D. A non-linear viscoelastic and hyperelastic approach was used to match the finite strains involved in a GCW during baking. Stress concentration within the GCW was identified. The simulated stress concentrations were discussed in the light of the GCW failures observed in the device. The gluten/starch granule interactions and the rheological modulus ratio were found to have a great effect on the stress levels possibly reached in the GCW.

Keywords: dough, experimental, numerical, rupture

Procedia PDF Downloads 104
1066 Virtual Academy Next: Addressing Transition Challenges Through a Gamified Virtual Transition Program for Students with Disabilities

Authors: Jennifer Gallup, Joel Bocanegra, Greg Callan, Abigail Vaughn

Abstract:

Students with disabilities (SWD) engaged in a distance summer program, delivered over multiple virtual media, that used gaming principles to teach and practice self-regulated learning (SRL) through the process of exploring possible jobs. Gaming quests were developed to explore jobs and teach transition skills. Students completed specially designed quests that taught and reinforced SRL and problem-solving through individual, group, and teacher-led experiences. The SRL skills learned were reinforced through guided job explorations in the context of MinecraftEDU, Zoom sessions with experts in each career, and collaborations with a team over Marco Polo and Zoom. The quests were developed and laid out on an accessible web page, with active learning opportunities and feedback conducted within multiple virtual media, including MinecraftEDU. Gaming media actively engage players in role-playing, problem-solving, critical thinking, and collaboration. Gaming has been used as a medium for education since the inception of formal education. Games, and specifically board games, are prehistoric, meaning we had board games before we had written language. Today, games are widely used in education, often as reinforcers for behavior or as rewards for work completion. Games are not often used as a direct method of instruction and assessment; however, the inclusion of games as an assessment tool and as a form of instruction increases student engagement and participation. Games naturally involve collaboration, problem-solving, and communication. Therefore, our summer program was developed using gaming principles and MinecraftEDU. This manuscript describes a virtual learning summer program called Virtual Academy New and Exciting Transitions (VAN) that was redesigned from a face-to-face setting to a completely online setting, with a focus on SWD aged 14-21. The focus of VAN was to address transition planning needs such as problem-solving skills, self-regulation, interviewing, job exploration, and communication for transition-aged youth diagnosed with various disabilities (e.g., learning disabilities, attention-deficit hyperactivity disorder, intellectual disability, Down syndrome, autism spectrum disorder).

Keywords: autism, disabilities, transition, summer program, gaming, simulations

Procedia PDF Downloads 56
1065 Viability of EBT3 Film in Small Dimensions to Be Used for In-Vivo Dosimetry in Radiation Therapy

Authors: Abdul Qadir Jangda, Khadija Mariam, Usman Ahmed, Sharib Ahmed

Abstract:

The Gafchromic EBT3 film has the characteristics of high spatial resolution, weak energy dependence and near tissue equivalence, which make it viable for in-vivo dosimetry in external beam and brachytherapy applications. The aim of this study is to assess the smallest film dimension that may be feasible for use in in-vivo dosimetry. To evaluate the viability, film sizes from 3 x 3 mm to 20 x 20 mm were calibrated with 6 MV photon and 6 MeV electron beams. The Gafchromic EBT3 (Lot no. A05151201, Make: ISP) film was cut into five different sizes in order to establish the relationship between absorbed dose and film dimension. The film dimensions were 3 x 3, 5 x 5, 10 x 10, 15 x 15, and 20 x 20 mm. The films were irradiated on a Varian Clinac® 2100C linear accelerator for a dose range from 0 to 1000 cGy using a PTW solid water phantom. The irradiation was performed as per the clinical absolute dose rate calibration setup, i.e. 100 cm SAD, 5.0 cm depth and a field size of 10 x 10 cm2 for photons, and 100 cm SSD, 1.4 cm depth and a 15 x 15 cm2 applicator for electrons. The irradiated films were scanned in landscape orientation after a post-development time of at least 48 hours. Film scanning was accomplished using an Epson Expression 10000 XL flatbed scanner, and quantitative analysis was carried out with the ImageJ freeware software. Results show that the dose variation across film dimensions ranging from 3 x 3 mm to 20 x 20 mm is minimal, with a maximum standard deviation of 0.0058 in optical density at a dose level of 3000 cGy; the standard deviation increases with increasing dose level, so precaution must be taken when using small-dimension films for higher doses. The analysis shows that there is insignificant variation in the absorbed dose with a change in the dimension of EBT3 film.
The study concludes that film dimensions down to 3 x 3 mm can safely be used up to a dose level of 3000 cGy without the need to recalibrate for the particular dimension in use. However, for higher dose levels, the films may need to be calibrated for the particular dimension in use to achieve higher accuracy. It was also noticed that the crystalline structure of the film was damaged at the edges while cutting, which can contribute to an erroneous dose reading if the region of interest includes the damaged area of the film.
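The workflow implied above (scanner pixel values from the film ROI converted to net optical density, then fitted against delivered dose) can be sketched in Python. All pixel values and the quadratic fit below are illustrative assumptions for a red-channel scan, not the authors' calibration data.

```python
import numpy as np

def net_optical_density(pv_exposed, pv_unexposed, pv_background=0.0):
    """Net optical density from scanner pixel values of the film ROI."""
    return np.log10((pv_unexposed - pv_background) / (pv_exposed - pv_background))

# Hypothetical calibration points: delivered dose (cGy) vs. mean ROI pixel value
doses = np.array([0, 100, 200, 400, 600, 800, 1000], dtype=float)
pixel_values = np.array([42000, 35500, 31000, 25500, 22000, 19800, 18200], dtype=float)

# Unexposed reference taken from the 0 cGy film piece
net_od = net_optical_density(pixel_values, pixel_values[0])

# A common choice is a low-order polynomial fit of dose as a function of netOD
coeffs = np.polyfit(net_od, doses, deg=2)
dose_from_od = np.poly1d(coeffs)
```

A separate fit of this kind per film dimension is what "recalibration for the particular dimension in use" would amount to; the study's finding is that below 3000 cGy one shared fit suffices.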

Keywords: external beam radiotherapy, film calibration, film dosimetry, in-vivo dosimetry

Procedia PDF Downloads 474
1064 Effects of Vegetable Oils Supplementation on in Vitro Rumen Fermentation and Methane Production in Buffaloes

Authors: Avijit Dey, Shyam S. Paul, Satbir S. Dahiya, Balbir S. Punia, Luciano A. Gonzalez

Abstract:

Methane emitted from ruminant livestock not only reduces the efficiency of feed energy utilization but also contributes to global warming. Vegetable oils, a source of polyunsaturated fatty acids, have the potential to reduce methane production and increase conjugated linoleic acid in the rumen. However, the characteristics of the oils, their level of inclusion and the composition of the basal diet influence their efficacy. Therefore, this study aimed to investigate the effects of sunflower (SFL) and cottonseed (CSL) oils on methanogenesis, volatile fatty acid composition and feed fermentation pattern using the in vitro gas production (IVGP) test. Four concentrations (0, 0.1, 0.2 and 0.4 ml/30 ml buffered rumen fluid) of each oil were used. Fresh rumen fluid was collected before morning feeding from two rumen-cannulated buffalo steers fed a mixed ration. In vitro incubation was carried out with sorghum hay (200 ± 5 mg) as substrate in 100 ml calibrated glass syringes following the standard IVGP protocol. After 24 h of incubation, gas production was recorded from the displacement of the piston. Methane in the gas phase and volatile fatty acids in the fermentation medium were estimated by gas chromatography. Addition of oils resulted in an increase (p<0.05) in total gas production and a decrease (p<0.05) in methane production, irrespective of type and concentration. Although the increase in gas production was similar, methane production (ml/g DM) and its concentration (%) in the headspace gas were lower (p<0.01) for CSL than for SFL at corresponding doses. A linear decrease (p<0.001) in DM degradability was evident with increasing oil doses (0.2 ml onwards). However, these effects were more pronounced with SFL. Acetate production tended to decrease, while propionate and butyrate production increased (p<0.05) with the addition of oils, irrespective of type and dose. The acetate-to-propionate ratio was reduced (p<0.01) with the addition of oils, but no difference between the oils was noted.
It is concluded that both oils can reduce methane production. However, feed degradability was also affected at higher doses. Cottonseed oil at a small dose (0.1 ml/30 ml buffered rumen fluid) exerted the greater inhibitory effect on methane production without impeding dry matter degradability. Further in vivo studies need to be carried out for practical application in animal rations.
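The reported quantities derive from simple arithmetic on the syringe and gas chromatography readings: headspace methane concentration times total gas volume gives methane volume, which is then scaled to the incubated substrate dry matter. A minimal sketch, with all readings hypothetical rather than the study's data:

```python
# Hypothetical 24 h readings for one syringe (illustrative values only)
substrate_dm_mg = 200.0   # sorghum hay dry matter incubated
total_gas_ml = 38.5       # gas volume read from the syringe piston displacement
methane_pct = 14.2        # CH4 concentration in headspace gas, from GC

# Methane volume and yield per gram of substrate dry matter
methane_ml = total_gas_ml * methane_pct / 100.0
methane_ml_per_g_dm = methane_ml / (substrate_dm_mg / 1000.0)

# Acetate-to-propionate ratio from the fermentation medium (mmol/L, illustrative)
acetate, propionate = 42.0, 18.0
a_to_p = acetate / propionate
```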

Keywords: buffalo, methanogenesis, rumen fermentation, vegetable oils

Procedia PDF Downloads 374
1063 Safety Climate Assessment and Its Impact on the Productivity of Construction Enterprises

Authors: Krzysztof J. Czarnocki, F. Silveira, E. Czarnocka, K. Szaniawska

Abstract:

Research background: Problems related to occupational health and declining levels of safety are common in the construction industry. An important factor in occupational safety in the construction industry is scaffold use. All scaffolds used in construction, renovation, and demolition shall be erected, dismantled and maintained in accordance with safety procedures. Increasing demand for new construction projects is unfortunately still linked to a high level of occupational accidents. It is therefore crucial to implement concrete actions when dealing with scaffolds and risk assessment in the construction industry; the manner in which assessments are carried out, and their reliability, is critical for both construction workers and the regulatory framework. Unfortunately, professionals, who tend to rely heavily on their own experience and knowledge when taking decisions regarding risk assessment, may show a lack of reliability in checking the results of the decisions taken. Purpose of the article: The aim was to identify crucial parameters that could be modeled with a Risk Assessment Model (RAM) to improve building enterprise productivity and/or development potential and safety climate. The developed RAM could be of benefit for predicting high-risk construction activities and thus preventing accidents, based on a set of historical accident data. Methodology/Methods: A RAM has been developed for assessing risk levels at various construction process stages, with various work trades impacting different spheres of enterprise activity. This project includes research carried out by teams of researchers on over 60 construction sites in Poland and Portugal, under which over 450 individual research cycles were carried out. The conducted research trials included variable conditions of employee exposure to harmful physical and chemical factors, variable levels of employee stress, and differences in the behaviors and habits of staff.
A genetic modeling tool was used to develop the RAM. Findings and value added: Common types of trades, accidents, and accident causes were explored, in addition to suitable risk assessment methods and criteria. We found that the initial worker stress level is a more direct predictor of the unsafe chain leading to an accident than the workload, the concentration of harmful factors at the workplace, or even training frequency and management involvement.
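The abstract does not describe the genetic modeling tool, so the following only illustrates the general idea: a small genetic algorithm searching for factor weights that best reproduce observed unsafe outcomes. All factor names, scores, and outcomes below are hypothetical.

```python
import random

random.seed(42)

# Illustrative factor scores per site visit:
# [initial stress, workload, harmful-factor concentration, training gap]
X = [[0.9, 0.4, 0.3, 0.2], [0.2, 0.8, 0.6, 0.5], [0.8, 0.3, 0.2, 0.4],
     [0.1, 0.6, 0.7, 0.3], [0.7, 0.5, 0.4, 0.6], [0.3, 0.2, 0.1, 0.1]]
y = [1, 0, 1, 0, 1, 0]   # 1 = unsafe chain observed

def risk(weights, x):
    return sum(w * xi for w, xi in zip(weights, x))

def fitness(weights):
    # Negative squared error between predicted risk and observed outcome
    return -sum((risk(weights, x) - yi) ** 2 for x, yi in zip(X, y))

def evolve(pop_size=30, generations=200, mutation=0.1):
    pop = [[random.random() for _ in range(4)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]           # elitist selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, 4)         # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(4)              # mutate one gene, clamp to [0, 1]
            child[i] = min(1.0, max(0.0, child[i] + random.gauss(0, mutation)))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

In a sketch like this, the relative size of the evolved weights plays the role of the "more direct predictor" finding; the real RAM presumably rests on far richer field data.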

Keywords: safety climate, occupational health, civil engineering, productivity

Procedia PDF Downloads 289
1062 Molecular Diagnosis of a Virus Associated with Red Tip Disease and Its Detection by Non Destructive Sensor in Pineapple (Ananas comosus)

Authors: A. K. Faizah, G. Vadamalai, S. K. Balasundram, W. L. Lim

Abstract:

Pineapple (Ananas comosus) is a common crop in tropical and subtropical areas of the world. Malaysia once ranked among the top three pineapple producers in the world in the 1960s and early 1970s, after Hawaii and Brazil. Moreover, the National Agriculture Policy recognizes the pineapple crop as one of the priority commodities to be developed for the domestic and international markets. However, the pineapple industry in Malaysia still faces numerous challenges, one of which is the management of disease and pests. Red tip disease of pineapple was first recognized about 20 years ago in a commercial pineapple stand located in Simpang Renggam, Johor, Peninsular Malaysia. Since its discovery, the causal agent of the disease has not been confirmed. The epidemiology of red tip disease is still not fully understood. Nevertheless, the disease symptoms and the spread within the field seem to point toward a viral infection. A bioassay of nucleic acid extracted from red tip-affected pineapple was performed on Nicotiana tabacum cv. Coker by rubbing on the extracted sap. Localised lesions were observed 3 weeks after inoculation. Negative staining of the freshly inoculated Nicotiana tabacum cv. Coker showed the presence of membrane-bound spherical particles with an average diameter of 94.25 nm under the transmission electron microscope. The shape and size of the particles were similar to those of a tospovirus. SDS-PAGE analysis of partially purified virions from inoculated N. tabacum produced one strong and one faint protein band with molecular masses of approximately 29 kDa and 55 kDa. Partially purified virions from symptomatic pineapple leaves from the field showed bands with molecular masses of approximately 29 kDa, 39 kDa and 55 kDa. These bands may indicate the nucleocapsid protein identity of a tospovirus.
Furthermore, a handheld sensor, the GreenSeeker, was used to detect red tip symptoms on pineapple non-destructively based on spectral reflectance, measured as the Normalized Difference Vegetation Index (NDVI). Red tip severity was estimated and correlated with NDVI. Linear regression models were developed, calibrated and tested in order to estimate red tip disease severity from NDVI. Results showed a strong positive relationship between red tip disease severity and NDVI (r = 0.84).
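A linear severity-versus-NDVI model of the kind described can be sketched with NumPy. The paired readings below are synthetic stand-ins for the field data, so the fitted coefficients and correlation are purely illustrative.

```python
import numpy as np

# Synthetic paired observations (NOT the study's data): GreenSeeker NDVI
# readings and visually scored red tip severity (%) for eight plots
ndvi = np.array([0.35, 0.42, 0.50, 0.57, 0.63, 0.70, 0.78, 0.85])
severity = np.array([8.0, 15.0, 22.0, 33.0, 40.0, 52.0, 61.0, 72.0])

# Calibrate a simple linear model: severity = slope * NDVI + intercept
slope, intercept = np.polyfit(ndvi, severity, deg=1)

# Pearson correlation between severity and NDVI (the study reports r = 0.84)
r = np.corrcoef(ndvi, severity)[0, 1]

# Estimate severity for a new sensor reading
estimate = slope * 0.60 + intercept
```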

Keywords: pineapple, diagnosis, virus, NDVI

Procedia PDF Downloads 768
1061 Physico-Mechanical Properties of Wood-Plastic Composites Produced from Polyethylene Terephthalate Plastic Bottle Wastes and Sawdust of Three Tropical Hardwood Species

Authors: Amos Olajide Oluyege, Akpanobong Akpan Ekong, Emmanuel Uchechukwu Opara, Sunday Adeniyi Adedutan, Joseph Adeola Fuwape, Olawale John Olukunle

Abstract:

This study was carried out to evaluate the influence of wood species and wood/plastic ratio on the physical and mechanical properties of wood plastic composites (WPCs) produced from polyethylene terephthalate (PET) plastic bottle wastes and sawdust from three hardwood species, namely, Terminalia superba, Gmelina arborea, and Ceiba pentandra. The experimental WPCs were prepared from sawdust particle size classes of ≤ 0.5, 0.5 – 1.0, and 1.0 – 2.0 mm at wood/plastic ratios of 40:60, 50:50 and 60:40 (percentage by weight). The WPCs for each study variable combination were prepared in 3 replicates and laid out in a randomized complete block design (RCBD). The physical properties investigated were water absorption (WA), linear expansion (LE) and thickness swelling (TS), while the mechanical properties evaluated were Modulus of Elasticity (MOE) and Modulus of Rupture (MOR). The mean values for WA, LE and TS ranged from 1.07 to 34.04, 0.11 to 1.76 and 0.11 to 4.05%, respectively. The mean values of the three physical properties increased with increasing wood/plastic ratio: the 40:60 ratio at each particle size class generally resulted in the lowest values, while the 60:40 ratio gave the highest values for each of the three species. For each of the physical properties, T. superba had the lowest mean values, followed by G. arborea, while the highest values were observed in C. pentandra. The mean values for MOE and MOR ranged from 458.17 to 1875.67 and 2.64 to 18.39 N/mm2, respectively. The mean values of the two mechanical properties decreased with increasing wood/plastic ratio: the 40:60 ratio at each wood particle size class generally gave the highest values, while the 60:40 ratio gave the lowest values for each of the three species. For each of the mechanical properties, C. pentandra had the highest mean values, followed by G. arborea, while the lowest values were observed in T. superba.
There were improvements in both the physical and mechanical properties with decreasing sawdust particle size class, with the particle size class of ≤ 0.5 mm giving the best results. The results of the analysis of variance revealed significant (P < 0.05) effects of the three study variables (wood species, sawdust particle size class and wood/plastic ratio) on all the physical and mechanical properties of the WPCs. It can be concluded from the results of this study that wood plastic composites with acceptable physical and mechanical properties are better produced from PET plastic bottle wastes and sawdust of particle size ≤ 0.5 mm at a 40:60 wood/plastic ratio, and that at this ratio, all three species are suitable for the production of wood plastic composites.
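The significance testing reported above rests on analysis of variance. As a minimal sketch, a one-way ANOVA over the wood/plastic ratio factor can be computed by hand; the MOR replicates below are invented for illustration, chosen only to mirror the reported trend (40:60 highest, 60:40 lowest).

```python
import numpy as np

# Hypothetical MOR values (N/mm2) for one species/particle-size combination,
# three replicates per wood/plastic ratio, mirroring the RCBD layout
groups = {
    "40:60": np.array([17.1, 18.0, 18.4]),
    "50:50": np.array([11.2, 12.0, 11.6]),
    "60:40": np.array([3.1, 2.8, 3.4]),
}

values = np.concatenate(list(groups.values()))
grand_mean = values.mean()

# One-way ANOVA by hand: between-group and within-group sums of squares
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups.values())
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups.values())

df_between = len(groups) - 1
df_within = len(values) - len(groups)
f_stat = (ss_between / df_between) / (ss_within / df_within)
```

The study's full design has three crossed factors (species, particle size class, ratio), so the authors would have used a multi-factor ANOVA; the one-way version above just shows how an F statistic arises from the replicate data.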

Keywords: polyethylene terephthalate plastic bottle wastes, wood plastic composite, physical properties, mechanical properties

Procedia PDF Downloads 174
1060 Object-Scene: Deep Convolutional Representation for Scene Classification

Authors: Yanjun Chen, Chuanping Hu, Jie Shao, Lin Mei, Chongyang Zhang

Abstract:

Traditional image classification is based on an encoding scheme (e.g. Fisher Vector, Vector of Locally Aggregated Descriptors) applied to low-level image features (e.g. SIFT, HoG). Compared to these low-level local features, deep convolutional features obtained at the mid-level layers of convolutional neural networks (CNNs) carry richer information but lack geometric invariance. In scene classification, scenes contain scattered objects of different sizes, categories, layouts, and numbers. It is crucial to find the distinctive objects in a scene as well as their co-occurrence relationships. In this paper, we propose a method that takes advantage of both deep convolutional features and the traditional encoding scheme while taking object-centric and scene-centric information into consideration. First, to exploit object-centric and scene-centric information, two CNNs, trained on the ImageNet and Places datasets respectively, are used as pre-trained models to extract deep convolutional features at multiple scales. This produces dense local activations. By analyzing the performance of the two CNNs at multiple scales, it is found that each CNN works better in a different scale range. A scale-wise CNN adaptation is reasonable, since objects in a scene occur at their own specific scales. Second, a Fisher kernel is applied to aggregate a global representation at each scale, and these are then merged into a single vector by a post-processing method called scale-wise normalization. The essence of the Fisher Vector lies in the accumulation of first- and second-order differences. Hence, scale-wise normalization followed by average pooling balances the influence of each scale, since a different number of features is extracted at each scale. Third, the Fisher Vector representation based on the deep convolutional features is fed to a linear Support Vector Machine, which is a simple yet efficient way to classify the scene categories.
Experimental results show that scale-specific feature extraction and normalization with CNNs trained on object-centric and scene-centric datasets boost the results from 74.03% up to 79.43% on MIT Indoor67 when only two scales are used (compared to results at a single scale). The result is comparable to state-of-the-art performance, which suggests that the representation can be applied to other visual recognition tasks.
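The scale-wise normalization step described above can be sketched in NumPy: each scale's Fisher vector is L2-normalized before average pooling so that scales producing more features do not dominate, followed by the power and L2 normalizations commonly applied to Fisher vectors. Dimensions and values below are synthetic placeholders for the real per-scale encodings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-scale Fisher vectors for one image (dimension D per scale);
# the finer scale yields larger accumulated statistics, hence the 5x factor
D = 64
fv_scale1 = rng.standard_normal(D) * 5.0
fv_scale2 = rng.standard_normal(D)

def l2_normalize(v, eps=1e-12):
    return v / (np.linalg.norm(v) + eps)

# Scale-wise normalization: L2-normalize each scale's vector, then average-pool,
# balancing the influence of each scale in the merged representation
pooled = np.mean([l2_normalize(fv_scale1), l2_normalize(fv_scale2)], axis=0)

# Power ("signed square root") normalization, standard for Fisher vectors,
# followed by a final L2 normalization before the linear SVM
pooled = np.sign(pooled) * np.sqrt(np.abs(pooled))
final = l2_normalize(pooled)
```

The `final` vector is what would be fed, per image, to the linear SVM classifier.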

Keywords: deep convolutional features, Fisher Vector, multiple scales, scale-specific normalization

Procedia PDF Downloads 309