Search results for: and the aggregate trapezoidal hesitant fuzzy decision matrix will be built. The case is considered when information on the attribute weights is completely unknown. The attribute weights are identified based on the De Luca and Termini information entropy concept

842 Acceptability Process of a Congestion Charge

Authors: Amira Mabrouk

Abstract:

This paper deals with the acceptability of urban toll in Tunisia. The price-based regulation, i.e. urban toll, is the outcome of a political process hampered by three-fold objectives: effectiveness, equity and social acceptability. This produces both economic interest groups and functions that are of incongruent preferences. The plausibility of this speculation goes hand in hand with the fact that these economic interest groups are also taxpayers who undeniably perceive urban toll as an additional charge. This wariness is coupled with an inquiry about the conditions of usage and the redistribution of the collected tax revenue, and the idea of the leviathan state completes the picture. In a nutshell, even though research related to road congestion proliferates, no de facto legitimacy can be pleaded. Nonetheless, the theory on urban tolls leads economists to question ways of reducing the negative external effects linked to congestion. Only then does the urban toll appear to bear an answer to these issues. Undeniably, the urban toll involves inherent conflicts due to the apparent no-payment principle of a public asset as well as to the social perception of the new measure as a mere additional charge. However, when the main concern is effectiveness in its broad sense and social well-being, the main factors that determine the acceptability of such a tariff measure, along with the type of incentives, should be the object of a thorough, in-depth analysis. Before adopting this economic tool, one has to recognize the factors that intervene in the acceptability of a congestion toll, a question that has brought about a copious number of articles and reports that mostly lack solid theoretical content. It is noticeable that nowadays uncertainties float over the exact nature of the acceptability process. Accepting a congestion tariff could differ from one era to another, from one region to another and from one population to another, etc. Notably, this article, within a convenient time frame, attempts to bring into focus a link between the social acceptability of the urban congestion toll and the value of time through a survey method barely employed in Tunisia, that of the stated preference method. How can the urban toll, as a tax, be defined, justified and made acceptable? How can an equitable and effective congestion toll tariff be reached? How can the costs of this urban toll be covered? In what way can we make the redistribution of the urban toll revenue visible and economically equitable? How can the redistribution of the urban toll revenue compensate the disadvantaged while introducing such a tariff measure? This paper will offer answers to these research questions, and it follows the line of contribution of Jules Dupuit in 1844.

Keywords: congestion charge, social perception, acceptability, stated preferences

Procedia PDF Downloads 272
841 Exploring Teachers’ Beliefs about Diagnostic Language Assessment Practices in a Large-Scale Assessment Program

Authors: Oluwaseun Ijiwade, Chris Davison, Kelvin Gregory

Abstract:

In Australia, as in other parts of the world, the debate on how to enhance teachers’ use of assessment data to inform the teaching and learning of English as an Additional Language (EAL, Australia) or English as a Foreign Language (EFL, United States) has occupied the centre of academic scholarship. Traditionally, this approach was conceptualised as ‘Formative Assessment’ and, in recent times, ‘Assessment for Learning (AfL)’. The central problem is that teacher-made tests are limited in providing data that can inform teaching and learning due to the variability of classroom assessments, which is affected by teachers’ characteristics and assessment literacy. To address this concern, scholars in language education and testing have proposed uniform large-scale computer-based assessment programs to meet the needs of teachers and promote AfL in language education. In Australia, for instance, the Victorian state government commissioned a large-scale project called 'Tools to Enhance Assessment Literacy (TEAL) for Teachers of English as an additional language'. As part of the TEAL project, a tool called ‘Reading and Vocabulary assessment for English as an Additional Language (RVEAL)’, as a diagnostic language assessment (DLA), was developed by language experts at the University of New South Wales for teachers in Victorian schools to guide EAL pedagogy in the classroom. Therefore, this study aims to provide qualitative evidence for understanding beliefs about diagnostic language assessment (DLA) among EAL teachers in primary and secondary schools in Victoria, Australia. To realize this goal, this study raises the following questions: (a) How do teachers use large-scale assessment data for diagnostic purposes? (b) What skills do language teachers think are necessary for using assessment data for instruction in the classroom? and (c) What factors, if any, contribute to teachers’ beliefs about diagnostic assessment in a large-scale assessment? A semi-structured interview method was used to collect data from at least 15 professional teachers who were selected through purposeful sampling. The findings from the resulting data analysis (thematic analysis) provide an understanding of teachers’ beliefs about DLA in a classroom context and identify how these beliefs are crystallised in language teachers. The discussion shows how the findings can be used to inform professional development processes for language teachers, as well as highlighting the important factor of teacher cognition in the pedagogic processes of language assessment. This will, hopefully, help test developers and testing organisations to align the outcome of this study with their test development processes to design assessments that can enhance AfL in language education.

Keywords: beliefs, diagnostic language assessment, English as an additional language, teacher cognition

Procedia PDF Downloads 189
840 A Robust Optimization of Chassis Durability/Comfort Compromise Using Chebyshev Polynomial Chaos Expansion Method

Authors: Hanwei Gao, Louis Jezequel, Eric Cabrol, Bernard Vitry

Abstract:

The chassis system is composed of complex elements that take up all the loads from the tire-ground contact area, and thus it plays an important role in numerous specifications such as durability, comfort, crash, etc. During the development of new vehicle projects at Renault, durability validation is always the main focus, while deployment of comfort comes later in the project. Therefore, design choices sometimes have to be reconsidered because of the natural incompatibility between these two specifications. Besides, robustness is also an important point of concern, as it is related to manufacturing costs as well as the performance after the ageing of components like shock absorbers. In this paper, an approach is proposed that aims to realize a multi-objective optimization between chassis endurance and comfort while taking random factors into consideration. The adaptive-sparse polynomial chaos expansion (PCE) method with Chebyshev polynomial series has been applied to predict the uncertainty intervals of a system's responses according to its uncertain-but-bounded parameters. The approach can be divided into three steps. First, an initial design of experiments is realized to build the response surfaces, which statistically represent the black-box system. Secondly, within several iterations an optimum set is proposed and validated, which will form a Pareto front. At the same time, the robustness of each response, serving as an additional objective, is calculated from the pre-defined parameter intervals and the response surfaces obtained in the first step. Finally, an inverse strategy is carried out to determine the parameter tolerance combination with a maximally acceptable degradation of the responses in terms of manufacturing costs. A quarter-car model has been tested as an example by applying road excitations from actual road measurements for both endurance and comfort calculations. One indicator based on Basquin's law is defined to compare the global chassis durability of different parameter settings. Another indicator related to comfort is obtained from the vertical acceleration of the sprung mass. An optimum set with the best robustness has finally been obtained, and the reference tests prove a good robustness prediction of the Chebyshev PCE method. This example demonstrates the effectiveness and reliability of the approach, in particular its ability to save computational costs for a complex system.
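
The core numerical ingredient described above can be illustrated with a small sketch: build a Chebyshev expansion of a black-box response over an uncertain-but-bounded parameter and sample it cheaply to estimate the response interval used as a robustness objective. The response function, parameter name and bounds below are hypothetical stand-ins, not Renault's chassis model.

```python
# Illustrative sketch (not the authors' code): a 1-D Chebyshev polynomial chaos
# surrogate propagating an uncertain-but-bounded parameter through a black-box
# response. The "quarter-car" response and the damping bounds are hypothetical.
import numpy as np
from numpy.polynomial import chebyshev as C

def black_box_response(c_damping):
    """Stand-in for an expensive simulation (e.g. sprung-mass acceleration)."""
    return 1.0 / (0.3 + 0.02 * c_damping) + 0.001 * c_damping**2

c_min, c_max = 800.0, 1600.0            # uncertain-but-bounded damping [N.s/m]

# 1) Design of experiments at Chebyshev-Gauss nodes mapped onto [c_min, c_max]
n_nodes, degree = 12, 6
xi = np.cos(np.pi * (np.arange(n_nodes) + 0.5) / n_nodes)   # nodes in [-1, 1]
c_doe = 0.5 * (c_max + c_min) + 0.5 * (c_max - c_min) * xi
y_doe = np.array([black_box_response(c) for c in c_doe])

# 2) Fit the Chebyshev expansion (the response-surface / PCE surrogate)
coeffs = C.chebfit(xi, y_doe, degree)

# 3) Cheap sampling of the surrogate to estimate the response uncertainty interval
xi_mc = np.random.uniform(-1.0, 1.0, 20000)
y_mc = C.chebval(xi_mc, coeffs)
print(f"response interval ~ [{y_mc.min():.4f}, {y_mc.max():.4f}]")
print(f"robustness (std of response) ~ {y_mc.std():.4f}")
```

In the optimization loop described above, the width of such a sampled interval would serve as the additional robustness objective alongside the durability and comfort indicators.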

Keywords: chassis durability, Chebyshev polynomials, multi-objective optimization, polynomial chaos expansion, ride comfort, robust design

Procedia PDF Downloads 145
839 Study of Polyphenol Profile and Antioxidant Capacity in Italian Ancient Apple Varieties by Liquid Chromatography

Authors: A. M. Tarola, R. Preti, A. M. Girelli, P. Campana

Abstract:

Safeguarding, studying and enhancing biodiversity play an important and indispensable role in re-launching agriculture. Ancient local varieties are therefore a precious resource for genetic and health improvement. In order to protect biodiversity through the recovery and valorization of autochthonous varieties, in this study we analyzed 12 samples of four ancient apple cultivars representative of Friuli Venezia Giulia, selected by local farmers who work on a project for the recovery of ancient apple cultivars. The aim of this study is to evaluate the polyphenolic profile and the antioxidant capacity that characterize the organoleptic and functional qualities of this fruit species, besides having beneficial properties for health. In particular, for each variety, the following compounds were analyzed, both in the skins and in the pulp: gallic acid, catechin, chlorogenic acid, epicatechin, caffeic acid, coumaric acid, ferulic acid, rutin, phlorizin, phloretin and quercetin, to highlight any differences in the edible parts of the apple. The analysis of individual phenolic compounds was performed by High Performance Liquid Chromatography (HPLC) coupled with a diode array UV detector (DAD), the antioxidant capacity was estimated using an in vitro assay based on a free radical scavenging method, and the total phenolic content was determined using the Folin-Ciocalteu method. From the results, it is evident that catechins are the most abundant polyphenols, reaching a value of 140-200 μg/g in the pulp and of 400-500 μg/g in the skin, with a prevalence of epicatechin. Catechins and phlorizin, a dihydrochalcone typical of apples, are always contained in larger quantities in the peel. Total phenolic content was positively correlated with antioxidant activity in apple pulp (r² = 0.850) and peel (r² = 0.820). Comparing the results, differences between the varieties analyzed and between the edible parts (pulp and peel) of the apple were highlighted. In particular, apple peel is richer in polyphenolic compounds than pulp, and flavonols are exclusively present in the peel. In conclusion, polyphenols, being antioxidant substances, confirm the benefits of fruit in the diet, especially in the prevention and treatment of degenerative diseases. They also proved to be a good marker for the characterization of different apple cultivars. The importance of protecting biodiversity in agriculture was also highlighted through the exploitation of native products and ancient varieties of apples now forgotten.

Keywords: apple, biodiversity, polyphenols, antioxidant activity, HPLC-DAD, characterization

Procedia PDF Downloads 128
838 Imputation of Incomplete Large-Scale Monitoring Count Data via Penalized Estimation

Authors: Mohamed Dakki, Genevieve Robin, Marie Suet, Abdeljebbar Qninba, Mohamed A. El Agbani, Asmâa Ouassou, Rhimou El Hamoumi, Hichem Azafzaf, Sami Rebah, Claudia Feltrup-Azafzaf, Nafouel Hamouda, Wed a.L. Ibrahim, Hosni H. Asran, Amr A. Elhady, Haitham Ibrahim, Khaled Etayeb, Essam Bouras, Almokhtar Saied, Ashrof Glidan, Bakar M. Habib, Mohamed S. Sayoud, Nadjiba Bendjedda, Laura Dami, Clemence Deschamps, Elie Gaget, Jean-Yves Mondain-Monval, Pierre Defos Du Rau

Abstract:

In biodiversity monitoring, large datasets are becoming more and more widely available and are increasingly used globally to estimate species trends and conservation status. These large-scale datasets challenge existing statistical analysis methods, many of which are not adapted to their size, incompleteness and heterogeneity. The development of scalable methods to impute missing data in incomplete large-scale monitoring datasets is crucial to balance sampling in time or space and thus better inform conservation policies. We developed a new method based on penalized Poisson models to impute and analyse incomplete monitoring data in a large-scale framework. The method allows parameterization of (a) space and time factors, (b) the main effects of predictor covariates, as well as (c) space–time interactions. It also benefits from robust statistical and computational capability in large-scale settings. The method was tested extensively on both simulated and real-life waterbird data, with the findings revealing that it outperforms six existing methods in terms of missing data imputation errors. Applying the method to 16 waterbird species, we estimated their long-term trends for the first time at the entire North African scale, a region where monitoring data suffer from many gaps in space and time series. This new approach opens promising perspectives to increase the accuracy of species-abundance trend estimations. We made it freely available in the R package ‘lori’ (https://CRAN.R-project.org/package=lori) and recommend its use for large-scale count data, particularly in citizen science monitoring programmes.
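
As a rough illustration of the penalized Poisson idea (the published method is available in the R package ‘lori’, which additionally handles low-rank space–time interactions and covariates), the following Python sketch fits a ridge-penalized Poisson model with additive site and year effects to an incomplete count matrix and predicts the missing cells. All data and tuning values are synthetic.

```python
# Minimal sketch (assumptions: ridge-penalized Poisson with additive site/year
# effects only; the 'lori' package also models low-rank interactions and covariates).
import numpy as np

rng = np.random.default_rng(0)
n_sites, n_years = 30, 15
true_log_rate = rng.normal(2.0, 0.5, (n_sites, 1)) + rng.normal(0, 0.3, (1, n_years))
counts = rng.poisson(np.exp(true_log_rate)).astype(float)
observed = rng.random((n_sites, n_years)) > 0.3          # ~30% of cells missing
counts[~observed] = np.nan

alpha = np.zeros(n_sites)      # site effects
beta = np.zeros(n_years)       # year effects
mu, lam, lr = 0.0, 0.1, 0.05   # intercept, ridge penalty, learning rate

for _ in range(2000):          # gradient ascent on the penalized Poisson log-likelihood
    eta = mu + alpha[:, None] + beta[None, :]
    resid = np.where(observed, counts - np.exp(eta), 0.0)
    mu += lr * resid.sum() / observed.sum()
    alpha += lr * (resid.sum(axis=1) / observed.sum(axis=1) - lam * alpha)
    beta += lr * (resid.sum(axis=0) / observed.sum(axis=0) - lam * beta)

# Missing cells are filled in with the fitted Poisson intensities
imputed = np.where(observed, counts, np.exp(mu + alpha[:, None] + beta[None, :]))
print("imputed cell (0, 0):", imputed[0, 0])
```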

Keywords: biodiversity monitoring, high-dimensional statistics, incomplete count data, missing data imputation, waterbird trends in North-Africa

Procedia PDF Downloads 137
837 Comparing Xbar Charts: Conventional versus Reweighted Robust Estimation Methods for Univariate Data Sets

Authors: Ece Cigdem Mutlu, Burak Alakent

Abstract:

Maintaining the quality of manufactured products at a desired level depends on the stability of process dispersion and location parameters and the detection of perturbations in these parameters as promptly as possible. The Shewhart control chart is the most widely used technique in statistical process monitoring to monitor the quality of products and control process mean and variability. In the application of Xbar control charts, the sample standard deviation and sample mean are known to be the most efficient conventional estimators of process dispersion and location parameters, respectively, based on the assumption of independent and normally distributed datasets. On the other hand, there is no guarantee that real-world data are normally distributed. When process parameters are estimated from Phase I data clouded with outliers, the efficiency of traditional estimators is significantly reduced and the performance of Xbar charts is undesirably low; e.g. occasional outliers in the rational subgroups in the Phase I data set may considerably affect the sample mean and standard deviation, resulting in a serious delay in the detection of inferior products in Phase II. For more efficient application of control charts, it is required to use estimators that are robust against the contaminations which may exist in Phase I. In the current study, we present a simple approach to construct robust Xbar control charts using the average distance to the median, the Qn estimator of scale and the M-estimator of scale with logistic psi-function in the estimation of the process dispersion parameter, and the Harrell-Davis qth quantile estimator, the Hodges-Lehmann estimator and the M-estimator of location with Huber and logistic psi-functions in the estimation of the process location parameter. The Phase I efficiency of the proposed estimators and the Phase II performance of Xbar charts constructed from these estimators are compared with the conventional mean and standard deviation statistics, both under normality and against diffuse-localized and symmetric-asymmetric contaminations, using 50,000 Monte Carlo simulations in MATLAB. Consequently, it is found that robust estimators yield parameter estimates with higher efficiency against all types of contaminations, and Xbar charts constructed using robust estimators have higher power in detecting disturbances, compared to conventional methods. Additionally, utilizing individuals charts to screen outlier subgroups and employing different combinations of dispersion and location estimators on subgroups and individual observations are found to improve the performance of Xbar charts.
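
A minimal sketch of the construction, assuming synthetic Phase I data and a placeholder unbiasing constant (the study tunes such constants by simulation and also considers Qn, Harrell-Davis and M-estimators): control limits are computed from the Hodges-Lehmann location estimator and the average distance to the median instead of the sample mean and standard deviation.

```python
# Illustrative sketch (not the study's MATLAB simulation code): Phase I limits
# for an Xbar chart using robust estimators of location and scale.
import numpy as np
from itertools import combinations

def hodges_lehmann(x):
    """Median of the Walsh (pairwise) averages: a robust location estimator."""
    pairs = [(a + b) / 2.0 for a, b in combinations(x, 2)]
    return np.median(np.concatenate([x, pairs]))

def avg_dist_to_median(x):
    """Average absolute distance to the sample median: a robust scale estimator."""
    return np.mean(np.abs(x - np.median(x)))

rng = np.random.default_rng(1)
m, n = 25, 5                                 # Phase I: m rational subgroups of size n
phase1 = rng.normal(10.0, 1.0, (m, n))
phase1[3, 0] += 8.0                          # an occasional outlier in Phase I

loc = np.mean([hodges_lehmann(g) for g in phase1])
# 0.7979 ~ E|X - median| for a standard normal, used here as a rough unbiasing constant
scale = np.mean([avg_dist_to_median(g) for g in phase1]) / 0.7979

L = 3.0                                      # usual 3-sigma multiplier
ucl = loc + L * scale / np.sqrt(n)
lcl = loc - L * scale / np.sqrt(n)
print(f"robust Xbar limits: LCL={lcl:.3f}, CL={loc:.3f}, UCL={ucl:.3f}")

# Phase II: flag a subgroup whose mean falls outside the limits
phase2_subgroup = rng.normal(11.5, 1.0, n)
print("signal" if not (lcl <= phase2_subgroup.mean() <= ucl) else "in control")
```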

Keywords: average run length, M-estimators, quality control, robust estimators

Procedia PDF Downloads 178
836 Oncology and Phytomedicine in the Advancement of Cancer Therapy for Better Patient Care

Authors: Hailemeleak Regassa

Abstract:

Traditional medicines use medicinal plants as a source of ingredients, and many modern medications are indirectly derived from plants. Consumers in affluent nations are growing disenchanted with contemporary healthcare and are looking for alternatives. Oxidative stress is the primary cause of multiple diseases, and exogenous antioxidant supplementation or strengthening the body's endogenous antioxidant defenses are potential ways to counteract the negative effects of oxidative damage. Plants can biosynthesize non-enzymatic antioxidants that can reduce ROS-induced oxidative damage. Aging often aids the propagation and development of carcinogenesis, and older animals and older people exhibit increased vulnerability to tumor promoters. Cancer is a major public health issue, with several anti-cancer medications in clinical use. Potential drugs such as flavopiridol, roscovitine, combretastatin A-4, betulinic acid, and silvestrol are in the clinical or preclinical stages of research. Methodology: Microbial growth media, dimethyl sulfoxide (DMSO), methanol, ethyl acetate, and n-hexane were obtained from Himedia Labs, Mumbai, India. Plants were collected from the Herbal Garden of the Shoolini University campus, Solan, India (latitude 30.8644° N, longitude 77.1184° E). Their identity was confirmed by Dr. Y.S. Parmar University of Horticulture and Forestry, Nauni, Solan (H.P.), India, and documented in voucher specimens - UHF Herbarium no. 13784; vide book no. 3818, Receipt No. 086. The plant materials were washed with tap water and 0.1% mercury chloride for 2 minutes, rinsed with distilled water, air dried, and kept in a hot air oven at 40 ºC on blotting paper until all the water had evaporated and they were well dried for grinding (Horablaga et al., 2023). After drying, the plant materials were ground into a fine powder using a mixer grinder, transferred into airtight containers with proper labeling, and stored at 4 ºC for future use. The extraction process was done according to Altemimi et al., 2017. Five grams of powder was mixed with 15 mL of the respective solvents (n-hexane, ethyl acetate, and methanol) and kept for 4-5 days on a platform shaker. The solvents were used in order of increasing polarity index. The extract was then centrifuged at 10,000 rpm for 5 minutes and filtered using No. 1 Whatman filter paper.

Keywords: cancer, phytomedicine, medicinal plants, oncology

Procedia PDF Downloads 54
835 Walking across the Government of Egypt: A Single Country Comparative Study of the Past and Current Condition of the Government of Egypt

Authors: Homyr L. Garcia, Jr., Anne Margaret A. Rendon, Carla Michaela B. Taguinod

Abstract:

Nothing is constant in this world but change. This is a reality that many people fail to recognize, maybe because some see the things that are happening as having little or no value until they are gone. For years, Egypt was known for its stable government. It was able to withstand problems and crises which challenged the country in ways that could hardly be imagined. At present, it seems that in just a snap of a finger that stability vanished and was immediately replaced by a crisis which resulted in a failure in some parts of the government. In addition, this problem continued to worsen, and the current situation of Egypt is a reflection or a result of it. On the other hand, as the researchers continued to study the reasons why the government of Egypt is unstable, they concluded that they might be able to propose ways in which the country could be helped or improved. The instability of the government of Egypt is the product of a combination of problems which affect the lives of the people. Some of the reasons that the researchers found are the following: 1) unending doubts of the people regarding the ruling capacity of elected presidents, 2) the removal of President Mohamed Morsi from his position, 3) the economic crisis, 4) the many protests and revolutions that happened, 5) the resignation of the long-term President Hosni Mubarak and 6) the office of the President being most likely available only to the chosen successor. Also, according to previous research, there are two plausible scenarios for the instability of Egypt: 1) a military intervention, specifically by the Supreme Council of the Armed Forces or SCAF, resulting from a contested succession and 2) an Islamist push for political power, which highlights the claim that religion is a hindrance towards the development of the country and its government. From these eight possible reasons, the researchers decided to focus on the economic crisis, since the instability is most clearly seen in the country's economy, which directly affects the people and the government itself. In addition, they made a hypothesis which states that a stable economy is a prerequisite for a stable government. If they are able to show that this claim is true by using the Social Autopsy Research Design for the qualitative method and Pearson's correlation coefficient for the quantitative method, the researchers might be able to produce a proposal on how Egypt can stabilize its government and avoid such problems. The hypothesis is based on Rational Action Theory, a theory for understanding and modeling social and economic as well as individual behavior.

Keywords: Pearson’s correlation coefficient, rational action theory, social autopsy research design, supreme council of the armed forces (SCAF)

Procedia PDF Downloads 395
834 Towards a Comprehensive Framework on Civic Competence Development of Teachers: A Systematic Review of Literature

Authors: Emilie Vandevelde, Ellen Claes

Abstract:

This study aims to develop a comprehensive model of the civic socialization process of teachers. Citizenship has become one of the main objectives of the European education systems. It is expected that teachers are well prepared and equipped with the necessary knowledge, skills, and attitudes to engage students in democratic citizenship. While a lot is known about young people’s civic competence development and how schools and teachers (don’t) support this process, less is known about how teachers themselves engage with (the teaching of) civics. Unlike the civic socialization process of young adolescents, which focuses on personal competence development, the civic socialization process of teachers includes the development of professional civic competences. These professional competences enable them to prepare pupils to carry out their civic responsibilities in thoughtful ways. Existing models of the civic socialization process of young adolescents do not take this dual purpose into account. Based on these observations, this paper will investigate (1) what personal and professional civic competences teachers need to effectively teach civic education and (2) how teachers acquire these personal and professional civic competences. To answer the first research question, a systematic review of the literature on existing civic education frameworks was carried out and linked to the literature on teacher training. The second research question was addressed by adapting the Octagon model, developed by the International Association for the Evaluation of Educational Achievement (IEA), to the context of teachers. This was done by carrying out a systematic review of the recent literature linking three theoretical topics involved in teachers’ civic competence development: theories about the civic socialization process of young adolescents; Shulman’s (1987) theoretical assumptions on pedagogical content knowledge (PCK) together with Nogueira & Moreira’s (2012) framework for civic education teachers’ knowledge; and the literature on teachers’ professional development. This resulted in a comprehensive conceptual framework describing the personal and professional civic competences of civic education teachers. In addition, this framework is linked to the OctagonT model: a model that describes the processes through which teachers acquire these personal and professional civic competences. This model recognizes that teachers’ civic socialization process is influenced by interconnected variables located at different levels in a multi-level structure: the individual teacher (e.g., civic beliefs), everyday contacts (e.g., teacher educators, the intended, informal and hidden curriculum of the teacher training program, internship contacts, participation opportunities in teacher training, etc.) and the national educational context (e.g., the vision on civic education). Furthermore, implications for teacher education programs are described.

Keywords: civic education, civic competences, civic socialization, octagon model, teacher training

Procedia PDF Downloads 258
833 Approaching Sexual Violence Against People with Disabilities in Colombia from a Qualitative Perspective

Authors: Mariana Calderón, Rocío Murad, Natalia Acevedo, Laura León, Juliana Fonseca, Maria de los Angeles Balaguera Villa

Abstract:

Recently, different countries and international organizations have put the elimination of violence against people with disabilities on their agendas. This research aims to evaluate the social dimensions of sexual violence against people with disabilities, particularly those with psychosocial and cognitive disabilities, in Colombia. Results reveal that 55% of people with disabilities who are survivors of sexual violence are younger than 29 years, and 20.4% are people with cognitive and psychosocial disabilities. Colombian regions with better social positions presented more cases of sexual violence against people with disabilities. Barriers to access to health, education and employment were found among this population, and poor data quality was also found. Although Colombia has an important normative framework aimed at preventing and attending to gender-based violence, it does not take into account the specific needs of people with disabilities. Additionally, insufficient implementation and appropriation of these norms, negative attitudes and, in general, a lack of adaptation of services to the needs, identities and circumstances of people with disabilities were found. Furthermore, among the factors exposing people with disabilities to sexual violence, it was found that family members tend to be the main aggressors, that there are deep gaps in the sex education received by people with disabilities, and that imaginaries and perceptions about their sexuality both hypersexualize them and present them as asexual. On the other hand, protective factors included body self-knowledge and awareness, acknowledgment of their sexuality and their sexual and reproductive rights, and access to sex education. Although during the last few years there has been a positive change toward the social inclusion of people with disabilities, specifically through their role in the political agenda and the recognition of their rights, more work is needed in order to guarantee their sexual and reproductive rights, particularly for persons with psychosocial and cognitive disabilities. The results of this research showed the importance of transforming persisting negative imaginaries about their sexuality and also of enforcing and promoting their autonomy. In this sense, it is important to acknowledge the gaps and barriers faced by them and to create strategies to encourage their social inclusion through education, employment, and skill development. Nevertheless, it is necessary to keep contributing new evidence on the social determinants of health that influence the occurrence of sexual violence. This research understands sexual violence against people with disabilities in a multidimensional manner and offers the following recommendations: 1) to foment public sensitization and understanding of disabilities; 2) to increase parents’, caregivers’ and officers’ commitment to the prevention and reduction of sexual violence; 3) to focus on the needs, identities and circumstances of people with disabilities.

Keywords: disabilities, sexual and reproductive rights, sexual violence, prevention

Procedia PDF Downloads 65
832 Systematic Evaluation of Convolutional Neural Network on Land Cover Classification from Remotely Sensed Images

Authors: Eiman Kattan, Hong Wei

Abstract:

In using a Convolutional Neural Network (CNN) for classification, there is a set of hyperparameters available for configuration purposes. This study aims to evaluate the impact of a range of parameters in a CNN architecture, i.e. AlexNet, on land cover classification based on four remotely sensed datasets. The evaluation tests the influence of a set of hyperparameters on the classification performance. The parameters concerned are epoch values, batch size, and convolutional filter size against input image size. Thus, a set of experiments was conducted to specify the effectiveness of the selected parameters using two implementation approaches, namely pretrained and fine-tuned. We first explore the number of epochs under several selected batch size values (32, 64, 128 and 200). The impact of the kernel size of the convolutional filters (1, 3, 5, 7, 10, 15, 20, 25 and 30) was evaluated against the image size under testing (64, 96, 128, 180 and 224), which gave us insight into the relationship between the size of the convolutional filters and the image size. To generalise the validation, four remote sensing datasets, AID, RSD, UCMerced and RSCCN, which have different land covers and are publicly available, were used in the experiments. These datasets have a wide diversity of input data, such as the number of classes, the amount of labelled data, and texture patterns. A specifically designed interactive deep learning GPU training platform for image classification (NVIDIA DIGITS) was employed in the experiments. It has shown efficiency in both training and testing. The results have shown that increasing the number of epochs leads to a higher accuracy rate, as expected. However, the convergence state is highly related to the dataset. For the batch size evaluation, it has been shown that a larger batch size slightly decreases the classification accuracy compared to a small batch size. For example, selecting the value 32 as the batch size on the RSCCN dataset achieves an accuracy rate of 90.34% at the 11th epoch, while decreasing the epoch value to one makes the accuracy rate drop to 74%. At the other extreme, setting an increased batch size of 200 gives an accuracy rate of 86.5% at the 11th epoch, and 63% when using one epoch only. On the other hand, the selection of the kernel size is loosely related to the dataset. From a practical point of view, the filter size 20 produces 70.4286%. The last experiment, on image size, shows a dependency in the accuracy improvement; however, the performance gain came at a considerable computational cost. These conclusions open opportunities toward better classification performance in various applications such as planetary remote sensing.
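
A minimal sketch of this kind of hyperparameter sweep, written in PyTorch rather than the NVIDIA DIGITS platform used in the study: random tensors stand in for the AID/RSD/UCMerced/RSCCN datasets, and a tiny AlexNet-style network replaces the full model, so the numbers it prints are illustrative only.

```python
# Minimal sketch (assumptions: random tensors instead of the remote-sensing
# datasets; a tiny AlexNet-style net instead of the full model; not the
# NVIDIA DIGITS configuration used in the study).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def make_net(kernel_size, n_classes=10):
    # The first convolution mimics AlexNet's large input filter; its size is the swept factor.
    return nn.Sequential(
        nn.Conv2d(3, 16, kernel_size, padding=kernel_size // 2), nn.ReLU(),
        nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        nn.Linear(16 * 4 * 4, n_classes),
    )

def run_trial(batch_size, kernel_size, image_size, epochs=2, n_samples=256):
    x = torch.randn(n_samples, 3, image_size, image_size)
    y = torch.randint(0, 10, (n_samples,))
    loader = DataLoader(TensorDataset(x, y), batch_size=batch_size, shuffle=True)
    net = make_net(kernel_size)
    opt = torch.optim.SGD(net.parameters(), lr=0.01, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for xb, yb in loader:
            opt.zero_grad()
            loss_fn(net(xb), yb).backward()
            opt.step()
    with torch.no_grad():                       # accuracy on the stand-in data
        return (net(x).argmax(1) == y).float().mean().item()

for bs in (32, 200):                            # batch-size extremes from the study
    for ks in (3, 7, 15):                       # a few of the swept kernel sizes
        print(bs, ks, f"{run_trial(bs, ks, image_size=64):.3f}")
```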

Keywords: CNNs, hyperparameters, remote sensing, land cover, land use

Procedia PDF Downloads 157
831 The Analysis of Swales Model (CARS Model) in the UMT Final Year Engineering Students

Authors: Kais Amir Kadhim

Abstract:

Context: The study focuses on the rhetorical structure of chapters in engineering final year projects, specifically the Introduction chapter, written by UMT (University of Marine Technology) engineering students. Existing research has explored the use of genre-based approaches to analyze the writing of final year projects in various disciplines. Research Aim: The aim of this study is to investigate the rhetorical structure of Introduction chapters in engineering final year projects by UMT students. The study aims to identify the frequency of communicative moves and their constituent steps within the Introduction chapters, as well as understand how students justify their research projects. Methodology: The research design will utilize a mixed method approach, combining both quantitative and qualitative methods. Forty Introduction chapters from two different fields in UMT engineering undergraduate programs will be selected for analysis. Findings: The study intends to identify the types of moves present in the Introduction chapters of engineering final year projects by UMT students. Additionally, it aims to determine if these moves and steps are obligatory, conventional, or optional. Theoretical Importance: The study draws upon Bunton's modified CARS (Creating a Research Space) model, which is a conceptual framework used for analyzing the introduction sections of theses. By applying this model, the study contributes to the understanding of the rhetorical structure of Introduction chapters in engineering final year projects. Data Collection: The study will collect data from forty Introduction chapters of engineering final year projects written by UMT engineering students. These chapters will be selected from two different fields within UMT's engineering undergraduate programs. Analysis Procedures: The analysis will involve identifying and categorizing the communicative moves and their constituent steps within the Introduction chapters. The study will utilize both quantitative and qualitative analysis methods to examine the frequency and nature of these moves. Question Addressed: The study aims to address the question of how UMT engineering students structure and justify their research projects within the Introduction chapters of their final year projects. Conclusion: The study aims to contribute to the knowledge of rhetorical structure in engineering final year projects by investigating the Introduction chapters written by UMT engineering students. By using a mixed method research design and applying the modified CARS model, the study intends to identify the types of moves and steps employed by students and explore their justifications for their research projects. The findings have the potential to enhance the understanding of effective academic writing in engineering disciplines.

Keywords: cohesive markers, learning, meaning, students

Procedia PDF Downloads 63
830 Laser Paint Stripping on Large Zones on AA 2024 Based Substrates

Authors: Selen Unaldi, Emmanuel Richaud, Matthieu Gervais, Laurent Berthe

Abstract:

Aircraft are painted with several layers to guarantee their protection from external attacks. For aluminum AA 2024-T3 (the metallic structural part of the plane), a protective primer is applied to ensure its corrosion protection. On top of this layer, the top coat is applied for aesthetic reasons. During the lifetime of an aircraft, top coat stripping plays an essential role and has to be carried out on average every four years. However, since conventional stripping processes create hazardous waste and require long hours of labor, alternative methods have been investigated. Amongst them, laser stripping appears as one of the most promising techniques, not only because of the reasons mentioned above but also because of its controllable and monitorable aspects. The application of a laser beam from the coated side provides stripping, but the depth of the process should be well controlled in order to prevent damage to the substrate and the anticorrosion primer. Apart from that, thermal effects on the painted layers should be taken into account. As an alternative, we worked on developing a process that uses shock wave propagation to create the stripping via mechanical effects, with the application of the beam from the substrate side (back face) of the samples. Laser stripping was applied on thickness-specified samples with a thickness deviation of 10-20%. First, the stripping threshold is determined as a function of power density, defined as the first fly-off of the top coat. After obtaining the threshold values, the same power densities were applied to specimens to create large stripping zones with a spot overlap of 10-40%. Layer characteristics were determined on specimens in terms of physicochemical properties and thickness range, both before and after laser stripping, in order to validate the substrate material health and the coating properties. The substrate health is monitored by measuring the roughness of the laser-impacted zones and by free surface energy tests (both before and after laser stripping). Also, the Hugoniot Elastic Limit (HEL) is determined from the VISAR diagnostic on AA 2024-T3 substrates (for the back face surface deformations). In addition, the coating properties are investigated in terms of adhesion levels and anticorrosion properties (neutral salt spray test). The influence of the polyurethane top-coat thickness is studied in order to verify the laser stripping process window for industrial aircraft applications.

Keywords: aircraft coatings, laser stripping, laser adhesion tests, epoxy, polyurethane

Procedia PDF Downloads 67
829 Functionalization of Carbon-Coated Iron Nanoparticles with Fluorescent Protein

Authors: A. G. Pershina, P. S. Postnikov, M. E. Trusova, D. O. Burlakova, A. E. Sazonov

Abstract:

The invention of magnetic-fluorescent nanocomposites is a rapidly developing area of research. The attractiveness of magnetic-fluorescent nanocomposites lies in the ability to simultaneously manipulate and monitor them by two independent methods based on different physical principles. These nanocomposites are applied to the solution of various essential scientific and experimental biomedical problems. The aim of this research is the development of a principal approach to the design of nanobiohybrid structures with magnetic and fluorescent properties. The surface of carbon-coated iron nanoparticles (Fe@C) was covalently modified by 4-carboxy benzenediazonium tosylate. The recombinant fluorescent protein TagGFP2 (Evrogen) was obtained in E. coli (Rosetta DE3) by standard laboratory techniques. Immobilization of TagGFP2 on the nanoparticle surface was achieved by carbodiimide activation. The amount of COOH groups on the nanoparticle surface was estimated by elemental analysis (Elementar Vario Macro) and TGA analysis (SDT Q600, TA Instruments). The obtained nanocomposites were analyzed by FTIR spectroscopy (Nicolet Thermo 5700) and fluorescence microscopy (AxioImager M1, Carl Zeiss). The amount of protein immobilized on the modified nanoparticle surface was determined by fluorimetry (Cary Eclipse) and spectrophotometry (Unico 2800) with the help of previously obtained calibration plots. In the FTIR spectra of the modified nanoparticles, the adsorption band of the –COOH group around 1700 cm-1 and bands in the region of 450-850 cm-1 caused by bending vibrations of the benzene ring were observed. The calculated quantity of active groups on the surface was equal to 0.1 mmol/g of material. The carbodiimide activation of the COOH groups on the nanoparticle surface results in covalent immobilization of the TagGFP2 fluorescent protein (0.2 nmol/mg). The success of the immobilization was proved by FTIR spectroscopy. Protein characteristic adsorption bands in the region of 1500-1600 cm-1 (amide I) were present in the FTIR spectrum of the nanocomposite. The fluorescence microscopy analysis shows that the Fe@C-TagGFP2 nanocomposite possesses fluorescence properties. This fact confirms that the TagGFP2 protein retains its conformation upon immobilization on the nanoparticle surface. The magnetic-fluorescent nanocomposite was thus obtained through the implementation of a unique design solution: the fluorescent protein molecules were fixed to the surface of superparamagnetic carbon-coated iron nanoparticles using original diazonium salts.

Keywords: carbon-coated iron nanoparticles, diazonium salts, fluorescent protein, immobilization

Procedia PDF Downloads 333
828 Robust Electrical Segmentation for Zone Coherency Delimitation Based on Multiplex Graph Community Detection

Authors: Noureddine Henka, Sami Tazi, Mohamad Assaad

Abstract:

The electrical grid is a highly intricate system designed to transfer electricity from production areas to consumption areas. The Transmission System Operator (TSO) is responsible for ensuring the efficient distribution of electricity and maintaining the grid's safety and quality. However, due to the increasing integration of intermittent renewable energy sources, there is a growing level of uncertainty, which requires a faster, more responsive approach. A potential solution involves the use of electrical segmentation, which consists of creating coherent zones where electrical disturbances mainly remain within the zone. Indeed, by means of coherent electrical zones, it becomes possible to focus solely on the sub-zone, reducing the range of possibilities and aiding in managing uncertainty. It allows faster execution of operational processes and easier learning for supervised machine learning algorithms. Electrical segmentation can be applied to various applications, such as electrical control, minimizing electrical losses, and ensuring voltage stability. Since the electrical grid can be modeled as a graph, where the vertices represent electrical buses and the edges represent electrical lines, identifying coherent electrical zones can be seen as a clustering task on graphs, generally called community detection. Nevertheless, a critical criterion for the zones is their ability to remain resilient to the electrical evolution of the grid over time. This evolution is due to the constant changes in electricity generation and consumption, which are reflected in graph structure variations as well as line flow changes. One approach to creating a resilient segmentation is to design robust zones under various circumstances. This issue can be represented through a multiplex graph, where each layer represents a specific situation that may arise on the grid. Consequently, resilient segmentation can be achieved by conducting community detection on this multiplex graph. The multiplex graph is composed of multiple graphs, and all the layers share the same set of vertices. Our proposal involves a model that utilizes a unified representation to compute a flattening of all layers. This unified situation can be penalized to obtain K connected components representing the robust electrical segmentation clusters. We compare our robust segmentation to a segmentation based on a single reference situation. The robust segmentation proves its relevance by producing clusters with high intra-cluster electrical perturbation and low variance of electrical perturbation. The experiments show when robust electrical segmentation provides a benefit and in which context.
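
A minimal sketch of the flatten-then-cluster idea (not the authors' penalized model): the layers of a small multiplex graph sharing one vertex set are aggregated by summing edge weights, and communities are detected on the flattened graph with networkx's greedy modularity heuristic. Buses, layers and weights below are hypothetical.

```python
# Illustrative sketch (assumption: greedy modularity replaces the authors'
# penalized unified representation). Buses, layers and weights are hypothetical.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

buses = range(8)                                  # shared vertex set of all layers
layers = [                                        # one edge list per grid situation
    [(0, 1, 1.0), (1, 2, 0.8), (2, 3, 0.2), (3, 4, 1.0), (4, 5, 0.9), (5, 6, 0.1), (6, 7, 1.0)],
    [(0, 1, 0.9), (1, 2, 1.0), (2, 3, 0.1), (3, 4, 0.7), (4, 5, 1.0), (5, 6, 0.2), (6, 7, 0.8)],
]

flat = nx.Graph()                                 # unified (flattened) representation
flat.add_nodes_from(buses)
for layer in layers:                              # aggregate per-layer weights
    for u, v, w in layer:
        flat.add_edge(u, v, weight=flat.get_edge_data(u, v, {"weight": 0.0})["weight"] + w)

zones = greedy_modularity_communities(flat, weight="weight")
print([sorted(z) for z in zones])                 # candidate coherent electrical zones
```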

Keywords: community detection, electrical segmentation, multiplex graph, power grid

Procedia PDF Downloads 61
827 Monte Carlo Simulation Study on Improving the Flattening Filter-Free Radiotherapy Beam Quality Using Filters from Low-Z Material

Authors: H. M. Alfrihidi, H.A. Albarakaty

Abstract:

Flattening filter-free (FFF) photon beam radiotherapy has increased in the last decade, enabled by advancements in treatment planning systems and radiation delivery techniques like multi-leaf collimators. FFF beams have higher dose rates, which reduces treatment time. On the other hand, FFF beams have a higher surface dose, which is due to the loss of the beam hardening effect normally provided by the presence of the flattening filter (FF). The possibility of improving FFF beam quality using filters made of low-Z materials such as steel and aluminium (Al) was investigated using Monte Carlo (MC) simulations. The attenuation coefficient of low-Z materials for low-energy photons is higher than that for high-energy photons, which leads to hardening of the FFF beam and, consequently, a reduction in the surface dose. The BEAMnrc user code, based on the Electron Gamma Shower (EGSnrc) MC code, was used to simulate the beam of a 6 MV TrueBeam linac. A phase-space file provided by Varian Medical Systems was used as the radiation source in the simulation. This phase-space file was scored just above the jaws, at 27.88 cm from the target. The linac from the jaws downward was constructed, and the radiation passing through was simulated and scored at 100 cm from the target. To study the effect of the low-Z filters, steel and Al filters with a thickness of 1 cm were added below the jaws, and the phase-space file was scored at 100 cm from the target. For comparison, the FF beam was simulated using a similar setup. The BEAM Data Processor (BEAMdp) was used to analyse the energy spectrum in the phase-space files. Then, the dose distribution resulting from these beams was simulated in a homogeneous water phantom using DOSXYZnrc. The dose profile was evaluated in terms of the surface dose, the lateral dose distribution, and the percentage depth dose (PDD). The energy spectra of the beams show that the FFF beam is softer than the FF beam. The energy peaks for the FFF and FF beams are 0.525 MeV and 1.52 MeV, respectively. The FFF beam's energy peak shifts to 1.1 MeV when using a steel filter, while the Al filter does not affect the peak position. The steel and Al filters reduced the surface dose by 5% and 1.7%, respectively. The dose at a depth of 10 cm (D10) rises by around 2% and 0.5% when using a steel and an Al filter, respectively. On the other hand, the steel and Al filters reduce the dose rate of the FFF beam by 34% and 14%, respectively. However, their effect on the dose rate is less than that of the tungsten FF, which reduces the dose rate by about 60%. In conclusion, filters made of low-Z material decrease the surface dose and increase the D10 dose, allowing for high-dose delivery to deep tumors with a low skin dose. Although using these filters affects the dose rate, this effect is much lower than the effect of the FF.

Keywords: flattening filter free, monte carlo, radiotherapy, surface dose

Procedia PDF Downloads 60
826 O-Functionalized CNT Mediated CO Hydro-Deoxygenation and Chain Growth

Authors: K. Mondal, S. Talapatra, M. Terrones, S. Pokhrel, C. Frizzel, B. Sumpter, V. Meunier, A. L. Elias

Abstract:

Worldwide energy independence relies on the ability to leverage locally available resources for fuel production. Recently, syngas produced through the gasification of carbonaceous materials has provided a gateway to a host of processes for the production of various chemicals, including transportation fuels. The basis of the production of gasoline and diesel-like fuels is the Fischer-Tropsch Synthesis (FTS) process: a catalyzed chemical reaction that converts a mixture of carbon monoxide (CO) and hydrogen (H2) into long chain hydrocarbons. Until now, it has been argued that only transition metal catalysts (usually Co or Fe) are active toward CO hydrogenation and subsequent chain growth in the presence of hydrogen. In this paper, we demonstrate that carbon nanotube (CNT) surfaces are also capable of hydro-deoxygenating CO and producing long chain hydrocarbons similar to those obtained through FTS, but with orders of magnitude higher conversion efficiencies than the present state-of-the-art FTS catalysts. We have used advanced experimental tools such as XPS and microscopy techniques to characterize the CNTs and identify C-O functional groups as the active sites for the enhanced catalytic activity. Furthermore, we have conducted quantum Density Functional Theory (DFT) calculations to confirm that C-O groups (inherent on CNT surfaces) can indeed be catalytically active towards the reduction of CO with H2 and are capable of sustaining chain growth. The DFT calculations have shown that the kinetically and thermodynamically feasible route for CO insertion and hydro-deoxygenation is different from that on transition metal catalysts. Experiments in a continuous-flow tubular reactor with various nearly metal-free CNTs have been carried out and the products have been analyzed. CNTs functionalized by various methods were evaluated under different conditions. Reactor tests revealed that hydrogen pre-treatment reduced the activity of the catalysts to negligible levels. Without the pretreatment, the activity for CO conversion was found to be 7 µmol CO/g CNT/s. The O-functionalized samples showed activities greater than 85 µmol CO/g CNT/s with nearly 100% conversion. Analyses show that CO hydro-deoxygenation occurred at the C-O/O-H functional groups. It was found that while the products were similar to FT products, differences in selectivities were observed, which, in turn, was a result of a different catalytic mechanism. These findings open a new paradigm for CNT-based hydrogenation catalysts and constitute a defining point for obtaining clean, earth-abundant, alternative fuels through the use of efficient and renewable catalysts.

Keywords: CNT, CO Hydrodeoxygenation, DFT, liquid fuels, XPS, XTL

Procedia PDF Downloads 331
825 Formulation and In Vivo Evaluation of Salmeterol Xinafoate Loaded MDI for Asthma Using Response Surface Methodology

Authors: Paresh Patel, Priya Patel, Vaidehi Sorathiya, Navin Sheth

Abstract:

The aim of the present work was to fabricate a Salmeterol Xinafoate (SX) metered dose inhaler (MDI) for asthma and to evaluate SX-loaded solid lipid nanoparticles (SLNs) for pulmonary delivery. Solid lipid nanoparticles can be used to deliver particles to the lungs via an MDI. A modified solvent emulsification diffusion technique was used to prepare the Salmeterol Xinafoate loaded solid lipid nanoparticles, using Compritol 888 ATO as the lipid, Tween 80 as the surfactant, D-mannitol as the cryoprotecting agent, and L-leucine to improve the aerosolization behaviour. A Box-Behnken design was applied with 17 runs. 3-D response surface plots and contour plots were drawn, and the optimized formulation was selected based on minimum particle size and maximum % EE. The % yield, in vitro diffusion, scanning electron microscopy, X-ray diffraction, DSC and FTIR were also characterized. Particle size and zeta potential were analyzed with a Zetatrac particle size analyzer, and the aerodynamic properties were determined with a cascade impactor. The preconvulsion time was examined for the control group and the treatment group and compared with the marketed group. The MDI was evaluated by leakage test, flammability test, spray test and content per puff. By experimental design, the particle size and % EE were found to be in the range of 119-337 nm and 62.04-76.77%, respectively, with the solvent emulsification diffusion technique. Morphologically, the particles have a spherical shape and uniform distribution. The DSC and FTIR studies showed no interaction between the drug and the excipients. The zeta potential shows good stability of the SLNs. The respirable fraction was found to be 52.78%, indicating that the particles reach the deep parts of the lung such as the alveoli. The animal study showed that the fabricated MDI protects the lungs against histamine-induced bronchospasm in guinea pigs. The MDI showed spherical particles in the spray pattern, 96.34% content per puff, and was non-flammable. SLNs prepared by the solvent emulsification diffusion technique provide a desirable size for deposition into the alveoli. This delivery platform opens up a wide range of treatment applications for pulmonary diseases like asthma via solid lipid nanoparticles.
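
For reference, a three-factor Box-Behnken design with five centre points yields exactly 17 runs (12 edge midpoints plus 5 centre replicates). The sketch below generates such a coded design and decodes it to actual levels; the factor names and ranges are hypothetical placeholders, not the study's formulation variables.

```python
# Minimal sketch (assumptions: three hypothetical factors and ranges; the
# actual formulation variables and levels in the study may differ).
import itertools
import numpy as np

def box_behnken_3factor(n_center=5):
    """Coded 3-factor Box-Behnken design: 12 edge-midpoint runs + centre points."""
    runs = []
    for i, j in itertools.combinations(range(3), 2):    # each pair of factors at +/-1
        for a, b in itertools.product((-1, 1), repeat=2):
            row = [0, 0, 0]
            row[i], row[j] = a, b                        # the third factor stays at 0
            runs.append(row)
    runs += [[0, 0, 0]] * n_center                       # replicated centre runs
    return np.array(runs, dtype=float)                   # 12 + n_center rows

# Hypothetical factors: lipid (mg), surfactant (%), sonication time (min)
low = np.array([100.0, 0.5, 2.0])
high = np.array([300.0, 2.0, 10.0])
coded = box_behnken_3factor()                            # 17 x 3 coded matrix
actual = (low + high) / 2 + coded * (high - low) / 2     # decode to real levels

for run in actual:
    print(run)   # one batch per row; responses (size, % EE) are measured afterwards
```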

Keywords: salmeterol xinafoate, solid lipid nanoparticles, box-behnken design, solvent emulsification diffusion technique, pulmonary delivery

Procedia PDF Downloads 440
824 Extraction and Quantification of Peramine Present in Dalaca pallens, a Pest of Grassland in Southern Chile

Authors: Leonardo Parra, Daniel Martínez, Jorge Pizarro, Fernando Ortega, Manuel Chacón-Fuentes, Andrés Quiroz

Abstract:

Control of Dalaca pallens, or blackworms, one of the most important hypogeous pests of grassland in southern Chile, is based on the use of broad-spectrum insecticides such as organophosphates and pyrethroids. However, the rapid development of insecticide resistance in field populations of this insect and public concern over the environmental impact of these insecticides have resulted in a search for other control methods. Specifically, the use of endophytic fungi for controlling pests has emerged as an interesting and promising strategy. Endophytes from ryegrass (Lolium perenne) establish a biotrophic relationship with the host, defined as a mutualistic symbiosis. The plant-fungus association produces alkaloids, of which peramine is the main toxic substance against Listronotus bonariensis, the most important epigean pest of ryegrass. Nevertheless, the effect of peramine on other pest insects, such as D. pallens, has to our knowledge not been studied, nor has its possible metabolization in the body of the larvae. Therefore, we addressed the following research question: do larvae of D. pallens store peramine after consumption of endophyte-infected (E+) ryegrass? For this, specimens of blackworms were fed with ryegrass plants of seven experimental lines and one commercial endophyte-free (E-) cultivar sown at the Instituto de Investigaciones Agropecuarias Carillanca (Vilcún, Chile). Once the feeding period was over, ten larvae of each treatment were examined. The individuals were dissected, and their gut was removed to exclude any influence of remaining plant material. The rest of the larva's body was dried at 60°C for 24-48 h and ground into a fine powder using a mortar. 25 mg of dry powder was transferred to a microcentrifuge tube and extracted in 1 mL of a mixture of methanol:water:formic acid. Then, the samples were centrifuged at 16,000 rpm for 3 min, and the supernatant was collected and injected into the high-performance liquid chromatography (HPLC) system. The results confirmed the presence of peramine in the larval body of D. pallens. The insects that fed on the experimental lines LQE-2 and LQE-6 were those in which peramine was present in the highest proportion (0.205 and 0.199 ppm, respectively), while LQE-7 and LQE-3 gave the lowest concentrations of the alkaloid (0.047 and 0.053 ppm, respectively). Peramine was not detected in the insects when the control cultivar Jumbo (E-) was tested. These results evidence the storage and metabolism of peramine during consumption by the larvae. However, the effect of this alkaloid present in 'future ryegrass cultivars' (LQE-2 and LQE-6) on the performance and survival of blackworms must be studied and confirmed experimentally.

Keywords: blackworms, HPLC, alkaloid, pest

Procedia PDF Downloads 289
823 Adaptation of Hough Transform Algorithm for Text Document Skew Angle Detection

Authors: Kayode A. Olaniyi, Olabanji F. Omotoye, Adeola A. Ogunleye

Abstract:

Skew detection and correction form an important part of digital document analysis. This is because uncompensated skew can deteriorate document features and can complicate further document image processing steps. Efficient text document analysis and digitization can rarely be achieved when a document is skewed even at a small angle. Once the documents have been digitized through the scanning system and binarization has been achieved, document skew correction is required before further image analysis. Research efforts have been put into this area, with algorithms developed to eliminate document skew. Skew angle correction algorithms can be compared based on performance criteria. The most important performance criteria are the accuracy of skew angle detection, the range of skew angles for detection, the speed of processing the image, the computational complexity and, consequently, the memory space used. The standard Hough Transform has successfully been implemented for text document skew angle estimation. However, the accuracy of the standard Hough Transform algorithm depends largely on how fine the angle step size is. This consequently consumes more time and memory space for increased accuracy, especially where the number of pixels is considerably large. Whenever the Hough transform is used, there is always a trade-off between accuracy and speed. So a more efficient solution is needed that optimizes space as well as time. In this paper, an improved Hough transform (HT) technique that optimizes space as well as time to robustly detect document skew is presented. The modified Hough Transform algorithm presents a solution to the contradiction between memory space, running time and accuracy. Our algorithm starts with a first step of angle estimation accurate to zero decimal places using the standard Hough Transform algorithm, achieving minimal running time and space but lacking relative accuracy. Then, to increase accuracy, if the estimated angle found using the basic Hough algorithm is x degrees, we run the basic algorithm again over a narrow range around x degrees with an accuracy of one decimal place. The same process is iterated until the desired level of accuracy is achieved. The procedure of our skew estimation and correction algorithm for text images is implemented using MATLAB. The memory space estimation and processing time are also tabulated under the assumption of skew angles between 0° and 45°. The simulation results, demonstrated in MATLAB, show the high performance of our algorithms, with less computational time and memory space used in detecting document skew for a variety of documents with different levels of complexity.
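
The paper's implementation is in MATLAB; the following Python sketch only illustrates the coarse-to-fine idea described above: a standard Hough accumulation over whole-degree angles, followed by a re-run over a narrow window around the best angle with a ten-times finer step. The peakiness criterion and the synthetic test image are illustrative choices.

```python
# Illustrative Python sketch of the coarse-to-fine idea (the paper's code is in
# MATLAB). The sharpness criterion and the synthetic "document" are placeholders.
import numpy as np
from scipy.ndimage import rotate

def hough_skew(binary_img, angles_deg):
    """Return the candidate angle whose Hough projection gives the peakiest rho histogram."""
    ys, xs = np.nonzero(binary_img)                      # foreground (text) pixels
    best_angle, best_score = angles_deg[0], -np.inf
    for ang in angles_deg:
        t = np.deg2rad(ang)
        # Hough accumulation with theta = 90 deg + candidate skew, so text lines
        # collapse to sharp rho peaks exactly when `ang` equals the document skew.
        rho = ys * np.cos(t) - xs * np.sin(t)
        hist, _ = np.histogram(rho, bins=binary_img.shape[0])
        score = np.var(hist)                             # text lines -> peaky histogram
        if score > best_score:
            best_angle, best_score = ang, score
    return best_angle

def coarse_to_fine_skew(binary_img, max_angle=45.0, decimals=1):
    step = 1.0
    est = hough_skew(binary_img, np.arange(-max_angle, max_angle + step, step))  # whole degrees
    for _ in range(decimals):                            # refine one decimal place at a time
        step /= 10.0
        est = hough_skew(binary_img, np.arange(est - 10 * step, est + 10 * step + step, step))
    return est

# Synthetic skewed "document": horizontal text lines rotated by a few degrees
img = np.zeros((200, 200), dtype=bool)
img[40:160:12, 30:170] = True
skewed = rotate(img.astype(float), angle=-7.3, reshape=False) > 0.5
print("estimated skew:", coarse_to_fine_skew(skewed))
```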

Keywords: hough-transform, skew-detection, skew-angle, skew-correction, text-document

Procedia PDF Downloads 143
822 Tuning of Indirect Exchange Coupling in FePt/Al₂O₃/Fe₃Pt System

Authors: Rajan Goyal, S. Lamba, S. Annapoorni

Abstract:

The indirect exchange coupled system consists of two ferromagnetic layers separated by a non-magnetic spacer layer. The exchange coupling may be either ferromagnetic or antiferromagnetic depending on the thickness of the spacer layer. In the present work, the strength of exchange coupling in FePt/Al₂O₃/Fe₃Pt has been investigated by varying the thickness of the spacer layer Al₂O₃. The FePt/Al₂O₃/Fe₃Pt trilayer structure is fabricated on a Si <100> single-crystal substrate using the sputtering technique. The thicknesses of FePt and Fe₃Pt are fixed at 60 nm and 2 nm, respectively. The thickness of the Al₂O₃ spacer layer was varied from 0 to 16 nm. The normalized hysteresis loops recorded at room temperature in both the in-plane and out-of-plane configurations reveal that the orientation of the easy axis lies along the plane of the film. It is observed that the hysteresis loop for ts = 0 nm does not exhibit any knee around H = 0, indicating that the hard FePt layer and the soft Fe₃Pt layer are strongly exchange coupled. However, the insertion of an Al₂O₃ spacer layer of thickness ts = 0.7 nm results in the appearance of a minor knee around H = 0, suggesting the weakening of exchange coupling between FePt and Fe₃Pt. The disappearance of the knee in the hysteresis loop with further increase in spacer layer thickness up to 8 nm suggests the co-existence of ferromagnetic (FM) and antiferromagnetic (AFM) exchange interactions between FePt and Fe₃Pt. In addition, the out-of-plane hysteresis loop shows an asymmetry around H = 0. The exchange field Hex = (Hc↑ - Hc↓)/2, where Hc↑ and Hc↓ are the coercivities estimated from the lower and upper branches of the hysteresis loop, increases from ~150 Oe to ~700 Oe. This behavior may be attributed to the uncompensated moments in the hard FePt layer and the soft Fe₃Pt layer at the interface. Better insight into the variation in indirect exchange coupling has been obtained using recoil curves. It is observed that almost closed recoil curves are obtained for ts = 0 nm up to a reverse field of ~5 kOe. On the other hand, the appearance of appreciably open recoil curves at a lower reverse field of ~4 kOe for ts = 0.7 nm indicates that the uncoupled soft phase undergoes irreversible magnetization reversal at a lower reverse field, suggesting the weakening of exchange coupling. The openness of the recoil curves decreases with increase in spacer layer thickness up to 8 nm. This behavior may be attributed to the competition between FM and AFM exchange interactions: the FM exchange coupling between FePt and Fe₃Pt due to the porous nature of Al₂O₃ decreases much more slowly than the weak AFM coupling due to the interaction between Fe ions of FePt and Fe₃Pt via O ions of Al₂O₃. The hysteresis loop has been simulated using a Monte Carlo method based on the Metropolis algorithm to investigate the variation in the strength of exchange coupling in the FePt/Al₂O₃/Fe₃Pt trilayer system.
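The abstract names the simulation method but not its details. Purely as an illustration of a Metropolis Monte Carlo hysteresis calculation, the sketch below treats the hard FePt-like and soft Fe₃Pt-like layers as two coupled macrospins; reducing the interlayer coupling J_EX lets the soft layer switch early and produces the kind of knee near H = 0 discussed above. All parameters are toy values in reduced units and do not represent the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy parameters in reduced units (illustrative only, not the paper's values).
K_HARD, K_SOFT = 5.0, 0.5     # uniaxial anisotropy of the hard / soft layer
M_HARD, M_SOFT = 1.0, 1.5     # layer moments
J_EX = 1.0                    # interlayer exchange (>0 FM, <0 AFM coupling)
KT = 0.05                     # thermal energy k_B*T

def energy(th_h, th_s, h_field):
    """Two exchange-coupled macrospins in a field applied along the easy axis."""
    return (K_HARD * np.sin(th_h) ** 2 + K_SOFT * np.sin(th_s) ** 2
            - M_HARD * h_field * np.cos(th_h) - M_SOFT * h_field * np.cos(th_s)
            - J_EX * np.cos(th_h - th_s))

def metropolis_sweep(angles, h_field, steps=2000, max_turn=0.2):
    for _ in range(steps):
        i = rng.integers(2)                   # pick the hard (0) or soft (1) layer
        trial = angles.copy()
        trial[i] += rng.uniform(-max_turn, max_turn)
        d_e = energy(*trial, h_field) - energy(*angles, h_field)
        if d_e < 0 or rng.random() < np.exp(-d_e / KT):
            angles[:] = trial                 # accept the Metropolis move
    return angles

def hysteresis_loop(h_max=8.0, n_points=80):
    fields = np.concatenate([np.linspace(h_max, -h_max, n_points),
                             np.linspace(-h_max, h_max, n_points)])
    angles = np.array([0.0, 0.0])             # start saturated along +H
    loop = []
    for h in fields:
        angles = metropolis_sweep(angles, h)
        m = (M_HARD * np.cos(angles[0]) + M_SOFT * np.cos(angles[1])) / (M_HARD + M_SOFT)
        loop.append((h, m))
    return np.array(loop)
```

The coercivities of the two branches of the simulated curve can then be read off to evaluate an exchange field in the same way as Hex = (Hc↑ - Hc↓)/2 above.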

Keywords: indirect exchange coupling, MH loop, Monte Carlo simulation, recoil curve

Procedia PDF Downloads 179
821 Enhancement of Fracture Toughness for Low-Temperature Applications in Mild Steel Weldments

Authors: Manjinder Singh, Jasvinder Singh

Abstract:

Investigations of the Titanic, Liberty ship, and Sydney bridge accidents, together with practical experience, generated an interest in developing weldments that retain high toughness under sub-zero temperature conditions. The purpose was to protect the joint from undergoing the ductile-to-brittle transition (DBT) when the ambient temperature reaches sub-zero levels. Metallurgical improvements such as lowering the carbon content or adding deoxidizing elements like Mn and Si have been effective in preventing fracture (cracking) of weldments at low temperature. In the present research, an attempt has been made to investigate the reason behind the ductile-to-brittle transition of mild steel weldments subjected to sub-zero temperatures and a method for its mitigation. Nickel is added to the weldments using manual metal arc welding (MMAW) to prevent the DBT and the progressive reduction in Charpy impact values as the temperature is lowered. The variation in toughness with respect to the nickel content added to the weld pool is analyzed quantitatively to evaluate the rise in toughness with increasing nickel. The impact performance of the welded specimens was evaluated by Charpy V-notch impact tests at various temperatures (20 °C, 0 °C, -20 °C, -40 °C, -60 °C). A notch is made in the weldments, as notch-sensitive failure is particularly likely to occur at zones of high stress concentration caused by a notch. The effect of nickel on the weldments at various temperatures was then studied by mechanical and metallurgical tests. It was noted that a large gain in impact toughness could be achieved by adding nickel. The highest yield strength (462 J) in combination with good impact toughness (over 220 J at -60 °C) was achieved with an alloying content of 16 wt.% nickel. Based on the metallurgical behavior, it was concluded that the weld metals solidify as austenite with increasing nickel. The microstructure was characterized using optical microscopy and high-resolution scanning electron microscopy (SEM). Mainly martensite was found at inter-dendritic regions. In the dendrite core regions of the low-carbon weld metals, a mixture of upper bainite, lower bainite, and a novel constituent, coalesced bainite, formed. Coalesced bainite was characterized by large bainitic ferrite grains with cementite precipitates and is believed to form when the bainite and martensite start temperatures are close to each other. The mechanical properties could be rationalized in terms of the microstructural constituents as a function of nickel content.
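The transition behaviour measured by the Charpy tests is commonly summarised by fitting a hyperbolic-tangent curve to absorbed energy versus test temperature and reading off the mid-transition temperature. The sketch below shows that standard fit in Python using hypothetical energy values at the test temperatures listed above; it is not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical Charpy V-notch absorbed energies (J) at the test temperatures
# used in the study; placeholder numbers, not the measured results.
temps = np.array([-60.0, -40.0, -20.0, 0.0, 20.0])
energy = np.array([30.0, 60.0, 140.0, 200.0, 225.0])

def charpy_tanh(t, lower, upper, t0, width):
    """Hyperbolic-tangent description of the ductile-to-brittle transition."""
    return (upper + lower) / 2 + (upper - lower) / 2 * np.tanh((t - t0) / width)

params, _ = curve_fit(charpy_tanh, temps, energy, p0=[20.0, 230.0, -20.0, 20.0])
lower_shelf, upper_shelf, dbtt, width = params
print(f"estimated mid-transition temperature (DBTT): {dbtt:.1f} °C")
```

A weldment whose fitted DBTT lies well below the service temperature, as targeted here through nickel additions, retains ductile behaviour at sub-zero temperatures.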

Keywords: MMAW, toughness, DBT, notch, SEM, coalesced bainite

Procedia PDF Downloads 516
820 The Impact of E-Commerce in Changing Shopping Lifestyle of Urban Communities in Jakarta

Authors: Juliana Kurniawati, Helen Diana Vida

Abstract:

Visiting malls is part of the lifestyle of Indonesian communities living in urban areas. Indonesian people, especially those who live in Jakarta, use shopping malls as one of their favourite places for leisure. These mall visitors come from various social classes. They use the shopping mall as a place to identify themselves as urban people. Jakarta has a number of great shopping malls such as Plaza Indonesia, Plaza Senayan, Pondok Indah Mall, etc. Shopping malls have become popular places since Jakarta's public spaces, such as parks and playgrounds, are very limited in number compared to shopping malls. In Jakarta, people do not come to a shopping mall only for shopping. Sometimes they go there to look around, meet up with friends, or watch a movie. We can find everything in the shopping malls. The principle of one-stop shopping is an attractive offer for urban people. The items on sale range from cheap goods to expensive ones. A new era in consumer culture began with the advent of modern shopping, which took shape in France in the 19th century. Since the development of online stores and easier access to the internet, everyone can shop 24 hours a day, anywhere they want. The emergence of online stores indirectly affects the viability of conventional stores. In October 2017, two branded-goods outlets in Indonesia, namely Lotus and Debenhams, were closed. This may be a result of the increasing prevalence of online stores and the shift in the shopping style of urban society. The rise of technology has influenced the development of e-commerce in Indonesia. Everyone can access e-commerce; however, those who actually use it are mostly middle- to upper-class people. The development of e-commerce in Indonesia is quite fast; we can observe the emergence of various online shopping platforms such as Zalora, Berrybenka, Bukalapak, Lazada, and Tokopedia. E-commerce is increasingly affecting people's lives in line with the development of lifestyles and increasing incomes. This research aims to understand the reasons urban society chooses e-commerce as a medium for shopping, how e-commerce is affecting their shopping styles, and why people place their trust in online stores. This research uses David Chaney's theory of lifestyle. The subjects of this research are members of urban communities based in Jakarta who actively shop online on Zalora. The Zalora site was chosen because it sells branded goods. This research is expected to explain in detail the shift in urban shopping style from the shopping mall to digital media, emphasizing the aspect of public confidence in online stores.

Keywords: e-commerce, shopping, lifestyle, changing

Procedia PDF Downloads 284
819 Computational and Experimental Determination of Acoustic Impedance of Internal Combustion Engine Exhaust

Authors: A. O. Glazkov, A. S. Krylova, G. G. Nadareishvili, A. S. Terenchenko, S. I. Yudin

Abstract:

The topic of the presented materials concerns the design of the exhaust system for a certain internal combustion engine. The exhaust system can be divided into two parts. The first comprises the engine exhaust manifold, turbocharger, and catalytic converters, which together are called the "hot part." The second part is the gas exhaust system, which contains elements intended exclusively for reducing exhaust noise (mufflers, resonators), the accepted designation of which is the "cold part." Designing the exhaust system from the point of view of acoustics, that is, reducing the exhaust noise to a predetermined level, consists of working on the second part. Modern computer technology and software make it possible to design the "cold part" with high accuracy in a given frequency range, but only on the condition that the input parameters are accurately specified, namely, the amplitude spectrum of the input noise and the acoustic impedance of the noise source in the form of the engine with the "hot part." Obtaining these data is a difficult problem: high temperatures, high exhaust gas velocities (turbulent flows), and high sound pressure levels (non-linear regime) do not allow purely calculated results to be applied with sufficient accuracy. The aim of this work is to obtain the most reliable acoustic output parameters of the engine with the "hot part" based on a combination of computational and experimental studies. The presented methodology includes several parts. The first part is a finite element simulation of the "cold part" of the exhaust system (taking into account the radiation impedance of the outlet pipe into open space), with the result being the input impedance of the "cold part." The second part is a finite element simulation of the "hot part" of the exhaust system (taking into account the acoustic characteristics of the catalytic units and the geometry of the turbocharger), with the result being the input impedance of the "hot part." The third part of the technique consists of mathematical processing of the results according to the proposed formula for summing the convergent series of multiple reflections of the acoustic signal between the "cold part" and the "hot part." This is followed by a set of tests on an engine stand with two high-temperature pressure sensors measuring pulsations in the pipe between the "hot part" and the "cold part" of the exhaust system, with subsequent processing of the test results according to a well-known technique in order to separate the "incident" and "reflected" waves. The final stage consists of mathematical processing of all calculated and experimental data to obtain the amplitude spectrum of the engine noise and its acoustic impedance.
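The abstract refers to a well-known technique for separating incident and reflected waves from two pressure-sensor signals. A minimal Python sketch of such a two-sensor plane-wave decomposition is given below; the sensor positions, speed of sound, and variable names are illustrative assumptions, and the authors' actual processing chain may differ (e.g., temperature-dependent sound speed or flow corrections).

```python
import numpy as np

def decompose_waves(p1, p2, freqs, x1, x2, c=345.0):
    """Split two measured pressure spectra into incident and reflected plane-wave
    amplitudes (two-sensor decomposition sketch).

    p1, p2 : complex pressure spectra at sensor positions x1, x2 [m]
    freqs  : frequency vector [Hz]
    c      : speed of sound [m/s]; illustrative value, hot exhaust gas is faster
    """
    k = 2 * np.pi * np.asarray(freqs) / c          # wavenumber per frequency bin
    a = np.empty(len(freqs), dtype=complex)        # incident wave (+x direction)
    b = np.empty(len(freqs), dtype=complex)        # reflected wave (-x direction)
    for i, ki in enumerate(k):
        # p(x) = A*exp(-j*k*x) + B*exp(+j*k*x), evaluated at the two sensors.
        m = np.array([[np.exp(-1j * ki * x1), np.exp(1j * ki * x1)],
                      [np.exp(-1j * ki * x2), np.exp(1j * ki * x2)]])
        a[i], b[i] = np.linalg.solve(m, [p1[i], p2[i]])
    return a, b

# The reflection coefficient R = B/A at a chosen reference plane then gives the
# impedance looking into that side, Z = rho*c*(1 + R)/(1 - R). The method is
# ill-conditioned where the sensor spacing is a multiple of half a wavelength.
```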

Keywords: acoustic impedance, engine exhaust system, FEM model, test stand

Procedia PDF Downloads 40
818 Startup Ecosystem in India: Development and Impact

Authors: Soham Chakraborty

Abstract:

This article examines the development of startup culture in India and its impact on Indian society. Another vibrant synonym of 'start-up' in the present century could be 'starting afresh'. Startups have become the new flavor of this decade. A startup ecosystem is formed mainly by the new generation in the making. A startup ecosystem involves a variety of elements without which a startup can never prosper: ideas, inventions, innovations, and authentic research in the field in which one is interested, as well as mentors, advisors, funding bodies, service-provider organizations, angel and venture investors, and so on. The culture of startups is quite nascent but rampant in India. This is largely due to the widespread reach of the media, through which newfangled entrepreneurs can spread their word of mouth far and wide. Different kinds of media, such as television, radio, the internet, and print media, act as a weapon for any startup company in India. The article explores how there is a sudden shift in the growing Indian economy due to the rise of the startup ecosystem. There are various reasons behind the growing success of startups in India: firstly, entrepreneurs are building startup ideas on the basis of various international startups but giving them a pinch of Indian flavor; secondly, business models are framed based on the current problems that people face in the modern century; thirdly, there is a balance between social and technological entrepreneurs; and lastly, the quality of mentorship. The Government of India promotes startups as a flagship initiative. A bundle of benefits and assistance was announced at an event named 'Start Up India, Stand Up India' on 16 January 2016 by the current Prime Minister of India, Mr. Narendra Modi. One of the biggest boons that the increasing number of startups is creating in society is the proliferation of self-employment. Noted startups thriving in India, such as OYO, Where's The Food (WTF), TVF Pitchers, and Flipkart, are examples of how India is being covered by various innovative startups. A deep impact will be felt by every Indian within a few years, as various governmental and non-governmental policies and agendas are helping startups sprawl and mushroom across India. The rise of startups in India is also made possible by increasing globalization, which is eroding national borders and thereby creating an environment in which to enlarge one's business model. To conclude, this article points out the correlation between the rise of startups in the Indian market and the increasing developmental benefits for people at large. Internationally, various business portals are tagging India as the world's fastest growing startup ecosystem.

Keywords: business, ecosystem, entrepreneurs, media, globalization, startup

Procedia PDF Downloads 256
817 Coping Strategies among Caregivers of Children with Autism Spectrum Disorders: A Cluster Analysis

Authors: Noor Ismael, Lisa Mische Lawson, Lauren Little, Murad Moqbel

Abstract:

Background/Significance: Caregivers of children with Autism Spectrum Disorders (ASD) develop coping mechanisms to overcome daily challenges and successfully parent their child. There is variability in the coping strategies used among caregivers of children with ASD. Capturing homogeneity within such a variable group may help elucidate targeted intervention approaches for caregivers of children with ASD. Study Purpose: This study aimed to identify groups of caregivers of children with ASD based on coping mechanisms, and to examine whether there are differences among these groups in terms of strain level. Methods: This study utilized a secondary data analysis and included survey responses of 273 caregivers of children with ASD. Measures consisted of the COPE Inventory and the Caregiver Strain Questionnaire. Data analyses consisted of a cluster analysis to group caregiver coping strategies and an analysis of variance to compare the caregiver coping groups on strain level. Results: The cluster analysis revealed four distinct groups with different combinations of coping strategies: Social-Supported/Planning (group one), Spontaneous/Reactive (group two), Self-Supporting/Reappraisal (group three), and Religious/Expressive (group four). Caregivers in group one (Social-Supported/Planning) demonstrated significantly higher levels of planning, use of instrumental social support, and use of emotional social support relative to the other three groups. Caregivers in group two (Spontaneous/Reactive) used less restraint and less suppression of competing activities as coping strategies relative to the other three groups; group two also showed significantly lower levels of religious coping compared to the other three groups. In contrast to group one, caregivers in group three (Self-Supporting/Reappraisal) demonstrated significantly lower levels of use of instrumental social support and use of emotional social support relative to the other three groups; additionally, caregivers in group three showed more acceptance and more positive reinterpretation and growth coping strategies. Caregivers in group four (Religious/Expressive) demonstrated significantly higher levels of religious coping relative to the other three groups and utilized more venting of emotions strategies. The analysis of variance showed no significant differences between the four groups on the strain scores. Conclusions: There are four distinct caregiver groups with different combinations of coping strategies: Social-Supported/Planning, Spontaneous/Reactive, Self-Supporting/Reappraisal, and Religious/Expressive. Each caregiver group engaged in a combination of coping strategies to overcome the strain of caregiving.
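The abstract does not name the specific clustering algorithm used. Purely as an illustration of the analysis pipeline (cluster caregivers on COPE subscale scores, then compare strain across clusters with a one-way ANOVA), the sketch below uses k-means in Python; the file name, column names, and number of subscales are hypothetical placeholders.

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from scipy.stats import f_oneway

# Hypothetical input: one row per caregiver with COPE subscale scores and a
# total strain score from the Caregiver Strain Questionnaire.
cope_cols = ["planning", "instrumental_support", "emotional_support",
             "restraint", "suppression", "religion", "acceptance",
             "reinterpretation_growth", "venting"]
df = pd.read_csv("caregiver_survey.csv")            # hypothetical file name

# Standardize the coping subscales and partition caregivers into four clusters.
z = StandardScaler().fit_transform(df[cope_cols])
df["cluster"] = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(z)

# Profile each cluster by its mean subscale scores (which strategies dominate).
print(df.groupby("cluster")[cope_cols].mean().round(2))

# One-way ANOVA: do the four coping clusters differ in caregiver strain?
groups = [g["strain_total"].to_numpy() for _, g in df.groupby("cluster")]
f_stat, p_value = f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```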

Keywords: autism, caregivers, cluster analysis, coping strategies

Procedia PDF Downloads 272
816 Reducing Pressure Drop in Microscale Channel Using Constructal Theory

Authors: K. X. Cheng, A. L. Goh, K. T. Ooi

Abstract:

The effectiveness of microchannels in enhancing heat transfer has been demonstrated in the semiconductor industry. In order to tap microscale heat transfer effects in macro geometries while overcoming cost and technological constraints, microscale passages were created in macro geometries machined using conventional fabrication methods. A cylindrical insert was placed within a pipe, and geometrical profiles were created on the outer surface of the insert to enhance heat transfer under steady-state single-phase liquid flow conditions. However, while heat transfer coefficient values above 10 kW/m²·K were achieved, the heat transfer enhancement was accompanied by an undesirable pressure drop increment. Therefore, this study aims to address the high pressure drop issue using Constructal theory, a universal design law for both animate and inanimate systems. Two designs based on Constructal theory were developed to study the effectiveness of Constructal features in reducing the pressure drop increment as compared to parallel channels, which are commonly found in microchannel fabrication. The hydrodynamic and heat transfer performance of the Tree insert and the Constructal fin (Cfin) insert were studied using experimental methods, and the underlying mechanisms were substantiated by numerical results. In technical terms, the objective is to achieve an increment in heat transfer coefficient that is at least comparable to, if not higher than, the increment in pressure drop. Results show that the Tree insert improved the heat transfer performance by more than 16 percent at low flow rates, as compared to the Tree-parallel insert. However, the heat transfer enhancement reduced to less than 5 percent at high Reynolds numbers. On the other hand, the pressure drop increment stayed almost constant at 20 percent. This suggests that the Tree insert has better heat transfer performance in the low Reynolds number region. More importantly, the Cfin insert displayed improved heat transfer performance along with favourable hydrodynamic performance, as compared to the Cfin-parallel insert, at all flow rates in this study. At 2 L/min, the enhancement of heat transfer was more than 30 percent, with a 20 percent pressure drop increment, as compared to the Cfin-parallel insert. Furthermore, comparable increments in both heat transfer coefficient and pressure drop were observed at 8 L/min. In other words, the Cfin insert successfully achieved the objective of this study. Analysis of the results suggests that bifurcation of flows is effective in reducing the increment in pressure drop relative to the heat transfer enhancement. Optimising the geometries of the Constructal fins is therefore a potential future study aimed at achieving a bigger stride in energy efficiency at much lower cost.
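The comparison criterion used above, that the increment in heat transfer coefficient should at least match the increment in pressure drop relative to the parallel-channel baseline, is simple bookkeeping; the short sketch below illustrates it with placeholder numbers rather than the study's measurements.

```python
# Illustrative comparison of an enhanced insert against its parallel-channel
# baseline; the values are placeholders, not the measured data of this study.

def relative_increment(enhanced, baseline):
    """Percentage change of the enhanced design relative to the baseline."""
    return (enhanced - baseline) / baseline * 100.0

h_parallel, h_cfin = 10.0e3, 13.2e3     # heat transfer coefficient [W/m^2.K]
dp_parallel, dp_cfin = 5.0e3, 6.0e3     # pressure drop [Pa]

dh = relative_increment(h_cfin, h_parallel)     # ~32 % enhancement in h
ddp = relative_increment(dp_cfin, dp_parallel)  # ~20 % pressure drop penalty

print(f"heat transfer increment: {dh:.1f} %")
print(f"pressure drop increment: {ddp:.1f} %")
print("objective met" if dh >= ddp else "objective not met")
```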

Keywords: constructal theory, enhanced heat transfer, microchannel, pressure drop

Procedia PDF Downloads 322
815 Home-Country’s Competitive Assets of the Emerging Countries' Multinational Enterprises (EMNEs)

Authors: Philippe Gugler

Abstract:

The aim of this study is to investigate how home country patterns may influence the competitiveness of EMNEs in international markets and, more specifically, their ability to invest abroad. The study examines the dynamic relationship between home country specific advantages and firms' competitiveness. Are EMNEs still driven by strong country specific advantages, or are EMNEs increasingly relying on their own firm specific competitiveness? EMNEs are not commonly recognized as a 'homogeneous group'; therefore, the approaches to these questions need to be specific while still attempting to extract some common evidence. The aim of the study is to elaborate a framework to investigate this issue in a dynamic context of international business strategies. The study focuses on two major research questions. The first relates to the role of the home-base context in the internationalization process of EMNEs and, more specifically, the influence of home-base assets on EMNEs' competitiveness. The second is to investigate the interactions among the home-base context, the recipient country context, and EMNEs' competitiveness. The evolution of EMNEs' competitiveness is shaped by the evolution of the home country's business environment. The nature of the home-based components in EMNEs' specific advantages has changed over time due to the increased integration of emerging countries in the world market and the inherent changes in their institutional, structural, and regulatory patterns. The home country offers not only inherited assets but also a productive business environment, allowing firms to innovate, be more productive, create unique value for customers and, finally, face international competition successfully. The more sophisticated the home business environment is, the more opportunities there are for firms to develop exclusive and unique competitive assets. The international expansion of EMNEs is a fascinating but challenging issue. Among the numerous questions raised by the involvement of EMNEs in international competition is the evolving role of the home market. The purpose of this study is to examine some of the theoretical ideas and empirical evidence that allow us to deepen our understanding of the role of emerging home countries in the internationalization process of their domestic firms and, more specifically, in their ability to compete successfully abroad. How much do home-specific assets still influence EMNEs' foreign investment? Which home country assets provide the main competitive drivers to invest and compete abroad? How do EMNEs combine home country assets and host country assets to strengthen their competitive advantages? These questions, as well as various others, deserve further examination by the scientific community.

Keywords: competitiveness, emerging countries' multinational enterprises, foreign direct investments, international business

Procedia PDF Downloads 252
814 Exploring the Contribution of Dynamic Capabilities to a Firm's Value Creation: The Role of Competitive Strategy

Authors: Mona Rashidirad, Hamid Salimian

Abstract:

Dynamic capabilities, as the most considerable capabilities of firms in the current fast-moving economy, may not be sufficient for performance improvement, but their contribution to performance is undeniable. While much of the extant literature investigates the impact of dynamic capabilities on organisational performance, little attention has been devoted to understanding whether and how dynamic capabilities create value. Dynamic capabilities, as the mirror of competitive strategies, should enable firms to search for and seize new ideas and to integrate and coordinate the firm's resources and capabilities in order to create value. A careful investigation of the existing knowledge base leaves us puzzled regarding the relationship among competitive strategies, dynamic capabilities, and value creation. This study thus attempts to fill this gap by empirically investigating the impact of dynamic capabilities on value creation and the mediating impact of competitive strategy on this relationship. We aim to contribute to the dynamic capability view (DCV), in both theoretical and empirical senses, by exploring the impact of dynamic capabilities on firms' value creation and whether competitive strategy can play any role in strengthening or weakening this relationship. Using a sample of 491 firms in the UK telecommunications market, the results demonstrate that dynamic sensing, learning, integrating, and coordinating capabilities play a significant role in a firm's value creation, and that competitive strategy mediates the impact of dynamic capabilities on value creation. Adopting the DCV, this study investigates whether the value generated from dynamic capabilities depends on a firm's competitive strategy. This study argues that a firm's competitive strategy can mediate its ability to derive value from its dynamic capabilities, and it explains the extent to which a firm's competitive strategy may influence its value generation. The results of the dynamic capabilities-value relationships support our expectations and justify the non-financial value added of the four dynamic capability processes in a highly turbulent market, such as UK telecommunications. Our analytical findings on the relationship among dynamic capabilities, competitive strategy, and value creation provide further evidence of the undeniable role of competitive strategy in deriving value from dynamic capabilities. The results reinforce the argument for the need to consider the mediating impact of organisational contextual factors, such as a firm's competitive strategy, in examining how they interact with dynamic capabilities to deliver value. The findings of this study provide significant contributions to theory. Unlike some previous studies, which conceptualise dynamic capabilities as a unidimensional construct, this study demonstrates the benefits of understanding the details of the link among the four types of dynamic capabilities, competitive strategy, and value creation. In terms of contributions to managerial practice, this research draws attention to the importance of competitive strategy in conjunction with the development and deployment of dynamic capabilities to create value. Managers are now equipped with solid empirical evidence explaining why the DCV has become essential to firms in today's business world.
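The abstract reports that competitive strategy mediates the dynamic capabilities-value creation link. As an illustration of how such a mediation effect could be estimated, the sketch below runs a classic three-regression (Baron-Kenny style) check with a single mediator; the file name and composite variable names (DC, CS, VC) are hypothetical, and the authors' actual analysis may use a different model, e.g., structural equation modelling with the four capability dimensions.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical firm-level composites: dynamic capabilities (DC), competitive
# strategy (CS, the proposed mediator), and value creation (VC).
df = pd.read_csv("uk_telecom_survey.csv")        # hypothetical file name

# Step 1: total effect of dynamic capabilities on value creation (path c).
total = smf.ols("VC ~ DC", data=df).fit()

# Step 2: effect of dynamic capabilities on the mediator (path a).
a_path = smf.ols("CS ~ DC", data=df).fit()

# Step 3: value creation on both predictors (paths b and c'); mediation is
# indicated when DC's coefficient shrinks once CS enters the model.
direct = smf.ols("VC ~ DC + CS", data=df).fit()

indirect = a_path.params["DC"] * direct.params["CS"]   # a * b estimate
print(f"total effect c      = {total.params['DC']:.3f}")
print(f"direct effect c'    = {direct.params['DC']:.3f}")
print(f"indirect effect a*b = {indirect:.3f}")
```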

Keywords: dynamic capabilities, resource based theory, value creation, competitive strategy

Procedia PDF Downloads 233
813 Overcoming Mistrusted Masculinity: Analyzing Muslim Men and Their Aspirations for Fatherhood in Denmark

Authors: Anne Hovgaard Jorgensen

Abstract:

This study investigates how Muslim fathers in Denmark struggle to overcome notions of mistrust from teachers and educators. Starting from school-home cooperation (parent conferences, school-home communication, etc.), the study finds that many Muslim fathers do not feel acknowledged as a resource in the upbringing of their children. To explain these experiences further, the study suggests the notion of 'mistrusted masculinity' to grasp the controlling image these fathers meet in various schools and child-care institutions in the Danish welfare state. The paper is based on 9 months of fieldwork in a Danish school, a social housing area, and various 'father groups' in Denmark. Additionally, 50 interviews were conducted with fathers, children, mothers, schoolteachers, and educators. Using Connell's concepts of 'hegemonic' and 'marginalized' masculinity as stepping stones, the paper argues that these concepts might entail a too static and dualistic picture of gender. By applying the concepts of 'emergent masculinity' and 'emergent fatherhood', the paper brings along a long-needed discussion of how Muslim men in Denmark are struggling to overcome and change the controlling images of them as patriarchal and/or ignorant fathers regarding the upbringing of their children. As such, the paper shows how Muslim fathers are taking action to change this controlling image, e.g. through various 'father groups'. The paper is inspired by the phenomenological notion of 'experience', and in light of this notion it tells the fathers' stories about the upbringing of their children and their aspirations for fatherhood. These stories shed light on how these fathers take care of their children in everyday life. The study also shows that the controlling image of these fathers has affected how some Muslim fathers actually practise fatherhood. Fear of family interventions from, e.g., teachers or social workers has left some Muslim fathers in limbo, afraid of scolding their children and confused about 'what good parenting in Denmark is'. This seems to have led to a more laissez-faire upbringing than these fathers actually wanted. This study is important since anthropologists have generally underexplored the notion of fatherhood and how fathers engage in the upbringing of their children. Moreover, the vast majority of qualitative studies of fatherhood have focused on white middle-class fathers living in nuclear families. In addition, this study is crucial at this very moment due to the major refugee crisis in Denmark and in the Western world in general, a crisis which has resulted in a vast number of scare campaigns against Islam from different nationalistic political parties, reinforcing the negative controlling image of Muslim fathers.

Keywords: fatherhood, Muslim fathers, mistrust, education

Procedia PDF Downloads 182