Search results for: embedded piezoelectric sensor
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2494


184 A Framework for Incorporating Non-Linear Degradation of Conductive Adhesive in Environmental Testing

Authors: Kedar Hardikar, Joe Varghese

Abstract:

Conductive adhesives have found wide-ranging applications in the electronics industry, from fixing a defective conductor on a printed circuit board (PCB) and attaching an electronic component in an assembly to protecting electronic components by forming a “Faraday cage.” The reliability requirements for a conductive adhesive vary widely depending on the application and expected product lifetime. While the conductive adhesive is required to maintain structural integrity, the electrical performance of the associated sub-assembly can be affected by degradation of the adhesive, which in turn depends on the highly varied use case. The conventional approach to assessing the reliability of the sub-assembly involves subjecting it to standard environmental test conditions such as high temperature with high humidity, thermal cycling, and high-temperature exposure, to name a few. In order to project test data and observed failures to field performance, systematic development of an acceleration factor between the test conditions and field conditions is crucial. Common acceleration factor models such as the Arrhenius model are based on rate kinetics and typically rely on an assumption of degradation that is linear in time for a given condition and test duration. The application of interest in this work involves a conductive adhesive used in the electronic circuit of a capacitive sensor. The degradation of the conductive adhesive in a high-temperature, high-humidity environment is quantified by the capacitance values. Under such conditions, the use of established models such as the Hallberg-Peck model or the Eyring model to predict time to failure in the field typically relies on a linear degradation rate. In this particular case, it is seen that the degradation is nonlinear in time and exhibits a square-root-of-time dependence. It is also shown that, for the mechanism of interest, the presence of moisture is essential, and the dominant mechanism driving the degradation is the diffusion of moisture. In this work, a framework is developed to incorporate nonlinear degradation of the conductive adhesive into the development of an acceleration factor. The method can be extended to applications where nonlinearity in the degradation rate can be adequately characterized in tests. It is shown that, depending on the expected product lifetime, the conventional linear degradation approach can overestimate or underestimate field performance. This work provides guidelines for the suitability of the linear degradation approximation across such varied applications.
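As a rough illustration of why the shape of the degradation curve matters, the sketch below (not the authors' model; the failure threshold, all parameter values, and the Peck-type acceleration factor form are assumptions) compares the field-life projection obtained from a square-root-of-time fit of test data with the projection obtained from a naive linear fit of the same data.

```python
import numpy as np

# Minimal sketch: compare field-life projections when the measured degradation D(t)
# is fitted as linear in time vs. with a square-root-of-time dependence.
# The Peck-style humidity/temperature acceleration factor and all parameter values
# below are illustrative assumptions, not values from the paper.
K_B = 8.617e-5  # Boltzmann constant, eV/K

def peck_acceleration_factor(rh_test, t_test_k, rh_field, t_field_k, n=2.7, ea=0.7):
    """Peck-type AF between test and field conditions (n and ea are assumed)."""
    return (rh_test / rh_field) ** n * np.exp(ea / K_B * (1.0 / t_field_k - 1.0 / t_test_k))

def time_to_failure(rate, d_crit, exponent):
    """Time at which D(t) = rate * t**exponent reaches the critical degradation d_crit."""
    return (d_crit / rate) ** (1.0 / exponent)

# Suppose the test shows D(t) = 0.02 * sqrt(t) (t in hours) and failure at D = 1.0.
rate, d_crit = 0.02, 1.0
ttf_test_sqrt = time_to_failure(rate, d_crit, exponent=0.5)   # sqrt(t) fit
ttf_test_lin = time_to_failure(rate, d_crit, exponent=1.0)    # naive linear fit of same rate

af = peck_acceleration_factor(rh_test=85, t_test_k=358.15, rh_field=60, t_field_k=313.15)
print(f"AF (test -> field): {af:.1f}")
print(f"Projected field life, sqrt(t) degradation: {ttf_test_sqrt * af:,.0f} h")
print(f"Projected field life, linear assumption:   {ttf_test_lin * af:,.0f} h")
```

Under the same acceleration factor, the two fits project very different field lives, which is the practical consequence of assuming linear degradation when the mechanism is diffusion-driven.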

Keywords: conductive adhesives, nonlinear degradation, physics of failure, acceleration factor model.

Procedia PDF Downloads 113
183 Blister Formation Mechanisms in Hot Rolling

Authors: Rebecca Dewfall, Mark Coleman, Vladimir Basabe

Abstract:

Oxide scale growth is an inevitable byproduct of the high-temperature processing of steel. Blistering is a phenomenon that occurs due to oxide growth, where high temperatures result in the swelling of surface scale, producing a bubble-like feature. Blisters can subsequently become embedded in the steel substrate during hot rolling in the finishing mill. This rolled-in scale defect causes havoc within industry, not only through wear on machinery but also through loss of customer satisfaction, poor surface finish, loss of material, and lost profit. Even though blistering is a highly prevalent issue, there is still much that is not known or understood. The classic iron oxidation system is a complex multiphase system formed of wustite, magnetite, and hematite, producing multi-layered scales. Each phase has independent properties such as thermal coefficients, growth rate, and mechanical properties. Furthermore, each additional alloying element has a different affinity for oxygen and a different mobility in the oxide phases, so that oxide morphologies are specific to alloy chemistry. Therefore, blister regimes can be unique to each steel grade, resulting in a diverse range of formation mechanisms. Laboratory conditions were selected to simulate industrial hot rolling, with temperature ranges approximating the formation of secondary and tertiary scales in the finishing mills. Samples with composition 0.15 wt% C, 0.1 wt% Si, 0.86 wt% Mn, 0.036 wt% Al, and 0.028 wt% Cr were oxidised in a thermo-gravimetric analyser (TGA), with an air flow of 10 litres min-1, at temperatures of 800°C, 850°C, 900°C, 1000°C, 1100°C, and 1200°C, respectively. Samples were held at temperature in an argon atmosphere for 10 minutes, then oxidised in air for 600 s, 60 s, 30 s, 15 s, and 4 s, respectively. Oxide morphology and blisters were characterised using EBSD, WDX, nanoindentation, FIB, and FEG-SEM imaging. Blistering was found to involve both a nucleation and a growth process. During nucleation, the scale detaches from the substrate and blisters after a very short period, roughly 10 s. The steel substrate is then exposed inside the blister and further oxidised in the reducing atmosphere of the blister; however, the atmosphere within the blister is highly dependent upon the porosity of the blister crown. The blister crown was found to be consistently between 35 and 40 µm thick for all heating regimes, which supports the theory that the blister inflates and the oxide then grows underneath. Upon heating, two modes of blistering were identified. In Mode 1, it was ascertained that the stresses produced by oxide growth increase with increasing oxide thickness; therefore, in Mode 1 the incubation time for blister formation is shortened by increasing temperature. In Mode 2, an increase in temperature results in oxide with high ductility and high porosity. The high oxide ductility and/or porosity accommodates the intrinsic stresses from oxide growth. Thus Mode 2 is the inverse of Mode 1, and incubation time increases with temperature. A new phenomenon was also reported whereby blisters formed exclusively during cooling at elevated temperatures above the Mode 2 range.

Keywords: FEG-SEM, nucleation, oxide morphology, surface defect

Procedia PDF Downloads 118
182 Comparison of Cu Nanoparticle Formation and Properties with and without Surrounding Dielectric

Authors: P. Dubcek, B. Pivac, J. Dasovic, V. Janicki, S. Bernstorff

Abstract:

When grown only to nanometric sizes, metallic particles (e.g. Ag, Au and Cu) exhibit specific optical properties caused by the presence of a plasmon band. The plasmon band represents a collective oscillation of the conduction electrons and causes a narrow-band absorption of light in the visible range. When the nanoparticles are embedded in a dielectric, they also modify the dielectric's optical properties. This can be fine-tuned by tuning the particle size. We investigated Cu nanoparticle growth with and without a surrounding dielectric (SiO2 capping layer). The morphology and crystallinity were investigated by GISAXS and GIWAXS, respectively. Samples were produced by high-vacuum thermal evaporation of Cu onto a monocrystalline silicon substrate held at room temperature, 100°C or 180°C. One series was capped in situ by a 10 nm SiO2 layer. Additionally, samples were annealed at different temperatures up to 550°C, also in high vacuum. The room-temperature-deposited samples annealed at lower temperatures exhibit a continuous film structure: strong oscillations in the GISAXS intensity are present, especially in the capped samples. At higher temperatures, enhanced surface dewetting and the formation of Cu nanoparticles (nanoislands) partially destroy the flatness of the interface. Therefore, the particle type of scattering is enhanced, while the film fringes are depleted. However, the capping layer hinders particle formation, and the continuous film structure is preserved up to higher annealing temperatures (visible as strong and persistent fringes in GISAXS) compared to the non-capped samples. According to GISAXS, lateral particle sizes are reduced at higher temperatures, while particle height increases. This is ascribed to close packing of the formed particles at lower temperatures, so that the GISAXS-deduced sizes partially reflect the dimensions of particle agglomerates. Lateral maxima in GISAXS are an indication of good positional correlation, and the particle-to-particle distance increases as the particles grow with temperature elevation. This correlation is much stronger in the capped and lower-temperature-deposited samples. The dewetting is much more vigorous in the non-capped sample, and since nanoparticles are formed in a range of sizes, the correlation recedes with both deposition and annealing temperature. Surface topology was checked by atomic force microscopy (AFM). The capped samples' surfaces were smoother, and the lateral sizes of the surface features were larger, compared to the non-capped samples. Altogether, the AFM results suggest somewhat larger particles and a wider size distribution, which can be attributed to the difference in probe size. Finally, the plasmonic effect was monitored by UV-Vis reflectance spectroscopy, and the relatively weak plasmonic effect could be explained by incomplete dewetting or partial interconnection of the formed particles.

Keywords: copper, GISAXS, nanoparticles, plasmonics

Procedia PDF Downloads 103
181 Ecological Relationships Between Material, Colonizing Organisms, and Resulting Performances

Authors: Chris Thurlbourne

Abstract:

Due to the continual demand for material to build with, and the limited environmental credentials of 'normal' building materials, there is a need to look at new and reconditioned material types - both biogenic and non-biogenic - and at the field of research that accompanies them. This research development focuses on biogenic and non-biogenic material engineering and the impact of our environment on new and reconditioned material types. In our building industry and all the industries involved in constructing our built environment, building materials can be broadly categorized into two types, biogenic and non-biogenic. Both play significant roles in shaping our built environment. Regardless of their properties, all material types originate from our earth, and many are modified through processing to provide resistance to 'forces of nature', be it rain, wind, sun, gravity, or whatever the local environmental conditions throw at us. Modifications are made to offer benefits in endurance, resistance, malleability in handling (building with), and ergonomic value - in all types of building material. We assume control of all building materials through rigorous quality control specifications and regulations to ensure materials perform under specific constraints. Yet materials confront an external environment that is not controlled, with live forces undetermined, to which materials naturally act and react through weathering, patination and discoloring, promoting natural chemical reactions such as rusting. The purpose of the paper is to present recent research that explores the after-life of specific new and reconditioned biogenic and non-biogenic material types, and how understanding materials' natural processes of transformation when exposed to the external climate can inform initial design decisions. With qualities received in a transient and contingent manner, the ecological relationships between material, the colonizing organisms and the resulting performances invite opportunities for new design explorations for the benefit of both the needs of human society and the needs of our natural environment. The research follows designing for the benefit of both, engaging in biogenic and non-biogenic material engineering whilst embracing the continual demand for colonization - human and environmental - and the aptitude of a material to be colonized by one or several groups of living organisms without necessarily undergoing any severe deterioration, instead embracing weathering, patination and discoloring while at the same time establishing new habitat. The research follows iterative prototyping processes in which knowledge has been accumulated via explorations of specific material performances, from laboratory to construction mock-ups, focusing on the architectural qualities embedded in the control of production techniques and facilitating longer-term patinas of material surfaces to extend the aesthetic beyond common judgments. Experiments are therefore focused on how the inherent material qualities drive a design brief toward specific investigations exploring aesthetics induced through production, patinas and colonization obtained over time while exposed to, and interacting with, external climate conditions.

Keywords: biogenic and non-biogenic, natural processes of transformation, colonization, patina

Procedia PDF Downloads 60
180 Japanese and Europe Legal Frameworks on Data Protection and Cybersecurity: Asymmetries from a Comparative Perspective

Authors: S. Fantin

Abstract:

This study is the result of legal research on cybersecurity and data protection within the EUNITY (Cybersecurity and Privacy Dialogue between Europe and Japan) project, aimed at fostering dialogue between the European Union and Japan. Based on the research undertaken therein, the author offers an outline of the main asymmetries in the laws governing these fields in the two regions. The research is a comparative analysis of the two legal frameworks, taking into account specific provisions, ratio legis and policy initiatives. Recent doctrine was taken into account too, as well as empirical interviews with EU and Japanese stakeholders and project partners. With respect to the protection of personal data, the European Union has recently reformed its legal framework with a package which includes a regulation (the General Data Protection Regulation) and a directive (Directive 680 on personal data processing in the law enforcement domain). In turn, the Japanese law under scrutiny for this study has been the Act on the Protection of Personal Information. Based on a comparative analysis, some asymmetries arise. The main ones refer to the definition of personal information and the scope of the two frameworks. Furthermore, the rights of data subjects are articulated differently in the two regions, while the nature of sanctions takes two opposite approaches. Regarding the cybersecurity framework, the situation looks similarly misaligned. Japan’s main text of reference is the Basic Cybersecurity Act, while the European Union has a more fragmented legal structure (to name a few, the Network and Information Security Directive, the Critical Infrastructure Directive and the Directive on Attacks against Information Systems). On a relevant note, unlike the more industry-oriented European approach, the concept of cyber hygiene seems to be neatly embedded in the Japanese legal framework, with a number of provisions that alleviate operators’ liability by turning such a burden into a set of recommendations to be primarily observed by citizens. With respect to the reasons to fill such normative gaps, these are mostly grounded on three bases. Firstly, the cross-border nature of cybercrime requires considering both the magnitude of the issue and its regulatory treatment globally. Secondly, empirical findings from the EUNITY project showed how recent data breaches and cyber-attacks had shared implications between Europe and Japan. Thirdly, the geopolitical context is currently moving in the direction of bringing the two regions to significant agreements from a trade standpoint, but also from a data protection perspective (with the imminent signature by both parties of a so-called ‘Adequacy Decision’). The research conducted in this study reveals two asymmetric legal frameworks on cybersecurity and data protection. With a view to the future challenges presented by the strengthening of collaboration between the two regions and the transnational nature of cybercrime, it is urged that solutions be found to fill in such gaps, in order to allow the European Union and Japan to wisely deepen their partnership.

Keywords: cybersecurity, data protection, European Union, Japan

Procedia PDF Downloads 97
179 Migrant Women English Instructors' Transformative Workplace Learning Experiences in Post-Secondary English Language Programs in Ontario, Canada

Authors: Justine Jun

Abstract:

This study aims to reveal migrant women English instructors' workplace learning experiences in Canadian post-secondary institutions in Ontario. Although many scholars have conducted research on internationally educated teachers and their professional and employment challenges, few studies have recorded migrant women English language instructors’ professional learning and support experiences in post-secondary English language programs in Canada. This study employs a qualitative research paradigm. Mezirow’s Transformative Learning Theory is an essential lens for the researcher to explain, analyze, and interpret the research data. It is a collaborative research project. The researcher and participants cooperatively create photographic or other artwork data responding to the research questions. Photovoice and arts-informed data collection are the main methods. Research participants engage in the study as co-researchers and inquire into their own workplace learning experiences, actively utilizing their critical self-reflective and dialogic skills. Co-researchers individually select the forms of artwork they prefer to engage with to represent the transformative workplace learning experiences of Canadian workplace cultures that they underwent while working with colleagues and administrators. Once the co-researchers generate their cultural artifacts as research data, they collaboratively interpret their artworks with the researcher and other volunteer co-researchers. Co-researchers jointly investigate the themes emerging from the artworks. They also interpret the meanings of their own and others’ workplace learning experiences embedded in the artworks through interactive one-on-one or group interviews. The following are the research questions that the migrant women English instructor participants examine and answer: (1) What have they learned about their workplace culture, and how do they explain their learning experiences? (2) How transformative have their learning experiences been at work? (3) How have their colleagues and administrators influenced their transformative learning? (4) What kind of support have they received? What supports have been valuable to them, and what changes would they like to see? (5) What have their learning experiences transformed? (6) What has this arts-informed research process transformed? The study findings have implications for the English language instructor support currently practiced in post-secondary English language programs in Ontario, Canada, especially for migrant women English instructors. This research is a doctoral empirical study in progress. It urgently addresses the problem that few studies have investigated migrant English instructors’ professional learning and support issues in the workplace, specifically those of English instructors working with adult learners in Canada. While appropriate social and professional support for migrant English instructors is required throughout the country, the present workplace realities in Ontario's English language programs need to be heard soon. For that purpose, the conceptualization of this study is crucial: it makes the investigation of under-represented instructors’ under-researched social phenomena, workplace learning and support, viable and rigorous. This paper demonstrates the robust theorization of English instructors’ workplace experiences using Mezirow’s Transformative Learning Theory in the English language teacher education field.

Keywords: English teacher education, professional learning, transformative learning theory, workplace learning

Procedia PDF Downloads 108
178 Land Rights, Policy and Cultural Identity in Uganda: Case of the Basongora Community

Authors: Edith Kamakune

Abstract:

As much as Indigenous rights are presumed to be part of the broad human rights regime, members of indigenous communities have continually suffered violations, exclusions, and threats. A number of steps have been taken by the international community in trying to bridge the gap, through the inclusion of provisions as well as the passing of conventions and declarations with specific reference to the rights of indigenous peoples. Some examples of indigenous peoples include the Siberian Yupik of St Lawrence Island; the Ute of Utah; the Cree of Alberta; and the Xhosa and KhoiKhoi of Southern Africa. Uganda’s wide cultural heritage has played a key role in the failure to pay special attention to the rights of indigenous peoples. The 1995 Constitution and the Land Act of 1998 provide for abstract land rights without necessarily paying attention to indigenous communities’ special needs. The Basongora are a pastoralist community in Western Uganda whose ancestral land is the present Queen Elizabeth National Park of Western Uganda, Virunga National Park of the Eastern Democratic Republic of Congo, and a small portion of the lowlands below the Rwenzori Mountains. Their values and livelihood are embedded in their strong attachment to the land, and this has been at stake for the last 90 years or so. This research was aimed at investigating the relationship between land rights and the right to cultural identity among indigenous communities, looking at the policy available on land and culture, whether the policies are sensitive to the specific issues of vulnerable ethnic groups, and, largely, the effect of land on the right to cultural identity. The research was guided by three objectives: to examine and contextualize the concept of land rights among the Basongora community; to assess the policy framework available for the protection of the Basongora community; and to investigate the forms of vulnerability of the Basongora community. Quantitative and qualitative methods were used. The cases of Kasese and Kampala Districts were purposively selected. 138 people were recruited through random and non-random techniques to participate in the study: 70 questionnaire respondents; 20 face-to-face interview respondents; 5 key informants; and 43 participants in focus group discussions. The study established that land is communally held and used and that it continues to be a central source of livelihood for the Basongora; land rights are important in the multiplication of herds and in the preservation, development, and promotion of culture and language. The research found gaps in the policy framework, since the policies are concerned with tenure issues and the general provisions are ambiguous. Often, the Basongora are not called upon to participate in decision-making processes, even on issues that affect them. The research findings call for authorities to allow the Basongora to access Queen Elizabeth National Park land for pasture during particular seasons of the year, especially the dry seasons; for a land use policy; and for a clear alignment of the description of indigenous communities under the Constitution (Uganda, 1995) with the international definition.

Keywords: cultural identity, land rights, protection, Uganda

Procedia PDF Downloads 127
177 Searching Knowledge for Engagement in a Worker Cooperative Society: A Proposal for Rethinking Premises

Authors: Soumya Rajan

Abstract:

When delving into the heart of any organization, the structural pre-requisites which form the framework of its system allure and sometimes invoke great interest. In an attempt to understand the ecosystem of knowledge that exists in organizations with diverse ownership and legal blueprints, Cooperative Societies, which form a crucial part of the neo-liberal movement in India, were studied. The exploration surprisingly led to the re-designing of at least a set of the researcher's premises on the drivers of engagement in an otherwise structured trade environment. The liberal organizational structure of Cooperative Societies is empowered with certain terminologies: Voluntary, Democratic, Equality and Distributive Justice. To condense it in Hubert Calvert's words, ‘Co-operation is a form of organization wherein persons voluntarily associated together as human beings on the basis of equality for the promotion of the economic interest of themselves.’ In India, the institutions which work under this principle are largely registered under the Cooperative Societies Acts of the central or state governments. A worker cooperative society which originated as a movement in the state of Kerala and spread its wings across the country - Indian Coffee House - was chosen as the enterprise for further inquiry, it being a living example and a highly successful working model in the designated space. The exploratory study reached out to employees and key stakeholders of Indian Coffee House to understand the nuances of the structure and the scope it provides for engagement. The key questions which took shape in the mind of the researcher while engaging in the inquiry were: How has the organization sustained itself despite its principle of accepting employees with no skills into employment and later training and empowering them? How can a system which has pre-independence and post-independence (independence here meaning colonial independence from Great Britain) existence seek to engage employees within the premise of equality? How was the value of socialism ingrained in a commercial enterprise which has a turnover of several hundred crores each year? How did the vision of a flat structure, way back in the 1940s, find its way into the organizational structure and continue to remain the way of life? These questions were addressed by the case study research that ensued; placing knowledge as the key premise, the possibilities of engagement of the organization man were pictured. Although the macro or holistic unit of analysis is the organization, it is pivotal to understand the structures and processes which best reflect on the actors. The embedded design which was adopted in this study delivered insights from different stakeholder actors from diverse departments. While moving through variables which define and sometimes defy the bounds of rationality, the study brought to light the inherent features of the organization structure and how it influences the actors who form a crucial part of the scheme of things. The research brought forth the key enablers for engagement and specifically explored the standpoint of knowledge in the larger structure of the Cooperative Society.

Keywords: knowledge, organizational structure, engagement, worker cooperative

Procedia PDF Downloads 211
176 Developing Creative and Critically Reflective Digital Learning Communities

Authors: W. S. Barber, S. L. King

Abstract:

This paper is a qualitative case study analysis of the development of a fully online learning community of graduate students through arts-based community-building activities. With increasing numbers and types of online learning spaces, it is incumbent upon educators to continue to push the edge of what best practices look like in digital learning environments. In digital learning spaces, instructors can no longer be seen as purveyors of content knowledge to be examined at the end of a set course by a final test or exam. The rapid and fluid dissemination of information via Web 3.0 demands that we reshape our approach to teaching and learning, from one that is content-focused to one that is process-driven. Rather than having instructors as formal leaders, today’s digital learning environments require us to share expertise, as it is the collective experiences and knowledge of all students together with the instructors that help to create a very different kind of learning community. This paper focuses on innovations pursued in a 36-hour, 12-week graduate course in higher education entitled “Critical and Reflective Practice”. The authors chronicle their journey to developing a fully online learning community (FOLC) by emphasizing the elements of social, cognitive, emotional and digital spaces that form a moving interplay through the community. In this way, students embrace anywhere, anytime learning and often take the learning, as well as the relationships they build and the skills they acquire, beyond the digital class into real-world situations. We argue that in order to increase student online engagement, pedagogical approaches need to stem from two primary elements, creativity and critical reflection, which are essential pillars upon which instructors can co-design learning environments with students. The theoretical framework for the paper is based on the interaction and interdependence of Creativity, Intuition, Critical Reflection, Social Constructivism and FOLCs. By leveraging students’ embedded familiarity with a wide variety of technologies, this case study of a graduate-level course on critical reflection in education examines how relationships, the quality of work produced, and student engagement can improve through the use of creative and imaginative pedagogical strategies. The authors examine their professional pedagogical strategies through the lens that the teacher acts as facilitator, guide and co-designer. In a world where students can easily search for and organize information as self-directed processes, creativity and connection can at times be lost in the digitized course environment. The paper concludes by posing further questions as to how institutions of higher education may be challenged to restructure their credit-granting courses into more flexible modules, and how students need to be considered an important part of assessment and evaluation strategies. By introducing creativity and critical reflection as central features of digital learning spaces, notions of best practices in digital teaching and learning emerge.

Keywords: online, pedagogy, learning, communities

Procedia PDF Downloads 378
175 Exploring the Correlation between Population Distribution and Urban Heat Island under Urban Data: Taking Shenzhen Urban Heat Island as an Example

Authors: Wang Yang

Abstract:

Shenzhen is a modern city shaped by China's reform and opening-up policy, and the development of its urban morphology has been directed by the administration of the Chinese government. The city's planning paradigm is primarily affected by spatial structure and human behavior. The urban agglomeration has been divided administratively into several groups and centers, and under this influence the intrinsic laws of city development are easily neglected. With the continuous development of the internet, big data technology has been introduced in China, and data mining and data analysis have become important tools in municipal research. Data mining has been utilized to improve the collection and cleaning of data such as business data, traffic data and population data. Prior to data mining, government data were collected by traditional means and then analyzed using city-relationship research, so that analysis lagged behind urban development, especially for the contemporary city. Internet-based data, by contrast, are updated very quickly. The city's points of interest (POI) obtained in this way serve as a data source reflecting city design, while satellite remote sensing is used as a reference object; city analysis is conducted in both directions, the purely administrative paradigm is set aside, and urban research is grounded in observed data. Therefore, the use of data mining in urban analysis is very important. The satellite remote sensing data for Shenzhen in July 2018 were acquired by the MODIS sensor and can be used to perform land surface temperature inversion and to analyze the distribution of the urban heat island in Shenzhen. This article acquired and classified data for Shenzhen using data crawler technology. Shenzhen heat island data and points of interest were overlaid and analyzed on a GIS platform to discover the main features of the distribution of functional areas and their influence. Shenzhen extends along an east-west axis, and the city’s main streets follow the direction of city development; accordingly, the functional areas of the city are also distributed in the east-west direction. The urban heat island can be expressed as a heat map over the functional urban areas, with which the regional POIs correspond. The research result clearly shows that the distribution of the urban heat island and the distribution of urban POIs are in one-to-one correspondence. The urban heat island is primarily influenced by the properties of the underlying surface, setting aside the impact of the urban climate. Using urban POIs as the object of analysis, the distribution of municipal POIs and population aggregation are closely connected, so the distribution of the population corresponds with the distribution of the urban heat island.
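A minimal sketch of the kind of grid-level comparison described above is given below; it is not the authors' pipeline, and the grid size and synthetic POI/LST inputs are purely illustrative stand-ins for crawled POIs and MODIS-derived land surface temperature.

```python
import numpy as np

# Minimal sketch: correlate gridded POI density with land surface temperature (LST).
# The raster extent, grid size, and synthetic inputs below are illustrative assumptions.
rng = np.random.default_rng(0)

# Assume an LST raster over the study area (rows x cols, degrees Celsius).
lst = 28 + 6 * rng.random((50, 80))

# Assume POI locations already projected into raster row/col coordinates.
poi_rows = rng.integers(0, 50, size=5000)
poi_cols = rng.integers(0, 80, size=5000)

# Bin POIs into the same grid as the LST raster.
poi_density, _, _ = np.histogram2d(
    poi_rows, poi_cols, bins=(50, 80), range=((0, 50), (0, 80))
)

# Cell-wise Pearson correlation between POI density and LST.
r = np.corrcoef(poi_density.ravel(), lst.ravel())[0, 1]
print(f"Pearson r between POI density and LST: {r:.3f}")
```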

Keywords: POI, satellite remote sensing, the population distribution, urban heat island thermal map

Procedia PDF Downloads 85
174 Uncertainty Quantification of Crack Widths and Crack Spacing in Reinforced Concrete

Authors: Marcel Meinhardt, Manfred Keuser, Thomas Braml

Abstract:

Cracking of reinforced concrete is a complex phenomenon induced by direct loads or restraints affecting reinforced concrete structures as soon as the tensile strength of the concrete is exceeded. Hence, it is important to predict where cracks will be located and how they will propagate. The bond theory and the crack formulas in current design codes, for example DIN EN 1992-1-1, are all based on the assumption that the reinforcement bars are embedded in homogeneous concrete, without taking into account the influence of transverse reinforcement and the real stress situation. However, it can often be observed that real structures such as walls, slabs or beams show a crack spacing that is oriented to the transverse reinforcement bars or to the stirrups. In most Finite Element Analysis studies, the smeared crack approach is used for crack prediction. The disadvantage of this model is that the typical strain localization of a crack at element level cannot be seen. Crack propagation in concrete is a discontinuous process characterized by different factors, such as the initial random distribution of defects or the scatter of material properties. Such behavior presupposes the elaboration of adequate models and methods of simulation, because traditional mechanical approaches deal mainly with average material parameters. This paper is concerned with modelling the initiation and propagation of cracks in reinforced concrete structures, considering the influence of transverse reinforcement and the real stress distribution in reinforced concrete (R/C) beams/plates in bending. A parameter study was carried out to investigate (I) the influence of the transverse reinforcement on the stress distribution in concrete in bending and (II) crack initiation as a function of the diameter and spacing of the transverse reinforcement. The numerical investigations of crack initiation and propagation were carried out on a 2D reinforced concrete structure subjected to quasi-static loading and given boundary conditions. To model the uncertainty in the tensile strength of concrete in the Finite Element Analysis, correlated normally and lognormally distributed random fields with different correlation lengths were generated. The paper also presents and discusses different methods to generate random fields, e.g. the Covariance Matrix Decomposition Method. For all computations, a plastic constitutive law with softening was used to model crack initiation and the damage of the concrete in tension. It was found that the distributions of crack spacing and crack widths are highly dependent on the random field used. These distributions are validated against experimental studies on R/C panels which were carried out at the Laboratory for Structural Engineering at the University of the German Armed Forces in Munich. A recommendation for the parameters of the random field for realistically modelling the uncertainty of the tensile strength is also given. The aim of this research was to show a method in which the localization of strains and cracks, as well as the influence of transverse reinforcement on crack initiation and propagation, can be captured in Finite Element Analysis.
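The sketch below illustrates the Covariance Matrix Decomposition Method mentioned above for a one-dimensional lognormal random field of concrete tensile strength; the exponential correlation model, mean, coefficient of variation, and correlation length are illustrative assumptions rather than the paper's calibrated values.

```python
import numpy as np

# Minimal sketch of the Covariance Matrix Decomposition Method for a 1D lognormal
# random field of concrete tensile strength along a member.
rng = np.random.default_rng(42)

n = 200                          # number of field points
x = np.linspace(0.0, 4.0, n)     # positions along the member [m]
mean_fct, cov_fct = 2.9, 0.20    # assumed mean tensile strength [MPa] and COV
corr_length = 0.5                # assumed correlation length [m]

# Exponential correlation model rho(d) = exp(-d / l_c), built into a covariance matrix.
dist = np.abs(x[:, None] - x[None, :])
rho = np.exp(-dist / corr_length)

# Parameters of the underlying Gaussian field for a lognormal marginal distribution.
sigma_ln = np.sqrt(np.log(1.0 + cov_fct**2))
mu_ln = np.log(mean_fct) - 0.5 * sigma_ln**2

# Decompose the covariance matrix (Cholesky) and map standard normal samples onto it.
L = np.linalg.cholesky(sigma_ln**2 * rho + 1e-10 * np.eye(n))
gaussian_field = mu_ln + L @ rng.standard_normal(n)
tensile_strength = np.exp(gaussian_field)   # one realization of f_ct(x) [MPa]

print(tensile_strength.min(), tensile_strength.mean(), tensile_strength.max())
```

Each call produces one realization of the spatially correlated tensile strength field, which would then be mapped onto the finite elements of the R/C model.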

Keywords: crack initiation, crack modelling, crack propagation, cracks, numerical simulation, random fields, reinforced concrete, stochastic

Procedia PDF Downloads 120
173 Radar on Bike: Coarse Classification based on Multi-Level Clustering for Cyclist Safety Enhancement

Authors: Asma Omri, Noureddine Benothman, Sofiane Sayahi, Fethi Tlili, Hichem Besbes

Abstract:

Cycling, a popular mode of transportation, can also be perilous due to cyclists' vulnerability to collisions with vehicles and obstacles. This paper presents an innovative cyclist safety system based on radar technology designed to offer real-time collision risk warnings to cyclists. The system incorporates a low-power radar sensor affixed to the bicycle and connected to a microcontroller. It leverages radar point cloud detections, a clustering algorithm, and a supervised classifier. These algorithms are optimized for efficiency to run on TI’s AWR1843BOOST radar, utilizing a coarse classification approach distinguishing between cars, trucks, two-wheeled vehicles, and other objects. To enhance the performance of the clustering stage, we propose a 2-level clustering approach. This approach builds on the state-of-the-art Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm. The objective is to first cluster objects based on their velocity, then refine the analysis by clustering based on position. The initial level identifies groups of objects with similar velocities and movement patterns. The subsequent level refines the analysis by considering the spatial distribution of these objects; the clusters obtained from the first level serve as input for the second level of clustering. Our proposed technique surpasses the classical DBSCAN algorithm in terms of clustering quality metrics, including homogeneity, completeness, and V-measure. Relevant cluster features are extracted and used to classify objects with an SVM classifier. Potential obstacles are identified based on their velocity and proximity to the cyclist. To optimize the system, we used the View of Delft dataset for hyperparameter selection and SVM classifier training. The system's performance was assessed using our collected dataset of radar point clouds synchronized with a camera on an Nvidia Jetson Nano board. The radar-based cyclist safety system is a practical solution that can be easily installed on any bicycle and connected to smartphones or other devices, offering real-time feedback and navigation assistance to cyclists. We conducted experiments to validate the system's feasibility, achieving an impressive 85% accuracy in the classification task. This system has the potential to significantly reduce the number of accidents involving cyclists and enhance their safety on the road.
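A minimal sketch of the 2-level clustering idea (velocity first, then position) is shown below using off-the-shelf DBSCAN; the eps/min_samples values and the synthetic point cloud are illustrative assumptions, not the parameters or data used in the paper.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Minimal sketch: cluster radar detections by velocity first, then refine each
# velocity cluster by spatial position.
rng = np.random.default_rng(1)

# Fake radar point cloud: columns are x [m], y [m], radial velocity [m/s].
car = np.column_stack([rng.normal(10, 0.5, 40), rng.normal(2, 0.5, 40), rng.normal(-8, 0.3, 40)])
bike = np.column_stack([rng.normal(4, 0.3, 25), rng.normal(-1, 0.3, 25), rng.normal(-3, 0.2, 25)])
points = np.vstack([car, bike])

# Level 1: group detections with similar velocities.
vel_labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(points[:, 2:3])

# Level 2: within each velocity group, split objects by spatial position.
final_labels = np.full(len(points), -1)
next_id = 0
for v in set(vel_labels) - {-1}:
    idx = np.where(vel_labels == v)[0]
    pos_labels = DBSCAN(eps=1.0, min_samples=5).fit_predict(points[idx, :2])
    for p in set(pos_labels) - {-1}:
        final_labels[idx[pos_labels == p]] = next_id
        next_id += 1

print("objects found:", next_id)
```

The per-cluster features (e.g. extent, point count, mean velocity) would then feed the SVM stage.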

Keywords: 2-level clustering, coarse classification, cyclist safety, warning system based on radar technology

Procedia PDF Downloads 55
172 Development of a Bus Information Web System

Authors: Chiyoung Kim, Jaegeol Yim

Abstract:

Bus service is often either the main or the only public transportation available in cities. In metropolitan areas, both subways and buses are available, whereas in medium-sized cities buses are usually the only type of public transportation available. Bus Information Systems (BIS) provide users with the current locations of running buses, efficient routes from one place to another, points of interest around a given bus stop, the series of bus stops making up a given bus route, and so on. Thanks to BIS, people do not have to waste time at a bus stop waiting for a bus, because BIS provides exact information on bus arrival times at a given bus stop. Therefore, BIS does a lot to promote the use of buses, contributing to pollution reduction and saving natural resources. BIS implementation requires a large budget, as it relies on a lot of special equipment such as roadside equipment, automatic vehicle identification and location systems, trunked radio systems, and so on. Consequently, medium- and small-sized cities with a low budget cannot afford to install BIS, even though people in these cities need BIS service more desperately than people in metropolitan areas. It is possible to provide BIS service at virtually no cost under the assumption that everybody carries a smartphone and that there is at least one person with a smartphone in a running bus who is willing to reveal his/her location details while sitting in the bus. This assumption is usually true in the real world: the smartphone penetration rate is greater than 100% in developed countries, and there is no reason for a bus driver to refuse to reveal his/her location details while driving. We have developed a mobile app that periodically reads sensor values, including GPS, and sends GPS data to the server when the bus stops or when the elapsed time since the last send attempt exceeds a threshold. This app detects the bus-stop state by examining the sensor values. The server that receives GPS data from this app has also been developed. Under the assumption that the current locations of all running buses collected by the mobile app are recorded in a database, we have also developed a web site that provides, through the Internet, all kinds of information that most BISs provide to users. The development environment is: OS: Windows 7 64-bit; IDE: Eclipse Luna 4.4.1 with Spring IDE 3.7.0; Database: MySQL 5.1.7; Web Server: Apache Tomcat 7.0; Programming Language: Java 1.7.0_79. Given a start and a destination bus stop, the system finds a shortest path from the start to the destination using the Dijkstra algorithm. Then, it finds a convenient route considering the number of transfers. For the user interface, we use Google Maps. Template classes used by the Controller, DAO, Service and Utils classes include BUS, BusStop, BusListInfo, BusStopOrder, RouteResult, WalkingDist, Location, and so on. We are now integrating the mobile app system and the web app system.
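A minimal sketch of the app's reporting rule (send a fix when the bus is detected as stopped or when the time since the last report exceeds a threshold) is given below; it is written in Python for brevity even though the actual system is Java-based, and the threshold values and stop-detection criterion are assumptions.

```python
import time

# Minimal sketch of the reporting rule: send a GPS fix when the bus is detected as
# stopped or when the time since the last send exceeds a threshold.
SEND_INTERVAL_S = 30          # assumed maximum silence between reports
STOP_SPEED_MS = 0.5           # assumed speed below which the bus counts as stopped

class GpsReporter:
    def __init__(self, send_func):
        self.send_func = send_func   # any transport, e.g. an HTTP POST to the server
        self.last_sent = 0.0

    def on_sensor_reading(self, lat, lon, speed_ms, now=None):
        now = time.time() if now is None else now
        bus_stopped = speed_ms < STOP_SPEED_MS
        timed_out = (now - self.last_sent) > SEND_INTERVAL_S
        if bus_stopped or timed_out:
            self.send_func({"lat": lat, "lon": lon, "ts": now})
            self.last_sent = now

# Usage with a stand-in transport that just prints the fix.
reporter = GpsReporter(send_func=lambda fix: print("send", fix))
reporter.on_sensor_reading(36.0, 129.0, speed_ms=0.2)   # stopped -> sent
reporter.on_sensor_reading(36.0, 129.1, speed_ms=10.0)  # moving, recent send -> skipped
```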

Keywords: bus information system, GPS, mobile app, web site

Procedia PDF Downloads 194
171 The Roles of Mandarin and Local Dialect in the Acquisition of L2 English Consonants Among Chinese Learners of English: Evidence From Suzhou Dialect Areas

Authors: Weijing Zhou, Yuting Lei, Francis Nolan

Abstract:

In the domain of second language acquisition, whenever pronunciation errors or acquisition difficulties are found, researchers habitually attribute them to negative transfer from the native language or local dialect. To what extent do Mandarin and local dialects affect English phonological acquisition for Chinese learners of English as a foreign language (EFL)? Little evidence, however, has been found via empirical research in China. To address this core issue, the present study conducted phonetic experiments to explore the roles of local dialects and Mandarin in Chinese EFL learners’ acquisition of L2 English consonants. Besides Mandarin, the sole national language in China, Suzhou Dialect was selected as the target local dialect because its phonology is distinct from that of Mandarin. The experimental group consisted of 30 junior English majors at Yangzhou University, who were born and raised in Suzhou, had acquired Suzhou Dialect from early childhood, and were able to communicate freely and fluently with each other in Suzhou Dialect, Mandarin and English. The consonantal target segments were all the consonants of English, Mandarin and Suzhou Dialect in typical carrier words embedded in the carrier sentence Say again. The control group consisted of two Suzhou Dialect experts, two Mandarin radio broadcasters, and two British RP phoneticians, who served as the standard speakers of the three languages. The reading corpus was recorded and sampled in the phonetic laboratories at Yangzhou University, Soochow University and Cambridge University, respectively, then transcribed, segmented and analyzed acoustically via Praat software, and finally analyzed statistically via Excel and SPSS. The main findings are as follows. First, in terms of correct acquisition rates (CARs) for all the consonants, Mandarin ranked top (92.83%), English second (74.81%) and Suzhou Dialect last (70.35%), and significant differences were found only between the CARs of Mandarin and English and between the CARs of Mandarin and Suzhou Dialect, demonstrating that Mandarin was overwhelmingly more robust than English or Suzhou Dialect in the subjects’ multilingual phonological ecology. Second, in terms of typical acoustic features, the average durations of all the consonants, as well as the voice onset time (VOT) of plosives, fricatives, and affricates in the three languages, were much longer than those of the standard speakers; the intensities of English fricatives and affricates were higher than those of the RP speakers but lower than those of the Mandarin and Suzhou Dialect standard speakers; and the formants of English nasals and approximants were significantly different from those of Mandarin and Suzhou Dialect, illustrating the inconsistent acoustic variation across the three languages. Thirdly, in terms of typical pronunciation variations or errors, there were significant interlingual interactions between the three consonant systems, in which Mandarin consonants were absolutely dominant, accounting for the strong transfer from L1 Mandarin to L2 English rather than from the earlier-acquired L1 local dialect to L2 English. This is largely because the subjects had been knowingly exposed to Mandarin from nursery school onward and were strictly required to speak Mandarin throughout all formal education from primary school to university.

Keywords: acquisition of L2 English consonants, role of Mandarin, role of local dialect, Chinese EFL learners from Suzhou Dialect areas

Procedia PDF Downloads 70
170 The Messy and Irregular Experience of Entrepreneurial Life

Authors: Hannah Dean

Abstract:

The growth ideology, and its association with progress, is an important construct in the narrative of modernity. This ideology is embedded in neoclassical economic growth theory, which conceptualises growth as linear and predictable, and the entrepreneur as a rational economic manager. This conceptualisation has been critiqued for reinforcing the managerial discourse in entrepreneurship studies. Despite these critiques, both neoclassical growth theory and its adjacent managerial discourse dominate entrepreneurship studies, notably the literature on female entrepreneurs. The latter is the focus of this paper. Given this emphasis on growth, female entrepreneurs are portrayed as problematic because their growth lags behind that of their male counterparts. This image, which ignores the complexity and diversity of female entrepreneurs’ experience, persists in the literature due to the lack of studies that analyse the process and contextual factors surrounding female entrepreneurs’ experience. This study aims to address the subordination of female entrepreneurs by questioning the hegemonic logic of economic growth and the managerial discourse as a true representation of the entrepreneurial experience. This objective is achieved by drawing on Schumpeter’s theorising and narrative inquiry. This exploratory study undertakes in-depth interviews to gain insights into female entrepreneurs’ experience and the impact of the economic growth model and the managerial discourse on their performance. The narratives challenge a number of assumptions about female entrepreneurs. The participants occupied senior positions in the corporate world before setting up their businesses. This is at odds with much writing, which assumes that women underperform because they leave their careers, without having gained managerial experience, to achieve work-life balance. In line with Schumpeter, who distinguishes the entrepreneur from the manager, the participants’ main function was innovation. They did not believe that the managerial paradigm governing their corporate careers was applicable to their entrepreneurial experience. Formal planning and managerial rationality can hinder their decision-making process. The narratives point to the gap between the two worlds, which makes stepping into entrepreneurship a scary move. Schumpeter argues that the entrepreneurial process is evolutionary and that failure is an integral part of it. The participants’ entrepreneurial process was in fact irregular. The performance of new combinations was not always predictable. They therefore relied on their initiative. Being inhibited from deploying these traits had an adverse effect on business growth. The narratives also indicate that over-reliance on growth threatens business survival as the business faces competing pressures. The study offers theoretical and empirical contributions to (female) entrepreneurship studies by presenting Schumpeter’s theorising as an alternative theoretical framework to neoclassical economic growth theory. The study also reduces entrepreneurs’ vulnerability by making them aware of the negative influence that the linear growth model and the managerial discourse hold upon their performance. The study has implications for policy makers, as it generates new knowledge that incorporates the current social and economic changes in the context of entrepreneurs, which can no longer be captured by linear growth models, especially in the current economic climate.

Keywords: economic growth, female entrepreneurs, managerial discourse, Schumpeter

Procedia PDF Downloads 271
169 Hybridization of Mathematical Transforms for Robust Video Watermarking Technique

Authors: Harpal Singh, Sakshi Batra

Abstract:

The widespread and easy access to multimedia content, and the possibility of making numerous copies without significant loss of fidelity, have raised the need for digital rights management. This problem can be effectively addressed by digital watermarking technology, the concept of embedding some sort of data or special pattern (a watermark) in the multimedia content; this information can later prove ownership in case of a dispute, trace the marked document’s dissemination, identify a misappropriating person, or simply inform users about the rights-holder. The primary motive of digital watermarking is to embed the data imperceptibly and robustly in the host information. A large number of watermarking techniques have been developed to embed copyright marks or data in digital images, video, audio and other multimedia objects. With the development of digital video-based innovations, the copyright dilemma for the multimedia industry grows. Video watermarking has been proposed in recent years to address the issue of illicit copying and distribution of videos: it is the process of embedding copyright information in video bit streams. In practice, video watermarking schemes have to address serious challenges compared to image watermarking schemes, such as real-time requirements in video broadcasting, the large volume of inherently redundant data between frames, and the imbalance between motion and motionless regions, and they are particularly vulnerable to attacks, for example frame swapping, statistical analysis, rotation, noise, median and crop attacks. In this paper, an effective, robust and imperceptible video watermarking algorithm is proposed based on the hybridization of powerful mathematical transforms: the Fractional Fourier Transform (FrFT), the Discrete Wavelet Transform (DWT) and Singular Value Decomposition (SVD) using a redundant wavelet. This scheme utilizes various transforms for embedding watermarks on different layers using hybrid systems. For this purpose, the video frames are partitioned into layers (RGB) and the watermark is embedded in two forms in the video frames using SVD partitioning of the watermark and DWT sub-band decomposition of the host video, to facilitate copyright safeguarding as well as reliability. The FrFT orders are used as the encryption key, which allows the watermarking method to be more robust against various attacks. The fidelity of the scheme is enhanced by introducing key generation and a wavelet-based key embedding scheme. Thus, for watermark embedding and extraction, the same key is required; therefore, the key must be shared between the owner and the verifier via some secure channel. This paper demonstrates the performance by considering different quantitative metrics, namely Peak Signal-to-Noise Ratio, Structural Similarity Index and correlation values, and also applies several attacks to demonstrate robustness. Experimental results are presented to show that the proposed scheme can withstand a variety of video processing attacks while remaining imperceptible.
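As a simplified illustration of the DWT-SVD embedding step (the full scheme also involves the FrFT and a redundant wavelet, which are omitted here), the sketch below embeds a watermark into one colour layer of a frame by perturbing the singular values of the DWT approximation sub-band; the wavelet choice, embedding strength, and synthetic data are assumptions, not the paper's settings.

```python
import numpy as np
import pywt

# Minimal sketch: embed a watermark into one colour layer of a frame by modifying the
# singular values of its DWT LL sub-band (additive SVD embedding).
def embed_watermark(channel, watermark, alpha=0.05, wavelet="haar"):
    ll, (lh, hl, hh) = pywt.dwt2(channel.astype(float), wavelet)
    u, s, vt = np.linalg.svd(ll, full_matrices=False)
    uw, sw, vtw = np.linalg.svd(watermark.astype(float), full_matrices=False)
    s_marked = s + alpha * sw[: len(s)]           # additive embedding in singular values
    ll_marked = u @ np.diag(s_marked) @ vt
    marked = pywt.idwt2((ll_marked, (lh, hl, hh)), wavelet)
    return np.clip(marked, 0, 255)

# Usage on synthetic data standing in for one RGB layer of a video frame.
rng = np.random.default_rng(0)
frame_red = rng.integers(0, 256, size=(256, 256))
watermark = rng.integers(0, 256, size=(128, 128))
marked = embed_watermark(frame_red, watermark)
psnr = 10 * np.log10(255**2 / np.mean((frame_red - marked) ** 2))
print(f"PSNR of watermarked layer: {psnr:.1f} dB")
```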

Keywords: discrete wavelet transform, robustness, video watermarking, watermark

Procedia PDF Downloads 209
168 Barriers and Facilitators of Implementing Digital Mental Health Resources in Underserved Regions of Ontario during the COVID-19 Pandemic

Authors: Samaneh Abedini, Diana Urajnik, Nicole Naccarato

Abstract:

A high prevalence of mental health problems was observed in marginalized youth living in underserved regions of Ontario during the COVID-19 pandemic. To address this issue, a growing number of community-based traditional mental health services are offering digital mental health resources due to their accessibility, affordability, and scalability. The feasibility of providing these resources in underserved regions has been examined by researchers rather than by representatives of effective services within a mental health system. Indeed, digitalized mental health content is not routinely embedded within local mental health organizations' services in Northern Ontario, where it could make a substantial impact. To date, many technology-based mental health initiatives have not been effectively implemented in this region. The obstacles associated with implementing digitalized mental health resources in Northern Ontario may be unique to that region. Thus, specific context-based considerations might need to be applied when developing and implementing digital resources in regional mental health organizations in Northern Ontario. The target population was child-serving organizations situated in northeastern Ontario, specifically within Greater Sudbury and the Sudbury District. A sample of six organizations was selected, with representation from the mental health, social, and healthcare sectors. The project supervisor was in a unique position to access the organizations by virtue of existing relationships with the practice and lay communities at large; thus, recruitment was conducted through professional outreach in partnership with the Center for Rural and Northern Health Research (CRaNHR). Semi-structured interviews were conducted with 1-2 key personnel (e.g., administrator, clinician) from each participating organization. Audio recordings from the semi-structured interviews were transcribed verbatim and thematically analyzed, supported by NVivo. Thematic analysis of the data resulted in a total of 13 excerpts, which were categorized into two major themes: 1) digital mental health services as a valuable resource for organizations both during and after the pandemic, and 2) barriers and facilitators to a successful implementation of digital mental health resources in northern Ontario. Four secondary themes were identified: 1) perceived barriers to the implementation of digital mental health resources into the services offered by mental health agencies; 2) acceptability and feasibility of digital health resources for people living in northern Ontario; 3) data security, safety, and risk; and 4) connecting with clients. The employees of mental health organizations in northern Ontario considered digital mental health resources generally acceptable to youth. However, they raised several concerns that may affect their implementation into routine practice and service delivery. The implementation of digital systems should be simple and straightforward and should enhance rather than hinder clinical workflows for staff. A clear plan for implementing technological services is also required for the successful adoption of digital systems. For successful adoption and implementation of digital systems, staff views must be considered.

Keywords: COVID-19 pandemic, digital mental health resources, Ontario, underserved

Procedia PDF Downloads 82
167 Temperature Dependence of Photoluminescence Intensity of Europium Dinuclear Complex

Authors: Kwedi L. M. Nsah, Hisao Uchiki

Abstract:

Quantum computation is a new and exciting field making use of quantum mechanical phenomena. In classical computers, information is represented as bits, with values either 0 or 1, but a quantum computer uses quantum bits in an arbitrary superposition of 0 and 1, enabling it to reach beyond the limits predicted by classical information theory. A lanthanide-ion quantum computer is an organic crystal containing a lanthanide ion. Europium is a favored lanthanide, since it exhibits long nuclear spin coherence times, and Eu(III) is photo-stable and has two stable isotopes. In a europium organic crystal, the key factor is the mutual dipole-dipole interaction between two europium atoms. Crystals of the complex were formed by a 2:1 reaction of Eu(fod)3 and bpm. The transparent white crystals formed showed brilliant red luminescence under 405 nm laser excitation. Photoluminescence spectroscopy was performed at both room and cryogenic temperatures (300-14 K). The luminescence spectrum of [Eu(fod)3(μ-bpm)Eu(fod)3] showed the characteristic Eu(III) emission transitions in the range 570–630 nm, due to the deactivation of the 5D0 emissive state to the 7Fj levels. For the application of the dinuclear Eu3+ complex to a qubit device, attention was focused on the 5D0 - 7F0 transition, around 580 nm. The presence of the 5D0 - 7F0 transition at room temperature revealed that at least one europium site had no inversion center. Since the line is not split by the crystal field effect, any multiplicity observed is due to a multiplicity of Eu3+ sites. For a qubit element, a narrower line width of the 5D0 → 7F0 PL band of the Eu3+ ion is preferable. Cooling to cryogenic temperatures (300 K – 14 K) was used to reduce inhomogeneous broadening and distinguish between ions. A CCD image sensor was used for the low-temperature photoluminescence measurements, and a far better resolved luminescence spectrum was obtained by cooling the complex to 14 K. A red shift of 15 cm-1 in the 5D0 - 7F0 peak position was observed upon cooling; the line shifted towards lower wavenumbers. An emission spectrum in the 5D0 - 7F0 transition region was obtained to verify the line width. At this temperature, a peak with a magnitude three times that at room temperature was observed. The temperature dependence of the 5D0 state of Eu(fod)3(μ-bpm)Eu(fod)3 was strong in the vicinity of 60 K to 100 K. Thermal quenching was observed at temperatures above 100 K, beyond which the intensity decreased slowly with increasing temperature. This quenching of the Eu3+ emission with increasing temperature is caused by energy migration. 100 K is an appropriate temperature for the observation of the 5D0 - 7F0 emission peak. A europium dinuclear complex bridged by bpm was successfully prepared and monitored at cryogenic temperatures. At 100 K the Eu3+ complex has good thermal stability, and this temperature is appropriate for the observation of the 5D0 - 7F0 emission peak. Sintering the sample above 600 °C could also be considered, but the Eu3+ ion can be reduced to Eu2+, which is why cryogenic temperature measurement is preferable to other methods.
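The intensity drop reported above 100 K is the kind of behaviour often summarised by a single-barrier Arrhenius quenching law. The sketch below is illustrative only: it assumes the model I(T) = I0 / (1 + A·exp(-Ea/kBT)) and uses synthetic intensity values (not the measured spectra) to show how an activation energy for the quenching channel could be extracted from a temperature series.

```python
# Minimal sketch: fit an assumed Arrhenius quenching model to synthetic data.
import numpy as np
from scipy.optimize import curve_fit

KB_EV = 8.617e-5  # Boltzmann constant, eV/K

def quench_model(T, I0, A, Ea):
    """Integrated 5D0 emission intensity vs temperature, one escape channel."""
    return I0 / (1.0 + A * np.exp(-Ea / (KB_EV * T)))

# Synthetic "measurements" between 14 K and 300 K (arbitrary units)
rng = np.random.default_rng(0)
T = np.linspace(14.0, 300.0, 25)
I = quench_model(T, 3.0, 20.0, 0.06) + 0.03 * rng.standard_normal(T.size)

popt, _ = curve_fit(quench_model, T, I, p0=[2.0, 10.0, 0.05])
I0_fit, A_fit, Ea_fit = popt
print(f"I0 = {I0_fit:.2f} a.u., A = {A_fit:.1f}, Ea = {Ea_fit * 1e3:.0f} meV")
```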

Keywords: Eu(fod)₃, europium dinuclear complex, europium ion, quantum bit, quantum computer, 2,2'-bipyrimidine

Procedia PDF Downloads 153
166 Ecofriendly Synthesis of Au-Ag@AgCl Nanocomposites and Their Catalytic Activity on Multicomponent Domino Annulation-Aromatization for Quinoline Synthesis

Authors: Kanti Sapkota, Do Hyun Lee, Sung Soo Han

Abstract:

Nanocomposites have been widely used in various fields such as electronics, catalysis, and chemical, biological, biomedical and optical applications. They display broad biomedical properties such as antidiabetic, anticancer, antioxidant, antimicrobial and antibacterial activities. Moreover, nanomaterials have been used for wastewater treatment. Particularly, bimetallic hybrid nanocomposites exhibit unique features compared to their monometallic components. Hybrid nanomaterials not only afford the multifunctionality endowed by their constituents but can also show synergistic properties. In addition, these hybrid nanomaterials have noteworthy catalytic and optical properties. Notably, Au−Ag based nanoparticles can be employed in sensing and catalysis due to their characteristic composition-tunable plasmonic properties. Because of their importance and usefulness, various methods have been developed for their preparation. Generally, chemical methods have been described for synthesizing such bimetallic nanocomposites. In such chemical syntheses, harmful and hazardous chemicals cause environmental contamination and increase toxicity levels. Therefore, environmentally benign processes for the synthesis of nanomaterials are highly desirable to diminish such environmental and safety concerns. In this regard, here we disclose a simple, cost-effective, external-additive-free and eco-friendly method for the synthesis of Au-Ag@AgCl nanocomposites using Nephrolepis cordifolia root extract. Au-Ag@AgCl NCs were obtained by the simultaneous reduction of cationic Ag and Au and formation of AgCl in the presence of the plant extract. Particle sizes of 10 to 50 nm were observed, with an average diameter of 30 nm. The synthesized nanocomposite was characterized by various modern characterization techniques. For example, UV−visible spectroscopy was used to determine the optical activity of the synthesized NCs, and Fourier transform infrared (FT-IR) spectroscopy was employed to investigate the functional groups in the biomolecules that acted as both reducing and capping agents during the formation of the nanocomposites. Similarly, powder X-ray diffraction (XRD), transmission electron microscopy (TEM), X-ray photoelectron spectroscopy (XPS), thermogravimetric analysis (TGA) and energy-dispersive X-ray (EDX) spectroscopy were used to determine the crystallinity, size, oxidation states, thermal stability and weight loss of the synthesized nanocomposites. As a synthetic application, the synthesized nanocomposite exhibited excellent catalytic activity for the multicomponent synthesis of biologically interesting quinoline molecules via a domino annulation-aromatization reaction of aniline, arylaldehyde, and phenyl acetylene derivatives. Interestingly, the nanocatalyst was efficiently recycled five times without substantial loss of catalytic properties.

Keywords: nanoparticles, catalysis, multicomponent, quinoline

Procedia PDF Downloads 102
165 Performance Optimization of Polymer Materials Thanks to Sol-Gel Chemistry for Fuel Cells

Authors: Gondrexon, Gonon, Mendil-Jakani, Mareau

Abstract:

Proton Exchange Membrane Fuel Cells (PEMFCs) seem to be promising devices for converting hydrogen into electricity. A PEMFC is made of a Membrane Electrode Assembly (MEA) composed of a Proton Exchange Membrane (PEM) sandwiched between two catalytic layers. Nowadays, specific performance targets must be met in order to ensure the long-term expansion of this technology. The polymers currently used (perfluorinated polymers such as Nafion®) are unsuitable (loss of mechanical properties) for the high-temperature range. To overcome this issue, sulfonated polyaromatic polymers appear to be a good alternative since they have very good thermomechanical properties. However, their proton conductivity and chemical stability (oxidative resistance to the H2O2 formed during fuel cell (FC) operation) are very low. In our team, we patented an original concept of hybrid membranes able to fulfill the specific requirements for PEMFCs. This idea is based on the improvement of commercial polymer membranes via an easy and processable stabilization using sol-gel (SG) chemistry with judiciously embedded chemical functions. This strategy thus breaks with traditional approaches (design of new copolymers, use of inorganic fillers/additives). In 2020, we presented the elaboration and functional properties of a 1st generation of hybrid membranes with promising performance and durability. The latter was made by self-condensing a SG phase with (3-mercaptopropyl)trimethoxysilane (MPTMS) inside a commercial sPEEK host membrane. The successful in-situ condensation reactions of the MPTMS were demonstrated by measurements of mass uptake, FTIR spectroscopy (presence of aliphatic C-H bands) and 29Si solid-state NMR (T2 & T3 signals of self-condensation products). The ability of the SG phase to prevent the oxidative degradation of the sPEEK phase (thanks to the thiol chemical functions) was then proved with accelerated H2O2 tests and FC operation tests. A 2nd generation made of thiourea-functionalized SG precursors (named HTU & TTU) was made afterwards. By analysing in depth the morphologies of these different hybrids by direct-space analysis (AFM/SEM/TEM) and reciprocal-space analysis (SANS/SAXS/WAXS), we highlighted that both the SG phase morphology and its localisation within the host have a strong impact on the observed PEM functional properties. This relationship also depends on the chemical function embedded. The hybrids obtained have shown very good chemical resistance during aging tests (exposure to H2O2) compared to the commercial sPEEK. But the chemical function used is considered "sacrificial" and cannot react indefinitely with H2O2. Thus, we are now working on a 3rd generation made of both sacrificial and regenerative chemical functions, which are expected to inhibit the chemical aging of sPEEK more efficiently. With this work, we are confident of reaching a predictive understanding of the key parameters governing the final properties.

Keywords: fuel cells, ionomers, membranes, sPEEK, chemical stability

Procedia PDF Downloads 48
164 Waveguiding in an InAs Quantum Dots Nanomaterial for Scintillation Applications

Authors: Katherine Dropiewski, Michael Yakimov, Vadim Tokranov, Allan Minns, Pavel Murat, Serge Oktyabrsky

Abstract:

InAs quantum dots (QDs) in a GaAs matrix are a well-documented luminescent material with high light yield, as well as thermal and ionizing radiation tolerance due to quantum confinement. These benefits can be leveraged for high-efficiency, room-temperature scintillation detectors. The proposed scintillator is composed of InAs QDs acting as luminescence centers in a GaAs stopping medium, which also acts as a waveguide. This system has appealing potential properties, including high light yield (~240,000 photons/MeV) and fast capture of photoelectrons (2-5 ps), orders of magnitude better than currently used inorganic scintillators, such as LYSO or BaF2. The high refractive index of the GaAs matrix (n=3.4) ensures that light emitted by the QDs is waveguided and can be collected by an integrated photodiode (PD). Scintillation structures were grown using Molecular Beam Epitaxy (MBE) and consist of thick GaAs waveguiding layers with embedded sheets of modulation p-type doped InAs QDs. An AlAs sacrificial layer is grown between the waveguide and the GaAs substrate for epitaxial lift-off, to separate the scintillator film and transfer it to a low-index substrate for waveguiding measurements. One consideration when using a relatively low-density material like GaAs (~5.32 g/cm³) as a stopping medium is the matrix thickness in the dimension of radiation collection. Therefore, luminescence properties of very thick (4-20 microns) waveguides with up to 100 QD layers were studied. The optimization of the medium included QD shape, density, doping, and AlGaAs barriers at the waveguide surfaces to prevent non-radiative recombination. To characterize the efficiency of QD luminescence, temperature-dependent photoluminescence (PL) (77-450 K) was measured and fitted using a kinetic model. The PL intensity degrades by only 40% at RT, with an activation energy for electron escape from the QDs to the barrier of ~60 meV. Attenuation within the waveguide (WG) is a limiting factor for the lateral size of a scintillation detector, so PL spectroscopy in the waveguiding configuration was studied. Spectra were measured while the laser (630 nm) excitation point was scanned away from the collecting fiber coupled to the edge of the WG. The QD ground-state PL peak at 1.04 eV (1190 nm) was inhomogeneously broadened with a FWHM of 28 meV (33 nm) and showed a distinct red-shift due to self-absorption in the QDs. Attenuation stabilized at about 3 cm⁻¹ after the light had traveled over 1 mm through the WG. Finally, a scintillator sample was used to test detection and evaluate timing characteristics using 5.5 MeV alpha particles. With a 2D waveguide and a small-area integrated PD, the collected charge averaged 8.4×10⁴ electrons, corresponding to a collection efficiency of about 7%. The scintillation response had an 80 ps noise-limited time resolution and a QD decay time of 0.6 ns. The data confirm the unique properties of this scintillation detector, which can potentially be much faster than any currently used inorganic scintillator.
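As a rough consistency check, the ~7% collection efficiency follows directly from the quoted light yield and collected charge. The sketch below simply reproduces that arithmetic; it assumes the 240,000 photons/MeV figure applies to the full 5.5 MeV alpha deposit and that each collected electron corresponds to one detected photon, simplifications not stated in the abstract.

```python
# Back-of-the-envelope check of the reported ~7% collection efficiency.
light_yield = 240_000          # photons/MeV (quoted for the QD medium)
alpha_energy = 5.5             # MeV deposited by the alpha particle
collected_electrons = 8.4e4    # average collected charge, electrons

emitted_photons = light_yield * alpha_energy        # ~1.3e6 photons
efficiency = collected_electrons / emitted_photons  # ~0.06-0.07
print(f"emitted ~ {emitted_photons:.2e} photons, efficiency ~ {efficiency:.1%}")
```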

Keywords: GaAs, InAs, molecular beam epitaxy, quantum dots, III-V semiconductor

Procedia PDF Downloads 235
163 The Professionalization of Teachers in the Context of the Development of a Future-Oriented Technical and Vocational Education and Training System in Egypt

Authors: Sherin Ahmed El-Badry Sadek

Abstract:

This research scientifically examines what contribution the professionalization of teachers can make to the development of a future-oriented vocational education and training system in Egypt. For this purpose, a needs assessment of the Egyptian vocational training system, with its central actors and prevailing structures, forms the foundation of the study, which is theoretically underpinned by the attempt to resolve, to some extent, the tension between Luhmann's systems-theory approach and the actor-centered theory of professional teacher competence. The vocational education system, in particular, must be adaptable and flexible due to the rapidly changing qualification requirements. In view of the pace of technological progress and the associated market changes, vocational training is no longer to be understood only as an educational tool aimed at those who achieve poorer academic performance or are not motivated to take up a degree. Rather, it is to be understood as a cornerstone for the development of society, and international experience shows that it is the core of lifelong learning. But to what extent have education systems been able to react to these changes in their political, social, and technological environments? And how effective and sustainable are these changes actually? The vocational training system, in particular, has a strong impact on other social systems, which is why the parameters with the greatest leverage must be identified and adapted. Even if systems and structures are highly relevant, teachers must not hide behind them and must instead strive to develop further and to learn constantly. Despite numerous initiatives and programs to reform vocational training in Egypt, including the EU-funded Technical and Vocational Education and Training (TVET) reform phase I and phase II, the fit of skilled workers to the needs of the labor market is still insufficient. Surveys show that the majority of employers are very dissatisfied with the graduates that the vocational training system produces. The data were collected through guideline-based interviews with experts from the education system and relevant neighboring systems, which allowed central in-depth structures, as well as patterns of action and interpretation, to be reconstructed and subsequently fed into a matrix of recommendations for action. These recommendations are addressed to different decision-makers and stakeholders and are intended to serve as an impetus for the sustainable improvement of the Egyptian vocational training system. The research findings have shown that education, and in particular vocational training, is a political field that is characterized by a high degree of complexity and which is embedded in a barely manageable, highly branched landscape of structures and actors. At the same time, the vocational training system is not only determined by endogenous factors but also increasingly shaped by the dynamics of the environment and the neighboring social subsystems, with a mutual dependency relationship becoming apparent. These interactions must be taken into account in all decisions, even if prioritization of measures, and thus a clear sequence and process orientation, is of great urgency.

Keywords: competence orientation, educational policies, education systems, expert interviews, globalization, organizational development, professionalization, systems theory, teacher training, TVET system, vocational training

Procedia PDF Downloads 118
162 The Connection between Qom Seminaries and Interpretation of Sacred Sources in Ja‘farī Jurisprudence

Authors: Sumeyra Yakar, Emine Enise Yakar

Abstract:

Iran presents itself as Islamic, first and foremost, and thus, it can be said that sharī'a is the political and social centre of the state. However, actual practice reveals distinct interpretations and understandings of the sharī'a. The research can be categorised within the framework of logic in Islamic law and theology. The first task of this paper will be to identify how the sharī'a is understood in Iran by mapping out how judges apply the law in their respective jurisdictions. The attention will then move from a simple description of the diversity of sharī'a understandings to the question of how that diversity relates to social concepts and cultures. This, of course, necessitates a brief exploration of Iran's historical background, which will also allow for an understanding of sectarian influences and the significance of certain events. The main purpose is to reach an understanding of the process of applying sources to formulate solutions which are in accordance with sharī'a, and of how religious education is pursued in order for scholars to become official judges. Ultimately, this essay will attempt to gain such an understanding by linking these practices to the secondary sources of Islamic law. It is important to emphasise that these cultural components of Islamic law must be compatible with the aims of Islamic law and their fundamental sources. The sharī'a consists of more than just legal doctrines (fiqh) and interpretive activities (ijtihād). Its contextual and theoretical framework reveals a close relationship with cultural and historical elements of society. This has meant that its traditional reproduction over time has relied on being embedded in a highly particular form of life. Thus, as acknowledged by pre-modern jurists, the sharī'a encompasses a comprehensive approach to the requirements of justice in legal, historical and political contexts. In theological and legal areas that carry the specific authority of tradition, Iran adheres to Shīa' doctrine, and this explains why the Shīa' religious establishment maintains a dominant position in matters relating to law and the interpretation of sharī'a. The statements and interpretations of this tradition are distinctly different from sunnī interpretations, and so the use of different sources could be understood as the main reason for the discrepancies in the application of sharī'a between Iran and other Muslim countries. The sharī'a has often accommodated prevailing customs; moreover, it has developed legal mechanisms to allow for its adaptation to particular needs and circumstances in society. While jurists may operate within the realm of governance and politics, the moral authority of the sharī'a ensures that these actors legitimate their actions with reference to God's commands. The Iranian regime enshrines the principle of vilāyāt-i faqīh (guardianship of the jurist), which enables jurists to resolve the conflict between law as an ideal system, in theory, and law in practice. The paper aims to show how the religious educational system works in harmony with the governmental authorities through the concept of vilāyāt-i faqīh in Iran and contributes to the creation of religious custom in society.

Keywords: guardianship of the jurist (vilāyāt-i faqīh), imitation (taqlīd), seminaries (hawza), Shi’i jurisprudence

Procedia PDF Downloads 198
161 Thermal Energy Storage Based on Molten Salts Containing Nano-Particles: Dispersion Stability and Thermal Conductivity Using Multi-Scale Computational Modelling

Authors: Bashar Mahmoud, Lee Mortimer, Michael Fairweather

Abstract:

New methods have recently been introduced to improve the thermal properties of molten nitrate salts (a binary mixture of NaNO3:KNO3 in 60:40 wt. %) by doping them with minute concentrations of nanoparticles, in the range of 0.5 to 1.5 wt. %, to form so-called nano-heat-transfer fluids, apt for thermal energy transfer and storage applications. The present study aims to assess the stability of these nanofluids using an advanced computational modelling technique, Lagrangian particle tracking. A multi-phase solid-liquid model is used, where the motion of the nanoparticles embedded in the suspending fluid is treated by an Euler-Lagrange hybrid scheme with fixed time stepping. This technique enables measurement of various multi-scale forces whose characteristic length and time scales are quite different. Two systems are considered, both consisting of 50 nm Al2O3 ceramic nanoparticles suspended in fluids of different density ratios. This includes both water (5 to 95 °C) and molten nitrate salt (220 to 500 °C) at volume fractions ranging between 1% and 5%. Dynamic properties of both phases are coupled to the ambient temperature of the fluid suspension. The three-dimensional computational region consists of a 1 μm cube, and particles are homogeneously distributed across the domain. Periodic boundary conditions are enforced. The particle equations of motion are integrated using the fourth-order Runge-Kutta algorithm with a very small time step, Δts, set at 10⁻¹¹ s. The implemented technique captures the key dynamics of aggregating nanoparticles, which involve Brownian motion, soft-sphere particle-particle collisions, and Derjaguin, Landau, Verwey, and Overbeek (DLVO) forces. These mechanisms form the basis of the predictive model of aggregation in nano-suspensions. An energy-transport-based method of predicting the thermal conductivity of the nanofluids is also used to determine the thermal properties of the suspension. The simulation results confirm the effectiveness of the technique. The values are in excellent agreement with the theoretical and experimental data obtained from similar studies. The predictions indicate the role of Brownian motion and the DLVO force (comprising both the repulsive electric double layer and the attractive van der Waals contribution) and their influence on the level of nanoparticle agglomeration. The nano-aggregates formed were found to play a key role in governing the thermal behavior of the nanofluids at various particle concentrations. The presentation will include a quantitative assessment of these forces and mechanisms, leading to conclusions about nanofluid heat transfer performance and thermal characteristics and their potential application in solar thermal energy plants.
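A stripped-down illustration of the particle update described above is sketched below: the gap coordinate of a single 50 nm particle is advanced with a fixed-step fourth-order Runge-Kutta scheme under Stokes drag, a Brownian force, and a DLVO-type pair interaction (van der Waals attraction plus double-layer repulsion). All parameter values are assumed for illustration, the Brownian force is frozen over each step, and the geometry is reduced to one dimension, so this is not the study's multi-particle solver.

```python
# Minimal one-dimensional sketch of RK4 Lagrangian tracking with DLVO forces.
import numpy as np

# Particle / fluid parameters (order-of-magnitude values for a 50 nm particle)
a   = 25e-9           # particle radius, m
rho = 3970.0          # Al2O3 density, kg/m^3
m   = rho * 4 / 3 * np.pi * a**3
mu  = 1e-3            # fluid dynamic viscosity, Pa.s (assumed)
kT  = 1.38e-23 * 300  # thermal energy, J

# DLVO constants (assumed, for illustration only)
A_H   = 1e-20         # Hamaker constant, J
KAPPA = 1e8           # inverse Debye length, 1/m
Z_EDL = 1e-11         # double-layer force prefactor, N

def dlvo_force(h):
    """Pair force vs surface-to-surface gap h (positive pushes the gap open)."""
    f_vdw = -A_H * a / (12.0 * h**2)            # van der Waals attraction
    f_edl = Z_EDL * np.exp(-KAPPA * h)          # double-layer repulsion
    return f_vdw + f_edl

def accel(state, f_brownian):
    h, v = state
    drag = -6.0 * np.pi * mu * a * v            # Stokes drag
    return np.array([v, (dlvo_force(h) + drag + f_brownian) / m])

def rk4_step(state, dt, rng):
    # Brownian force amplitude from fluctuation-dissipation, frozen over dt
    gamma = 6.0 * np.pi * mu * a
    f_b = rng.normal(0.0, np.sqrt(2.0 * gamma * kT / dt))
    k1 = accel(state, f_b)
    k2 = accel(state + 0.5 * dt * k1, f_b)
    k3 = accel(state + 0.5 * dt * k2, f_b)
    k4 = accel(state + dt * k3, f_b)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

rng = np.random.default_rng(0)
state = np.array([20e-9, 0.0])           # initial gap 20 nm, particle at rest
for _ in range(1000):
    state = rk4_step(state, 1e-11, rng)  # dt = 10^-11 s, as in the abstract
print(f"gap after 10 ns: {state[0] * 1e9:.2f} nm")
```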

Keywords: thermal energy storage, molten salt, nano-fluids, multi-scale computational modelling

Procedia PDF Downloads 169
160 Wind Load Reduction Effect of Exterior Porous Skin on Facade Performance

Authors: Ying-Chang Yu, Yuan-Lung Lo

Abstract:

Building envelope design is one of the most popular design fields in the architectural profession nowadays. The main design trend of such systems is to highlight the designer's aesthetic intention in the outlook of the building project. Following this trend, the building envelope contains more and more layers of components, such as double-skin façades, photovoltaic panels, solar control systems, or even ornamental components. These exterior components are designed for various functional purposes. Most researchers focus on how these exterior elements can be secured in a structurally sound manner. However, not many researchers consider that these elements could help to improve the performance of the façade system. When exterior elements are deployed at a large scale, they create an additional layer outside the original façade system and act like a porous interface that interferes with the aerodynamics of the façade surface at the micro-scale. Standard façade performance consists of 'water penetration, air infiltration rate, operation force, and component deflection ratio', and these key performance measures are largely driven by the 'design wind load' specified in local regulations. A design wind load is usually determined by the maximum wind pressure that occurs on the surface due to the geometry or location of the building in extreme conditions. This research was designed to identify the air damping phenomenon of micro-turbulence caused by a porous exterior layer, which leads to a reduction of the surface wind load and an improvement of façade system performance. A series of wind tunnel tests on a dynamic pressure sensor array covered by porous exterior skins of various scales was conducted to verify the wind pressure reduction effect. The test specimens were designed to simulate a typical building with a two-meter extension offset from the building surface. Multiple porous exterior skins were prepared to replicate various surface opening ratios, which may cause different levels of damping. Pitot static tubes, thermal anemometers, and hot-film probes were adopted to collect surface dynamic pressure data behind the porous skin. Turbulence and distributed resistance are the two main aerodynamic factors that reduce the actual wind pressure. From initial observations, the surface wind pressure readings were effectively reduced behind the porous media. In such cases, an actual building envelope system may benefit from a porous skin through the reduction of surface wind pressure, which may consequently improve the performance of the envelope system.
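As an illustration of how such a reduction can be quantified, the sketch below normalises hypothetical surface-pressure records into pressure coefficients using the Pitot-static reference, Cp = (p - p_static) / (0.5·ρ·U²), and compares mean and peak values behind a porous skin with those on a bare surface. The records and the opening ratio are invented for the example and are not the wind-tunnel data.

```python
# Minimal sketch: pressure-coefficient comparison on synthetic records.
import numpy as np

rng = np.random.default_rng(2)
rho, U = 1.2, 15.0                   # air density (kg/m^3), reference speed (m/s)
q_ref = 0.5 * rho * U**2             # reference dynamic pressure, Pa

def cp_stats(p_surface, p_static):
    cp = (p_surface - p_static) / q_ref
    return cp.mean(), cp.max()

# Hypothetical 10 s records at 500 Hz for the bare facade and a 40%-open skin
t = np.arange(0.0, 10.0, 0.002)
p_static = np.zeros_like(t)
p_bare   = 0.8 * q_ref + 0.25 * q_ref * rng.standard_normal(t.size)
p_porous = 0.6 * q_ref + 0.15 * q_ref * rng.standard_normal(t.size)

mean_b, peak_b = cp_stats(p_bare, p_static)
mean_p, peak_p = cp_stats(p_porous, p_static)
print(f"mean Cp reduction: {100 * (1 - mean_p / mean_b):.0f}%")
print(f"peak Cp reduction: {100 * (1 - peak_p / peak_b):.0f}%")
```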

Keywords: multi-layer facade, porous media, facade performance, turbulence and distributed resistance, wind tunnel test

Procedia PDF Downloads 196
159 An Adiabatic Quantum Optimization Approach for the Mixed Integer Nonlinear Programming Problem

Authors: Maxwell Henderson, Tristan Cook, Justin Chan Jin Le, Mark Hodson, YoungJung Chang, John Novak, Daniel Padilha, Nishan Kulatilaka, Ansu Bagchi, Sanjoy Ray, John Kelly

Abstract:

We present a method of using adiabatic quantum optimization (AQO) to solve a mixed integer nonlinear programming (MINLP) problem instance. The MINLP problem is a general form of a set of NP-hard optimization problems that are critical to many business applications. It requires optimizing a set of discrete and continuous variables with nonlinear and potentially nonconvex constraints. Obtaining an exact, optimal solution for MINLP problem instances of non-trivial size using classical computation methods is currently intractable. Current leading algorithms leverage heuristic and divide-and-conquer methods to determine approximate solutions. Creating more accurate and efficient algorithms is an active area of research. Quantum computing (QC) has several theoretical benefits compared to classical computing, through which QC algorithms could obtain MINLP solutions superior to those of current algorithms. AQO is a particular form of QC that could offer more near-term benefits than other forms of QC, as hardware development is in a more mature state and devices are currently commercially available from D-Wave Systems Inc. It is also designed for optimization problems: it uses an effect called quantum tunneling to explore all lowest points of an energy landscape where classical approaches could become stuck in local minima. Our work used a novel algorithm formulated for AQO to solve a special type of MINLP problem. The research focused on determining: 1) whether the problem can be solved using AQO, 2) whether it can be solved by current hardware, 3) what the currently achievable performance is, 4) what the performance will be on projected future hardware, and 5) when AQO is likely to provide a benefit over classical computing methods. Two different methods, integer range and 1-hot encoding, were investigated for transforming the MINLP problem instance constraints into a mathematical structure that can be embedded directly onto the current D-Wave architecture. For testing and validation, a D-Wave 2X device was used, as well as QxBranch's QxLib software library, which includes a QC simulator based on simulated annealing. Our results indicate that it is mathematically possible to formulate the MINLP problem for AQO, but that currently available hardware is unable to solve problems of useful size. Classical general-purpose simulated annealing is currently able to solve larger problem sizes, but it does not scale well, and such methods would likely be outperformed in the future by improved AQO hardware with higher qubit connectivity and lower temperatures. If larger AQO devices are able to show improvements that trend in this direction, commercially viable solutions to the MINLP for particular applications could be implemented on hardware projected to be available in 5-10 years. Continued investigation into optimal AQO hardware architectures and novel methods for embedding MINLP problem constraints onto those architectures is needed to realize those commercial benefits.
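For readers unfamiliar with the 1-hot encoding step mentioned above, the toy sketch below shows the general idea on an invented problem (not the paper's MINLP instance): an integer variable is represented by one binary variable per admissible value, the one-hot constraint is folded into the objective as a quadratic penalty, and the result is an ordinary QUBO matrix of the kind that can be embedded on quantum annealing hardware.

```python
# Minimal sketch: 1-hot encoding of an integer variable into a QUBO.
import itertools
import numpy as np

values = np.arange(5)          # admissible integer values of x: 0..4
n = len(values)
target = 3                     # toy objective: minimize (x - 3)^2
penalty = 10.0                 # weight of the one-hot constraint (sum b_i - 1)^2

# Expand penalty * (sum_i b_i - 1)^2 over binaries (b_i^2 = b_i):
#   diagonal gets -penalty, each off-diagonal pair gets +2*penalty, constant +penalty.
Q = np.zeros((n, n))
for i in range(n):
    Q[i, i] = (values[i] - target) ** 2 - penalty
    for j in range(i + 1, n):
        Q[i, j] = 2.0 * penalty

def energy(b):
    b = np.array(b)
    return float(b @ Q @ b) + penalty   # add the constant term of the penalty

# Brute-force check of the QUBO minimum (feasible for 5 binaries)
best = min(itertools.product([0, 1], repeat=n), key=energy)
print("best assignment:", best, "energy:", energy(best))
# expected: the one-hot vector (0, 0, 0, 1, 0) selecting x = 3, energy 0.0
```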

Keywords: adiabatic quantum optimization, mixed integer nonlinear programming, quantum computing, NP-hard

Procedia PDF Downloads 495
158 Delegation or Assignment: Registered Nurses’ Ambiguity in Interpreting Their Scope of Practice in Long Term Care Settings

Authors: D. Mulligan, D. Casey

Abstract:

Introductory Statement: Delegation is when a registered nurse (RN) transfers a task or activity that is normally within their scope of practice to another person (the delegatee). RN delegation is common practice with unregistered staff, e.g., student nurses and health care assistants (HCAs). As the role of the HCA is increasingly embedded as a direct care and support role, especially in long-term residential care for older adults, there is uncertainty among RNs as to their role as delegators. Assignment is when a task that falls within the role specification of the delegatee is transferred to that person. RNs in long-term care (LTC) for older people are increasingly working in teams where there are fewer RNs and more HCAs providing direct care to the residents. The RN is responsible and accountable for their decision to delegate and assign tasks to HCAs. In an interpretive multiple case study exploring how delegation of tasks by RNs to HCAs occurred in long-term care settings in Ireland, the importance of RNs understanding their scope of practice emerged. Methodology: Focus group interviews and individual interviews were undertaken as part of a multiple case study. Both cases, anonymized as Case A and Case B, were within the public health service in Ireland. The case study sites were long-term care settings for older adults located in different social care divisions and in different geographical areas. Four focus group interviews with staff nurses and three individual interviews with CNMs were undertaken. The interactive data analysis approach was the analytical framework used, with within-case and cross-case analysis. The theoretical lens of organizational role theory, applying the role episode model (REM), was used to understand, interpret, and explain the findings. Study Findings: RNs and CNMs understood the role of the nurse regulator and the scope of practice. RNs understood that the RN was accountable for the care and support provided to residents. However, RNs and CNM2s could not describe delegation in the context of their scope of practice. In both cases, the RNs did not have a standardized process for assessing HCA competence to undertake nursing tasks or interventions. RNs did not routinely supervise HCAs. Tasks were assigned and not delegated. There were differences between the cases in relation to understanding which nursing tasks required delegation. HCAs in Case A undertook clinical vital sign assessments and documentation. HCAs in Case B did not routinely undertake these activities. Delegation and assignment were influenced by organizational factors, e.g., the model of care, the absence of delegation policies, inadequate RN education on delegation, and a lack of RN and HCA role clarity. Concluding Statement: Nurse staffing levels and skill mix in long-term care settings continue to change, with more HCAs providing more direct care and support. With decreasing RN staffing levels, RNs will be required to delegate and assign more direct care to HCAs. There is a requirement to distinguish between RN assignment and delegation at policy, regulation, and organizational levels.

Keywords: assignment, delegation, registered nurse, scope of practice

Procedia PDF Downloads 126
157 The Concept of Path in Original Buddhism and the Concept of Psychotherapeutic Improvement

Authors: Beth Jacobs

Abstract:

The landmark movement of Western clinical psychology in the 20th century was the development of psychotherapy. The landmark movement of clinical psychology in the 21st century will be the absorption of meditation practices from Buddhist psychology. While millions of people explore meditation and related philosophy, very few people are exposed to the materials of original Buddhism on this topic, especially to the Theravadan Abhidharma. The Abhidharma is an intricate system of lists and matrixes that were used to understand and remember Buddha's teaching. The Abhidharma delineates the first psychological system of Buddhism: how the mind works in the universe of reality and why meditation training strengthens and purifies the experience of life. Its lists outline the psychology of mental constructions, perception, emotion and cosmological causation. While the Abhidharma is technical, elaborate and complex, its essential purpose relates to the central purpose of clinical psychology: to relieve human suffering. Like Western depth psychology, its methodology rests on understanding underlying processes of consciousness and perception. What clinical psychologists might describe as therapeutic improvement, the Abhidharma delineates as a specific pathway of purified actions of consciousness. This paper discusses the concept of 'path' as presented in aspects of the Theravadan Abhidharma and relates this to current clinical psychological views of therapy outcomes and gains. The core path in Buddhism is the Eight-Fold Path, which is the fourth noble truth and the launching of activity toward liberation. The path is not composed of eight ordinal steps; it is eight-fold and is described as opening the way, not funneling choices. The specific path in the Abhidharma is described in many steps of development of consciousness activities. The path is not something a human moves on, but something that moments of consciousness develop within. 'Cittas' are extensively described in the Abhidharma as the atomic-level units of raw actions of consciousness touching upon an object in a field, and 121 types of cittas are categorized. The cittas are embedded in the mental factors, which could be described as the psychological packaging elements of our experiences of consciousness. Based on these constellations of infinitesimal, linked occurrences of consciousness, cittas are categorized by dimensions of purification. A path is a chain of cittas developing through causes and conditions. There are no selves, no pronouns in the Abhidharma. Instead of me walking a path, this is about a person working with conditions to cultivate a stream of consciousness that is pure, immediate, direct and generous. The same effort, in very different terms, informs the work of most psychotherapies. Depth psychology seeks to release the bound, unconscious elements of mental process into the clarity of realization. Cognitive and behavioral psychologies work on breaking down automatic thought valuations and actions, changing schemas and interpersonal dynamics. Understanding how the original Buddhist concept of positive human development relates to the clinical psychological concept of therapy weaves together two brilliant systems of thought on the development of human well-being.

Keywords: Abhidharma, Buddhist path, clinical psychology, psychotherapeutic outcome

Procedia PDF Downloads 177
156 Dimensionality Reduction in Modal Analysis for Structural Health Monitoring

Authors: Elia Favarelli, Enrico Testi, Andrea Giorgetti

Abstract:

Autonomous structural health monitoring (SHM) of many structures and bridges became a topic of paramount importance for maintenance purposes and safety reasons. This paper proposes a set of machine learning (ML) tools to perform automatic feature selection and detection of anomalies in a bridge from vibrational data, and compares different feature extraction schemes to increase the accuracy and reduce the amount of data collected. As a case study, the Z-24 bridge is considered because of its extensive database of accelerometric data in both standard and damaged conditions. The proposed framework starts from the first four fundamental frequencies extracted through operational modal analysis (OMA) and clustering, followed by density-based time-domain filtering (tracking). The fundamental frequencies extracted are then fed to a dimensionality reduction block implemented through two different approaches: feature selection (intelligent multiplexer), which tries to estimate the most reliable frequencies based on the evaluation of some statistical features (i.e., mean value, variance, kurtosis), and feature extraction (auto-associative neural network (ANN)), which combines the fundamental frequencies to extract new damage-sensitive features in a low-dimensional feature space. Finally, one-class classifier (OCC) algorithms perform anomaly detection, trained with standard-condition points and tested with both normal and anomalous ones. In particular, a new anomaly detection strategy is proposed, namely one class classifier neural network two (OCCNN2), which exploits the classification capability of standard classifiers in an anomaly detection problem by finding the standard class (the boundary of the feature space in normal operating conditions) through a two-step approach: coarse and fine boundary estimation. The coarse estimation uses classic OCC techniques, while the fine estimation is performed through a feedforward neural network (NN) that exploits the boundaries estimated in the coarse step. The detection algorithms are then compared with known methods based on principal component analysis (PCA), kernel principal component analysis (KPCA), and the auto-associative neural network (ANN). In many cases, the proposed solution increases the performance with respect to the standard OCC algorithms in terms of F1 score and accuracy. In particular, by evaluating the correct features, the anomaly can be detected with an accuracy and an F1 score greater than 96% with the proposed method.
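As a reference point for the PCA-based baseline mentioned above, the sketch below shows the general recipe on synthetic data (not the Z-24 records): a PCA model is fitted on standard-condition frequency vectors, and test points are declared anomalous when their reconstruction error exceeds a threshold taken from the training set.

```python
# Minimal sketch: PCA reconstruction-error anomaly detection on modal frequencies.
import numpy as np

rng = np.random.default_rng(1)

# Training set: 4 fundamental frequencies (Hz) in standard conditions, with a
# common environmental variation on the first two modes plus measurement noise.
f_nominal = np.array([3.9, 5.0, 9.8, 10.3])
env = 0.05 * rng.standard_normal((500, 1)) * np.array([1.0, 1.0, 0.0, 0.0])
X_train = f_nominal + env + 0.01 * rng.standard_normal((500, 4))

# Fit PCA via SVD on centred data, keeping the dominant (environmental) component
mean = X_train.mean(axis=0)
_, _, Vt = np.linalg.svd(X_train - mean, full_matrices=False)
W = Vt[:1]

def reconstruction_error(X):
    Z = (X - mean) @ W.T          # project onto the retained component(s)
    X_hat = Z @ W + mean          # reconstruct in the original frequency space
    return np.linalg.norm(X - X_hat, axis=1)

threshold = np.quantile(reconstruction_error(X_train), 0.99)

# Test points: one standard, one "damaged" with the first frequency shifted down
X_test = np.vstack([f_nominal + 0.01 * rng.standard_normal(4),
                    f_nominal - np.array([0.2, 0.0, 0.0, 0.0])])
print(reconstruction_error(X_test) > threshold)   # expected: [False  True]
```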

Keywords: anomaly detection, frequencies selection, modal analysis, neural network, sensor network, structural health monitoring, vibration measurement

Procedia PDF Downloads 98
155 Induction Machine Design Method for Aerospace Starter/Generator Applications and Parametric FE Analysis

Authors: Wang Shuai, Su Rong, K. J. Tseng, V. Viswanathan, S. Ramakrishna

Abstract:

The More-Electric-Aircraft concept in the aircraft industry places an increasing demand on embedded starter/generators (ESGs). The high-speed and high-temperature environment within an engine poses great challenges to the operation of such machines. In view of such challenges, squirrel cage induction machines (SCIMs) have shown advantages due to their simple rotor structure, absence of temperature-sensitive components, and low torque ripple, among others. The tight operating constraints arising from typical ESG applications, together with the detailed operating principles of SCIMs, have been exploited to derive a mathematical interpretation of the ESG-SCIM design process. The resultant non-linear mathematical treatment yielded a unique solution to the SCIM design problem for each configuration of pole pair number p, slots/pole/phase q and conductors/slot zq, easily implemented via loop patterns. It was also found that not all configurations led to feasible solutions, and corresponding observations have been elaborated. The developed mathematical procedures also provided an effective framework for optimization among electromagnetic, thermal and mechanical aspects by allocating corresponding degree-of-freedom variables. Detailed 3D FEM analysis has been conducted to validate the resultant machine performance against the design specifications. To obtain higher power ratings, electrical machines often have to increase the slot areas to accommodate more windings. Since the available space for embedding such machines inside an engine is usually short in length, an axial air-gap arrangement appears more appealing than its radial-gap counterpart. The aforementioned approach has been adopted in case studies designing series of AFIMs and RFIMs, respectively, with increasing power ratings. The following observations were obtained. Under the strict rotor diameter limitation, the AFIM extended axially to provide the increased slot areas, while the RFIM expanded radially with the same axial length. Beyond certain power ratings, the AFIM led to a long cylindrical geometry, while the RFIM topology resulted in the desired short disk shape. Besides the different dimension growth patterns, AFIMs and RFIMs also exhibited dissimilar performance degradation in power factor, torque ripple and rated slip as the power ratings increased. Parametric response curves were plotted to better illustrate these influences of increased power ratings. The case studies may provide a basic guideline that could assist potential users in making decisions between AFIM and RFIM for relevant applications.
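The loop pattern mentioned above can be illustrated with a toy enumeration (the sizing checks below are placeholders, not the authors' design equations): every combination of pole pairs p, slots/pole/phase q, and conductors/slot zq is examined, a few derived quantities are computed, and combinations violating the assumed constraints are discarded, which is how infeasible configurations drop out of the design space.

```python
# Minimal sketch: enumerate (p, q, zq) configurations under assumed constraints.
import itertools

F_SUPPLY = 1000.0      # Hz, assumed high-frequency supply for a high-speed ESG
N_PHASES = 3
MAX_SLOTS = 72         # assumed manufacturing limit on stator slot count
MAX_TIP_SPEED = 250.0  # m/s, assumed mechanical limit at the rotor surface
ROTOR_DIAMETER = 0.08  # m, assumed rotor diameter constraint

feasible = []
for p, q, zq in itertools.product(range(1, 5), range(1, 5), range(2, 13, 2)):
    slots = 2 * p * q * N_PHASES                 # total stator slot count
    sync_speed = F_SUPPLY / p                    # synchronous speed, rev/s
    tip_speed = 3.14159 * ROTOR_DIAMETER * sync_speed
    if slots > MAX_SLOTS or tip_speed > MAX_TIP_SPEED:
        continue                                 # configuration not feasible
    feasible.append((p, q, zq, slots, 60.0 * sync_speed))

for p, q, zq, slots, rpm in feasible[:5]:
    print(f"p={p} q={q} zq={zq}: {slots} slots, {rpm:.0f} rpm")
```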

Keywords: axial flux induction machine, electrical starter/generator, finite element analysis, squirrel cage induction machine

Procedia PDF Downloads 435