Search results for: automated vehicles
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1930

250 Synthesis of Microencapsulated Phase Change Material for Adhesives with Thermoregulating Properties

Authors: Christin Koch, Andreas Winkel, Martin Kahlmeyer, Stefan Böhm

Abstract:

Due to environmental regulations on greenhouse gas emissions and the depletion of fossil fuels, there is an increasing interest in electric vehicles. To maximize their driving range, batteries with high storage capacities are needed. In most electric cars, rechargeable lithium-ion batteries are used because of their high energy density. However, it has to be taken into account that these batteries generate a large amount of heat during the charge and discharge processes. This leads to a decrease in lifetime and damage to the battery cells when the temperature exceeds the defined operating range. To ensure an efficient performance of the battery cells, reliable thermal management is required. Currently, the cooling is achieved by heat sinks (e.g., cooling plates) bonded to the battery cells with a thermally conductive adhesive (TCA) that directs the heat away from the components. Especially when large amounts of heat have to be dissipated spontaneously due to peak loads, the principle of heat conduction is not sufficient, so attention must be paid to the mechanism of heat storage. An efficient method to store thermal energy is the use of phase change materials (PCM). Through an isothermal phase change, PCM can briefly absorb or release thermal energy at a constant temperature. If the phase change takes place in the transition from solid to liquid, heat is stored during melting and is released to the surroundings during solidification upon cooling. The presented work demonstrates the great potential of thermally conductive adhesives filled with microencapsulated PCM to limit peak temperatures in battery systems. The encapsulation of the PCM avoids aging effects (e.g., migration) and chemical reactions between the PCM and the adhesive matrix components. In this study, microencapsulation was carried out by in situ polymerization. The microencapsulated PCM was characterized by FT-IR spectroscopy, and the thermal properties were measured by DSC and the laser flash method. The mechanical properties, electrical and thermal conductivity, and adhesive toughness of the TCA/PCM composite were also investigated.
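
To illustrate the heat-buffering mechanism described above, the sketch below compares the energy absorbed by sensible heating alone with the energy absorbed when a melting PCM is present. The material values (specific heat, latent heat of fusion, PCM mass fraction) are illustrative assumptions, not measurements from this study.

```python
# Illustrative sketch: latent-heat buffering by a microencapsulated PCM in an adhesive layer.
# All material values below are assumed for illustration only.

def heat_absorbed_sensible(mass_kg, cp_j_per_kg_k, delta_t_k):
    """Sensible heat stored when a material is simply warmed by delta_t_k."""
    return mass_kg * cp_j_per_kg_k * delta_t_k

def heat_absorbed_with_pcm(mass_kg, cp_j_per_kg_k, delta_t_k,
                           pcm_fraction, latent_heat_j_per_kg):
    """Sensible heat plus the latent heat absorbed while the PCM melts isothermally."""
    sensible = heat_absorbed_sensible(mass_kg, cp_j_per_kg_k, delta_t_k)
    latent = mass_kg * pcm_fraction * latent_heat_j_per_kg
    return sensible + latent

if __name__ == "__main__":
    mass = 0.050          # kg of adhesive layer (assumed)
    cp = 1500.0           # J/(kg K), assumed effective specific heat
    dT = 10.0             # K temperature rise during a peak load
    pcm_frac = 0.30       # assumed PCM mass fraction in the composite
    latent = 180e3        # J/kg, typical order of magnitude for paraffin-type PCMs

    q_plain = heat_absorbed_sensible(mass, cp, dT)
    q_pcm = heat_absorbed_with_pcm(mass, cp, dT, pcm_frac, latent)
    print(f"without PCM: {q_plain:.0f} J, with PCM: {q_pcm:.0f} J")
```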

Keywords: phase change material, microencapsulation, adhesive bonding, thermal management

Procedia PDF Downloads 48
249 Advanced Techniques in Semiconductor Defect Detection: An Overview of Current Technologies and Future Trends

Authors: Zheng Yuxun

Abstract:

This review critically assesses the advancements and prospective developments in defect detection methodologies within the semiconductor industry, an essential domain that significantly affects the operational efficiency and reliability of electronic components. As semiconductor devices continue to decrease in size and increase in complexity, the precision and efficacy of defect detection strategies become increasingly critical. Tracing the evolution from traditional manual inspections to the adoption of advanced technologies employing automated vision systems, artificial intelligence (AI), and machine learning (ML), the paper highlights the significance of precise defect detection in semiconductor manufacturing by discussing various defect types, such as crystallographic errors, surface anomalies, and chemical impurities, which profoundly influence the functionality and durability of semiconductor devices, underscoring the necessity for their precise identification. The narrative transitions to the technological evolution in defect detection, depicting a shift from rudimentary methods like optical microscopy and basic electronic tests to more sophisticated techniques including electron microscopy, X-ray imaging, and infrared spectroscopy. The incorporation of AI and ML marks a pivotal advancement towards more adaptive, accurate, and expedited defect detection mechanisms. The paper addresses current challenges, particularly the constraints imposed by the diminutive scale of contemporary semiconductor devices, the elevated costs associated with advanced imaging technologies, and the demand for rapid processing that aligns with mass production standards. A critical gap is identified between the capabilities of existing technologies and the industry's requirements, especially concerning scalability and processing velocities. Future research directions are proposed to bridge these gaps, suggesting enhancements in the computational efficiency of AI algorithms, the development of novel materials to improve imaging contrast in defect detection, and the seamless integration of these systems into semiconductor production lines. By offering a synthesis of existing technologies and forecasting upcoming trends, this review aims to foster the dialogue and development of more effective defect detection methods, thereby facilitating the production of more dependable and robust semiconductor devices. This thorough analysis not only elucidates the current technological landscape but also paves the way for forthcoming innovations in semiconductor defect detection.

Keywords: semiconductor defect detection, artificial intelligence in semiconductor manufacturing, machine learning applications, technological evolution in defect analysis

Procedia PDF Downloads 3
248 The Evolution of Moral Politics: Analysis on Moral Foundations of Korean Parties

Authors: Changdong Oh

Abstract:

With the arrival of post-industrial society, social scientists have been giving attention to the question of which factors shape the cleavages of political parties. In particular, there is a heated controversy over whether and how social and cultural values influence the identities of parties and voting behavior. Drawing from Moral Foundations Theory (MFT), which approached similar issues by considering the effect of five moral foundations on people's political decision-making, this study investigates the role of moral rhetoric in the evolution of Korean political parties. The researcher collected official announcements released by the two major parties (the Democratic Party of Korea and the Saenuri Party) from 2007 to 2016 and analyzed the data using the Word2Vec algorithm and the Moral Foundations Dictionary. The five moral decision modules of MFT, composed of care, fairness (individualistic morality), loyalty, authority and sanctity (group-based, Durkheimian morality), can be represented in vector spaces constructed from the party announcement data. By comparing the party vector and the five morality vectors, the researcher can see how the political parties have actively used each of the five moral foundations to express themselves and the opposition. Results report that the conservative party tends to actively draw on collective morality, such as loyalty, authority, and purity, to differentiate itself. Notably, this moral differentiation strategy is prevalent when it criticizes the opposition party. In contrast, the liberal party tends to be concerned with individualistic morality such as fairness. This result indicates that a moral cleavage does exist between parties in South Korea. Furthermore, the individualistic moral gaps between the two political parties have eased over time, which seems to be due to the conservative party's discussion of economic democratization that emerged after 2012, but the community-related moral gaps widened. These results imply that past political cleavages related to economic interests are diminishing and being replaced by cultural and social values associated with communitarian morality. However, since the conservative party's differentiation strategy is largely related to negative campaigns, it is doubtful whether such moral differentiation among political parties can contribute to voters' long-term party identification; thus, further research is needed to determine whether it is sustainable. Despite the limitations, this study makes it possible to track and identify moral changes in the party system through automated text analysis. More generally, this study could contribute to the analysis of various texts associated with moral foundations and to finding a distributed representation of moral and ethical values.
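
A minimal sketch of the comparison step described above — representing each moral foundation as the mean of its dictionary words' embeddings and scoring a party corpus vector against it by cosine similarity — is given below. The word vectors, dictionary terms, and party tokens are placeholders; this is not the authors' code.

```python
# Sketch: scoring party rhetoric against moral-foundation vectors (illustrative, not the authors' code).
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def foundation_vector(words, embeddings):
    """Average the embeddings of the Moral Foundations Dictionary words for one foundation."""
    return np.mean([embeddings[w] for w in words if w in embeddings], axis=0)

def party_vector(tokens, embeddings):
    """Average the embeddings of all tokens in a party's announcements."""
    return np.mean([embeddings[t] for t in tokens if t in embeddings], axis=0)

# Placeholder inputs: in practice `embeddings` would come from a Word2Vec model trained on the
# announcement corpus, and the word lists from the Moral Foundations Dictionary.
rng = np.random.default_rng(0)
embeddings = {w: rng.normal(size=100) for w in
              ["care", "harm", "fair", "loyal", "betray", "authority", "obey", "pure", "tax", "reform"]}
foundations = {"care": ["care", "harm"], "fairness": ["fair"],
               "loyalty": ["loyal", "betray"], "authority": ["authority", "obey"], "sanctity": ["pure"]}
party_tokens = ["reform", "tax", "fair", "authority"]

pvec = party_vector(party_tokens, embeddings)
scores = {name: cosine(pvec, foundation_vector(words, embeddings))
          for name, words in foundations.items()}
print(scores)
```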

Keywords: moral foundations theory, moral politics, party system, Word2Vec

Procedia PDF Downloads 332
247 Mapping and Characterizing the Jefoure Cultural Landscape Which Provides Multiple Ecosystem Services to the Gurage People in Ethiopia

Authors: M. Achemo, O. Saito

Abstract:

The Jefoure land use system is a traditional settlement pattern and a distinctive cultural design of the Gurage people in Ethiopia, in which houses and trees flank the roads on the left and right. Assessing the multiple benefits that these traditional roads provide to society and development could help land use planners and decision makers pay attention to them while planning and managing the land use system. Recent trends show that the Jefoure land use is on the threshold of change as a result of flourishing road networks, overgrazing, and agricultural expansion. This study aimed to evaluate the multiple ecosystem services provided by the Jefoure land use system after characterization of the socio-ecological landscape. Information was compiled from existing data sources such as ordnance survey maps, aerial photographs, recent high-resolution satellite imagery, questionnaires and interviews, and local authority contacts. The results generated scientific data on the characteristics, ecosystem service provision, and drivers of change. The cultural landscape has novel characteristics and has provided multiple ecosystem services to the community for a long period of time. It serves as a road for humans, livestock and vehicles, a habitat for plant species, a regulator of local temperature, climate, runoff and infiltration, and a place for meetings, religious and spiritual activities, social events such as marriages and mourning, children's play, and football and other traditional games. As a result of its aesthetic quality and scenic beauty, it is considered a recreational place that improves mental and physical health. The study draws relevant land use planning and management solutions for improving socio-ecological resilience in the Jefoure land use system. The study suggests that the landscape should be registered as a heritage site to recognize the wisdom of the community and enhance conservation mechanisms.

Keywords: cultural landscape, ecosystem services, Gurage, Jefoure

Procedia PDF Downloads 96
246 Identifying the Determinants of the Shariah Non-Compliance Risk via Principal Axis Factoring

Authors: Muhammad Arzim Naim, Saiful Azhar Rosly, Mohamad Sahari Nordin

Abstract:

The objective of this study is to investigate the factors affecting the rise of Shariah non-compliance risk, which can cause Islamic banks to suffer monetary loss. The prior literature has never analyzed such risk in detail, despite much of it arguing about the validity of some Shariah-compliant products. Shariah non-compliance risk in this context refers to the potential failure of a facility to withstand a court test, for example when a bank brings it to court to seek compensation from defaulting clients. The risk may also arise if customers refuse to make financing payments on the grounds of the validity of the contracts, for example when a critical requirement of an Islamic contract, such as ownership, has been relinquished; this may lead the bank to suffer a loss when the customer invalidates the contract through the court. The impact of Shariah non-compliance risk on Islamic banks is similar to that of the legal risk faced by conventional banks: both result in monetary losses to the banks. In a conventional banking environment, losses can take the form of payments awarded to customers if they win the case, which can normally amount to very large sums. However, for Islamic banks the subsequent impact can be even greater because it affects their reputation: if customers do not perceive them to be Shariah compliant, they will take their money and bank elsewhere. This paper provides new insights into the risks faced by credit-intensive Islamic banks by extending knowledge of Shariah non-compliance risk, identifying the individual components that directly affect the risk, together with empirical evidence. Beyond the Islamic banking fraternity, regulators and policy makers should be able to use the findings in this paper to evaluate the components of Shariah non-compliance risk and take the necessary actions. The paper is written based on Malaysia's Islamic banking practices, which may not be directly applicable to other jurisdictions. Even though the focus of this study is the Bay Bithaman Ajil, popularly known as BBA (i.e., sale with deferred payments), financing modality, the results may be applicable to other Islamic financing vehicles.

Keywords: Islamic banking, Islamic finance, Shariah Non-compliance risk, Bay Bithaman Ajil (BBA), principal axis factoring

Procedia PDF Downloads 276
245 Performance Evaluation of Routing Protocols in Vehicular Adhoc Networks

Authors: Salman Naseer, Usman Zafar, Iqra Zafar

Abstract:

This study explores the implications of Vehicular Adhoc Networks (VANETs), a domain of Mobile Adhoc Networks (MANETs), in rural and urban scenarios. A VANET provides wireless communication between vehicles and also with roadside units. The Federal Communications Commission of the United States of America has allocated 75 MHz of spectrum in the 5.9 GHz frequency range for dedicated short-range communications (DSRC), which are specifically designed to support road safety and entertainment/information applications. Several vehicular projects, viz. California PATH, the Car 2 Car Communication Consortium, ETSI, and the IEEE 1609 working group, have already been conducted to improve overall road safety and traffic management. After a critical literature review, a selection of routing protocols was made, and their performance was evaluated in urban and rural scenarios. Several VANET routing protocols were applied in the current research. The evaluation of the selected protocols was conducted through simulation using the performance metrics of throughput and packet drop. Excel and Google graph API tools were used to plot graphs from the simulation results in order to compare the selected routing protocols with each other. In addition, the summed output from each scenario was computed to clearly present the divergence in results. The findings of the current study show that DSR gives enhanced performance, with low packet drop and high throughput, compared to AODV and DSDV in congested urban areas and in rural environments. On the other hand, in low-density areas, AODV gives better results than DSR. The value of the current study lies in the fact that the information exchanged between vehicles is useful for comfort, safety, and entertainment. Furthermore, communication system performance depends on how routing is done in the network and on the routing protocols implemented in it. The results presented above have policy implications and develop our understanding of the broader spectrum of VANETs.
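
The two performance metrics used above can be computed directly from a simulation trace. The sketch below assumes a simple per-protocol summary of sent and received packets; the field names and numbers are placeholders, not the simulator's actual output format.

```python
# Sketch: computing throughput and packet-drop ratio from a (hypothetical) simulation summary.

def throughput_kbps(received_bytes, duration_s):
    """Throughput in kbit/s over the simulation interval."""
    return (received_bytes * 8) / (duration_s * 1000)

def packet_drop_ratio(sent_packets, received_packets):
    """Fraction of packets that never reached their destination."""
    return (sent_packets - received_packets) / sent_packets if sent_packets else 0.0

# Hypothetical per-protocol results aggregated from a trace file.
results = {
    "DSR":  {"sent": 10000, "received": 9400, "bytes": 9400 * 512, "duration": 200.0},
    "AODV": {"sent": 10000, "received": 8900, "bytes": 8900 * 512, "duration": 200.0},
    "DSDV": {"sent": 10000, "received": 8100, "bytes": 8100 * 512, "duration": 200.0},
}

for proto, r in results.items():
    print(proto,
          f"throughput={throughput_kbps(r['bytes'], r['duration']):.1f} kbit/s",
          f"drop={packet_drop_ratio(r['sent'], r['received']):.2%}")
```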

Keywords: AODV, DSDV, DSR, Adhoc network

Procedia PDF Downloads 262
244 Modelling Tyre Rubber Materials for High Frequency FE Analysis

Authors: Bharath Anantharamaiah, Tomas Bouda, Elke Deckers, Stijn Jonckheere, Wim Desmet, Juan J. Garcia

Abstract:

Automotive tyres have recently been gaining importance in terms of their noise emission, not only with respect to noise reduction but also to noise perception and detection. Tyres exhibit a mechanical noise generation mechanism up to 1 kHz. However, because a tyre is a composite of several materials, it has been difficult to model it with finite elements to predict noise at high frequencies. Currently available FE models are reliable up to about 500 Hz, a limit that is not sufficient to capture the roughness or sharpness of tyre noise. These noise components are important for alerting pedestrians to slowly passing vehicles, especially electric vehicles. In order to model tyre noise behaviour up to 1 kHz, the tyre's dynamic behaviour must be accurately represented up to this limit using finite elements. Materials play a vital role in modelling the dynamic tyre behaviour precisely. Since a tyre is composed of several components, their precise definition in finite element simulations is necessary. However, during the tyre manufacturing process, these components are subjected to various pressures and temperatures, due to which their properties can change. Hence, material definitions are better derived from the tyre responses. In this work, the hyperelasticity of the tyre component rubbers is calibrated using the design of experiments technique from characteristic tyre responses measured on a stiffness measurement machine. The viscoelasticity of the rubbers is defined by Prony series, which are determined from the loss factor relationship between the loss and storage moduli, assuming that the rubbers are excited within their linear viscoelastic range. These loss factor values are measured and theoretically expressed as a function of rubber Shore hardness or hyperelasticity. The results show a good correlation between the test and simulation vibrational transfer functions up to 1 kHz. The model also allows flexibility, i.e., the frequency limit can be extended, if required, by calibrating the Prony parameters of the rubbers corresponding to the frequency of interest. As future work, these tyre models will be used for noise generation at high frequencies and thus for tyre noise perception.
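
As an illustration of the viscoelastic description mentioned above, the sketch below evaluates a generalized-Maxwell (Prony series) model and the resulting loss factor tan δ = G''/G' over frequency. The Prony pairs are arbitrary example values, not the calibrated parameters from this work.

```python
# Sketch: storage/loss moduli and loss factor from a Prony series (illustrative parameters only).
import numpy as np

def storage_loss_moduli(omega, g_inf, prony_pairs):
    """Return (G', G'') for a generalized Maxwell model.

    prony_pairs: list of (g_i, tau_i) with moduli in MPa and relaxation times in s.
    """
    g_storage = np.full_like(omega, g_inf, dtype=float)
    g_loss = np.zeros_like(omega, dtype=float)
    for g_i, tau_i in prony_pairs:
        wt = omega * tau_i
        g_storage += g_i * wt**2 / (1.0 + wt**2)
        g_loss += g_i * wt / (1.0 + wt**2)
    return g_storage, g_loss

# Example (assumed) Prony parameters for a tread-like rubber.
g_inf = 1.0                      # long-term shear modulus, MPa
prony = [(0.8, 1e-2), (0.5, 1e-3), (0.3, 1e-4)]

freq = np.logspace(1, 3, 5)      # 10 Hz to 1 kHz
omega = 2 * np.pi * freq
gp, gpp = storage_loss_moduli(omega, g_inf, prony)
tan_delta = gpp / gp             # loss factor

for f, td in zip(freq, tan_delta):
    print(f"{f:8.1f} Hz  tan(delta) = {td:.3f}")
```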

Keywords: tyre dynamics, rubber materials, prony series, hyperelasticity

Procedia PDF Downloads 168
243 Scalable UI Test Automation for Large-scale Web Applications

Authors: Kuniaki Kudo, Raviraj Solanki, Kaushal Patel, Yash Virani

Abstract:

This research mainly concerns optimizing UI test automation for large-scale web applications. The test target application is the HHAexchange homecare management web application, which seamlessly connects providers, state Medicaid programs, managed care organizations (MCOs), and caregivers through one platform with large-scale functionality. This study focuses on user interface automation testing for the web application. The quality assurance team must execute many manual user interface test cases during the development process to confirm that there are no regression bugs. The team automated 346 test cases; the UI automation test execution time was over 17 hours. The business requirement was to reduce the execution time in order to release high-quality products quickly, and the quality assurance automation team modernized the test automation framework to optimize it. The web UI automation test environment is based on Selenium, and the test code is written in Python. Adopting a compiled language for test code leads to an inefficient workflow when introducing scalability into a traditional test automation environment; in order to introduce scalability efficiently, a scripting language was adopted. The scalability mechanism is mainly implemented with AWS serverless technology, the Elastic Container Service. Scalability here means the ability to automatically provision machines for test automation and to increase or decrease the number of machines running the tests. This means the scalable mechanism allows test cases to run in parallel, so the test execution time decreases dramatically. Introducing scalable test automation is about more than reducing execution time: challenging bugs such as race conditions may be detected because test cases can be executed at the same time. If API and unit tests are implemented, the test strategies can be adapted more efficiently for this scalability testing. However, in web applications, as a practical matter, API and unit testing cannot cover 100% of functional testing since they do not reach the front-end code. This study applied a scalable UI automation testing strategy to the large-scale homecare management system. It confirmed the optimization of the test case execution time and the detection of a challenging bug. This study first describes the detailed architecture of the scalable test automation environment, then describes the actual reduction in execution time and an example of detecting a challenging issue.
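
A minimal sketch of the parallelization idea is shown below: each container picks its shard of the test list from environment variables and runs it with headless Selenium. The environment variable names, test list, and URLs are assumptions for illustration, not details taken from the HHAexchange framework.

```python
# Sketch: sharding UI test cases across parallel containers (e.g., ECS tasks).
# SHARD_INDEX / SHARD_COUNT and the test list are illustrative assumptions.
import os
from selenium import webdriver

TEST_CASES = [f"https://example.com/page{i}" for i in range(40)]  # placeholder test targets

def my_shard(tests, index, count):
    """Deterministically split the full test list so each container gets a disjoint slice."""
    return [t for i, t in enumerate(tests) if i % count == index]

def run_test(driver, url):
    """A trivial stand-in for a real UI test: open the page and check the title is non-empty."""
    driver.get(url)
    assert driver.title != "", f"empty title on {url}"

def main():
    index = int(os.environ.get("SHARD_INDEX", "0"))
    count = int(os.environ.get("SHARD_COUNT", "1"))

    options = webdriver.ChromeOptions()
    options.add_argument("--headless=new")      # run without a display inside the container
    driver = webdriver.Chrome(options=options)
    try:
        for url in my_shard(TEST_CASES, index, count):
            run_test(driver, url)
    finally:
        driver.quit()

if __name__ == "__main__":
    main()
```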

Keywords: AWS, Elastic Container Service, scalability, serverless, UI automation test

Procedia PDF Downloads 67
242 Cricket Injury Surveillance by Mobile Application Technology on Smartphones

Authors: Najeebullah Soomro, Habib Noorbhai, Mariam Soomro, Ross Sanders

Abstract:

The demands on cricketers are increasing, with more matches being played in a shorter period of time and at a greater intensity. A ten-year report on injury incidence for elite Australian cricketers between the 2000 and 2011 seasons revealed an injury incidence rate of 17.4% [1]. In the 2009–10 season, 24% of Australian fast bowlers missed matches through injury [1]. Injury rates are even higher in junior cricketers, with an injury incidence of 25%, or 2.9 injuries per 100 player hours, reported [2]. Traditionally, injury surveillance has relied on the use of paper-based forms or complex computer software [3,4]. This makes injury reporting laborious for the staff involved. The purpose of this presentation is to describe a smartphone-based mobile application as a means of improving injury surveillance in cricket. Methods: The researchers developed the CricPredict mobile app for Android, the world's most widely used smartphone platform. It uses the Qt SDK (Software Development Kit) as the IDE (Integrated Development Environment). C++ was used as the programming language with the Qt framework, which provides cross-platform capabilities that will allow the app to be ported to other operating systems (iOS, Mac, Windows) in the future. The wireframes (graphical user interface) were developed using Justinmind Prototyper Pro Edition (Ver. 6.1.0). CricPredict enables recording of injury and training status conveniently and immediately. When an injury is reported, automated follow-up questions cover the site of injury, nature of injury, mechanism of injury, initial treatment, referral and action taken after the injury. Direct communication with the player then enables assessment of severity and diagnosis. CricPredict also allows the coach to maintain and track each player's attendance at matches and training sessions. Workload data can also be recorded by either the player or the coach by recording the number of balls bowled or played in a day. This is helpful in formulating injury rates and time lost due to injuries. All data are stored on a secure, password-protected server. Outcomes and Significance: Use of CricPredict offers a simple, user-friendly tool for the coaching and medical staff associated with teams to predict, record and report injuries. This system will help teams capture injury data with ease, allowing a better understanding of cricket-related injuries and potentially optimizing the performance of cricketers.
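
The incidence figures quoted above (e.g., 2.9 injuries per 100 player hours) follow directly from counts of injuries and exposure hours. The sketch below shows that calculation on made-up numbers; it is not taken from the CricPredict code base.

```python
# Sketch: injury incidence per 100 player hours from surveillance counts (illustrative numbers).

def injuries_per_100_player_hours(n_injuries, total_player_hours):
    """Standard exposure-normalized incidence rate used in injury surveillance."""
    return 100.0 * n_injuries / total_player_hours

# Hypothetical season totals collected via the app.
n_injuries = 29
player_hours = 1000.0
print(f"incidence = {injuries_per_100_player_hours(n_injuries, player_hours):.1f} per 100 player hours")
```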

Keywords: injury, cricket, surveillance, smartphones, mobile

Procedia PDF Downloads 438
241 High-Performance Thin-layer Chromatography (HPTLC) Analysis of Multi-Ingredient Traditional Chinese Medicine Supplement

Authors: Martin Cai, Khadijah B. Hashim, Leng Leo, Edmund F. Tian

Abstract:

Analysis of traditional Chinese medicine (TCM) supplements has always been a laborious task, particularly in the case of multi-ingredient formulations. Traditionally, herbal extracts are analysed using one or a few marker compounds. In recent years, however, pharmaceutical companies have been introducing health supplements of TCM active ingredients to cater to the needs of consumers in today's fast-paced society. As such, new problems arise in composition identification as well as quality analysis. In most products or supplements formulated with multiple TCM herbs, the chemical composition and nature of each raw material differ greatly from the others in the formulation. This results in a requirement for individual analytical processes in order to identify the marker compounds in the various botanicals. Thin-layer chromatography (TLC) is a simple, cost-effective, yet well-regarded method for the analysis of natural products, both as a Pharmacopeia-approved method for the identification and authentication of herbs and as a useful analytical tool for discovering the chemical composition of herbal extracts. Recent technical advances introduced High-Performance TLC (HPTLC), where, with the help of automated equipment and improvements in the chromatographic materials, both quality and reproducibility are greatly improved, allowing highly standardised analysis in greater detail. Here we report an industrial consultancy project with ONI Global Pte Ltd for the analysis of LAC Liver Protector, a TCM formulation aimed at improving liver health. The aim of this study was to identify 4 key components of the supplement using HPTLC, following protocols derived from Chinese Pharmacopeia standards. By comparing the TLC profiles of the supplement with extracts of the herbs reported on the label, this project proposes a simple and cost-effective analysis of the presence of the 4 marker compounds in the multi-ingredient formulation using 4 different HPTLC methods. With the increasing trend of small and medium-sized enterprises (SMEs) bringing natural products and health supplements to the market, it is crucial that the quality of both raw materials and end products be well assured for the protection of consumers. With the technology of HPTLC, science can be incorporated to help SMEs with their quality control, thereby ensuring product quality.

Keywords: traditional Chinese medicine supplement, high performance thin layer chromatography, active ingredients, product quality

Procedia PDF Downloads 250
240 Building User Behavioral Models by Processing Web Logs and Clustering Mechanisms

Authors: Madhuka G. P. D. Udantha, Gihan V. Dias, Surangika Ranathunga

Abstract:

Today's websites contain very interesting applications, but there are only a few methodologies for analyzing user navigation through a website and determining whether the website is being put to correct use. Web logs are typically only examined when a major attack or malfunction occurs. Web logs contain a lot of interesting information about the users of the system. Analyzing web logs has become a challenge due to the huge log volume. Finding interesting patterns is not easy due to the size and distribution of the logs and the importance of minor details in each entry. Web logs contain very important data about users and the site which have not been put to good use. Retrieving interesting information from the logs gives an idea of what users need, allows users to be grouped according to their various needs, and helps improve the site so that it becomes effective and efficient. The model we built is able to detect attacks or malfunctions of the system and to perform anomaly detection. Logs become more complex as the volume of traffic and the size and complexity of the website grow. Unsupervised techniques are used in this solution, which is fully automated; expert knowledge is only used for validation. In our approach, we first clean and purify the logs to bring them to a common platform with a standard format and structure. After the cleaning module, the web session builder is executed. It outputs two files: a Web Sessions file and an Indexed URLs file. The Indexed URLs file contains the list of URLs accessed and their indices, and the Web Sessions file lists the indices of each web session. Then the DBSCAN and EM algorithms are used iteratively and recursively to get the best clustering of the web sessions. Using homogeneity, completeness, V-measure, intra- and inter-cluster distance, and the silhouette coefficient as evaluation measures, the algorithms are self-evaluated and re-run with better parameter values. If a cluster is found to be too large, micro-clustering is used. Using the Cluster Signature Module, the clusters are annotated with a unique signature called a fingerprint. In this module, each cluster is fed to the Association Rule Learning Module. If it outputs confidence and support values of 1 for an access sequence, that sequence is a potential signature for the cluster. The occurrences of the access sequence are then checked in the other clusters, and if it is found to be unique to the cluster considered, the cluster is annotated with that signature. These signatures are used for anomaly detection, preventing cyber attacks, real-time dashboards that visualize users accessing web pages, predicting user actions, and various other applications in finance, university websites, news and media websites, etc.
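
A minimal sketch of the clustering-and-evaluation loop described above is shown below, using scikit-learn's DBSCAN and a Gaussian mixture (EM) model scored with the silhouette coefficient and V-measure. The session feature matrix, reference labels, and parameter grids are placeholders, not the paper's data or tuned values.

```python
# Sketch: clustering web-session vectors with DBSCAN and EM, scored by silhouette and V-measure.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.mixture import GaussianMixture
from sklearn.metrics import silhouette_score, v_measure_score

rng = np.random.default_rng(42)
X = rng.normal(size=(300, 8))            # placeholder: one feature vector per web session
y_ref = rng.integers(0, 3, size=300)     # placeholder "expert" labels used only for V-measure

candidates = []
for eps in (0.5, 1.0, 2.0):                              # small illustrative parameter grid
    labels = DBSCAN(eps=eps, min_samples=5).fit_predict(X)
    if len(set(labels)) > 1:                             # silhouette needs at least two clusters
        candidates.append((silhouette_score(X, labels), f"DBSCAN(eps={eps})", labels))

for k in (2, 3, 4):
    labels = GaussianMixture(n_components=k, random_state=0).fit_predict(X)
    candidates.append((silhouette_score(X, labels), f"EM(k={k})", labels))

score, name, labels = max(candidates, key=lambda c: c[0])
print(f"best model: {name}, silhouette={score:.3f}, "
      f"V-measure vs. reference labels={v_measure_score(y_ref, labels):.3f}")
```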

Keywords: anomaly detection, clustering, pattern recognition, web sessions

Procedia PDF Downloads 260
239 Central Energy Management for Optimizing Utility Grid Power Exchange with a Network of Smart Homes

Authors: Sima Aznavi, Poria Fajri, Hanif Livani

Abstract:

Smart homes are small energy systems which may be equipped with renewable energy sources, storage devices, and loads. The energy management strategy plays a major role in the efficient operation of smart homes. Effective energy scheduling of the renewable energy sources and storage devices guarantees efficient energy management in households while reducing energy imports from the grid. Nevertheless, despite such strategies, independently planned day-ahead energy schedules for multiple households can cause undesired effects such as high power exchange with the grid at certain times of the day. Therefore, the interaction between the day-ahead energy projections of multiple smart homes is a challenging issue in a smart grid system, and if not managed appropriately, the energy imported from the power network can impose an additional burden on the distribution grid. In this paper, a central energy management strategy is proposed for a network consisting of multiple households, each equipped with renewable energy sources, storage devices, and plug-in electric vehicles (PEVs). The decision-making strategy, alongside the smart home energy management system, minimizes the energy purchase cost of the end users while at the same time reducing the stress on the utility grid. In this approach, the smart home energy management system determines different operating scenarios based on the forecasted household daily load and the components connected to the household, with the objective of minimizing the end user's overall cost. Then, selected projections for each household that are within the same cost range are sent to the central decision-making system. The central controller then organizes the schedules to reduce the overall peak-to-average ratio of the total energy imported from the grid. To validate this approach, simulations were carried out for a network of five smart homes with different load requirements, and the results confirm that, by applying the proposed central energy management strategy, the overall power demand from the grid can be significantly flattened. This is an effective approach to alleviating stress on the network by distributing its energy to multiple households over a 24-hour period.
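
The central controller's selection criterion can be illustrated with a small greedy sketch: among each household's cost-equivalent candidate schedules, pick the combination that keeps the peak-to-average ratio (PAR) of the aggregate grid import low. The candidate profiles below are made up, and the actual optimization in the paper may differ.

```python
# Sketch: greedy selection of household schedules to flatten the aggregate grid import (illustrative).
import numpy as np

def par(profile):
    """Peak-to-average ratio of a 24-hour power profile."""
    return float(np.max(profile) / np.mean(profile))

rng = np.random.default_rng(1)
hours = 24
# Each household submits a few cost-equivalent candidate import profiles (kW per hour).
candidates = [rng.uniform(0.2, 3.0, size=(3, hours)) for _ in range(5)]

total = np.zeros(hours)
choice = []
for house in candidates:
    # Pick, for this household, the candidate that keeps the running aggregate PAR lowest.
    best_idx = min(range(len(house)), key=lambda i: par(total + house[i]))
    choice.append(best_idx)
    total += house[best_idx]

print("chosen schedule per household:", choice)
print(f"aggregate PAR = {par(total):.2f}")
```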

Keywords: energy management, renewable energy sources, smart grid, smart home

Procedia PDF Downloads 218
238 Hydrogel Hybridizing Temperature-Cured Dissolvable Gelatin Microspheres as Non-Anchorage Dependent Cell Carriers for Tissue Engineering Applications

Authors: Dong-An Wang

Abstract:

All kinds of microspheres have been extensively employed as carriers for drug, gene and therapeutic cell delivery. Most therapeutic cell delivery microspheres rely on a two-step methodology: fabrication of the microspheres and subsequent seeding of cells onto them. In this study, we have developed a novel one-step cell encapsulation technique using a convenient and instant water-in-oil single emulsion approach to form cell-encapsulated gelatin microspheres. This technology is adopted for hyaline cartilage tissue engineering, in which autologous chondrocytes are used as therapeutic cells. Cell viability was maintained throughout and after the microsphere formation process (75-100 µm diameters), which avoids any covalent bonding reactions or exposure to further chemicals. Further encapsulation of the cell-laden microspheres in alginate gels was performed at 4°C via a prompt process. Upon formation, the alginate constructs were immediately relocated to a CO2 incubator where the temperature was maintained at 37°C; at this temperature, the cell-laden gelatin microspheres dissolved within hours to yield similarly sized cavities, and the chondrocytes were therefore suspended within the cavities inside the alginate gel bulk. Hence, the cell-laden gelatin microspheres served two roles: as cell delivery vehicles that can be removed through temperature curing, and as porogens within an alginate hydrogel construct to provide living space for cell growth and tissue development as well as better permeability for mutual diffusion. These cell-laden microspheres, namely "temperature-cured dissolvable gelatin microsphere based cell carriers" (tDGMCs), were further encapsulated in a chondrocyte-laden alginate scaffold system and analyzed by WST-1, gene expression analyses, biochemical assays, histology and immunochemistry stains. The positive results consistently demonstrated the promise of tDGMC technology in delivering these non-anchorage-dependent cells (chondrocytes). The approach can be conveniently translated to the delivery of other non-anchorage-dependent cell species, including stem cells, progenitors or iPS cells, for the regeneration of internal organ tissues, such as engineered hepatogenesis or pancreatic regeneration.

Keywords: biomaterials, tissue engineering, microsphere, hydrogel, porogen, anchorage dependence

Procedia PDF Downloads 363
237 Health Risk Assessment and Source Apportionment of Elemental Particulate Contents from a South Asian Future Megacity

Authors: Afifa Aslam, Muhammad Ibrahim, Abid Mahmood, Muhammad Usman Alvi, Fariha Jabeen, Umara Tabassum

Abstract:

Many factors cause air pollution in Pakistan, which poses a significant threat to human health. Diesel- and gasoline-powered motor vehicles, as well as industrial facilities, pollute the air in Pakistan's cities. The study's goal is to determine the level of air pollution in a Pakistani industrial city and to establish risk levels for the health of the population. We measured the intensity of air pollution by chemical characterization and examination of air samples collected at stationary observation sites. The PM10 levels observed at all sampling sites, including residential, commercial, high-traffic, and industrial areas, were well above the limits imposed by the Pakistan EPA, the United States EPA, and the WHO. We assessed the health risk from chemical factors using an approved risk assessment methodology. Sites with geo-accumulation index (Igeo) values greater than one were considered moderately or moderately to severely contaminated. The heavy metals pose a substantial risk of acute adverse effects. In Faisalabad, Pakistan, there was an extremely high risk of chronic effects produced by heavy metal exposure. For the specified toxic metals, unacceptable levels of carcinogenic risk were determined for the entire population. As a result, in most of the investigated areas of Faisalabad, the indices and hazard quotients for chronic and acute exposure exceeded the permissible level of 1.0. In the current study, re-suspended roadside mineral dust, anthropogenic exhaust emissions from traffic and industry, and industrial dust were identified as major emission sources of elemental particulate contents. Because of the unacceptable levels of risk in the research area, it is strongly suggested that a comprehensive study of the population's health status with respect to air pollution be conducted so that policies can be developed against these risks.
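
The two screening metrics named above follow standard definitions: the geo-accumulation index Igeo = log2(Cn / (1.5·Bn)) and the hazard quotient HQ = ADD/RfD, with the average daily dose computed from exposure parameters. The sketch below uses commonly cited default exposure values for illustration only; they are not the concentrations or parameters measured in this study.

```python
# Sketch: geo-accumulation index and non-carcinogenic hazard quotient (illustrative inputs only).
import math

def igeo(c_measured, c_background):
    """Igeo = log2(Cn / (1.5 * Bn)); values > 1 indicate moderate or worse contamination."""
    return math.log2(c_measured / (1.5 * c_background))

def average_daily_dose(c_air_mg_m3, inhalation_m3_day=20.0, exposure_freq_days=350,
                       exposure_years=24, body_weight_kg=70.0, avg_time_days=24 * 365):
    """Inhalation ADD in mg/(kg*day) using typical default exposure factors (assumed)."""
    return (c_air_mg_m3 * inhalation_m3_day * exposure_freq_days * exposure_years) / (
        body_weight_kg * avg_time_days)

def hazard_quotient(add, reference_dose):
    """HQ = ADD / RfD; HQ > 1.0 indicates potential non-carcinogenic risk."""
    return add / reference_dose

# Hypothetical example for one metal in PM10.
print(f"Igeo = {igeo(c_measured=120.0, c_background=30.0):.2f}")
add = average_daily_dose(c_air_mg_m3=5e-5)
print(f"HQ = {hazard_quotient(add, reference_dose=1e-5):.2f}")
```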

Keywords: elemental composition, particulate pollution, Igeo index, health risk assessment, hazard quotient

Procedia PDF Downloads 54
236 Concept of a Pseudo-Lower Bound Solution for Reinforced Concrete Slabs

Authors: M. De Filippo, J. S. Kuang

Abstract:

In the construction industry, reinforced concrete (RC) slabs represent fundamental elements of buildings and bridges. Different methods are available for analysing the structural behaviour of slabs. In the early part of the last century, the yield-line method was proposed to solve this problem. Problems with simple geometries could easily be solved using traditional hand analyses based on plasticity theory. Nowadays, advanced finite element (FE) analyses have found their way into many engineering fields due to the wide range of geometries to which they can be applied. In such cases, the application of an elastic or a plastic constitutive model completely changes the approach of the analysis itself. Elastic methods are popular due to their easy applicability to automated computations. However, elastic analyses are limited since they do not consider any aspect of the material behaviour beyond the yield limit, which turns out to be an essential aspect of RC structural performance. Non-linear analyses that model plastic behaviour, on the other hand, give very reliable results; per contra, this type of analysis is computationally quite expensive, i.e., not well suited for solving everyday engineering problems. In past years, many researchers have worked on filling this gap between easy-to-implement elastic methods and computationally complex plastic analyses. This paper proposes a numerical procedure through which a pseudo-lower-bound solution that does not violate the yield criterion is achieved. The advantages of moment redistribution are taken into account; hence the increase in strength provided by plastic behaviour is considered. The lower-bound solution is improved by detecting over-yielded moments, which are used to artificially govern the moment redistribution among the remaining non-yielded elements. The proposed technique obeys Nielsen's yield criterion. The outcome of this analysis provides a simple, accurate, and non-time-consuming tool for predicting the lower-bound solution of the collapse load of RC slabs. By using this method, structural engineers can find the fracture patterns and ultimate load-bearing capacity. The collapse-triggering mechanism is found by detecting yield lines. An application to the simple case of a square clamped slab is shown, and a good match was found with the exact value of the collapse load.
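
The redistribution idea can be caricatured in a few lines: moments that exceed the yield moment are clipped, and the excess is handed to neighbouring non-yielded elements until the field no longer violates the yield condition. This is only a one-dimensional schematic of the general concept, not the authors' algorithm or their yield criterion.

```python
# Schematic sketch: clip over-yielded moments and redistribute the excess to non-yielded elements.
# Purely illustrative 1-D analogue of moment redistribution; not the paper's method.

def redistribute(moments, m_yield, max_iter=100):
    m = list(moments)
    for _ in range(max_iter):
        excess_total = 0.0
        for i, mi in enumerate(m):
            if mi > m_yield:
                excess_total += mi - m_yield
                m[i] = m_yield                      # cap at the yield moment
        if excess_total == 0.0:
            break                                   # nothing violates the yield condition
        receivers = [i for i, mi in enumerate(m) if mi < m_yield]
        if not receivers:
            break                                   # fully plastic: no reserve capacity left
        share = excess_total / len(receivers)
        for i in receivers:
            # Hand the excess to elements with reserve; in this toy version any share that
            # cannot be absorbed (receiver saturates) is simply dropped.
            m[i] = min(m_yield, m[i] + share)
    return m

elastic_moments = [0.8, 1.3, 0.6, 1.1, 0.4]          # made-up elastic field (normalized)
print(redistribute(elastic_moments, m_yield=1.0))
```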

Keywords: computational mechanics, lower bound method, reinforced concrete slabs, yield-line

Procedia PDF Downloads 146
235 Experimental Investigation of the Thermal Conductivity of Neodymium and Samarium Melts by a Laser Flash Technique

Authors: Igor V. Savchenko, Dmitrii A. Samoshkin

Abstract:

Active study of the properties of lanthanides began in the late 1950s, when methods for their purification were developed and metals with a relatively low impurity content were obtained. Nevertheless, to date, many properties of the rare earth metals (REM) have not been experimentally investigated or have been insufficiently studied. Currently, the thermal conductivity and thermal diffusivity of lanthanides have been studied most thoroughly in the low-temperature region and at moderate temperatures (near 293 K). In the high-temperature region, corresponding to the solid phase, data on the thermophysical characteristics of the REM are fragmentary and in some cases contradictory. Analysis of the literature showed that data on the thermal conductivity and thermal diffusivity of light REM in the liquid state are few in number, of limited informativeness (often only one point corresponds to the liquid-state region), and contradictory (the nature of the change of thermal conductivity with temperature is not reproduced), and the measurement results diverge significantly beyond the limits of the total errors. Our experimental results therefore allow us to fill this gap and to clarify the existing information on the heat transfer coefficients of neodymium and samarium in a wide temperature range from the melting point up to 1770 K. The thermal conductivity of the investigated metallic melts was measured by the laser flash technique on an automated LFA-427 experimental setup. A neodymium sample of brand NM-1 (99.21 wt % purity) and a samarium sample of brand SmM-1 (99.94 wt % purity) were cut from metal ingots and then annealed in a vacuum (1 mPa) at a temperature of 1400 K for 3 hours. Specially designed tantalum measuring cells were used for the experiments. Sealing of the cell with a sample inside it was carried out by argon-arc welding in the protective atmosphere of a glovebox. The glovebox was filled with argon of 99.998 vol. % purity; the argon was additionally purified by continuously running it through titanium sponge heated to 900–1000 K. The general systematic error in determining the thermal conductivity of the investigated metallic melts was 2–5%. Approximation relations and reference tables of the thermal conductivity and thermal diffusivity coefficients were developed. New reliable experimental data on the transport properties of the REM and their changes during phase transitions can serve as a scientific basis for optimizing the industrial processes of production and use of these materials, and are also of interest for the theory of the thermophysical properties of substances and the physics of metals, liquids and phase transformations.
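
In the laser flash technique, thermal diffusivity is commonly estimated from the sample thickness and the half-rise time of the rear-face temperature via Parker's relation, a = 0.1388·L²/t½, and thermal conductivity then follows as k = a·ρ·cp. The sketch below applies these standard relations to made-up input values; it does not reproduce the LFA-427 data processing or the measurements of this paper.

```python
# Sketch: Parker's relation for laser flash analysis (illustrative input values).

def thermal_diffusivity(thickness_m, half_rise_time_s):
    """a = 0.1388 * L^2 / t_1/2 (adiabatic Parker model)."""
    return 0.1388 * thickness_m**2 / half_rise_time_s

def thermal_conductivity(diffusivity_m2_s, density_kg_m3, cp_j_kg_k):
    """k = a * rho * cp."""
    return diffusivity_m2_s * density_kg_m3 * cp_j_kg_k

# Hypothetical sample: 2 mm thick melt layer, 80 ms half-rise time, assumed density and cp.
a = thermal_diffusivity(thickness_m=2e-3, half_rise_time_s=0.080)
k = thermal_conductivity(a, density_kg_m3=6700.0, cp_j_kg_k=200.0)
print(f"a = {a:.2e} m^2/s, k = {k:.1f} W/(m K)")
```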

Keywords: high temperatures, laser flash technique, liquid state, metallic melt, rare earth metals, thermal conductivity, thermal diffusivity

Procedia PDF Downloads 171
234 Fast Transient Workflow for External Automotive Aerodynamic Simulations

Authors: Christina Peristeri, Tobias Berg, Domenico Caridi, Paul Hutcheson, Robert Winstanley

Abstract:

In recent years the demand for rapid innovations in the automotive industry has led to the need for accelerated simulation procedures while retaining a detailed representation of the simulated phenomena. The project’s aim is to create a fast transient workflow for external aerodynamic CFD simulations of road vehicles. The geometry used was the SAE Notchback Closed Cooling DrivAer model, and the simulation results were compared with data from wind tunnel tests. The meshes generated for this study were of two types. One was a mix of polyhedral cells near the surface and hexahedral cells away from the surface. The other was an octree hex mesh with a rapid method of fitting to the surface. Three different grid refinement levels were used for each mesh type, with the biggest total cell count for the octree mesh being close to 1 billion. A series of steady-state solutions were obtained on three different grid levels using a pseudo-transient coupled solver and a k-omega-based RANS turbulence model. A mesh-independent solution was found in all cases with a medium level of refinement with 200 million cells. Stress-Blended Eddy Simulation (SBES) was chosen for the transient simulations, which uses a shielding function to explicitly switch between RANS and LES mode. A converged pseudo-transient steady-state solution was used to initialize the transient SBES run that was set up with the SIMPLEC pressure-velocity coupling scheme to reach the fastest solution (on both CPU & GPU solvers). An important part of this project was the use of FLUENT’s Multi-GPU solver. Tesla A100 GPU has been shown to be 8x faster than an Intel 48-core Sky Lake CPU system, leading to significant simulation speed-up compared to the traditional CPU solver. The current study used 4 Tesla A100 GPUs and 192 CPU cores. The combination of rapid octree meshing and GPU computing shows significant promise in reducing time and hardware costs for industrial strength aerodynamic simulations.

Keywords: CFD, DrivAer, LES, Multi-GPU solver, octree mesh, RANS

Procedia PDF Downloads 89
233 Hematologic Inflammatory Markers and Inflammation-Related Hepatokines in Pediatric Obesity

Authors: Mustafa Metin Donma, Orkide Donma

Abstract:

Obesity in children draws particular attention because it may threaten the individual's future life through the many chronic diseases it can lead to. Most of these diseases, including obesity itself, are related to inflammation. For this reason, inflammation-related parameters gain importance. Within this context, complete blood cell counts and the ratios or indices derived from them have recently gained ground as inflammatory markers. So far, mostly adipokines have been investigated in the field of obesity. The liver is at the center of the metabolic pathway network. Metabolic inflammation is closely associated with cellular dysfunction. In this study, hematologic inflammatory markers and two major hepatokines (cytokines produced predominantly by the liver), fibroblast growth factor-21 (FGF-21) and fetuin A, were investigated in pediatric obesity. Two groups were constituted from seventy-six obese children based on World Health Organization criteria. Group 1 was composed of children whose age- and sex-adjusted body mass index (BMI) percentiles were between 95 and 99. Group 2 consisted of children above the 99th percentile. The former and latter groups were defined as obese (OB) and morbidly obese (MO), respectively. Anthropometric measurements of the children were performed. Informed consent forms and the approval of the institutional ethics committee were obtained. Blood cell counts were determined by an automated hematology analyzer, and the related ratios and indices were calculated. Statistical evaluation of the data was performed with the SPSS program. There was no statistically significant difference between the groups in terms of the neutrophil-to-lymphocyte ratio, the monocyte-to-high-density-lipoprotein-cholesterol ratio and the platelet-to-lymphocyte ratio. Mean platelet volume and platelet distribution width values were decreased (p<0.05), while total platelet count, red cell distribution width (RDW) and systemic immune inflammation index values were increased (p<0.01) in the MO group. Both hepatokines were increased in the same group; however, the increases were not statistically significant. In this group, a strong correlation was also found between FGF-21 and RDW when controlling for age, hematocrit, iron and ferritin (r=0.425; p<0.01). In conclusion, the association between RDW, a hematologic inflammatory marker, and FGF-21, an inflammation-related hepatokine, found in the MO group is an important finding for discriminating between OB and MO children. This association is even more powerful when controlling for age and iron-related parameters.
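
The hematologic indices named above are simple ratios of routine blood-count values; for example, NLR = neutrophils/lymphocytes, PLR = platelets/lymphocytes, and the systemic immune-inflammation index SII = platelets × neutrophils / lymphocytes. The sketch below computes them from made-up counts; it is not the study's data or its SPSS workflow.

```python
# Sketch: common hematologic inflammatory indices from a complete blood count (illustrative values).

def inflammatory_indices(neutrophils, lymphocytes, monocytes, platelets, hdl_mg_dl):
    """Return the ratio-based markers mentioned in the abstract (counts in 10^3 cells/uL)."""
    return {
        "NLR": neutrophils / lymphocytes,                 # neutrophil-to-lymphocyte ratio
        "PLR": platelets / lymphocytes,                   # platelet-to-lymphocyte ratio
        "MHR": monocytes / hdl_mg_dl,                     # monocyte-to-HDL-cholesterol ratio
        "SII": platelets * neutrophils / lymphocytes,     # systemic immune-inflammation index
    }

# Hypothetical counts for one child.
print(inflammatory_indices(neutrophils=4.2, lymphocytes=2.8, monocytes=0.5,
                           platelets=310.0, hdl_mg_dl=45.0))
```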

Keywords: childhood obesity, fetuin A, fibroblast growth factor-21, hematologic markers, red cell distribution width

Procedia PDF Downloads 169
232 Flexible Integration of Airbag Weakening Lines in Interior Components: Airbag Weakening with Jenoptik Laser Technology

Authors: Markus Remm, Sebastian Dienert

Abstract:

Vehicle interiors are not only changing in terms of design and functionality but also due to new driving situations in which, for example, autonomous operating modes are possible. Flexible seating positions are changing the requirements for passive safety system behavior and location in the interior of a vehicle. With fully autonomous driving, the driver can, for example, leave the position behind the steering wheel and take a seated position facing backward. Since autonomous and non-autonomous vehicles will share the same road network for the foreseeable future, accidents cannot be avoided, which makes the use of passive safety systems indispensable. With JENOPTIK-VOTAN® A technology, the trend towards flexible predetermined airbag weakening lines is enabled. With the help of laser beams, the predetermined weakening lines are introduced from the backside of the components so that they are absolutely invisible. This machining process is sensor-controlled and guarantees that a small residual wall thickness remains for the best quality and reliability for airbag weakening lines. Due to the wide processing range of the laser, the processing of almost all materials is possible. A CO₂ laser is used for many plastics, natural fiber materials, foams, foils and material composites. A femtosecond laser is used for natural materials and textiles that are very heat-sensitive. This laser type has extremely short laser pulses with very high energy densities. Supported by a high-precision and fast movement of the laser beam by a laser scanner system, the so-called cold ablation is enabled to predetermine weakening lines layer by layer until the desired residual wall thickness remains. In that way, for example, genuine leather can be processed in a material-friendly and process-reliable manner without design implications to the components A-Side. Passive safety in the vehicle is increased through the interaction of modern airbag technology and high-precision laser airbag weakening. The JENOPTIK-VOTAN® A product family has been representing this for more than 25 years and is pointing the way to the future with new and innovative technologies.

Keywords: design freedom, interior material processing, laser technology, passive safety

Procedia PDF Downloads 82
231 The Causes and Effects of Poor Household Sanitation: Case Study of Kansanga Parish

Authors: Rosine Angelique Uwacu

Abstract:

Poor household sanitation is rife in Uganda, especially in Kampala. This study was carried out with the goal of establishing the main causes and effects of poor household sanitation in Kansanga parish. The study's objectives were to identify the various ways through which waste is generated and disposed of in Kansanga parish, to identify different hygiene procedures and behaviors in waste handling, to assess the health effects of poor household sanitation, and to suggest appropriate measures for addressing cases of poor hygiene in Kansanga parish. The study used a survey method in which cluster sampling was employed, because there is no population register or sufficient information and the geographic distribution of individuals is widely scattered. Data were collected through interviews accompanied by observation and questionnaires. The study involved a sample of 100 households. The study revealed that some households use wheeled bin collection, skip hire and roll-on/roll-off containers, while others take their waste to refuse collection vehicles. Surprisingly, the majority of households reported that they use polythene bags ('kavera') and at times plastic sacks to dispose of their waste, which is dumped in drainage channels, dustbins and other illegal dumping sites. The study showed that washing hands with small jerrycans after using the toilet had been adopted by most households, as there were few or no alternatives. The study revealed that the common health effects of poor household sanitation in Kansanga parish are disease outbreaks such as malaria, typhoid and diarrhea. Finally, the study gave a number of recommendations for achieving and maintaining adequate household sanitation in Kansanga parish, such as sensitization of community members by their leaders, like local councillors, and the establishment of community sanitation days for people to collectively and voluntarily carry out good sanitation practices like digging trenches, burning garbage, and proper waste management and disposal. Authorities such as the Kampala Capital City Authority should distribute waste containers or allocate dumping sites where people can dispose of their waste, preferably at minimal cost, for proper management.

Keywords: household sanitation, kansanga parish, Uganda, waste

Procedia PDF Downloads 165
230 Wildlife Habitat Corridor Mapping in Urban Environments: A GIS-Based Approach Using Preliminary Category Weightings

Authors: Stefan Peters, Phillip Roetman

Abstract:

The global loss of biodiversity is threatening the benefits nature provides to human populations; it has become a more pressing issue than climate change and requires immediate attention. While there have been successful global agreements for environmental protection, such as the Montreal Protocol, these are rare, and we cannot rely on them solely. Thus, it is crucial to take national and local actions to support biodiversity. Australia is one of the 17 countries in the world with a high level of biodiversity, and its cities are vital habitats for endangered species, with more of them found in urban areas than in non-urban ones. However, the protection of biodiversity in metropolitan Adelaide has been inadequate, with over 130 species disappearing since European colonization in 1836. In this research project, we conceptualized, developed and implemented a framework for wildlife habitat hotspot and habitat corridor modelling in an urban context using geographic data and GIS modelling and analysis. We used detailed topographic and other geographic data provided by a local council, including spatial and attributive properties of trees, parcels, water features, vegetated areas, roads, verges, traffic, and census data. Weighted factors considered in our raster-based Habitat Hotspot model include parcel size, parcel shape, population density, canopy cover, habitat quality and proximity to habitats and water features. Weighted factors considered in our raster-based Habitat Corridor model include habitat potential (resulting from the Habitat Hotspot model), verge size, road hierarchy, road widths, human density, and the presence of remnant indigenous vegetation species. We developed a GIS model using Python scripting and the ArcGIS Pro ModelBuilder to establish an automated, reproducible and adjustable geoprocessing workflow adaptable to any study area of interest. Our habitat hotspot and corridor modelling framework allows existing habitat hotspots and wildlife habitat corridors to be determined and mapped. Our research was applied to the case study of Burnside, a local council in Adelaide, Australia, which encompasses an area of 30 km². We applied end-user expertise-based category weightings to refine our models and optimize the use of our habitat map outputs towards informing local strategic decision-making.
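
At its core, the hotspot model described above is a weighted overlay of normalized factor rasters. The sketch below shows that operation with NumPy on toy arrays; the factor names echo the list above, but the weights and grids are illustrative assumptions rather than the weightings used for Burnside.

```python
# Sketch: weighted raster overlay for a habitat-hotspot score (toy grids and assumed weights).
import numpy as np

def normalize(raster):
    """Rescale a factor raster to the 0-1 range so weights are comparable."""
    r = raster.astype(float)
    return (r - r.min()) / (r.max() - r.min() + 1e-9)

def weighted_overlay(factors, weights):
    """Habitat score = sum of weight_i * normalized factor_i."""
    score = np.zeros_like(next(iter(factors.values())), dtype=float)
    for name, raster in factors.items():
        score += weights[name] * normalize(raster)
    return score

rng = np.random.default_rng(7)
shape = (50, 50)                                  # toy 50 x 50 cell study area
factors = {
    "canopy_cover": rng.uniform(0, 100, shape),
    "parcel_size": rng.uniform(100, 5000, shape),
    "habitat_quality": rng.uniform(0, 1, shape),
    "proximity_to_water": rng.uniform(0, 1, shape),
}
weights = {"canopy_cover": 0.35, "parcel_size": 0.15,
           "habitat_quality": 0.35, "proximity_to_water": 0.15}   # assumed weightings

hotspot_score = weighted_overlay(factors, weights)
print("top 5% hotspot threshold:", np.quantile(hotspot_score, 0.95))
```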

Keywords: biodiversity, GIS modeling, habitat hotspot, wildlife corridor

Procedia PDF Downloads 84
229 Permeable Reactive Pavement for Controlling the Transport of Benzene, Toluene, Ethyl-Benzene, and Xylene (BTEX) Contaminants

Authors: Shengyi Huang, Chenju Liang

Abstract:

Volatile organic compounds such as benzene, toluene, ethyl-benzene, and xylene (BTEX) are common contaminants in the environment, which can come from asphalt concrete or vehicle exhaust emissions. BTEX may enter the subsurface environment via wet and dry atmospheric deposition. If no means are available to control the contaminants' fate and transport, they can extensively harm the natural environment. In the first phase of this study, various adsorbents were screened to find a suitable additive for the porous asphalt mixture. In the second phase, the selected adsorbent was incorporated into the design of porous asphalt concrete (PAC) to produce the permeable reactive pavement (PRP), which was subsequently tested in the third phase for its potential to adsorb aqueous BTEX compared to the PAC. The PRP was prepared according to the following steps: firstly, the suitable adsorbent was chosen based on the analytical results of specific surface area analysis, thermogravimetric analysis, adsorption kinetics and isotherms, and thermodynamic analysis; secondly, the materials (coarse aggregate, fine aggregate, filler, asphalt, and fiber) were tested to meet the regulated specifications (e.g., water absorption, soundness, viscosity, etc.) for preparing the PRP; thirdly, the amount of adsorbent additive in the PRP was determined; fourthly, the prepared PAC and PRP were examined for their physical properties (e.g., abrasion loss, drain-down loss, Marshall stability, Marshall flow, dynamic stability, etc.). In the comparison between PRP and PAC, the PRP showed better physical performance than the traditional PAC. Finally, Marshall specimen column tests were conducted to explore the adsorption capacities of the PAC and PRPs. The BTEX adsorption capacities of the PRPs were higher than those of the traditional PAC. In summary, the PRPs showed superior physical performance and adsorption capacities, which demonstrates the potential of PRP to replace PAC for better control of the transport of non-point source pollutants.
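
Adsorbent screening of the kind described above typically involves fitting equilibrium data to isotherm models; a common choice is the Langmuir form q = q_max·K_L·C/(1 + K_L·C). The sketch below fits that model to made-up equilibrium data with SciPy; the values are illustrative, not the study's measurements, and the study itself may have used different isotherm models.

```python
# Sketch: fitting a Langmuir isotherm to (hypothetical) BTEX equilibrium adsorption data.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c_eq, q_max, k_l):
    """q = q_max * K_L * C / (1 + K_L * C): adsorbed amount vs. equilibrium concentration."""
    return q_max * k_l * c_eq / (1.0 + k_l * c_eq)

# Made-up equilibrium concentrations (mg/L) and adsorbed amounts (mg/g).
c_eq = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
q_obs = np.array([1.8, 3.1, 4.9, 7.4, 8.9, 9.8])

(q_max, k_l), _ = curve_fit(langmuir, c_eq, q_obs, p0=[10.0, 0.5])
print(f"q_max = {q_max:.2f} mg/g, K_L = {k_l:.2f} L/mg")
```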

Keywords: porous asphalt concrete, volatile organic compounds, permeable reactive pavement, non-point source pollution

Procedia PDF Downloads 177
228 Designing Presentational Writing Assessments for the Advanced Placement World Language and Culture Exams

Authors: Mette Pedersen

Abstract:

This paper outlines the criteria that assessment specialists use when they design the 'Persuasive Essay' task for the four Advanced Placement World Language and Culture Exams (AP French, German, Italian, and Spanish). The 'Persuasive Essay' is a free-response, source-based, standardized measure of presentational writing. Each 'Persuasive Essay' item consists of three sources (an article, a chart, and an audio recording) and a prompt, which is a statement of the topic phrased as an interrogative sentence. Due to the richness of its source materials and the amount of time test takers are given to prepare and write their responses (a total of 55 minutes), the 'Persuasive Essay' is the free-response task on the AP World Language and Culture Exams that goes furthest in unleashing test takers' proficiency potential. The author focuses on the work that goes into designing the 'Persuasive Essay' task, outlining best practices for the selection of topics and sources, the interplay that needs to be present among the sources, and the thinking behind the articulation of prompts. Using released 'Persuasive Essay' items from the AP World Language and Culture Exams and accompanying data on test taker performance, the author shows how different passages, and features of passages, have succeeded (and sometimes not succeeded) in eliciting writing proficiency among test takers over time. Data from approximately 215,000 test takers per year from 2014 to 2017 and approximately 35,000 test takers per year from 2012 to 2013 form the basis of this analysis. The study concludes that test taker performance improves significantly when the sources that test takers are presented with express directly opposing viewpoints. Test taker performance also improves when the interrogative prompt is phrased as a yes/no question. Finally, an analysis of the linguistic difficulty and complexity levels of the printed sources reveals that test taker performance does not decrease when the complexity level of the article in the 'Persuasive Essay' increases. This text complexity analysis was performed with the help of the 'ETS TextEvaluator' tool and the 'Complexity Scale for Information Texts (Scale)', two tools which, in combination, provide a rubric and a fully automated technology for evaluating nonfiction and informational texts in English translation.
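
The study's complexity analysis relies on the proprietary ETS TextEvaluator; purely as an illustration of quantifying text complexity, the sketch below computes a crude proxy (mean sentence length and mean word length) in plain Python. It is not the ETS tool, and the sample text is invented.

```python
import re

def complexity_proxy(text: str) -> dict:
    """Crude text-complexity proxy: counts plus mean sentence and word length."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    return {
        "sentences": len(sentences),
        "words": len(words),
        "mean_sentence_length": len(words) / max(len(sentences), 1),
        "mean_word_length": sum(len(w) for w in words) / max(len(words), 1),
    }

sample = ("Public transport reduces congestion. "
          "However, funding it adequately requires sustained political commitment.")
print(complexity_proxy(sample))
```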

Keywords: advanced placement world language and culture exams, designing presentational writing assessments, large-scale standardized assessments of written language proficiency, source-based language testing

Procedia PDF Downloads 110
227 Artificial Neural Network Approach for Vessel Detection Using Visible Infrared Imaging Radiometer Suite Day/Night Band

Authors: Takashi Yamaguchi, Ichio Asanuma, Jong G. Park, Kenneth J. Mackin, John Mittleman

Abstract:

In this paper, vessel detection using an artificial neural network is proposed in order to automatically construct a vessel detection model from satellite imagery of the day/night band (DNB) of the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi National Polar-orbiting Partnership (Suomi-NPP). The goal of our research is to establish a vessel detection method using DNB satellite imagery in order to monitor changes in vessel activity over a wide region. Temporal vessel monitoring is very important for detecting events and understanding circumstances within the maritime environment. For vessel locating and detection, the Automatic Identification System (AIS) and remote sensing using synthetic aperture radar (SAR) imagery have been studied. However, each data source lacks information due to uncertain operation or limited continuous observation. Therefore, the fusion of effective data and methods is important for monitoring the maritime environment in the future. DNB is one of the effective data sources for detecting small vessels, such as fishing boats, that are difficult to observe with AIS. DNB is the satellite sensor data of VIIRS on Suomi-NPP. In contrast to SAR images, DNB images have moderate resolution and are affected by cloud cover, but the same regions can be observed every day. The DNB sensor can observe lights produced by various artificial sources, such as vehicles and buildings, at night and can detect small vessels from their fishing lights on open water. However, modelling vessel detection with DNB is very difficult, since complex atmospheric and lunar conditions must be considered owing to the strong influence of lunar reflection from clouds on the DNB signal. Therefore, an artificial neural network was applied to learn the vessel detection model. As an additional feature for vessel detection, the brightness temperature at 3.7 μm (BT3.7) was used because it serves as a parameter for atmospheric conditions.
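
To illustrate the kind of pixel-level classifier described, the following sketch trains a small multilayer perceptron on synthetic two-feature samples (DNB radiance and BT3.7) with scikit-learn; the feature distributions and network size are assumptions, not the authors' model.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic pixel samples: [DNB radiance, BT3.7]; real features would come from VIIRS granules.
n = 2000
background = np.column_stack([rng.normal(0.2, 0.1, n), rng.normal(285, 3, n)])
vessels    = np.column_stack([rng.normal(1.5, 0.4, n), rng.normal(288, 3, n)])
X = np.vstack([background, vessels])
y = np.concatenate([np.zeros(n), np.ones(n)])  # 1 = vessel light

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=500, random_state=0)
model.fit(X_tr, y_tr)
print("Held-out accuracy:", round(model.score(X_te, y_te), 3))
```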

Keywords: artificial neural network, day/night band, remote sensing, Suomi National Polar-orbiting Partnership, vessel detection, Visible Infrared Imaging Radiometer Suite

Procedia PDF Downloads 214
226 Experimental Studies of the Reverse Load-Unloading Effect on the Mechanical, Linear and Nonlinear Elastic Properties of n-AMg6/C60 Nanocomposite

Authors: Aleksandr I. Korobov, Natalia V. Shirgina, Aleksey I. Kokshaiskiy, Vyacheslav M. Prokhorov

Abstract:

The paper presents the results of an experimental study of the effect of reverse mechanical load-unloading on the mechanical, linear, and nonlinear elastic properties of the n-AMg6/C60 nanocomposite. Samples of the n-AMg6/C60 nanocomposite were obtained by grinding polycrystalline AMg6 alloy with 0.3 wt % of C60 fullerite in a planetary mill under an argon atmosphere. The resulting product consisted of 200-500 µm agglomerates of nanoparticles. X-ray coherent scattering analysis showed that the average nanoparticle size is 40-60 nm. The resulting preform was extruded at high temperature. The C60 fullerite modifier interferes with recrystallization at grain boundaries. For the n-AMg6/C60 nanocomposite samples, the loading curve was measured: the dependence of the mechanical stress σ on the sample strain ε during a multi-cycle load-unloading process up to failure. A hysteresis dependence σ = σ(ε) was observed, and an insignificant residual strain of ε < 0.005 was recorded. At σ ≈ 500 MPa and ε ≈ 0.025, the sample fractured; the fracture was brittle. Microhardness was measured before and after failure of the sample, and it was found that the load-unloading process led to an increase in microhardness. The effect of the reverse mechanical stress on the linear and nonlinear elastic properties of the n-AMg6/C60 nanocomposite was studied experimentally by an ultrasonic method on the automated Ritec RAM-5000 SNAP SYSTEM. The velocities of the longitudinal and shear bulk waves in the n-AMg6/C60 nanocomposite were measured with the pulse method, and all the second-order elastic coefficients and their dependence on the magnitude of the reverse mechanical stress applied to the sample were calculated. The nonlinear elastic properties of the n-AMg6/C60 nanocomposite under reverse load-unloading were studied with the spectral method. At various values of sample strain (up to failure), the dependence of the amplitude of the second longitudinal acoustic harmonic at 2f = 10 MHz on the amplitude of the first harmonic at f = 5 MHz was measured. Based on these measurements, the values of the nonlinear acoustic parameter in the n-AMg6/C60 nanocomposite sample at different mechanical stresses were determined. The results can be used in solid-state physics and materials science and for the development of new techniques for nondestructive testing of structural materials using nonlinear acoustic diagnostics. This study was supported by the Russian Science Foundation (project №14-22-00042).
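
The nonlinear acoustic parameter is commonly estimated from the growth of the second-harmonic amplitude with the square of the fundamental amplitude. The sketch below applies the classical relation β = 8A₂/(k²xA₁²) to hypothetical amplitude data; the wave speed, propagation distance, and amplitudes are assumed, so the result is only a relative illustration, not the study's measurement.

```python
import numpy as np

# Hypothetical measured amplitudes of the fundamental (A1, at f = 5 MHz) and the
# second harmonic (A2, at 2f = 10 MHz), in arbitrary units (so beta is relative).
A1 = np.array([0.2, 0.4, 0.6, 0.8, 1.0])
A2 = np.array([0.004, 0.017, 0.037, 0.065, 0.102])

f = 5.0e6     # fundamental frequency, Hz
c_l = 6.0e3   # assumed longitudinal wave speed, m/s
x = 20.0e-3   # assumed propagation distance, m
k = 2.0 * np.pi * f / c_l

# A2 is expected to grow as A1**2; the slope of that line gives the nonlinear
# parameter via beta = 8 * A2 / (k**2 * x * A1**2).
slope = np.polyfit(A1**2, A2, 1)[0]
beta = 8.0 * slope / (k**2 * x)
print(f"Estimated nonlinear acoustic parameter (relative): beta ≈ {beta:.2f}")
```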

Keywords: nanocomposite, generation of acoustic harmonics, nonlinear acoustic parameter, hysteresis

Procedia PDF Downloads 123
225 Mikrophonie I (1964) by Karlheinz Stockhausen - Between Idea and Auditory Image

Authors: Justyna Humięcka-Jakubowska

Abstract:

1. Background in music analysis. Traditionally, when we think about a composer's sketches, the chances are that we are thinking in terms of the working out of detail rather than the evolution of an overall concept. Since music is a 'time art', questions of form cannot be entirely detached from considerations of time. One could say that composers tend to regard time either as a space filled gradually and partly intuitively, or as something to be occupied according to a specific strategy. In my opinion, one thing that sheds light on Stockhausen's compositional thinking is his frequent use of 'form schemas', that is, often a single-page representation of the entire structure of a piece. 2. Background in music technology. Sonic Visualiser (SV) is a program used to study a musical recording. It is an open-source application for viewing, analysing, and annotating music audio files. It contains a number of visualisation tools designed with useful default parameters for musical analysis. Additionally, SV's Vamp plugin format supports analyses such as structural segmentation. 3. Aims. The aim of my paper is to show how SV may be used to obtain a better understanding of a specific musical work, and how the compositional strategy impacts musical structures and musical surfaces. I want to show that 'traditional' music-analytic methods do not allow one to indicate the interrelationships between the musical surface (which is perceived) and the underlying musical/acoustical structure. 4. Main contribution. Stockhausen dealt with the most diverse musical problems by the most varied methods. One characteristic that never ceased to stand at the centre of his thought and work was the quest for a new balance founded upon an acute connection between speculation and intuition. In the case of Mikrophonie I (1964) for tam-tam and six players, Stockhausen makes a distinction between the 'connection scheme', which indicates the ground rules underlying all versions, and the form scheme, which is associated with a particular version. The preface to the published score includes both the connection scheme and a single instance of a 'form scheme', which is what one can hear on the CD recording. In the current study, the insight into the compositional strategy chosen by Stockhausen was compared with the auditory image, that is, with the perceived musical surface. Stockhausen's musical work is analysed in terms of both melodic/voice and timbre evolution. 5. Implications. The current study shows how musical structures determine the musical surface. My general assumption is that, while listening to music, we can extract basic kinds of musical information from musical surfaces. It is shown that interactive strategies of musical structure analysis can offer a very fruitful way of looking directly into certain structural features of music.
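
Sonic Visualiser itself is a GUI tool; as a programmatic analogue of the structural-segmentation analysis described, the sketch below uses librosa to segment a recording by timbre (MFCC) features. The file name and the number of sections are assumptions, not values from the study.

```python
import librosa

# Hypothetical path to a recording of the analysed version; replace with a real file.
y, sr = librosa.load("mikrophonie_I.wav", sr=22050, mono=True)

# Timbre-oriented features (MFCCs) computed frame by frame.
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

# Agglomerative segmentation into a fixed number of sections (10 is arbitrary here).
boundaries = librosa.segment.agglomerative(mfcc, 10)
times = librosa.frames_to_time(boundaries, sr=sr)
print("Estimated section boundaries (s):", [round(t, 1) for t in times])
```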

Keywords: automated analysis, composer's strategy, mikrophonie I, musical surface, stockhausen

Procedia PDF Downloads 275
224 The Gezi Park Protests in the Columns

Authors: Süleyman Hakan Yilmaz, Yasemin Gülsen Yilmaz

Abstract:

The Gezi Park protests of 2013 significantly changed the Turkish agenda, and their effects have been felt historically. The protests, which rapidly spread throughout the country, were triggered by the proposal to recreate the Ottoman Army Barracks as a shopping mall on Gezi Park in Istanbul's Taksim neighbourhood, despite the opposition of several NGOs, and by the cutting of trees in the park for this purpose. Once the news that construction vehicles had entered the park on May 27 spread on social media, activists moved into the park to stop the demolition, and the police used disproportionate force against them. With this police intervention and the then prime minister Tayyip Erdoğan's insistent statements about the construction plans, the protests turned into anti-government demonstrations, which then spread to the rest of the country, mainly to big cities like Ankara and Izmir. According to the Ministry of Internal Affairs' June 23rd reports, 2.5 million people joined the demonstrations in 79 provinces, that is, all of them except Bayburt and Bingöl, while even more people shared their opinions via social networks. As a result of these events, 8 civilians and 2 security personnel lost their lives, namely police chief Mustafa Sarı, police officer Ahmet Küçükdağ, and citizens Mehmet Ayvalıtaş, Abdullah Cömert, Ethem Sarısülük, Ali İsmail Korkmaz, Ahmet Atakan, Berkin Elvan, Burak Can Karamanoğlu, Mehmet İstif, and Elif Çermik, and 8,163 more people were injured. Besides being a turning point in Turkish history, the Gezi Park protests also had broad repercussions in both Turkish and global media, which focused on Turkey throughout the events. Our study conducts a content analysis of the columns of three Turkish newspapers with differing ideological standpoints, Hürriyet, Cumhuriyet, and Yeni Şafak, in order to reveal their basic approach to covering the Gezi Park protests in their columns. Column content relating to the Gezi protests was examined and analysed for this purpose. The aim of this study is to understand the social effects of the Gezi Park protests through media samples with varying political attitudes towards news coverage.

Keywords: Gezi Park, media, news casting, columns

Procedia PDF Downloads 407
223 Modelling of Phase Transformation Kinetics in Post Heat-Treated Resistance Spot Weld of AISI 1010 Mild Steel

Authors: B. V. Feujofack Kemda, N. Barka, M. Jahazi, D. Osmani

Abstract:

Automobile manufacturers are constantly seeking means to reduce the weight of car bodies. The use of several steel grades in auto body assembly has been found to be a good technique to lighten vehicle weight. In recent years, the use of dual-phase (DP) steels, transformation-induced plasticity (TRIP) steels, and boron steels in some parts of the auto body has become a necessity because of their light weight. However, these steels are martensitic: when they undergo fast heat treatment, the resulting microstructure is essentially made of martensite. Resistance spot welding (RSW), one of the techniques most used in assembling auto bodies, becomes problematic in the case of these steels. RSW is indeed a process in which steel is heated and cooled in a very short period of time, so the resulting weld nugget is mostly fully martensitic, especially in the case of DP, TRIP, and boron steels; this also holds for plain carbon steels such as the AISI 1010 grade, which is extensively used in auto body inner parts. Martensite, in turn, must be avoided as much as possible when welding steel because it is the principal source of brittleness and weakens the weld nugget. This work therefore aims to find a means to reduce the martensite fraction in the weld nugget when using RSW for assembly. Phase transformation kinetics during RSW were predicted by modelling the whole welding process, and a technique called post-weld heat treatment (PWHT) was applied in order to reduce the martensite fraction in the weld nugget. Simulations were performed for the AISI 1010 grade, and the results show that the application of PWHT leads to the formation of not only martensite but also ferrite, bainite, and pearlite during cooling of the weld nugget. Welding experiments were carried out in parallel, and micrographic analyses show the presence of several phases in the weld nugget. The experimental weld geometry and phase proportions are in good agreement with the simulation results, demonstrating the validity of the model.
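
Isothermal transformation kinetics of diffusional products during PWHT are often described by a JMAK-type law; the sketch below evaluates X(t) = 1 − exp(−k·tⁿ) for hypothetical kinetic constants, as a generic illustration rather than the authors' calibrated model for AISI 1010.

```python
import numpy as np

def jmak_fraction(t, k, n):
    """Johnson-Mehl-Avrami-Kolmogorov transformed fraction at time t (isothermal hold)."""
    return 1.0 - np.exp(-k * t**n)

# Hypothetical kinetic constants for a diffusional product (e.g., ferrite/pearlite)
# at one holding temperature; real values would come from TTT data for AISI 1010.
k, n = 0.05, 2.0
times = np.linspace(0.0, 10.0, 6)  # seconds of post-weld holding

for t in times:
    f = jmak_fraction(t, k, n)
    # Austenite not yet transformed would still become martensite on final cooling.
    print(f"t = {t:4.1f} s -> transformed fraction = {f:.2f}, remaining austenite = {1 - f:.2f}")
```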

Keywords: resistance spot welding, AISI 1010, modeling, post weld heat treatment, phase transformation, kinetics

Procedia PDF Downloads 93
222 An Analytical Metric and Process for Critical Infrastructure Architecture System Availability Determination in Distributed Computing Environments under Infrastructure Attack

Authors: Vincent Andrew Cappellano

Abstract:

In the early phases of critical infrastructure system design, translating distributed computing requirements into an architecture carries risk given the multitude of approaches (e.g., cloud, edge, fog). In many systems, a single requirement for system uptime/availability is used to encompass the system's intended operations. However, architected systems may meet those availability requirements only during normal operations and not during component failure or during outages caused by adversary attacks on critical infrastructure (e.g., physical, cyber). System designers lack a structured method to evaluate availability requirements against candidate system architectures through deep degradation scenarios (i.e., from normal operations all the way down to significant damage to communications or physical nodes). This increases the risk of selecting a poor candidate architecture due to the absence of insight into true performance for systems that must operate as a piece of critical infrastructure. This research effort proposes a process to analyze critical infrastructure system availability requirements and a candidate set of system architectures, producing a metric that assesses these architectures over a spectrum of degradations to aid in selecting appropriately resilient architectures. To accomplish this effort, a set of simulation and evaluation efforts is undertaken that will process, in an automated way, a set of sample requirements into a set of potential architectures in which system functions and capabilities are distributed across nodes. Nodes and links will have specific characteristics and, based on the sampled requirements, contribute to the overall system functionality, such that as they are impacted or degraded, the resulting functional availability of the system can be determined. A reinforcement-learning agent will structurally impact the nodes, links, and characteristics (e.g., bandwidth, latency) of a given architecture to provide an assessment of system functional uptime/availability under these scenarios. By varying the intensity of the attack and related aspects, a structured method is created for evaluating the performance of candidate architectures against one another and for producing a metric rating their resilience to these attack types and strategies. Through multiple simulation iterations, sufficient data will exist to compare this availability metric, and the resulting architectural recommendation against the baseline requirements, with existing multi-factor computing architectural selection processes. It is intended that this additional data will improve the matching of resilient critical infrastructure system requirements to the correct architectures and implementations, supporting improved operation during times of system degradation due to failures and infrastructure attacks.
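
A minimal sketch of the degradation idea, assuming a random graph stands in for a candidate architecture: nodes are removed one by one (a crude stand-in for the reinforcement-learning attack agent) and a simple functional-availability score over critical nodes is recomputed with networkx. All parameters and the scoring rule are illustrative assumptions.

```python
import random
import networkx as nx

random.seed(1)

# Hypothetical architecture: 20 nodes, ~30% link density.
G = nx.erdos_renyi_graph(20, 0.3, seed=1)
required = set(random.sample(list(G.nodes), 5))  # nodes hosting critical functions

def functional_availability(graph, critical):
    """Fraction of critical nodes that survive and remain mutually connected."""
    present = [n for n in critical if n in graph]
    if not present:
        return 0.0
    comps = list(nx.connected_components(graph))
    best = max(len(set(c) & set(present)) for c in comps) if comps else 0
    return best / len(critical)

# Degrade the architecture by removing random nodes and track availability.
H = G.copy()
for step in range(10):
    H.remove_node(random.choice(list(H.nodes)))
    print(f"after {step + 1} node losses: availability = {functional_availability(H, required):.2f}")
```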

Keywords: architecture, resiliency, availability, cyber-attack

Procedia PDF Downloads 72
221 Economic Evaluation of an Advanced Bioethanol Manufacturing Technology Using Maize as a Feedstock in South Africa

Authors: Ayanda Ndokwana, Stanley Fore

Abstract:

Industrial prosperity and the rapid expansion of the human population in South Africa over the past two decades have increased the use of conventional fossil fuels such as crude oil, coal, and natural gas to meet the country's energy demands. However, the inevitable depletion of fossil fuel reserves, volatile global oil prices, and a large carbon footprint are some of the crucial reasons the South African Government needs to make a considerable investment in the development of the biofuel industry. In South Africa, this industry is still at an introductory stage, with no large-scale manufacturing plant commissioned yet. Bioethanol is a potential replacement for gasoline, a fossil fuel used in motor vehicles. Using bioethanol as a transport fuel will help the Government save the heavy foreign exchange incurred in importing oil and create many job opportunities in rural farming. In 2007, the South African Government developed the National Biofuels Industrial Strategy in an effort to provide support and attract investment in bioethanol production. However, capital investment in large-scale bioethanol production depends on a sound economic assessment of the available manufacturing technologies. The aim of this study is to evaluate the profitability of an advanced bioethanol manufacturing technology that uses maize as a feedstock in South Africa. Fibre (bran) fractionation gives this technology a number of merits, such as energy efficiency, low capital expenditure, and profitability, compared with conventional dry-mill bioethanol technology. Quantitative techniques will be used to collect and analyze numerical data from suitable organisations in South Africa. The dependence on plant capacity of three profitability indicators, the Discounted Payback Period (DPP), Net Present Value (NPV), and Return on Investment (ROI), will be evaluated. Profitability analysis will be performed for the following plant capacities: 100,000 tons/year, 150,000 tons/year, and 200,000 tons/year. The plant capacity with the shortest DPP, a positive NPV, and the highest ROI warrants further consideration for capital investment.
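
A brief sketch of the three profitability indicators, assuming hypothetical cash flows for one plant capacity; the figures below are placeholders for illustration, not results of the study.

```python
def npv(rate, cash_flows):
    """Net present value; cash_flows[0] is the (negative) initial outlay in year 0."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def discounted_payback_period(rate, cash_flows):
    """Years until cumulative discounted cash flow turns non-negative (None if never)."""
    cumulative = 0.0
    for t, cf in enumerate(cash_flows):
        cumulative += cf / (1.0 + rate) ** t
        if cumulative >= 0.0:
            return t
    return None

def roi(cash_flows):
    """Total net return over the initial investment, as a fraction."""
    investment = -cash_flows[0]
    return (sum(cash_flows[1:]) - investment) / investment

# Hypothetical figures for a 150,000 tons/year plant (millions of rand, 10-year horizon).
flows = [-500.0] + [95.0] * 10
rate = 0.10  # assumed discount rate
print("NPV :", round(npv(rate, flows), 1), "million")
print("DPP :", discounted_payback_period(rate, flows), "years")
print("ROI :", round(roi(flows), 2))
```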

Keywords: bioethanol, economic evaluation, maize, profitability indicators

Procedia PDF Downloads 205