459 Analyzing the Effects of Bio-fibers on the Stiffness and Strength of Adhesively Bonded Thermoplastic Bio-fiber Reinforced Composites by a Mixed Experimental-Numerical Approach
Authors: Sofie Verstraete, Stijn Debruyne, Frederik Desplentere
Abstract:
Considering environmental issues, interest in applying sustainable materials in industry is increasing. Specifically for composites, there is an emerging need for suitable materials and bonding techniques. As an alternative to traditional composites, short bio-fiber (cellulose-based flax) reinforced Polylactic Acid (PLA) is gaining popularity. However, these thermoplastic based composites show issues in adhesive bonding. This research focuses on analyzing the effects of the fibers near the bonding interphase. The research uses injection molded plate structures. A first important parameter is the fiber volume fraction, which directly affects the adhesion characteristics of the surface. This parameter is varied between 0 (pure PLA) and 30%. Next to fiber volume fraction, the orientation of fibers near the bonding surface governs the adhesion characteristics of the injection molded parts. This parameter is not directly controlled in this work, but its effects are analyzed. Surface roughness also greatly determines surface wettability, and thus adhesion. Therefore, this research work considers three different roughness conditions, with different mechanical treatments yielding roughness values up to 0.5 mm. In this preliminary research, only one adhesive type is considered: a two-part epoxy cured at 23 °C for 48 hours. To ensure a well-controlled parametric study, simple and reproducible adhesive bonds are manufactured. Both single lap (substrate width 25 mm, thickness 3 mm, overlap length 10 mm) and double lap tests are considered, since these are well documented and quite straightforward to conduct. These tests are conducted for the different substrate and surface conditions. Dog bone tensile testing is applied to retrieve the stiffness and strength characteristics of the substrates (with different fiber volume fractions).
Non-linear finite element modelling (FEA) relates the considered parameters to the stiffness and strength of the different joints obtained through the aforementioned tests. Ongoing work deals with developing dedicated numerical models incorporating the different considered adhesion parameters. Although this work is the start of an extensive research project on the bonding characteristics of thermoplastic bio-fiber reinforced composites, some interesting results are already prominent. Firstly, a clear correlation between the surface roughness and the wettability of the substrates is observed. Given the adhesive type (and viscosity), surface energy is found to increase roughly in proportion to surface roughness, up to a point; this becomes more pronounced as fiber volume fraction increases. Secondly, ultimate bond strength (single lap) also increases with increasing fiber volume fraction. On a macroscopic level, this confirms the positive effect of fibers near the adhesive bond line.
Keywords: adhesive bonding, bio-fiber reinforced composite, flax fibers, lap joint
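For reference, the single lap geometry above fixes the bonded area at 25 mm × 10 mm, so an ultimate load converts directly to an average shear strength. A minimal sketch of that conversion (the 2.5 kN failure load in the usage note is hypothetical, not a reported result):

```python
def lap_shear_strength(failure_load_n, width_mm, overlap_mm):
    """Average shear stress at failure of a single lap joint, in MPa.

    tau = F / (w * L): the failure load divided by the bonded overlap area.
    (N divided by mm^2 gives MPa directly.)
    """
    return failure_load_n / (width_mm * overlap_mm)
```

With the abstract's geometry (25 mm width, 10 mm overlap, 250 mm² bond area), a hypothetical 2.5 kN failure load would correspond to a 10 MPa average shear strength.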
Procedia PDF Downloads 127
458 Verification of Geophysical Investigation during Subsea Tunnelling in Qatar
Authors: Gary Peach, Furqan Hameed
Abstract:
Musaimeer outfall tunnel is one of the longest storm water tunnels in the world, with a total length of 10.15 km. The tunnel will accommodate surface and rain water received from the drainage networks of 270 km² of urban areas in southern Doha, with a pumping capacity of 19.7 m³/s. The tunnel is excavated by a Tunnel Boring Machine (TBM) through the Rus Formation, Midra Shales, and Simsima Limestone. Water inflows at high pressure, complex mixed ground, and weaker ground strata prone to karstification, with vertical and lateral fractures connected to the sea bed, were also encountered during mining. In addition to pre-tender geotechnical investigations, the Contractor carried out a supplementary offshore geophysical investigation in order to fine-tune the existing results of geophysical and geotechnical investigations. Electrical resistivity tomography (ERT) and seismic reflection surveys were carried out, and interpretations of rock mass conditions were made to provide an overall picture of underground conditions along the tunnel alignment. This allowed critical tunnelling areas and cutter head interventions to be planned accordingly. Karstification was monitored with a non-intrusive radar system installed on the TBM. The Bore-tunneling Electrical Ahead Monitoring (BEAM) system was installed at the cutter head and was able to predict the rock mass up to three tunnel diameters ahead of the cutter head. The BEAM system provided online, real-time monitoring of rock mass conditions, which were then correlated with the rock mass conditions predicted during the interpretation phase of the offshore geophysical surveys. Further correlation was carried out using rock mass samples taken during tunnel face inspections and excavated material produced by the TBM. The BEAM data was continuously monitored to check variations in the resistivity and percentage frequency effect (PFE) of the ground.
This system provided information about rock mass condition, potential karst risk, and potential water inflow. The BEAM system was found to be more than 50% accurate in picking up the difficult ground conditions and faults predicted in the geotechnical interpretative report before the start of tunnelling operations. Upon completion of the project, it was concluded that the combined use of different geophysical investigation results allows the execution stage to be carried out with greater confidence and less geotechnical risk. The approaches used for the prediction of rock mass conditions in the Geotechnical Interpretative Report (GIR) and in the seismic reflection and electrical resistivity tomography (ERT) surveys were concluded to be reliable, as the same rock mass conditions were encountered during tunnelling operations.
Keywords: tunnel boring machine (TBM), subsea, karstification, seismic reflection survey
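The abstract does not disclose BEAM's internal processing, but the percentage frequency effect it monitors has a standard induced-polarization definition, and the continuous monitoring logic can be sketched as a simple threshold check. The threshold values below are illustrative assumptions, not project parameters:

```python
def percentage_frequency_effect(rho_low_hz, rho_high_hz):
    # Standard induced-polarization definition: the relative drop in apparent
    # resistivity between a low-frequency and a high-frequency measurement,
    # expressed in percent. Chargeable (e.g. fractured, clay-filled) ground
    # shows a larger drop than intact rock.
    return 100.0 * (rho_low_hz - rho_high_hz) / rho_high_hz

def flag_hazard(rho_ohm_m, pfe_percent, rho_threshold=50.0, pfe_threshold=5.0):
    # A sharp resistivity drop suggests water-bearing ground ahead of the
    # cutter head; an elevated PFE suggests fractured or karstified rock.
    # Both thresholds here are invented for illustration.
    return rho_ohm_m < rho_threshold or pfe_percent > pfe_threshold
```

A monitoring loop would evaluate `flag_hazard` on each new reading and trigger a cutter head intervention plan when it returns True.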
Procedia PDF Downloads 244
457 Innovation Mechanism in Developing Cultural and Creative Industries
Authors: Liou Shyhnan, Chia Han Yang
Abstract:
The study aims to investigate the promotion of innovation in the development of cultural and creative industries (CCI) and apply research on culture and creativity to this promotion. Using the research perspectives of culture and creativity as starting points, this study has examined the challenges, trends, and opportunities that have emerged from the development of the CCI until the present. It is found that a definite cause-and-effect context exists between them, and that a homologous theoretical basis can be used to understand and interpret them. Based on the characteristics of the aforementioned challenges and trends, this study has compiled two main theoretical systems for conducting research on culture and creativity: (i) the reciprocal process between creativity and culture, and (ii) a mechanism for innovation involving multicultural convergence. Both theoretical systems were then used as the foundation to arrive at possible research propositions relating to the two developmental systems. This was done through a literature review to identify the theoretical context, together with interviews and observations of actual case studies within Taiwan’s CCI. In so doing, the critical factors that can address the aforementioned challenges and trends were discovered. Our results indicate that, for the reciprocal process between creativity and culture, culture serves as a creative resource in the cultural and creative industries. According to shared consensus, culture provides symbolic meanings and emotional attachment for the products and experiences offered by CCI. Moreover, different cultures vary in their effects on creative processes and standards, thus engendering distinctive preferences for and evaluations of the creative expressions and experiences of CCIs. In addition, we identify that creativity serves as the engine driving the continuation and rebirth of cultures.
With culture at the core, digital technology, design thinking, and business models are critical constituents of the innovation mechanism that promotes cultural continuity. Regarding cultural preservation, we argue that the regeneration of local spaces and folk customs must embody the interactive experiences of present-day life: cultural spaces and folk customs are regenerated through interaction and experience in modern life. Regarding the innovation mechanism for multicultural convergence, we propose that innovation stakeholders from different disciplines (e.g., creators, designers, engineers, and marketers) in CCIs rely on the establishment of a co-creation mechanism to promote interdisciplinary interaction and collaboration. We further argue that multicultural mixing enhances innovation in developing CCI: assuming an open and mutually enlightening attitude to enrich one another’s cultures in the multicultural exchanges under globalization will create diversity in otherwise homogenous CCIs. Finally, for promoting innovation in developing cultural and creative industries, we propose a model for joint knowledge creation that can be established to enhance the mutual reinforcement of theoretical and practical research on culture and creativity.
Keywords: culture and creativity, innovation, cultural and creative industries, cultural mixing
Procedia PDF Downloads 325
456 Control of Belts for Classification of Geometric Figures by Artificial Vision
Authors: Juan Sebastian Huertas Piedrahita, Jaime Arturo Lopez Duque, Eduardo Luis Perez Londoño, Julián S. Rodríguez
Abstract:
Artificial vision is a branch of artificial intelligence that allows the acquisition, processing, and analysis of information, especially information obtained through digital images. Today, artificial vision is used in manufacturing for quality control and production, since these processes can be realized through counting, positioning, and object recognition algorithms driven by one or more cameras. Companies also use assembly lines formed by conveyor systems with actuators to move pieces from one location to another during production. These devices must be programmed in advance and must follow a programmed logic routine. Nowadays every industry targets production volume, quality, and the rapid execution of the different stages and processes in the production chain of any product or service being offered. The principal aim of this project is to program a computer that recognizes geometric figures (circle, square, and triangle) through a camera, each figure with a different color, and to link it with a group of conveyor systems that organize the mentioned figures into cubicles, which also differ from one another by color. This project is based on artificial vision, so the methodology must be strict; it is detailed below: 1. Methodology: 1.1 The software used in this project is Qt Creator linked with OpenCV libraries. Together, these tools are used to build the program that identifies colors and shapes directly from the camera. 1.2 Image acquisition: to use the OpenCV libraries, it is first necessary to acquire images, which can be captured by a computer’s web camera or a different specialized camera.
1.3 RGB color recognition is realized in code by traversing the matrices of the captured images and comparing pixels, identifying the primary colors red, green, and blue. 1.4 To detect shapes it is necessary to segment the images: the first step is converting the image from RGB to grayscale, to work with the dark tones of the image; then the image is binarized, leaving the figure in white on a black background. Finally, the contours of the figure are found and the number of edges is counted to identify which figure it is. 1.5 After the color and figure have been identified, the program links with the conveyor systems, whose actuators classify the figures into their respective cubicles. Conclusions: The OpenCV library is a useful tool for projects in which an interface between a computer and the environment is required, since the camera captures external characteristics for further processing. With the program developed for this project, any type of assembly line can be optimized, because images of the environment can be acquired and the process becomes more accurate.
Keywords: artificial intelligence, artificial vision, binarized, grayscale, images, RGB
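Once OpenCV has extracted a contour, steps 1.3 and 1.4 reduce to two small decisions: which primary channel dominates the figure's pixels, and how many vertices the approximated contour has. A dependency-free sketch of that classification logic (the OpenCV capture, thresholding, and contour-approximation calls themselves are omitted here):

```python
def dominant_color(pixels):
    # pixels: iterable of (r, g, b) tuples sampled from the detected figure.
    # The primary channel with the largest total is taken as the figure's color.
    sums = [0, 0, 0]
    for r, g, b in pixels:
        sums[0] += r
        sums[1] += g
        sums[2] += b
    return ("red", "green", "blue")[sums.index(max(sums))]

def classify_shape(num_vertices):
    # num_vertices: vertex count of the polygon approximating the contour
    # (e.g. from cv2.approxPolyDP). Three vertices is a triangle, four a
    # square; many short edges approximate a circle.
    if num_vertices == 3:
        return "triangle"
    if num_vertices == 4:
        return "square"
    return "circle"
```

In the full pipeline, the pair (`dominant_color`, `classify_shape`) would select the target cubicle and drive the corresponding conveyor actuator.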
Procedia PDF Downloads 378
455 Reliability Analysis of Geometric Performance of Onboard Satellite Sensors: A Study on Location Accuracy
Authors: Ch. Sridevi, A. Chalapathi Rao, P. Srinivasulu
Abstract:
The location accuracy of data products is a critical parameter in assessing the geometric performance of satellite sensors. This study focuses on reliability analysis of onboard sensors to evaluate their location accuracy performance over time. The analysis utilizes field failure data and employs the Weibull distribution to determine reliability and, in turn, any improvement or degradation over time. The analysis begins by scrutinizing the location accuracy error, which is the root mean square (RMS) of the differences between ground control point coordinates observed on the product and on the map, and identifying the failure data with reference to time. A significant challenge in this study is to thoroughly analyze the possibility of an infant mortality phase in the data. To address this, the Weibull distribution is utilized to determine whether the data exhibit an infant stage or have transitioned into the operational phase; the shape parameter beta plays a crucial role in identifying this stage. Additionally, determining the exact start of the operational phase and the end of the infant stage poses another challenge, as it is crucial to eliminate residual infant mortality or wear-out from the model, since either can significantly increase the total failure rate. To address this, the well-established statistical Laplace test is applied to infer the behavior of sensors and to accurately ascertain the duration of the different phases in the lifetime and the time required for stabilization. This approach also helps in understanding whether the bathtub curve model, which accounts for the different phases in the lifetime of a product, is appropriate for the data, and whether the thresholds for the infant period and wear-out phase are accurately estimated, by validating the data in individual phases with Weibull distribution curve fitting analysis.
Once the operational phase is determined, reliability is assessed using Weibull analysis. This analysis not only provides insights into the reliability of individual sensors with regard to location accuracy over the required period of time, but also establishes a model that can be applied to automate similar analyses for various sensors and parameters using field failure data. Furthermore, the identification of the best-performing sensor through this analysis serves as a benchmark for future missions and designs, ensuring continuous improvement in sensor performance and reliability. Overall, this study provides a methodology to accurately determine the duration of the different phases in the life data of individual sensors. It enables an assessment of the time required for stabilization and provides insights into the reliability during the operational phase and the commencement of the wear-out phase. By employing this methodology, designers can make informed decisions regarding sensor performance with regard to location accuracy, contributing to enhanced accuracy in satellite-based applications.
Keywords: bathtub curve, geometric performance, Laplace test, location accuracy, reliability analysis, Weibull analysis
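The two-stage procedure described above (a Laplace trend test to delimit the phases, then a Weibull model within the operational phase) can be sketched in miniature. Both functions below follow their textbook definitions; the failure times used in any example would be illustrative, not mission data:

```python
import math

def laplace_u(failure_times, observation_end):
    # Laplace trend statistic for failures observed over (0, T].
    # U significantly negative: inter-failure times are lengthening
    # (reliability growth, i.e. the infant-mortality stage is ending).
    # U significantly positive: failures are clustering late (wear-out).
    # U near zero: a stable, trend-free operational phase.
    n = len(failure_times)
    mean_t = sum(failure_times) / n
    return (mean_t - observation_end / 2) / (
        observation_end * math.sqrt(1.0 / (12 * n))
    )

def weibull_reliability(t, beta, eta):
    # Two-parameter Weibull survival function R(t) = exp(-(t/eta)**beta).
    # The shape parameter beta classifies the phase: beta < 1 infant
    # mortality, beta ~ 1 random failures, beta > 1 wear-out.
    return math.exp(-((t / eta) ** beta))
```

For instance, at t equal to the characteristic life eta, R(t) is always exp(-1), about 36.8%, regardless of beta.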
Procedia PDF Downloads 65
454 Examining the Influence of Firm Internal Level Factors on Performance Variations among Micro and Small Enterprises: Evidence from Tanzanian Agri-Food Processing Firms
Authors: Pulkeria Pascoe, Hawa P. Tundui, Marcia Dutra de Barcellos, Hans de Steur, Xavier Gellynck
Abstract:
A majority of Micro and Small Enterprises (MSEs) experience low or no growth. Understanding of their performance remains incomplete and fragmented, as there is no consensus on the factors influencing it, especially in developing countries. Using the Resource-Based View (RBV) as the theoretical background, this cross-sectional study employed four regression models to examine the influence of firm-level factors (firm-specific characteristics, firm resources, manager socio-demographic characteristics, and selected management practices) on the overall performance variations among 442 Tanzanian micro and small agri-food processing firms. Study results confirmed the RBV argument that intangible resources make a larger contribution to overall performance variations among firms than tangible resources. Firms' tangible and intangible resources together explained 34.5% of overall performance variations (intangible resources accounted for 19.4% of the overall performance variability, compared to 15.1% for tangible resources), ranking first in explaining the overall performance variance. Firm-specific characteristics ranked second, influencing variations in overall performance by 29.0%. Selected management practices ranked third (6.3%), while the manager's socio-demographic factors were last on the list, influencing the overall performance variability among firms by only 5.1%. The study also found that firms that focus on proper utilization of tangible resources (financial and physical), set targets, and undertake better working capital management practices performed better than their counterparts (low and average performers).
Furthermore, accumulation and proper utilization of intangible resources (relational, organizational, and reputational), undertaking performance monitoring practices, the age of the manager, and the choice of firm location and activity were the dominant significant factors influencing the variations among average and high performers, relative to low performers. Entrepreneurial background was a significant factor influencing variations in average and low-performing firms, indicating that entrepreneurial skills are crucial to achieving average levels of performance. Firm age, size, legal status, source of start-up capital, gender, education level, and total business experience of the manager were not statistically significant variables influencing the overall performance variations among the agri-food processors under study. The study has identified both significant and non-significant factors influencing performance variations among low-, average-, and high-performing micro and small agri-food processing firms in Tanzania. Therefore, results from this study will help managers, policymakers, and researchers to identify areas where more attention should be placed in order to improve the overall performance of MSEs in the agri-food industry.
Keywords: firm-level factors, micro and small enterprises, performance, regression analysis, resource-based-view
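The variance shares reported above (e.g., 19.4% for intangible versus 15.1% for tangible resources) come from comparing the R² of nested regression models as predictor blocks are added. A minimal, dependency-free sketch of that comparison; the data and predictors in any example are invented for illustration, not the study's:

```python
def _solve(a, b):
    # Gaussian elimination with partial pivoting for the normal equations.
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (m[i][n] - sum(m[i][j] * x[j] for j in range(i + 1, n))) / m[i][i]
    return x

def r_squared(y, xs):
    # OLS R^2 of y on the predictor columns xs, with an intercept.
    # The R^2 gained when a block of predictors is added to a reduced
    # model is that block's incremental share of explained variance.
    t_n = len(y)
    cols = [[1.0] * t_n] + xs
    k = len(cols)
    xtx = [[sum(cols[i][t] * cols[j][t] for t in range(t_n)) for j in range(k)]
           for i in range(k)]
    xty = [sum(cols[i][t] * y[t] for t in range(t_n)) for i in range(k)]
    beta = _solve(xtx, xty)
    yhat = [sum(beta[i] * cols[i][t] for i in range(k)) for t in range(t_n)]
    ybar = sum(y) / t_n
    ss_res = sum((y[t] - yhat[t]) ** 2 for t in range(t_n))
    ss_tot = sum((v - ybar) ** 2 for v in y)
    return 1.0 - ss_res / ss_tot
```

The incremental contribution of a predictor block is then `r_squared(y, reduced + block) - r_squared(y, reduced)`.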
Procedia PDF Downloads 86
453 Challenges in the Last Mile of the Global Guinea Worm Eradication Program: A Systematic Review
Authors: Getahun Lemma
Abstract:
Introduction: Guinea Worm Disease (GWD), also known as dracunculiasis, is one of the oldest diseases in the history of mankind. Dracunculiasis is caused by a parasitic nematode, Dracunculus medinensis. Infection is acquired by drinking water contaminated with copepods containing infective Guinea Worm (GW) larvae. Almost one year after infection, the worm usually emerges through the skin on a lower limb, causing severe pain and disability. Although there is no effective drug or vaccine against the disease, the chain of transmission can be effectively interrupted with simple and cost-effective public health measures. Death due to dracunculiasis is very rare; however, the disease results in a wide range of physical, social, and economic sequelae. The disease is usually found in rural, remote places of Sub-Saharan African countries among marginalized societies. Currently, GWD is one of the neglected tropical diseases on the verge of eradication. The global Guinea Worm Eradication Program (GWEP) was started in 1980. Since then, the program has achieved tremendous success, reducing the global burden from 3.5 million cases to only 28 human cases at the end of 2018. However, it has recently been shown that not only humans can become infected: a total of 1,105 animal infections had been reported by the end of 2018. Therefore, the objective of this study was to identify the existing challenges in the last mile of the GWEP in order to inform policymakers and stakeholders on potential measures to finally achieve eradication. Method: A systematic literature review of articles published from January 1, 2000 until May 30, 2019. Papers listed in the Cochrane Library, Google Scholar, ProQuest, PubMed, and Web of Science databases were searched and reviewed. Results: Twenty-five articles met the inclusion criteria of the study and were selected for analysis. Relevant data were extracted, grouped, and descriptively analyzed.
Results showed the main challenges complicating the last mile of the global GWEP: 1. Unusual modes of transmission; 2. Rising animal Guinea Worm infection; 3. Suboptimal surveillance; 4. Insecurity; 5. Inaccessibility; 6. Inadequate safe water points; 7. Migration; 8. Poor case containment measures; 9. Ecological changes; and 10. New geographic foci of the disease. Conclusion: This systematic review identified that most of the current challenges in the GWEP have been present since the start of the campaign. However, the recent change in the epidemiological patterns and nature of GWD in the last remaining endemic countries illustrates a new twist in the global GWEP. Considering the complex nature of the current challenges, a more coordinated and multidisciplinary approach to GWD prevention and control measures is needed in the last mile of the campaign. Such strategies would help to make history by making dracunculiasis the first parasitic disease ever eradicated.
Keywords: dracunculiasis, eradication program, guinea worm, last mile
Procedia PDF Downloads 131
452 Learning the History of a Tuscan Village: A Serious Game Using Geolocation Augmented Reality
Authors: Irene Capecchi, Tommaso Borghini, Iacopo Bernetti
Abstract:
An important tool for the enhancement of cultural sites is the serious game (SG), i.e., a game designed for educational purposes; SGs are applied in cultural sites through trivia, puzzles, and mini-games for participation in interactive exhibitions, mobile applications, and simulations of past events. The combination of Augmented Reality (AR) and digital cultural content has also produced examples of cultural heritage recovery and revitalization around the world. Through AR, the user perceives the information of the visited place in a more real and interactive way. Another interesting technological development for the revitalization of cultural sites is the combination of AR and the Global Positioning System (GPS), which, integrated, can enhance the user's perception of reality by providing historical and architectural information linked to specific locations organized along a route. To the authors’ best knowledge, there are currently no applications that combine GPS-based AR and SG for cultural heritage revitalization. The present research therefore focused on the development of an SG based on GPS and AR. The study area is the village of Caldana in Tuscany, Italy. Caldana is a fortified Renaissance village; the most important architectural elements are the walls, the church of San Biagio, the rectory, and the marquis' palace. The historical information is derived from extensive research by the Department of Architecture at the University of Florence. The storyboard of the SG is based on the history of the three characters who built the village: marquis Marcello Agostini, who was commissioned by Cosimo I de Medici, Grand Duke of Tuscany, to build the village; his son Ippolito; and his architect Lorenzo Pomarelli. The three historical characters were modeled in 3D using the freeware MakeHuman and imported into Blender and Mixamo to associate a skeleton and blend shapes, providing gestural animations and lip movement during speech.
The Unity Rhubarb Lip Syncer plugin was used for the lip sync animation, and the historical costumes were created with Marvelous Designer. The application was developed using the Unity 3D graphics and game engine; the AR+GPS Location plugin was used to position the 3D historical characters based on GPS coordinates, and the AR Foundation library was used to display AR content. The SG is available in two versions, for children and for adults. The children's version is a hunt for a digital treasure consisting of valuable items and historical rarities: players must find nine village locations where 3D AR models of historical figures explaining the history of the village provide clues. To stimulate players, there are three levels of rewards, one for every three clues discovered; the rewards consist of AR masks of an archaeologist, a professor, and an explorer. In the adult version, the SG consists of finding the 16 historical landmarks in the village and learning historical and architectural information in an interactive and engaging way. The application is being tested on a sample of adults and children. Test subjects will be surveyed on a Likert scale to find out their perceptions of using the app and to compare the learning experience between the guided tour and interaction with the app.
Keywords: augmented reality, cultural heritage, GPS, serious game
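A geolocation AR game of this kind hinges on detecting when the player's GPS fix is close enough to a landmark to spawn the AR character, which the AR+GPS Location plugin handles inside Unity. The underlying check can be sketched with the haversine great-circle distance; the coordinates and the 15 m trigger radius below are assumptions for illustration, not values from the paper:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in metres between two (latitude, longitude)
    # fixes in decimal degrees, using a mean Earth radius of 6371 km.
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def clue_unlocked(player, landmark, radius_m=15.0):
    # Spawn the AR character once the player's fix is within radius_m of
    # the landmark; the radius absorbs typical consumer GPS error.
    return haversine_m(*player, *landmark) <= radius_m
```

The game loop would poll the device's GPS fix and call `clue_unlocked` against each of the nine (children's version) or sixteen (adult version) landmark coordinates.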
Procedia PDF Downloads 95
451 Monitoring of 53 Contaminants of Emerging Concern: Occurrence in Effluents, Sludges, and Surface Waters Upstream and Downstream of 7 Wastewater Treatment Plants
Authors: Azziz Assoumani, Francois Lestremau, Celine Ferret, Benedicte Lepot, Morgane Salomon, Helene Budzinski, Marie-Helene Devier, Pierre Labadie, Karyn Le Menach, Patrick Pardon, Laure Wiest, Emmanuelle Vulliet, Pierre-Francois Staub
Abstract:
Seven French wastewater treatment plants (WWTPs) were monitored for 53 contaminants of emerging concern within a nation-wide monitoring campaign in surface waters, which took place in 2018. The overall objective of the 2018 campaign was to provide monitoring data for the prioritization exercise on emerging substances being carried out in 2021. This exercise should make it possible to update the list of relevant substances to be monitored (SPAS) as part of future Water Framework Directive monitoring programmes, which will be implemented in the next water body management cycle (2022). One sampling campaign was performed in October 2018 at the seven WWTPs, where effluent and sludge samples were collected. Surface water samples were collected in September 2018 at three to five sites upstream and downstream of the point of effluent discharge of each WWTP. The contaminants (36 biocides and 17 surfactants, selected by the Prioritization Experts Committee) were determined in the seven WWTP effluent and sludge samples and in the surface water samples by liquid or gas chromatography coupled with tandem mass spectrometry, depending on the contaminant. Nine surfactants and three biocides were quantified in at least one WWTP effluent sample. Linear alkylbenzene sulfonic acids (LAS) and fipronil were quantified in all samples; the LAS were quantified at the highest median concentrations. Twelve surfactants and 13 biocides were quantified in at least one sludge sample. The LAS and didecyldimethylammonium were quantified in all samples and at the highest median concentrations. Higher concentration levels of the substances quantified in WWTP effluent samples were observed in the surface water samples collected downstream of the effluent discharge points, compared with the samples collected upstream, suggesting a contribution of the WWTP effluents to the contamination of surface waters.
Keywords: contaminants of emerging concern, effluent, monitoring, river water, sludge
Procedia PDF Downloads 146
450 Arc Plasma Thermochemical Preparation of Coal to Effective Combustion in Thermal Power Plants
Authors: Vladimir Messerle, Alexandr Ustimenko, Oleg Lavrichshev
Abstract:
This work presents a plasma technology for solid fuel ignition and combustion. Plasma activation promotes more effective and environmentally friendly ignition and combustion of low-rank coal. To realise this technology at coal-fired power plants, plasma-fuel systems (PFS) were developed. PFS improve the efficiency of power coal combustion and decrease harmful emissions. A PFS is a pulverized coal burner equipped with an arc plasma torch, which is the main element of the system. The plasma-forming gas is air; it is blown through the electrodes, forming a plasma flame with a temperature of 5000 to 6000 K. Plasma torch power varies from 100 to 350 kW, with a height of 0.4-0.5 m and a diameter of 0.2-0.25 m. The basis of the PFS technology is plasma thermochemical preparation of coal for burning. It consists of heating the pulverized coal and air mixture by arc plasma up to the temperature of coal volatiles release and partial gasification of the char carbon. In the PFS the coal-air mixture is deficient in oxygen, so carbon is oxidised mainly to carbon monoxide. As a result, at the PFS exit a highly reactive mixture is formed of combustible gases and partially burned char particles, together with products of combustion, while the temperature of the gaseous mixture is around 1300 K. Further mixing with air promotes intensive ignition and complete combustion of the prepared fuel. PFS have been tested for boiler start-up and pulverized coal flame stabilization in different countries on power boilers of 75 to 950 t/h steam productivity, equipped with different types of pulverized coal burners (direct flow, muffle, and swirl burners). In the PFS tests, power coals of all ranks (lignite, bituminous, anthracite, and their mixtures) were incinerated; their volatile content ranged from 4 to 50%, ash from 15 to 48%, and heat of combustion from 1600 to 6000 kcal/kg.
To show the advantages of the plasma technology over conventional coal combustion technologies, a numerical investigation of plasma ignition, gasification, and thermochemical preparation of pulverized coal for incineration in an experimental furnace with a heat capacity of 3 MW was carried out. Two computer codes were used for the research. The computer simulation experiments were conducted for a low-rank bituminous coal of 44% ash content. The boiler operation was studied in the conventional combustion mode and with arc plasma activation of coal combustion. The experiments and computer simulations showed the ecological efficiency of the plasma technology: when a plasma torch operates in the regime of plasma stabilization of the pulverized coal flame, NOx emission is halved and the amount of unburned carbon is reduced by a factor of four. Acknowledgement: This work was supported by the Ministry of Education and Science of the Republic of Kazakhstan and the Ministry of Education and Science of the Russian Federation (Agreement on grant No. 14.613.21.0005, project RFMEFI61314X0005).
Keywords: coal, ignition, plasma-fuel system, plasma torch, thermal power plant
Procedia PDF Downloads 278
449 Municipal Asset Management Planning 2.0 – A New Framework for Policy and Program Design in Ontario
Authors: Scott R. Butler
Abstract:
Ontario, Canada’s largest province, is in the midst of an interesting experiment in mandated asset management planning for local governments. At the beginning of 2021, Ontario’s 444 municipalities were responsible for managing 302,864 lane kilometers of roads with a replacement cost of $97.545 billion CDN. Roadways are by far the most complex, expensive, and extensive assets that a municipality is responsible for overseeing. Since adopting Ontario Regulation 588/17: Asset Management Planning for Municipal Infrastructure in 2017, the provincial government has established prescriptions for local road authorities regarding asset categories and the levels of service being provided. The regulation further stipulates that asset data such as extent, condition, and life cycle costing are to be captured in a manner compliant with qualitative descriptions and technical metrics. The Ontario Good Roads Association undertook an exercise to aggregate the road-related data contained within the 444 asset management plans that municipalities have filed with the provincial government. This analysis concluded that Ontario municipal roadways collectively carry $34.7 billion CDN in deferred maintenance. The ill state of repair of Ontario municipal roads has lasting implications for the province’s economic competitiveness and has garnered considerable political attention. Municipal efforts to address the maintenance backlog are stymied by the extremely limited fiscal parameters within which Ontario municipalities must operate. Further exacerbating the problem are provincially designed programs that are ineffective, administratively burdensome, and not necessarily aligned with local priorities or strategies. 
This paper addresses how municipal asset management plans – and more specifically, the data contained in these plans – can be used to design innovative policy frameworks, flexible funding programs, and new levels of service that respond to these funding challenges, as well as to emerging issues such as local economic development and climate change. Fully unlocking the potential of Ontario Regulation 588/17 will require a resolute commitment to data standardization and horizontal collaboration between municipalities within regions.
Keywords: transportation, municipal asset management, subnational policy design, subnational funding program design
Procedia PDF Downloads 94
448 The Study on How Outward Direct Investment of Chinese MNEs to the European Union Area Affects the Domestic Industrial Structure
Authors: Nana Weng
Abstract:
Since 2008, Chinese foreign direct investment flows to the European Union have continued their rapid rise. Currently, industrial structure adjustment in developing countries also depends on the international movement of factors of production, and China's economy is in an important period of transformation and industrial structure adjustment. Against the background of international industry transfer, industrial structure upgrading and sophistication are key elements of a successful economic transformation. In order to achieve a virtuous cycle of foreign investment patterns and optimize the industrial structure of foreign direct investment, research on the positive role of EU direct investment, and on how it impacts the optimization and upgrading of China's industrial structure, is of great significance. In this paper, the author explained how the EU as an investment destination differs from the United States and ASEAN. Then, based on the theory of FDI and industrial structure, and combining the four kinds of motives of China's ODI in the EU, this paper explained the impact mechanism, which has influenced China's domestic industrial structure primarily through the transfer effect, correlation effect and competitive effect. On the premise that FDI activities do affect the home country's domestic industrial structure, this paper made an empirical analysis with industrial panel data. With the help of the Gray Correlation Method and Limited Distributed Lags, this paper found that China's ODI in the EU impacted the tertiary industry strongly and had a significant positive impact, particularly on the manufacturing industry and the financial industry. This paper also pointed out several issues that Chinese MNEs should bear in mind, such as paying more attention to high-tech industries so that they can make the best use of reverse technology spillover. 
When Chinese enterprises 'go out', they ought to keep in mind that domestic research and development capital contribution can generate greater economic growth. Finally, based on the theoretical and empirical analysis results, this paper presents industry choice recommendations for future EU direct investment, in particular through the development of proper, rational industrial policies and industrial development strategies to guide industrial restructuring and upgrading.
Keywords: China ODI in the European Union, industrial structure optimization, impact mechanism, empirical analysis
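The Gray Correlation Method (grey relational analysis) used in the empirical work above can be sketched briefly. This is a minimal, hedged illustration assuming min-max normalisation and the conventional distinguishing coefficient rho = 0.5; the series below are invented placeholders, not the paper's industrial panel data.

```python
# Grey relational analysis sketch: how strongly does a comparison series
# track the shape of a reference series? Values are illustrative only.

def normalize(series):
    """Min-max scale a series to [0, 1] (assumes a non-constant series)."""
    lo, hi = min(series), max(series)
    return [(x - lo) / (hi - lo) for x in series]

def grey_relational_grade(reference, comparison, rho=0.5):
    """Average grey relational coefficient between two series."""
    ref, cmp_ = normalize(reference), normalize(comparison)
    deltas = [abs(r - c) for r, c in zip(ref, cmp_)]
    d_min, d_max = min(deltas), max(deltas)
    if d_max == 0:  # identical shapes after normalisation
        return 1.0
    coeffs = [(d_min + rho * d_max) / (d + rho * d_max) for d in deltas]
    return sum(coeffs) / len(coeffs)

# A series with the same shape as the reference gets grade 1.0; a
# reversed series gets a lower grade.
same = grey_relational_grade([1, 2, 3, 4], [2, 4, 6, 8])
reversed_ = grey_relational_grade([1, 2, 3, 4], [4, 3, 2, 1])
```

In a study like the one above, the grades of each industry's output series against the ODI series would be ranked to identify which sectors move most closely with outward investment.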
Procedia PDF Downloads 319
447 Comparison of Physical and Chemical Effects on Senescent Cells
Authors: Svetlana Guryeva, Inna Kornienko, Andrey Usanov, Dmitry Usanov, Elena Petersen
Abstract:
Every day, cells in our organism are exposed to various factors: chemical agents, reactive oxygen species, ionizing radiation, and others. These factors can cause damage to DNA, the cellular membrane, intracellular compartments, and proteins. The fate of cells depends on the exposure intensity and duration. Prolonged and intense exposure causes irreversible damage accumulation, which triggers permanent cell cycle arrest (cellular senescence) or cell death programs. Low-dose exposure, in contrast, can lead to cell renovation and to improvement of the cell's functional state. It is therefore a pivotal question which factors and doses produce the described positive effects. In order to estimate the influence of different agents, the proliferation index and the levels of cell death markers (annexin V/propidium iodide), senescence-associated β-galactosidase, and lipofuscin were measured. The experiments were conducted on primary human fibroblasts of the 8th passage; according to the levels of the mentioned markers, these cells were defined as senescent. The effect of a low-frequency magnetic field was investigated, and different modes of magnetic field exposure were tested. The physical agents were compared with chemical agents: metformin (10 mM) and taurine (0.8 mM and 1.6 mM), with which cells were incubated for 5 days. The highest decrease in the levels of senescence-associated β-galactosidase (21%) and lipofuscin (17%) was observed in the primary senescent fibroblasts 5 days after double treatment, at a 48 h interval, with the low-frequency magnetic field. There were no significant changes in the proliferation index after magnetic field application, and no cytotoxic effect of the magnetic field was observed. The chemical agent taurine (1.6 mM) decreased the levels of senescence-associated β-galactosidase (23%) and lipofuscin (22%). 
Metformin improved the activity of senescence-associated β-galactosidase by 15% and the level of lipofuscin by 19% in this experiment. According to these results, the effect of double treatment at a 48 h interval with the low-frequency magnetic field and the effect of taurine (1.6 mM) were comparable to the effect of metformin, whose anti-aging properties are proven. In conclusion, this study can become the first step towards the creation of a standardized system for the investigation of different effects on senescent cells.
Keywords: biomarkers, magnetic field, metformin, primary fibroblasts, senescence, taurine
Procedia PDF Downloads 280
446 Disability in the Course of a Chronic Disease: The Example of People Living with Multiple Sclerosis in Poland
Authors: Milena Trojanowska
Abstract:
Disability is a phenomenon for which meanings and definitions have evolved over the decades. This became the trigger to start a project answering the question of what disability constitutes in the course of an incurable chronic disease. The chosen research group is people living with multiple sclerosis. The contextual phase of the research was participant observation at the Polish Multiple Sclerosis Society, the largest NGO in Poland supporting people living with MS and their relatives. The research techniques used in the project are (in order of implementation): group interviews with people living with MS and their relatives, narrative interviews, the asynchronous technique, and participant observation during events organised for people living with MS and their relatives. The researcher is currently conducting follow-up interviews, as inaccuracies in the respondents' narratives were identified during the data analysis. Interviews and supplementary research techniques were used over the four years of the research, and the researcher also benefited from experience gained from 12 years of working with NGOs (diaries, notes). The research was carried out in Poland with the participation of people living in this country only. The research has been based on grounded theory methodology in the constructivist perspective developed by Kathy Charmaz. The goal was to follow the idea that research must be reliable, original, and useful. The aim was to construct an interpretive theory that assumes the temporality and processuality of social life. The Atlas.ti software was used to collect and analyse the research material. 
It is a program from the CAQDAS (Computer-Assisted Qualitative Data Analysis Software) group. Several key factors influencing the construction of a disability identity by people living with multiple sclerosis were identified:
- the course of interaction with significant relatives,
- the expectation of identification with disability (expressed by close relatives),
- economic profitability (pension, allowances),
- institutional advantages (e.g. parking card),
- independence and autonomy (not equated with physical condition, but with access to adapted infrastructure and resources to support daily functioning),
- the way a person with MS construes the meaning of disability,
- physical and mental state,
- medical diagnosis of illness.
In addition, it has been shown that assuming the experience of disability in the course of MS is a form of cognitive reductionism leading to further phenomena, such as the expectation that the person with MS will construct a social identity as a person with a disability (e.g. giving up work) and the occurrence of institutional inequalities. It can also be a determinant of the choice of a life strategy that limits social and individual functioning, even if this necessity is not influenced by the person's physical or psychological condition. The results of the research are important for the development of knowledge about the phenomenon of disability. They indicate the contextuality and complexity of the disability phenomenon, which in the light of the research is a set of different phenomena of heterogeneous nature and multifaceted causality. This knowledge can also be useful for institutions and organisations in the non-governmental sector supporting people with disabilities and people living with multiple sclerosis.
Keywords: disability, multiple sclerosis, grounded theory, Poland
Procedia PDF Downloads 106
445 Biodiesel Production from Edible Oil Wastewater Sludge with Bioethanol Using Nano-Magnetic Catalysis
Authors: Wighens Ngoie Ilunga, Pamela J. Welz, Olewaseun O. Oyekola, Daniel Ikhu-Omoregbe
Abstract:
Currently, most sludge from the wastewater treatment plants of edible oil factories is disposed of in landfills, but landfill sites are finite and potential sources of environmental pollution. Production of biodiesel from wastewater sludge can contribute to energy production and waste minimization. However, conventional biodiesel production is energy and waste intensive. Generally, biodiesel is produced from the transesterification reaction of oils with an alcohol (i.e., methanol or ethanol) in the presence of a catalyst. Homogeneously catalysed transesterification is the conventional approach for large-scale production of biodiesel, as reaction times are relatively short. Nevertheless, homogeneous catalysis presents several challenges, such as a high probability of soap formation. The current study aimed to reuse wastewater sludge from the edible oil industry as a novel feedstock for both monounsaturated fats and bioethanol for the production of biodiesel. Preliminary results have shown that the fatty acid profile of the oilseed wastewater sludge is favourable for biodiesel production, with 48% (w/w) monounsaturated fats, and that the residue left after the extraction of fats from the sludge contains sufficient fermentable sugars, after steam explosion followed by enzymatic hydrolysis, for the successful production of bioethanol [29% (w/w)] using a commercial strain of Saccharomyces cerevisiae. A novel nano-magnetic catalyst was synthesised, using a modified sol-gel method, from alkaline mineral processing tailings mainly containing dolomite and originating from cupriferous ores. The catalyst's elemental chemical composition and structural properties were characterised by X-ray diffraction (XRD), scanning electron microscopy (SEM), Fourier transform infrared spectroscopy (FTIR), and BET analysis, which gave a surface area of 14.3 m²/g and an average pore diameter of 34.1 nm. The mass magnetization of the nano-magnetic catalyst was 170 emu/g. Both the catalytic properties and the reusability of the catalyst were investigated. 
A maximum biodiesel yield of 78% was obtained, which dropped to 52% after the fourth transesterification reaction cycle. The proposed approach has the potential to reduce material costs, energy consumption and water usage associated with conventional biodiesel production technologies. It may also mitigate the impact of conventional biodiesel production on food and land security, while simultaneously reducing waste.
Keywords: biodiesel, bioethanol, edible oil wastewater sludge, nano-magnetism
Procedia PDF Downloads 145
444 Multilocal Youth and the Berlin Digital Industry: Productive Leisure as a Key Factor in European Migration
Authors: Stefano Pelaggi
Abstract:
The research is focused on youth labor and mobility in Berlin. Mobility has become a common denominator in our daily lives, but it is not primarily driven by monetary incentives. Labor, knowledge and leisure overlap on this point, as cities try to attract people who could participate in the production of innovations while the new migrants experience the lifestyle of the host cities. The research will present a project of empirical study focused on Italian workers in the digital industry in Berlin, trying to underline the connection of pleasure and leisure with the choice of a life abroad. Berlin has become the epicenter of the European Internet start-up scene, but people suited to work in digital industries are not moving to Berlin to make a career; most of them are attracted to the city for different reasons. This point makes a clear exception to traditional migration flows, which have always originated from a specific search for employment opportunities, or from strong ties, usually families, in a place that could guarantee success in finding a job. Even skilled migration has always originated from a specific need: finding the right path to a successful professional life. In a society where a lack of free time in our calendar seems to be something to be ashamed of, the actors of youth mobility incorporate categories of experiential tourism within their own life paths. The professional aspirations and lifestyle choices of the protagonists of youth mobility are geared towards meeting the desires and aspirations that define leisure. While most creative workplaces, in particular digital industries, use the category of fun as a primary element of corporate policy, virtually extending working time to the whole day, more and more people around the world decide their path in life and their career choices on the basis of indicators linked to the realization of the self, which may include factors like a warm climate or cultural environment. 
All of these are indicators usually eradicated from the hegemonic approach to labor. The interpretative framework commonly used seems to be mostly focused on a dualism between Florida's theories and those who highlight the absence of conflict in his studies. While the flexibility of the new creative industries is minimizing leisure, incorporating elements of leisure itself into work activities, more people choose their own path of life by placing great importance on basic needs, through a gaze on pleasure that is only partially driven by consumption. Multi localism is the co-existence of different identities and cultures that do not conflict because they reject the bind to territory. The local loses its strength of opposition to the global, with an attenuation of the whole concept of citizenship, territory and even integration. A similar perspective could be useful in the search for a new approach in studies dedicated to the gentrification process, while studying the new migration flows.
Keywords: brain drain, digital industry, leisure and gentrification, multi localism
Procedia PDF Downloads 243
443 A Literature Review and a Proposed Conceptual Framework for Learning Activities in Business Process Management
Authors: Carin Lindskog
Abstract:
Introduction: Long-term success requires an organizational balance between continuity (exploitation) and change (exploration). The problem of balancing exploitation and exploration is a common issue in studies of organizational learning. In order to better face tough competition in the face of change, organizations need to exploit their current business and explore new business fields by developing new capabilities. The purpose of this work in progress is to develop a conceptual framework to shed light on the relevance of 'learning activities', i.e., exploitation and exploration, on different levels. The research questions that will be addressed are as follows: What sort of learning activities are found in the Business Process Management (BPM) field? How can these activities be linked to the individual level, group level, and organizational level? In the work, a literature review will first be conducted. This review will explore the status of learning activities in the BPM field. An outcome of the literature review will be a conceptual framework of learning activities based on the included publications. The learning activities will be categorized according to whether they focus on exploitation, exploration or both, and according to the levels of individual, group, and organization. The proposed conceptual framework will be a valuable tool for analyzing the research field as well as for identifying future research directions. Related Work: BPM has increased in popularity as a way of working to strengthen the quality of work and meet the demands of efficiency. Despite this rise in popularity, more and more organizations are reporting BPM failures. One reason for this is the lack of knowledge about the extended scope of BPM to other business contexts that include, for example, more creative business fields. Yet another reason for the failures is the fact that employees are resistant to change. 
The learning process in an organization is an ongoing cycle of reflection and action and is a process that can be initiated, developed and practiced. Furthermore, organizational learning is multilevel; therefore the theory of organizational learning needs to consider the individual, the group, and the organization level. Learning happens over time and across levels, but it also creates a tension between incorporating new learning (feed-forward) and exploiting or using what has already been learned (feedback). Through feed-forward processes, new ideas and actions move from the individual to the group to the organization level. At the same time, what has already been learned feeds back from the organization to the group to the individual and has an impact on how people act and think.
Keywords: business process management, exploitation, exploration, learning activities
Procedia PDF Downloads 124
442 Contextual Toxicity Detection with Data Augmentation
Authors: Julia Ive, Lucia Specia
Abstract:
Understanding and detecting toxicity is an important problem in supporting safer human interactions online. Our work focuses on contextual toxicity detection, where automated classifiers are tasked with determining whether a short textual segment (usually a sentence) is toxic within its conversational context. We use “toxicity” as an umbrella term for a number of variants commonly named in the literature, including hate, abuse and offence, among others. Detecting toxicity in context is a non-trivial problem and has been addressed by very few previous studies. These studies analysed the influence of conversational context on human perception of toxicity in controlled experiments and concluded that humans rarely change their judgements in the presence of context. They also evaluated contextual detection models based on state-of-the-art Deep Learning and Natural Language Processing (NLP) techniques. Counterintuitively, they reached the general conclusion that computational models tend to suffer performance degradation in the presence of context. We challenge these empirical observations by devising better contextual predictive models that also rely on NLP data augmentation techniques to create larger and better data. In our study, we start by further analysing the human perception of toxicity in conversational data (i.e., tweets), in the absence versus presence of context, in this case, previous tweets in the same conversational thread. We observed that the conclusions of previous work on human perception are mainly due to data issues: the contextual data available does not provide sufficient evidence that context is indeed important (even for humans). The data problem is common in current toxicity datasets: cases labelled as toxic are either obviously toxic (i.e., overt toxicity with swear, racist, etc. 
words), and thus context is not needed for a decision, or are ambiguous, vague or unclear even in the presence of context; in addition, the data contains labelling inconsistencies. To address this problem, we propose to automatically generate contextual samples where toxicity is not obvious (i.e., covert cases) without context, or where different contexts can lead to different toxicity judgements for the same tweet. We generate toxic and non-toxic utterances conditioned on the context or on target tweets using a range of techniques for controlled text generation (e.g., Generative Adversarial Networks and steering techniques). On the contextual detection models, we posit that their poor performance is due to limitations in both the data they are trained on (the same problems stated above) and the architectures they use, which are not able to leverage context in effective ways. To improve on that, we propose text classification architectures that take the hierarchy of conversational utterances into account. In experiments benchmarking our models against previous ones on existing and automatically generated data, we show that both data and architectural choices are very important. Our model achieves substantial performance improvements compared to baselines that are non-contextual, or contextual but agnostic of the conversation structure.
Keywords: contextual toxicity detection, data augmentation, hierarchical text classification models, natural language processing
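The hierarchical idea above can be sketched in miniature: encode each utterance separately, pool the context utterances into a single vector, and score the target jointly with its context. This is a toy, hedged illustration, not the authors' architecture; the vocabulary, embeddings and weights are invented stand-ins for parameters a real model would learn.

```python
# Two-level hierarchical scorer: token -> utterance -> conversation.
# The same target sentence can flip toxic/non-toxic depending on context,
# which is the behaviour contextual detection models aim to capture.
import math

VOCAB = {"you": 0, "are": 1, "awful": 2, "great": 3, "so": 4}
EMBED = [[1.0, 0.0], [0.0, 1.0], [-2.0, 0.5], [2.0, 0.5], [0.1, 0.1]]
W = [-1.0, 0.0, -1.0, 0.0]  # toy weights over [target vec; context vec]

def encode_utterance(tokens):
    """Level 1: mean-pool token embeddings into an utterance vector."""
    vecs = [EMBED[VOCAB[t]] for t in tokens if t in VOCAB]
    if not vecs:
        return [0.0, 0.0]
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(2)]

def encode_context(utterances):
    """Level 2: mean-pool utterance vectors into one context vector."""
    vecs = [encode_utterance(u) for u in utterances] or [[0.0, 0.0]]
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(2)]

def toxicity_score(target_tokens, context_utterances):
    """Sigmoid score of the target given its conversational context."""
    feats = encode_utterance(target_tokens) + encode_context(context_utterances)
    logit = sum(w * f for w, f in zip(W, feats))
    return 1.0 / (1.0 + math.exp(-logit))

# Same target, different contexts, different judgements:
target = ["you", "are", "awful"]
in_hostile_thread = toxicity_score(target, [["you", "are", "awful"]])
in_friendly_thread = toxicity_score(target, [["so", "great"]])
```

A real system would replace the mean-pooled bag-of-embeddings with learned sentence and conversation encoders, but the two-level structure is the point.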
Procedia PDF Downloads 170
441 Multi-Omics Integrative Analysis Coupled to Control Theory and Computational Simulation of a Genome-Scale Metabolic Model Reveal Controlling Biological Switches in Human Astrocytes under Palmitic Acid-Induced Lipotoxicity
Authors: Janneth Gonzalez, Andrés Pinzon Velasco, Maria Angarita
Abstract:
Astrocytes play an important role in various processes in the brain, including pathological conditions such as neurodegenerative diseases. Recent studies have shown that an increase in saturated fatty acids such as palmitic acid (PA) triggers pro-inflammatory pathways in the brain. The use of synthetic neurosteroids such as tibolone has demonstrated neuroprotective mechanisms. However, broad studies with a systemic point of view on the neurodegenerative role of PA and the neuroprotective mechanisms of tibolone are lacking. In this study, we performed the integration of multi-omic data (transcriptome and proteome) into a human astrocyte genome-scale metabolic model to study the astrocytic response during palmitate treatment. We evaluated metabolic fluxes in three scenarios (healthy, induced inflammation by PA, and tibolone treatment under PA inflammation). We also applied a control theory approach to identify the reactions that exert the most control in the astrocytic system. Our results suggest that PA generates a modulation of central and secondary metabolism, showing a switch in energy source use through inhibition of the folate cycle and fatty acid β-oxidation and upregulation of ketone body formation. We found 25 metabolic switches under PA-mediated cellular regulation, 9 of which were critical only in the inflammatory scenario but not in the protective tibolone one. Within these reactions, inhibitory, total, and directional coupling profiles were key findings, playing a fundamental role in the (de)regulation of metabolic pathways that may increase neurotoxicity and represent potential treatment targets. 
Finally, the overall framework of our approach facilitates the understanding of complex metabolic regulation, and it can be used for in silico exploration of the mechanisms of astrocytic cell regulation, directing more complex future experimental work in neurodegenerative diseases.
Keywords: astrocytes, data integration, palmitic acid, computational model, multi-omics
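Evaluating fluxes in a genome-scale metabolic model typically means solving a flux balance analysis (FBA) linear program: maximise an objective flux subject to steady-state mass balance and reaction bounds. The sketch below is a hedged toy illustration of that computation; the three-reaction network, its bounds, and the use of scipy are assumptions for illustration and are unrelated to the astrocyte model in the study.

```python
# Toy flux balance analysis: maximise biomass flux v3 subject to
# S @ v = 0 (steady state for internal metabolites) and flux bounds.
from scipy.optimize import linprog

# Reactions: R1 (uptake -> A), R2 (A -> B), R3 (B -> biomass, objective).
S = [
    [1, -1, 0],   # metabolite A: produced by R1, consumed by R2
    [0, 1, -1],   # metabolite B: produced by R2, consumed by R3
]
bounds = [(0, 10), (0, 100), (0, 100)]  # uptake flux capped at 10 units

def maximize_biomass():
    # linprog minimizes, so negate the biomass coefficient to maximize v3
    res = linprog(c=[0, 0, -1], A_eq=S, b_eq=[0, 0], bounds=bounds)
    return list(res.x)  # optimal flux distribution [v1, v2, v3]

fluxes = maximize_biomass()  # the linear chain pins all fluxes to the cap
```

Comparing such optimal flux distributions under different constraint sets (healthy vs. PA-inflamed vs. tibolone-treated) is how scenario-dependent metabolic switches like those reported above are identified.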
Procedia PDF Downloads 97
440 Audio-Visual Co-Data Processing Pipeline
Authors: Rita Chattopadhyay, Vivek Anand Thoutam
Abstract:
Speech is the most acceptable means of communication, allowing us to quickly exchange feelings and thoughts. Quite often, people can communicate orally but cannot interact or work with computers or devices. It is easier and quicker to give speech commands than to type commands to computers, and likewise easier to listen to audio played from a device than to extract output from computers or devices. Especially with robotics being an emerging market, with applications in warehouses, the hospitality industry, consumer electronics, assistive technology, etc., speech-based human-machine interaction is emerging as a lucrative feature for robot manufacturers. Considering this, the objective of this paper is to design the “Audio-Visual Co-Data Processing Pipeline.” This pipeline is an integrated version of automatic speech recognition, a natural language model for text understanding, object detection, and text-to-speech modules. There are many Deep Learning models for each of the modules mentioned above, but OpenVINO Model Zoo models are used because the OpenVINO toolkit covers both computer vision and non-computer vision workloads across Intel hardware, maximizes performance, and accelerates application development. A speech command is given as input that contains information about the target objects to be detected and the start and end times of the interval to extract from the video. Speech is converted to text using the QuartzNet automatic speech recognition model. A summary is extracted from the text using the natural language model Generative Pre-Trained Transformer-3 (GPT-3). Based on the summary, essential frames are extracted from the video, and the You Only Look Once (YOLO) object detection model detects objects in these extracted frames. Frame numbers that contain target objects (the objects specified in the speech command) are saved as text. 
Finally, this text (the frame numbers) is converted to speech using a text-to-speech model and played from the device. This project is developed for the 80 YOLO labels, and the user can extract frames based on one or two target labels. The pipeline can easily be extended to more than two target labels by making appropriate changes in the object detection module. The project supports four different speech command formats by including sample examples in the prompt used by the GPT-3 model. Based on user preference, one can add a new speech command format by including some examples of the respective format in the GPT-3 prompt. This pipeline can be used in many projects, such as human-machine interfaces, human-robot interaction, and surveillance through speech commands. All object detection projects can be upgraded using this pipeline so that one can give speech commands and the output is played from the device.
Keywords: OpenVINO, automatic speech recognition, natural language processing, object detection, text to speech
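The control flow of the stages described above can be sketched as a small orchestration function. This is a hedged sketch only: the stage functions are stubs standing in for the real OpenVINO/QuartzNet/GPT-3/YOLO/TTS models, and the command text, frame labels and stub outputs are fabricated for illustration.

```python
# Pipeline skeleton: speech command -> text -> summary -> frame selection
# -> object detection -> spoken result. Each stage is injected as a
# callable so model backends can be swapped without changing the flow.

def run_pipeline(audio_command, video_frames, asr, summarize, detect, tts):
    """Chain the stages and report which frames contain the targets."""
    command_text = asr(audio_command)              # ASR stage (QuartzNet role)
    targets, start, end = summarize(command_text)  # NLP stage (GPT-3 role)
    selected = video_frames[start:end]             # clip the requested interval
    hits = [i for i, frame in enumerate(selected, start)
            if targets & detect(frame)]            # detection stage (YOLO role)
    return tts(f"Objects found in frames: {hits}") # TTS stage

# Stub stages for demonstration; a frame is represented by its label.
frames = ["sky", "dog", "cat", "dog"]
spoken = run_pipeline(
    b"",  # raw audio placeholder
    frames,
    asr=lambda audio: "find dogs between 1 and 4",
    summarize=lambda text: ({"dog"}, 1, 4),
    detect=lambda frame: {frame},
    tts=lambda text: text,
)
```

With the stubs above, `spoken` reports frames 1 and 3, mirroring the "frame numbers with target objects" output the pipeline plays back to the user.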
Procedia PDF Downloads 80
439 Effect of Pioglitazone on Intracellular Na+ Homeostasis in Metabolic Syndrome-Induced Cardiomyopathy in Male Rats
Authors: Ayca Bilginoglu, Belma Turan
Abstract:
Metabolic syndrome is associated with impaired blood glucose levels, insulin resistance, and dyslipidemia caused by abdominal obesity. It is also related to cardiovascular risk accumulation and cardiomyopathy. The aim of this study was to examine the effect of thiazolidinediones such as pioglitazone, a widely used insulin-sensitizing agent that improves glycemic control, on intracellular Na+ homeostasis in metabolic syndrome-induced cardiomyopathy in male rats. Male Wistar-Albino rats were randomly divided into three groups: control (Con, n=7), metabolic syndrome (MetS, n=7), and pioglitazone-treated metabolic syndrome (MetS+PGZ, n=7). Metabolic syndrome was induced by providing drinking water containing 32% sucrose for 18 weeks. All of the animals were exposed to a 12 h light, 12 h dark cycle. Abdominal obesity and glucose intolerance were measured as markers of metabolic syndrome. Intracellular Na+ ([Na+]i) is an important modulator of excitation-contraction coupling in the heart. [Na+]i at rest and [Na+]i during pacing with electrical field stimulation at 0.2 Hz, 0.8 Hz, and 2.0 Hz stimulation frequencies were recorded in cardiomyocytes. Na+ channel current (INa) density and its I-V curve were also measured to characterize [Na+]i homeostasis. In results, high sucrose intake, alongside the normal daily diet, significantly increased the body mass and blood glucose level of the rats in the metabolic syndrome group as compared with the untreated control group. In the MetS+PGZ group, the blood glucose level and body mass tended to decrease towards those of the Con group. There was a decrease in INa density, and there was a shift in both the activation and inactivation curves of INa; pioglitazone reversed the shift towards the control values. Basal [Na+]i in the MetS and Con groups was not significantly different, but there was a significant increase in [Na+]i in stimulated cardiomyocytes in the MetS group. 
Furthermore, pioglitazone had no effect on basal [Na+]i, but it reversed the increase in [Na+]i in stimulated cardiomyocytes to that of the Con group. The results of the present study suggest that pioglitazone has a significant effect on Na+ homeostasis in metabolic syndrome-induced cardiomyopathy in rats. All animal procedures and experiments were approved by the Animal Ethics Committee of Ankara University Faculty of Medicine (2015-2-37).
Keywords: insulin resistance, intracellular sodium, metabolic syndrome, sodium current
438 Computer Aided Design Solution Based on Genetic Algorithms for FMEA and Control Plan in Automotive Industry
Authors: Nadia Belu, Laurenţiu Mihai Ionescu, Agnieszka Misztal
Abstract:
The automotive industry is one of the most important industries in the world, affecting not only the economy but also world culture. In the present financial and economic context, the field faces new challenges posed by the current crisis: companies must maintain product quality and deliver on time at a competitive price in order to achieve customer satisfaction. Two of the techniques most strongly recommended for product development by the quality-management standards specific to the automotive industry are Failure Mode and Effects Analysis (FMEA) and the Control Plan. FMEA is a methodology for risk management and quality improvement aimed at identifying potential causes of failure in products and processes, quantifying them by risk assessment, ranking the identified problems according to their importance, and determining and implementing the related corrective actions. Companies use Control Plans, built from the FMEA results, to evaluate a process or product for strengths and weaknesses and to prevent problems before they occur. Control Plans are written descriptions of the systems used to control and minimize product and process variation; in addition, they specify the process monitoring and control methods (for example, Special Controls) used to control Special Characteristics. In this paper, we propose a computer-aided solution based on Genetic Algorithms that reduces the effort of drafting the FMEA analysis and Control Plan reports required for a product launch and improves the knowledge available to development teams for future projects. The solution allows the design team to enter the data required for the FMEA. The actual analysis is performed using Genetic Algorithms to find an optimum between the RPN risk factor and the cost of production. A key feature of Genetic Algorithms is that they can be used to find solutions to multi-criteria optimization problems.
In our case, alongside the three specific FMEA risk factors, the reduction of production cost is also considered. The analysis tool generates final reports for all FMEA processes. The data obtained in the FMEA reports are automatically integrated with the other entered parameters in the Control Plan. The solution is implemented as an application running on an intranet across two servers: one containing the analysis and plan-generation engine, and the other containing the database where the initial parameters and results are stored. The results can then be used as starting solutions in the synthesis of other projects. The solution was applied to the welding, laser cutting, and bending processes used to manufacture bus chassis. Its advantages are the efficient elaboration of documents in the current project, by automatically generating FMEA and Control Plan reports using multi-criteria optimization of production, and the building of a solid knowledge base for future projects. The proposed solution is a cheap alternative to other solutions on the market, as it is implemented with Open Source tools.
Keywords: automotive industry, FMEA, control plan, automotive technology
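The abstract does not disclose the encoding or fitness function actually used; a minimal sketch of the idea, assuming a binary genome of corrective-action choices and a weighted-sum fitness over residual RPN and cost (all action names, numbers, and weights below are invented for illustration), might look like this:

```python
import random

# Hypothetical corrective actions: applying action i lowers the process RPN
# by `rpn_cut` but adds `cost`. Values are illustrative, not from the paper.
ACTIONS = [  # (rpn_cut, cost)
    (120, 300), (80, 120), (200, 900), (60, 50), (150, 400), (90, 220),
]
BASE_RPN = 600            # assumed initial Risk Priority Number of the process
W_RPN, W_COST = 1.0, 0.2  # assumed weights for the two criteria

def fitness(genome):
    """Lower is better: weighted sum of residual RPN and total action cost."""
    rpn = BASE_RPN - sum(a[0] for a, g in zip(ACTIONS, genome) if g)
    cost = sum(a[1] for a, g in zip(ACTIONS, genome) if g)
    return W_RPN * max(rpn, 0) + W_COST * cost

def evolve(pop_size=30, generations=60, p_mut=0.1, seed=42):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in ACTIONS] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():  # binary tournament selection
            a, b = rng.sample(pop, 2)
            return a if fitness(a) <= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, len(ACTIONS))        # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [1 - g if rng.random() < p_mut else g for g in child]
            nxt.append(child)
        pop = nxt
    return min(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

A weighted sum is only one way to combine the criteria; a Pareto-based selection (as in NSGA-II) would be a natural alternative for a genuinely multi-criteria formulation.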
437 An Adaptive Conversational AI Approach for Self-Learning
Authors: Airy Huang, Fuji Foo, Aries Prasetya Wibowo
Abstract:
In recent years, the focus of Natural Language Processing (NLP) development has gradually shifted from semantics-based approaches to deep learning, which performs faster with fewer resources. Although it performs well in many applications, the deep learning approach, lacking semantic understanding, has difficulty noticing and expressing a novel business case outside its pre-defined scope. Adapting it to the requirements of a specific robotic service is very labor-intensive and time-consuming: it is difficult to improve the capabilities of a conversational AI in a short time, and even more difficult for it to self-learn from experience in order to deliver the same service in a better way. In this paper, we present an adaptive conversational AI algorithm that combines semantic knowledge and deep learning to address this issue by learning new business cases through conversations. After self-learning from experience, the robot adapts to business cases that were originally out of scope. The idea is to build new or extended robotic services in a systematic and fast-training manner, with self-configured programs and constructed dialog flows. In every cycle in which the chatbot (conversational AI) delivers a given set of business cases, it self-measures its performance and revisits every unknown dialog flow to improve the service by retraining on those new business cases. If the training process reaches a bottleneck or runs into difficulties, human personnel are informed and may intervene: they can retrain the chatbot with newly configured programs, or with new dialog flows for new services. One approach employs semantic analysis to learn the dialogues of new business cases and then establish the necessary ontology for the new service.
With the newly learned programs, it completes its understanding of the reaction behavior and finally uses dialog flows to connect all the understanding results and programs, achieving the goal of the self-learning process. We have developed a chatbot service mounted on a kiosk, with a camera for facial recognition and a directional microphone array for voice capture. The chatbot serves as a concierge, holding polite conversations with visitors. As a proof of concept, we have demonstrated the completion of 90% of reception services with limited self-learning capability.
Keywords: conversational AI, chatbot, dialog management, semantic analysis
436 Transcriptome Sequencing of the Spleens Reveals Genes Involved in Antiviral Response in Chickens Infected with CAstV
Authors: Sajewicz-Krukowska Joanna, Domańska-Blicharz Katarzyna, Tarasiuk Karolina, Marzec-Kotarska Barbara
Abstract:
Astroviral infections pose a significant problem in the poultry industry, leading to multiple adverse effects such as decreased egg production, breeding disorders, poor weight gain, and even increased mortality. The commonly observed chicken astrovirus (CAstV) was recently reported to be responsible for "white chicks syndrome", associated with increased embryo/chick mortality. CAstV-mediated pathogenesis in the chicken arises from complex interactions between the infectious pathogen and the immune system. Many aspects of CAstV-chicken interactions remain unclear, and no information is available regarding gene expression changes in the chicken spleen in response to CAstV infection. We aimed to investigate the molecular background triggered by CAstV infection. Ten 21-day-old SPF White Leghorn chickens were divided into two groups of 5 birds each; one group was inoculated with CAstV, and the other served as the negative control. On the 4th day post infection (dpi), spleen samples were collected and immediately frozen at -70 °C for RNA isolation. We analysed the transcriptional profiles of the chickens' spleens on the 4th day following infection using RNA-seq to establish differentially expressed genes (DEGs), and verified the RNA-seq findings by quantitative real-time PCR (qRT-PCR). A total of 31,959 transcripts were identified in response to CAstV infection. Of these, 45 DEGs (p-value < 0.05; log2 fold change > 1) were recognized in the spleen after CAstV infection (26 upregulated and 19 downregulated). qRT-PCR performed on 4 genes (IFIT5, OASL, RASD1, DDX60) confirmed the RNA-seq results. The top differentially expressed genes belong to novel putative IFN-induced CAstV restriction factors. Most of the DEGs were associated with the RIG-I-like signalling pathway or, more generally, with the innate antiviral response (upregulated: BLEC3, CMPK2, IFIT5, OASL, DDX60, IFI6; downregulated: SPIK5, SELENOP, HSPA2, TMEM158, RASD1, YWHAB).
The study provides a global analysis of the host transcriptional changes that occur during CAstV infection in vivo, and shows that the cell cycle in the spleen and immune signalling in chickens were predominantly affected by CAstV infection.
Keywords: chicken astrovirus, CAstV, RNA-seq, transcriptome, spleen
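The DEG thresholds reported in the abstract (p-value < 0.05, log2 fold change > 1) amount to a simple filter over per-gene statistics. A small sketch of that filter follows; the gene names are taken from the abstract, but all expression values and p-values are invented purely for demonstration:

```python
import math

# Toy per-gene summary statistics (values are illustrative only).
genes = {
    #          mean control, mean infected, p-value
    "IFIT5":  (10.0,   85.0, 0.001),
    "OASL":   (12.0,   70.0, 0.004),
    "DDX60":  (15.0,   64.0, 0.010),
    "RASD1":  (40.0,    9.0, 0.020),
    "ACTB":   (500.0, 520.0, 0.600),  # housekeeping gene, should not pass
}

def differentially_expressed(mean_ctrl, mean_inf, p, p_max=0.05, lfc_min=1.0):
    """Apply the thresholds reported in the abstract:
    p-value < 0.05 and |log2 fold change| > 1 (i.e. more than 2-fold)."""
    lfc = math.log2(mean_inf / mean_ctrl)
    return p < p_max and abs(lfc) > lfc_min

degs = {g: v for g, v in genes.items() if differentially_expressed(*v)}
up   = [g for g, (c, i, p) in degs.items() if i > c]
down = [g for g, (c, i, p) in degs.items() if i < c]
print(sorted(up), sorted(down))
```

In a real RNA-seq workflow the fold changes and p-values would come from a dedicated tool (e.g. DESeq2 or edgeR), with multiple-testing correction applied before thresholding.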
435 A Mother's Silent Adversary: A Case of Pregnant Woman with Cervical Cancer
Authors: Paola Millare, Nelinda Catherine Pangilinan
Abstract:
Background and Aim: Cervical cancer is the most commonly diagnosed gynecological malignancy during pregnancy. Owing to the rarity of the disease and the complexity of all the factors that must be taken into consideration, standardization of treatment is very difficult. Cervical cancer is the second most common malignancy among women. Its treatment during pregnancy is the most challenging of all cancers, since the pregnant uterus itself is affected. This report presents a case of cervical cancer in a pregnant woman, its management, and the several issues that accompanied it. Methods: This is a case of a 28-year-old, Gravida 4 Para 2 (1111), who presented with watery to mucoid, whitish, non-foul-smelling discharge increasing in amount. Internal examination revealed normal external genitalia and a parous outlet; the cervix was transformed into a fungating mass measuring 5x4 cm with left parametrial involvement; the body of the uterus was enlarged to 24 weeks' size, with no adnexal mass or tenderness. A cervical punch biopsy revealed well-differentiated adenocarcinoma of the cervical tissue. The standard management for stage 2B cervical carcinoma is to start radiation or perform radical hysterectomy; in a patient who is currently pregnant, this kind of management would result in fetal loss. The patient declined the said management, opting to delay treatment until her baby reached at least term, with cesarean section as the route of delivery. Results: The patient underwent an elective cesarean section at 37 weeks' age of gestation, delivering a term, live baby boy, APGAR score 7,9, birthweight 2600 grams. One month postpartum, the patient followed up and completed radiotherapy, chemotherapy, and brachytherapy. She was advised to return after 6 months for monitoring.
On her last check-up, an internal examination revealed normal external genitalia; the vagina admitted 2 fingers with ease, and there was a palpable fungating mass at the cervix measuring 2x2 cm. A repeat gynecologic oncologic ultrasound revealed an endophytic cervical mass, grade 1 color score, with 35% stromal invasion, post-radiation reactive lymph nodes, and intact paracolpium, pericervical, and parametrial tissues. The patient was then advised to undergo a pelvic boost and close monitoring of the cervical mass. Conclusion: Cervical cancer in pregnancy is rare but poses a dilemma for women and their physicians. Treatment should be multidisciplinary and individualized, following careful counseling. In this case, the therapeutic argument was clearly on the side of preventing the progression of cervical cancer during pregnancy; however, for ethical reasons, management deferred to the patient's right to decide for her own health and that of her unborn child. The collaborative collection of data relating to treatment and outcome is strongly encouraged.
Keywords: cancer, cervical, ethical, pregnancy
434 A First-Principles Molecular Dynamics Study on Li+ Solvation Structures in THF/MTHF Containing Electrolytes for Lithium Metal Batteries
Authors: Chiu-Neng Su, Santhanamoorthi Nachimuthu, Jyh-Chiang Jiang
Abstract:
In lithium-ion batteries (LIBs), the solid–electrolyte interphase (SEI) layer that forms on the anode surface plays a crucial role in stabilizing battery performance. Over the past two decades, efforts to enhance LIB electrolytes have primarily focused on refining the quality of the SEI components. Despite these endeavors, several observed phenomena remain inadequately explained by the SEI layer alone. Consequently, there has been a significant surge in research interest in the behavior of electrolyte solvation structures as a route to improving battery performance. In this study, we explored the solvation structures of LiPF₆ in a mixture of the organic solvents tetrahydrofuran (THF) and 2-methyl-tetrahydrofuran (MTHF) using ab initio molecular dynamics (AIMD) simulations. We investigated the solvation structure of electrolytes with different salt concentrations, namely a low-concentration electrolyte (1.0 M LiPF₆ in a 1:1 v/v mixture of THF and MTHF) and a high-concentration electrolyte (2.0 M LiPF₆ in a 1:1 v/v mixture of THF and MTHF), and compared them with a conventional electrolyte (1.0 M LiPF₆ in a 1:1 v/v mixture of ethylene carbonate (EC) and dimethyl carbonate (DMC)). Furthermore, the reduction stability of the Li+ solvation structures in these electrolyte systems was investigated. It was found that the first solvation shell of Li+ consists primarily of THF. We also analyzed the molecular orbital energy levels to understand the reduction stability of these solvents. Compared with the solvation sheath of the commercial electrolyte, the THF/MTHF-containing electrolytes have a higher lowest unoccupied molecular orbital (LUMO) energy level, resulting in improved reduction and interface stability. It has been shown that a Li-Al alloy can significantly improve cycle life and promote the formation of a dense SEI layer. Therefore, this study also places the solvation structures obtained from the calculations of the pure electrolyte system on the surface of the Al-Li alloy.
Additionally, AIMD simulations will be conducted to investigate the chemical reactions at the interface, in order to elucidate the composition of the SEI layer formed. Furthermore, Bader charges are used to determine the origin and flow of electrons, thereby revealing the sequence of reduction reactions that generate the SEI layer.
Keywords: lithium, aluminum, alloy, battery, solvation structure
433 Influence of Natural Rubber on the Frictional and Mechanical Behavior of the Composite Brake Pad Materials
Authors: H. Yanar, G. Purcek, H. H. Ayar
Abstract:
The ingredients of the composite materials used for the production of brake pads play an important role in the safe braking performance of automobiles and trains. The ingredients must therefore be selected carefully and used in appropriate ratios in the matrix structure of the brake pad material. In the present study, a non-asbestos organic composite brake pad material containing binder resin, space fillers, solid lubricants, and a friction modifier was developed, and its filler content was optimized by adding natural rubber at different rates into the specified matrix structure in order to achieve the best combination of tribo-performance and mechanical properties. For this purpose, four compositions with different rubber contents (2.5 wt.%, 5.0 wt.%, 7.5 wt.%, and 10 wt.%) were prepared, and test samples with a diameter of 20 mm and a length of 15 mm were produced to evaluate the friction and mechanical behavior of each mixture. The friction and wear tests were performed using a pin-on-disc test rig designed according to the French standard NF-F-11-292. All test samples were subjected to two types of friction test, defined as periodic braking and continuous braking (also known as the fade test). In this way, the coefficient of friction (CoF) of the composite samples with different rubber contents was determined as a function of the number of braking cycles and the temperature of the disc surface. The results demonstrated that the addition of rubber into the matrix structure caused a significant change in the CoF. The average CoF of the composite samples increased linearly with increasing rubber content. While the average CoF was 0.19 for the rubber-free composite, the sample with the maximum rubber content of 10 wt.% had the highest CoF of about 0.24. Although the CoF of the composite samples increased, the specific wear rate decreased with increasing rubber content.
On the other hand, it was observed that the CoF decreased with the increasing temperature generated between the sample and the disc, and the extent of this decrease depended on the rubber content. While the CoF fell to a minimum of 0.15 at 400 °C for the rubber-free sample, the sample with the maximum rubber content of 10 wt.% exhibited the lowest value of 0.09 at the same temperature. The addition of rubber into the matrix structure decreased the hardness and strength of the samples. It was concluded from the results that the composite matrix with 5 wt.% rubber had the best composition with regard to the required frictional and mechanical performance parameters. This composition has an average CoF of 0.21, a specific wear rate of 0.024 cm³/MJ, and a hardness of 63 HRX.
Keywords: brake pad composite, friction and wear, rubber, friction materials
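Given the reported linear trend between the two endpoints (CoF 0.19 with no rubber, about 0.24 at the maximum rubber content of 10 wt.%), the implied slope is roughly 0.005 CoF per wt.% rubber. A small interpolation sketch, assuming strict linearity between those endpoints:

```python
def cof_estimate(rubber_wt_pct, cof0=0.19, cof_max=0.24, wt_max=10.0):
    """Linear interpolation of average CoF vs rubber content, using the
    two endpoints reported in the abstract (strict linearity is assumed)."""
    slope = (cof_max - cof0) / wt_max  # about 0.005 CoF per wt.%
    return cof0 + slope * rubber_wt_pct

# Estimated average CoF for the four tested compositions
for wt in (2.5, 5.0, 7.5, 10.0):
    print(f"{wt:4.1f} wt.% rubber -> CoF ~ {cof_estimate(wt):.3f}")
```

Consistently, the interpolated value at 5 wt.% (about 0.215) is close to the 0.21 average CoF reported for the best-performing composition.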
432 Therapeutic Effects of Toll Like Receptor 9 Ligand CpG-ODN on Radiation Injury
Authors: Jianming Cai
Abstract:
Exposure to ionizing radiation causes severe damage to the human body, and a safe and effective radioprotector is urgently required to alleviate radiation damage. In 2008, flagellin, an agonist of TLR5, was found to exert radioprotective effects on radiation injury through activation of the NF-kB signaling pathway. Since then, the radioprotective effects of TLR ligands have shed new light on radiation protection. CpG-ODN is an unmethylated oligonucleotide that activates the TLR9 signaling pathway. In this study, we demonstrated that CpG-ODN has therapeutic effects on radiation injuries induced by γ rays and 12C6+ heavy ion particles. Our data showed that CpG-ODN increased the survival rate of mice after whole-body irradiation and increased the numbers of leukocytes and bone marrow cells. CpG-ODN also alleviated radiation damage to the intestinal crypts through regulation of apoptosis signaling, including bcl2, bax, and caspase 3. Using a radiation-induced pulmonary fibrosis model, we found that CpG-ODN could alleviate structural damage within 20 weeks after whole-thorax 15 Gy irradiation. In this model, the Th1/Th2 imbalance induced by irradiation was also reversed by CpG-ODN. We further found that the TGFβ-Smad signaling pathway was regulated by CpG-ODN, which accounts for its therapeutic effects in radiation-induced pulmonary injury. On the other hand, regarding high-LET radiation protection, we investigated the protective effects of CpG-ODN against 12C6+ heavy ion irradiation and found that CpG-ODN treatment reduced the apoptosis and cell cycle arrest induced by 12C6+ irradiation. CpG-ODN also reduced the expression of bax and caspase 3, while increasing the level of bcl2. We then examined the effect of CpG-ODN on heavy ion-induced immune dysfunction; our data showed that CpG-ODN increased the survival rate of mice and the leukocyte count after 12C6+ irradiation.
Besides, the structural damage to immune organs such as the thymus and spleen was also alleviated by CpG-ODN treatment. In conclusion, we found that the TLR9 ligand CpG-ODN reduced radiation injuries caused by γ ray and 12C6+ heavy ion irradiation. On one hand, CpG-ODN inhibited radiation-induced apoptosis through regulation of bcl2, bax, and caspase 3. On the other hand, by activating TLR9, CpG-ODN recruits the MyD88-IRAK-TRAF6 complex, activating TAK1, IRF5, and the NF-kB pathway, and thus alleviates radiation damage. This study provides novel insights into the protection against, and therapy of, radiation damage.
Keywords: TLR9, CpG-ODN, radiation injury, high LET radiation
431 Investigating the Need to Align with and Adapt Sustainability of Cotton
Authors: Girija Jha
Abstract:
This paper investigates the need for cotton to integrate sustainability. The methodology is secondary research into the various environmental implications of cotton as a textile material across its life cycle, looking at ways of minimizing its ecological footprint. Cotton is called 'The Fabric of Our Lives'. History is replete with examples where this fabric was more than the fabric of lives: it was a miracle fabric, a symbol of India's pride and of the social movement of Swaraj, Gandhiji's clarion call to self-reliance. Cotton is grown in more than 90 countries across the globe, on 2.5 percent of the world's arable land, with countries like China, India, and the United States accounting for almost three quarters of global production. But cotton as a raw material has come under the scanner of sustainability experts for a myriad of reasons, a few of which are discussed here. It may take more than 20,000 liters of water to produce 1 kg of cotton. The cotton harvest comes primarily from irrigated land, which leads to salinization and the depletion of local water reservoirs, e.g., the drying up of the Aral Sea. Cotton is cultivated on 2.4% of the world's total cropland but accounts for 24% of insecticide usage and 11% of pesticide usage, leading to health hazards and an alarmingly dangerous impact on the ecosystem. One proposed solution to these problems was the genetically modified (GM) cotton crop; however, the use of GM cotton is still debatable and raises many ethical issues. The practices of mass production and increasing consumerism, and especially fast fashion, have been major culprits in disrupting this delicate balance. Disposable or fast fashion is on the rise, and cotton, as one of its major materials, adds to the problem. Denims, made of cotton and carrying a strong fashion statement, share a lot of the blame, with washes being an integral part of their creation.
These are just a few of the problems. Today, sustainability is the need of the hour, and major changes in the way we cultivate and process cotton are inevitable if it is to become a sustainable choice. The answer lies in adopting minimalism and boycotting fast fashion, in using Khadi, in saying no to washed denims and using selvedge denims, or in better methods of finishing washed fabric so that the environment does not bleed blue. Truly, the answer lies in integrating state-of-the-art technology with age-old sustainable practices, so that the synergy of the two may help us break out of this vicious circle.
Keywords: cotton, sustainability, denim, Khadi
430 Sustainable Living Where the Immaterial Matters
Authors: Maria Hadjisoteriou, Yiorgos Hadjichristou
Abstract:
This paper aims to explore, and to provoke a debate on, the role that 'immaterial matter' can play in enhancing innovative sustainable architecture, through the work of the design studio 'living where the immaterial matters' of the architecture department of the University of Nicosia, viewing cities as sustainable organisms that constantly grow and alter. The blurring, juxtaposing binary of the immaterial and matter, as the theoretical backbone of the unit, is counterbalanced by the practicalities of the contested sites of Nicosia, the last divided capital, with its ambiguous green line, and of the ghost city of Famagusta on the island of Cyprus. Jonathan Hill argues that the 'immaterial is as important to architecture as the material', concluding that 'Immaterial–Material' weaves the two together, so that they are in conjunction, not opposition. This understanding of the relationship of the immaterial versus the material sets the premises and the departure point of our argument, and speaks of new recipes for creating hybrid public space that can lead to the unpredictability of a complex, interactive, sustainable city. We prioritize human experience, distinguishing the notions of space and place with reference to Heidegger's 'Building Dwelling Thinking': 'a distinction between space and place, where spaces gain authority not from "space" appreciated mathematically but "place" appreciated through human experience'. Following the above, architecture and the city are seen as one organism. The notions of boundaries, porous borders, fluidity, mobility, and spaces of flows are the lenses of the unit's methodological investigation, leading to the notion of a new hybrid urban environment whose main constituent elements are in a relationship of flux. The material and immaterial flows of the town are seen as interrelated and interwoven with the material buildings and their immaterial contents, yielding new sustainable human built environments.
The above premises consequently led to choices of controversial sites. Indisputably, a provocative site was the ghost town of Famagusta, where time froze back in 1974. Inspired by the fact that nature took over a literally dormant, decaying city, a sustainable rebirth was seen as an opportunity in which nature and the built environment, the material and the immaterial, are interwoven in a new emergent urban environment. Similarly, we saw the dividing 'green line' of Nicosia completely failing to prevent the trespassing of images, sounds, whispers, smells, and symbols that define the two prevailing cultures, becoming instead a porous creative entity that tends to reunite rather than separate, generating sustainable cultures and built environments. The authors would like to contribute to the debate by introducing a question about a new recipe for cooking the built environment. Can we talk about a new 'urban recipe', 'cooking architecture and city', to deliver an ever-changing urban sustainable organism whose identity will mainly depend on the interrelationship of its immaterial and material constituents?
Keywords: blurring zones, porous borders, spaces of flow, urban recipe