Search results for: transverse flux PM linear machine
701 Effects of Fe Addition and Process Parameters on the Wear and Corrosion Characteristics of Icosahedral Al-Cu-Fe Coatings on Ti-6Al-4V Alloy
Authors: Olawale S. Fatoba, Stephen A. Akinlabi, Esther T. Akinlabi, Rezvan Gharehbaghi
Abstract:
The performance of material surface under wear and corrosion environments cannot be fulfilled by the conventional surface modifications and coatings. Therefore, different industrial sectors need an alternative technique for enhanced surface properties. Titanium and its alloys possess poor tribological properties which limit their use in certain industries. This paper focuses on the effect of hybrid coatings Al-Cu-Fe on a grade five titanium alloy using laser metal deposition (LMD) process. Icosahedral Al-Cu-Fe as quasicrystals is a relatively new class of materials which exhibit unusual atomic structure and useful physical and chemical properties. A 3kW continuous wave ytterbium laser system (YLS) attached to a KUKA robot which controls the movement of the cladding process was utilized for the fabrication of the coatings. The titanium cladded surfaces were investigated for its hardness, corrosion and tribological behaviour at different laser processing conditions. The samples were cut to corrosion coupons, and immersed into 3.65% NaCl solution at 28oC using Electrochemical Impedance Spectroscopy (EIS) and Linear Polarization (LP) techniques. The cross-sectional view of the samples was analysed. It was found that the geometrical properties of the deposits such as width, height and the Heat Affected Zone (HAZ) of each sample remarkably increased with increasing laser power due to the laser-material interaction. It was observed that there are higher number of aluminum and titanium presented in the formation of the composite. The indentation testing reveals that for both scanning speed of 0.8 m/min and 1m/min, the mean hardness value decreases with increasing laser power. The low coefficient of friction, excellent wear resistance and high microhardness were attributed to the formation of hard intermetallic compounds (TiCu, Ti2Cu, Ti3Al, Al3Ti) produced through the in situ metallurgical reactions during the LMD process. The load-bearing capability of the substrate was improved due to the excellent wear resistance of the coatings. The cladded layer showed a uniform crack free surface due to optimized laser process parameters which led to the refinement of the coatings.Keywords: Al-Cu-Fe coating, corrosion, intermetallics, laser metal deposition, Ti-6Al-4V alloy, wear resistance
Procedia PDF Downloads 178700 Re-identification Risk and Mitigation in Federated Learning: Human Activity Recognition Use Case
Authors: Besma Khalfoun
Abstract:
In many current Human Activity Recognition (HAR) applications, users' data is frequently shared and centrally stored by third parties, posing a significant privacy risk. This practice makes these entities attractive targets for extracting sensitive information about users, including their identity, health status, and location, thereby directly violating users' privacy. To tackle the issue of centralized data storage, a relatively recent paradigm known as federated learning has emerged. In this approach, users' raw data remains on their smartphones, where they train the HAR model locally. However, users still share updates of their local models originating from raw data. These updates are vulnerable to several attacks designed to extract sensitive information, such as determining whether a data sample is used in the training process, recovering the training data with inversion attacks, or inferring a specific attribute or property from the training data. In this paper, we first introduce PUR-Attack, a parameter-based user re-identification attack developed for HAR applications within a federated learning setting. It involves associating anonymous model updates (i.e., local models' weights or parameters) with the originating user's identity using background knowledge. PUR-Attack relies on a simple yet effective machine learning classifier and produces promising results. Specifically, we have found that by considering the weights of a given layer in a HAR model, we can uniquely re-identify users with an attack success rate of almost 100%. This result holds when considering a small attack training set and various data splitting strategies in the HAR model training. Thus, it is crucial to investigate protection methods to mitigate this privacy threat. Along this path, we propose SAFER, a privacy-preserving mechanism based on adaptive local differential privacy. Before sharing the model updates with the FL server, SAFER adds the optimal noise based on the re-identification risk assessment. Our approach can achieve a promising tradeoff between privacy, in terms of reducing re-identification risk, and utility, in terms of maintaining acceptable accuracy for the HAR model.Keywords: federated learning, privacy risk assessment, re-identification risk, privacy preserving mechanisms, local differential privacy, human activity recognition
Procedia PDF Downloads 11699 Experimenting with Clay 3D Printing Technology to Create an Undulating Facade
Authors: Naeimehsadat Hosseininam, Rui Wang, Dishita Shah
Abstract:
In recent years, new experimental approaches with the help of the new technology have bridged the gaps between the application of natural materials and creating unconventional forms. Clay has been one of the oldest building materials in all ancient civilizations. The availability and workability of clay have contributed to the widespread application of this material around the world. The aim of this experimental research is to apply the Clay 3D printing technology to create a load bearing and visually dynamic and undulating façade. Creation of different unique pieces is the most significant goal of this research which justifies the application of 3D printing technology instead of the conventional mass industrial production. This study provides an abbreviated overview of the similar cases which have used the Clay 3D printing to generate the corresponding prototypes. The study of these cases also helps in understanding the potential and flexibility of the material and 3D printing machine in developing different forms. In the next step, experimental research carried out by 3D printing of six various options which designed considering the properties of clay as well as the methodology of them being 3D printed. Here, the ratio of water to clay (W/C) has a significant role in the consistency of the material and the workability of the clay. Also, the size of the selected nozzle impacts the shape and the smoothness of the final surface. Moreover, the results of these experiments show the limitations of clay toward forming various slopes. The most notable consequence of having steep slopes in the prototype is an unpredicted collapse which is the result of internal tension in the material. From the six initial design ideas, the final prototype selected with the aim of creating a self-supported component with unique blocks that provides a possibility of installing the insulation system within the component. Apart from being an undulated façade, the presented prototype has the potential to be used as a fence and an interior partition (double-sided). The central shaft also provides a space to run services or insulation in different parts of the wall. In parallel to present the capability and potential of the clay 3D printing technology, this study illustrates the limitations of this system in some certain areas. There are inevitable parameters such as printing speed, temperature, drying speed that need to be considered while printing each piece. Clay 3D printing technology provides the opportunity to create variations and design parametric building components with the application of the most practiced material in the world.Keywords: clay 3D printing, material capability, undulating facade, load bearing facade
Procedia PDF Downloads 141698 The Need for Automation in the Domestic Food Processing Sector and its Impact
Authors: Shantam Gupta
Abstract:
The objective of this study is to address the critical need for automation in the domestic food processing sector and study its impact. Food is the one of the most basic physiological needs essential for the survival of a living being. Some of them have the capacity to prepare their own food (like most plants) and henceforth are designated as primary food producers; those who depend on these primary food producers for food form the primary consumers’ class (herbivores). Some of the organisms relying on the primary food are the secondary food consumers (carnivores). There is a third class of consumers called tertiary food consumers/apex food consumers that feed on both the primary and secondary food consumers. Humans form an essential part of the apex predators and are generally at the top of the food chain. But still further disintegration of the food habits of the modern human i.e. Homo sapiens, reveals that humans depend on other individuals for preparing their own food. The old notion of eating raw/brute food is long gone and food processing has become very trenchant in lives of modern human. This has led to an increase in dependence on other individuals for ‘processing’ the food before it can be actually consumed by the modern human. This has led to a further shift of humans in the classification of food chain of consumers. The effects of the shifts shall be systematically investigated in this paper. The processing of food has a direct impact on the economy of the individual (consumer). Also most individuals depend on other processing individuals for the preparation of food. This dependency leads to establishment of a vital link of dependency in the food web which when altered can adversely affect the food web and can have dire consequences on the health of the individual. This study investigates the challenges arising out due to this dependency and the impact of food processing on the economy of the individual. A comparison of Industrial food processing and processing at domestic platforms (households and restaurants) has been made to provide an idea about the present scenario of automation in the food processing sector. A lot of time and energy is also consumed while processing food at home for consumption. The high frequency of consumption of meals (greater than 2 times a day) makes it even more laborious. Through the medium of this study a pressing need for development of an automatic cooking machine is proposed with a mission to reduce the inter-dependency & human effort of individuals required for the preparation of food (by automation of the food preparation process) and make them more self-reliant The impact of development of this product has also further been profoundly discussed. Assumption used: The individuals those who process food also consume the food that they produce. (They are also termed as ‘independent’ or ‘self-reliant’ modern human beings.)Keywords: automation, food processing, impact on economy, processing individual
Procedia PDF Downloads 470697 Skin-Dose Mapping for Patients Undergoing Interventional Radiology Procedures: Clinical Experimentations versus a Mathematical Model
Authors: Aya Al Masri, Stefaan Carpentier, Fabrice Leroy, Thibault Julien, Safoin Aktaou, Malorie Martin, Fouad Maaloul
Abstract:
Introduction: During an 'Interventional Radiology (IR)' procedure, the patient's skin-dose may become very high for a burn, necrosis and ulceration to appear. In order to prevent these deterministic effects, an accurate calculation of the patient skin-dose mapping is essential. For most machines, the 'Dose Area Product (DAP)' and fluoroscopy time are the only information available for the operator. These two parameters are a very poor indicator of the peak skin dose. We developed a mathematical model that reconstructs the magnitude (delivered dose), shape, and localization of each irradiation field on the patient skin. In case of critical dose exceeding, the system generates warning alerts. We present the results of its comparison with clinical studies. Materials and methods: Two series of comparison of the skin-dose mapping of our mathematical model with clinical studies were performed: 1. At a first time, clinical tests were performed on patient phantoms. Gafchromic films were placed on the table of the IR machine under of PMMA plates (thickness = 20 cm) that simulate the patient. After irradiation, the film darkening is proportional to the radiation dose received by the patient's back and reflects the shape of the X-ray field. After film scanning and analysis, the exact dose value can be obtained at each point of the mapping. Four experimentation were performed, constituting a total of 34 acquisition incidences including all possible exposure configurations. 2. At a second time, clinical trials were launched on real patients during real 'Chronic Total Occlusion (CTO)' procedures for a total of 80 cases. Gafchromic films were placed at the back of patients. We performed comparisons on the dose values, as well as the distribution, and the shape of irradiation fields between the skin dose mapping of our mathematical model and Gafchromic films. Results: The comparison between the dose values shows a difference less than 15%. Moreover, our model shows a very good geometric accuracy: all fields have the same shape, size and location (uncertainty < 5%). Conclusion: This study shows that our model is a reliable tool to warn physicians when a high radiation dose is reached. Thus, deterministic effects can be avoided.Keywords: clinical experimentation, interventional radiology, mathematical model, patient's skin-dose mapping.
Procedia PDF Downloads 140696 Bone Mineral Density and Trabecular Bone Score in Ukrainian Women with Obesity
Authors: Vladyslav Povoroznyuk, Nataliia Dzerovych, Larysa Martynyuk, Tetiana Kovtun
Abstract:
Obesity and osteoporosis are the two diseases whose increasing prevalence and high impact on the global morbidity and mortality, during the two recent decades, have gained a status of major health threats worldwide. Obesity purports to affect the bone metabolism through complex mechanisms. Debated data on the connection between the bone mineral density and fracture prevalence in the obese patients are widely presented in literature. There is evidence that the correlation of weight and fracture risk is site-specific. The aim of this study was to evaluate the Bone Mineral Density (BMD) and Trabecular Bone Score (TBS) in the obese Ukrainian women. We examined 1025 40-89-year-old women, divided them into the groups according to their body mass index: Group a included 360 women with obesity whose BMI was ≥30 kg/m2, and Group B – 665 women with no obesity and BMI of < 30 kg/m2. The BMD of total body, lumbar spine at the site L1-L4, femur and forearm were measured by DXA (Prodigy, GEHC Lunar, Madison, WI, USA). The TBS of L1-L4 was assessed by means of TBS iNsight® software installed on our DXA machine (product of Med-Imaps, Pessac, France). In general, obese women had a significantly higher BMD of lumbar spine, femoral neck, proximal femur, total body, and ultradistal forearm (p<0.001) in comparison with women without obesity. The TBS of L1-L4 was significantly lower in obese women compared to non-obese women (p<0.001). The BMD of lumbar spine, femoral neck and total body differed to a significant extent in women of 40-49, 50-59, 60-69, and 70-79 years (p<0.05). At same time, in women aged 80-89 years the BMD of lumbar spine (p=0.09), femoral neck (p=0.22) and total body (p=0.06) barely differed. The BMD of ultradistal forearm was significantly higher in women of all age groups (p<0.05). The TBS of L1-L4 in all the age groups tended to reveal the lower parameters in obese women compared with the non-obese; however, those data were not statistically significant. By contrast, a significant positive correlation was observed between the fat mass and the BMD at different sites. The correlation between the fat mass and TBS of L1-L4 was also significant, although negative. Women with vertebral fractures had a significantly lower body weight, body mass index and total body fat mass in comparison with women without vertebral fractures in their anamnesis. In obese women the frequency of vertebral fractures was 27%, while in women without obesity – 57%.Keywords: obesity, trabecular bone score, bone mineral density, women
Procedia PDF Downloads 443695 Analysis of Extreme Rainfall Trends in Central Italy
Authors: Renato Morbidelli, Carla Saltalippi, Alessia Flammini, Marco Cifrodelli, Corrado Corradini
Abstract:
The trend of magnitude and frequency of extreme rainfalls seems to be different depending on the investigated area of the world. In this work, the impact of climate change on extreme rainfalls in Umbria, an inland region of central Italy, is examined using data recorded during the period 1921-2015 by 10 representative rain gauge stations. The study area is characterized by a complex orography, with altitude ranging from 200 to more than 2000 m asl. The climate is very different from zone to zone, with mean annual rainfall ranging from 650 to 1450 mm and mean annual air temperature from 3.3 to 14.2°C. Over the past 15 years, this region has been affected by four significant droughts as well as by six dangerous flood events, all with very large impact in economic terms. A least-squares linear trend analysis of annual maximums over 60 time series selected considering 6 different durations (1 h, 3 h, 6 h, 12 h, 24 h, 48 h) showed about 50% of positive and 50% of negative cases. For the same time series the non-parametrical Mann-Kendall test with a significance level 0.05 evidenced only 3% of cases characterized by a negative trend and no positive case. Further investigations have also demonstrated that the variance and covariance of each time series can be considered almost stationary. Therefore, the analysis on the magnitude of extreme rainfalls supplies the indication that an evident trend in the change of values in the Umbria region does not exist. However, also the frequency of rainfall events, with particularly high rainfall depths values, occurred during a fixed period has also to be considered. For all selected stations the 2-day rainfall events that exceed 50 mm were counted for each year, starting from the first monitored year to the end of 2015. Also, this analysis did not show predominant trends. Specifically, for all selected rain gauge stations the annual number of 2-day rainfall events that exceed the threshold value (50 mm) was slowly decreasing in time, while the annual cumulated rainfall depths corresponding to the same events evidenced trends that were not statistically significant. Overall, by using a wide available dataset and adopting simple methods, the influence of climate change on the heavy rainfalls in the Umbria region is not detected.Keywords: climate changes, rainfall extremes, rainfall magnitude and frequency, central Italy
Procedia PDF Downloads 236694 Characterising Performative Technological Innovation: Developing a Strategic Framework That Incorporates the Social Mechanisms That Promote Change within a Technological Environment
Authors: Joan Edwards, J. Lawlor
Abstract:
Technological innovation is frequently defined in terms of bringing a new invention to market through a relatively straightforward process of diffusion. In reality, this process is complex and non-linear in nature, and includes social and cognitive factors that influence the development of an emerging technology and its related market or environment. As recent studies contend technological trajectory is part of technological paradigms, which arise from the expectations and desires of industry agents and results in co-evolution, it may be realised that social factors play a major role in the development of a technology. It is conjectured that collective social behaviour is fuelled by individual motivations and expectations, which inform the possibilities and uses for a new technology. The individual outlook highlights the issues present at the micro-level of developing a technology. Accordingly, this may be zoomed out to realise how these embedded social structures, influence activities and expectations at a macro level and can ultimately strategically shape the development and use of a technology. These social factors rely on communication to foster the innovation process. As innovation may be defined as the implementation of inventions, technological change results from the complex interactions and feedback occurring within an extended environment. The framework presented in this paper, recognises that social mechanisms provide the basis for an iterative dialogue between an innovator, a new technology, and an environment - within which social and cognitive ‘identity-shaping’ elements of the innovation process occur. Identity-shaping characteristics indicate that an emerging technology has a performative nature that transforms, alters, and ultimately configures the environment to which it joins. This identity–shaping quality is termed as ‘performative’. This paper examines how technologies evolve within a socio-technological sphere and how 'performativity' facilitates the process. A framework is proposed that incorporates the performative elements which are identified as feedback, iteration, routine, expectations, and motivations. Additionally, the concept of affordances is employed to determine how the role of the innovator and technology change over time - constituting a more conducive environment for successful innovation.Keywords: affordances, framework, performativity, strategic innovation
Procedia PDF Downloads 206693 Digital Transformation and Digitalization of Public Administration
Authors: Govind Kumar
Abstract:
The concept of ‘e-governance’ that was brought about by the new wave of reforms, namely ‘LPG’ in the early 1990s, has been enabling governments across the globe to digitally transform themselves. Digital transformation is leading the governments with qualitative decisions, optimization in rational use of resources, facilitation of cost-benefit analyses, and elimination of redundancy and corruption with the help of ICT-based applications interface. ICT-based applications/technologies have enormous potential for impacting positive change in the social lives of the global citizenry. Supercomputers test and analyze millions of drug molecules for developing candidate vaccines to combat the global pandemic. Further, e-commerce portals help distribute and supply household items and medicines, while videoconferencing tools provide a visual interface between the clients and hosts. Besides, crop yields are being maximized with the help of drones and machine learning, whereas satellite data, artificial intelligence, and cloud computing help governments with the detection of illegal mining, tackling deforestation, and managing freshwater resources. Such e-applications have the potential to take governance an extra mile by achieving 5 Es (effective, efficient, easy, empower, and equity) of e-governance and six Rs (reduce, reuse, recycle, recover, redesign and remanufacture) of sustainable development. If such digital transformation gains traction within the government framework, it will replace the traditional administration with the digitalization of public administration. On the other hand, it has brought in a new set of challenges, like the digital divide, e-illiteracy, technological divide, etc., and problems like handling e-waste, technological obsolescence, cyber terrorism, e-fraud, hacking, phishing, etc. before the governments. Therefore, it would be essential to bring in a rightful mixture of technological and humanistic interventions for addressing the above issues. This is on account of the reason that technology lacks an emotional quotient, and the administration does not work like technology. Both are self-effacing unless a blend of technology and a humane face are brought in into the administration. The paper will empirically analyze the significance of the technological framework of digital transformation within the government set up for the digitalization of public administration on the basis of the synthesis of two case studies undertaken from two diverse fields of administration and present a future framework of the study.Keywords: digital transformation, electronic governance, public administration, knowledge framework
Procedia PDF Downloads 99692 Effect of Immunocastration Vaccine Administration at Different Doses on Performance of Feedlot Holstein Bulls
Authors: M. Bolacali
Abstract:
The aim of the study is to determine the effect of immunocastration vaccine administration at different doses on fattening performance of feedlot Holstein bulls. Bopriva® is a vaccine that stimulates the animals' own immune system to produce specific antibodies against gonadotropin releasing factor (GnRF). Ninety four Holstein male calves (309.5 ± 2.58 kg body live weight and 267 d-old) assigned to the 4 treatments. Control group; 1 mL of 0.9% saline solution was subcutaneously injected to intact bulls on 1st and 60th days of the feedlot as placebo. On the same days of the feedlot, Bopriva® at two doses of 1 mL and 1 mL for Trial-1 group, 1.5 mL, and 1.5 mL for Trial-2 group, 1.5 mL, and 1 mL for Trial-3 group were subcutaneously injected to bulls. The study was conducted in a private establishment in the Sirvan district of Siirt province and lasted 180 days. The animals were weighed at the beginning of fattening and at 30-day intervals to determine their live weights at various periods. The statistical analysis for normal distribution data of the treatment groups was carried out with the general linear model procedure of SPSS software. The fattening initial live weight in Control, Trial-1, Trial-2 and Trial-3 groups was respectively 309.21, 306.62, 312.11, and 315.39 kg. The fattening final live weight was respectively 560.88, 536.67, 548.56, and 548.25 kg. The daily live weight gain during the trial was respectively 1.40, 1.28, 1.31, and 1.29 kg/day. The cold carcass yield was respectively 51.59%, 50.32%, 50.85%, and 50.77%. Immunocastration vaccine administration at different doses did not affect the live weights and cold carcass yields of Holstein male calves reared under intensive conditions (P > 0.05). However, it was determined to reduce fattening performance between 61-120 days (P < 0.05) and 1-180 days (P < 0.01). In addition, it was determined that the best performance among the vaccine-treated groups occurred in the group administered a 1.5 mL of vaccine on the 1st and 60th study days. In animals, castration is used to control fertility, aggressive and sexual behaviors. As a result, the fact that stress is induced by physical castration in animals and active immunization against GnRF maintains performance by maximizing welfare in bulls improves carcass and meat quality and controls unwanted sexual and aggressive behavior. Considering such features, it may be suggested that immunocastration vaccine with Bopriva® can be administered as a 1.5 mL dose on the 1st and 60th days of the fattening period in Holstein bulls.Keywords: anti-GnRF, fattening, growth, immunocastration
Procedia PDF Downloads 192691 Conflation Methodology Applied to Flood Recovery
Authors: Eva L. Suarez, Daniel E. Meeroff, Yan Yong
Abstract:
Current flooding risk modeling focuses on resilience, defined as the probability of recovery from a severe flooding event. However, the long-term damage to property and well-being by nuisance flooding and its long-term effects on communities are not typically included in risk assessments. An approach was developed to address the probability of recovering from a severe flooding event combined with the probability of community performance during a nuisance event. A consolidated model, namely the conflation flooding recovery (&FR) model, evaluates risk-coping mitigation strategies for communities based on the recovery time from catastrophic events, such as hurricanes or extreme surges, and from everyday nuisance flooding events. The &FR model assesses the variation contribution of each independent input and generates a weighted output that favors the distribution with minimum variation. This approach is especially useful if the input distributions have dissimilar variances. The &FR is defined as a single distribution resulting from the product of the individual probability density functions. The resulting conflated distribution resides between the parent distributions, and it infers the recovery time required by a community to return to basic functions, such as power, utilities, transportation, and civil order, after a flooding event. The &FR model is more accurate than averaging individual observations before calculating the mean and variance or averaging the probabilities evaluated at the input values, which assigns the same weighted variation to each input distribution. The main disadvantage of these traditional methods is that the resulting measure of central tendency is exactly equal to the average of the input distribution’s means without the additional information provided by each individual distribution variance. When dealing with exponential distributions, such as resilience from severe flooding events and from nuisance flooding events, conflation results are equivalent to the weighted least squares method or best linear unbiased estimation. The combination of severe flooding risk with nuisance flooding improves flood risk management for highly populated coastal communities, such as in South Florida, USA, and provides a method to estimate community flood recovery time more accurately from two different sources, severe flooding events and nuisance flooding events.Keywords: community resilience, conflation, flood risk, nuisance flooding
Procedia PDF Downloads 103690 Propagation of Simmondsia chinensis (Link) Schneider by Stem Cuttings
Authors: Ahmed M. Eed, Adam H. Burgoyne
Abstract:
Jojoba (Simmondsia chinensis (Link) Schneider), is a desert shrub which tolerates saline, alkyle soils and drought. The seeds contain a characteristic liquid wax of economic importance in industry as a machine lubricant and cosmetics. A major problem in seed propagation is that jojoba is a dioecious plant whose sex is not easily determined prior to flowering (3-4 years from germination). To overcome this phenomenon, asexual propagation using vegetative methods such as cutting can be used. This research was conducted to find out the effect of different Plant Growth Regulators (PGRs) and rooting media on Jojoba rhizogenesis. An experiment was carried out in a Factorial Completely Randomized Design (FCRD) with three replications, each with sixty cuttings per replication in fiberglass house of Natural Jojoba Corporation at Yemen. The different rooting media used were peat moss + perlite + vermiculite (1:1:1), peat moss + perlite (1:1) and peat moss + sand (1:1). Plant materials used were semi-hard wood cuttings of jojoba plants with length of 15 cm. The cuttings were collected in the month of June during 2012 and 2013 from the sub-terminal growth of the mother plants of Amman farm and introduced to Yemen. They were wounded, treated with Indole butyric acid (IBA), α-naphthalene acetic acid (NAA) or Indole-3-acetic acid (IAA) all @ 4000 ppm (part per million) and cultured on different rooting media under intermittent mist propagation conditions. IBA gave significantly higher percentage of rooting (66.23%) compared to NAA and IAA in all media used. However, the lowest percentage of rooting (5.33%) was recorded with IAA in the medium consisting of peat moss and sand (1:1). No significant difference was observed at all types of PGRs used with rooting media in respect of root length. Maximum number of roots was noticed in medium consisting of peat moss, perlite and vermiculite (1:1:1); peat moss and perlite (1:1) and peat moss and sand (1:1) using IBA, NAA and IBA, respectively. The interaction among rooting media was statistically significant with respect to rooting percentage character. Similarly, the interactions among PGRs were significant in terms of rooting percentage and also root length characters. The results demonstrated suitability of propagation of jojoba plants by semi-hard wood cuttings.Keywords: cutting, IBA, Jojoba, propagation, rhizogenesis
Procedia PDF Downloads 341689 Handy EKG: Low-Cost ECG For Primary Care Screening In Developing Countries
Authors: Jhiamluka Zservando Solano Velasquez, Raul Palma, Alejandro Calderon, Servio Paguada, Erick Marin, Kellyn Funes, Hana Sandoval, Oscar Hernandez
Abstract:
Background: Screening cardiac conditions in primary care in developing countries can be challenging, and Honduras is not the exception. One of the main limitations is the underfunding of the Healthcare System in general, causing conventional ECG acquisition to become a secondary priority. Objective: Development of a low-cost ECG to improve screening of arrhythmias in primary care and communication with a specialist in secondary and tertiary care. Methods: Design a portable, pocket-size low-cost 3 lead ECG (Handy EKG). The device is autonomous and has Wi-Fi/Bluetooth connectivity options. A mobile app was designed which can access online servers with machine learning, a subset of artificial intelligence to learn from the data and aid clinicians in their interpretation of readings. Additionally, the device would use the online servers to transfer patient’s data and readings to a specialist in secondary and tertiary care. 50 randomized patients volunteer to participate to test the device. The patients had no previous cardiac-related conditions, and readings were taken. One reading was performed with the conventional ECG and 3 readings with the Handy EKG using different lead positions. This project was possible thanks to the funding provided by the National Autonomous University of Honduras. Results: Preliminary results show that the Handy EKG performs readings of the cardiac activity similar to those of a conventional electrocardiograph in lead I, II, and III depending on the position of the leads at a lower cost. The wave and segment duration, amplitude, and morphology of the readings were similar to the conventional ECG, and interpretation was possible to conclude whether there was an arrhythmia or not. Two cases of prolonged PR segment were found in both ECG device readings. Conclusion: Using a Frugal innovation approach can allow lower income countries to develop innovative medical devices such as the Handy EKG to fulfill unmet needs at lower prices without compromising effectiveness, safety, and quality. The Handy EKG provides a solution for primary care screening at a much lower cost and allows for convenient storage of the readings in online servers where clinical data of patients can then be accessed remotely by Cardiology specialists.Keywords: low-cost hardware, portable electrocardiograph, prototype, remote healthcare
Procedia PDF Downloads 180688 Leveraging xAPI in a Corporate e-Learning Environment to Facilitate the Tracking, Modelling, and Predictive Analysis of Learner Behaviour
Authors: Libor Zachoval, Daire O Broin, Oisin Cawley
Abstract:
E-learning platforms, such as Blackboard have two major shortcomings: limited data capture as a result of the limitations of SCORM (Shareable Content Object Reference Model), and lack of incorporation of Artificial Intelligence (AI) and machine learning algorithms which could lead to better course adaptations. With the recent development of Experience Application Programming Interface (xAPI), a large amount of additional types of data can be captured and that opens a window of possibilities from which online education can benefit. In a corporate setting, where companies invest billions on the learning and development of their employees, some learner behaviours can be troublesome for they can hinder the knowledge development of a learner. Behaviours that hinder the knowledge development also raise ambiguity about learner’s knowledge mastery, specifically those related to gaming the system. Furthermore, a company receives little benefit from their investment if employees are passing courses without possessing the required knowledge and potential compliance risks may arise. Using xAPI and rules derived from a state-of-the-art review, we identified three learner behaviours, primarily related to guessing, in a corporate compliance course. The identified behaviours are: trying each option for a question, specifically for multiple-choice questions; selecting a single option for all the questions on the test; and continuously repeating tests upon failing as opposed to going over the learning material. These behaviours were detected on learners who repeated the test at least 4 times before passing the course. These findings suggest that gauging the mastery of a learner from multiple-choice questions test scores alone is a naive approach. Thus, next steps will consider the incorporation of additional data points, knowledge estimation models to model knowledge mastery of a learner more accurately, and analysis of the data for correlations between knowledge development and identified learner behaviours. Additional work could explore how learner behaviours could be utilised to make changes to a course. For example, course content may require modifications (certain sections of learning material may be shown to not be helpful to many learners to master the learning outcomes aimed at) or course design (such as the type and duration of feedback).Keywords: artificial intelligence, corporate e-learning environment, knowledge maintenance, xAPI
Procedia PDF Downloads 121687 Developing Environmental Engineering Alternatives for Deep Desulphurization of Transportation Fuels
Authors: Nalinee B. Suryawanshi, Vinay M. Bhandari, Laxmi Gayatri Sorokhaibam, Vivek V. Ranade
Abstract:
Deep desulphurization of transportation fuels is a major environmental concern all over the world and recently prescribed norms for the sulphur content require below 10 ppm sulphur concentrations in fuels such as diesel and gasoline. The existing technologies largely based on catalytic processes such as hydrodesulphurization, oxidation require newer catalysts and demand high cost of deep desulphurization whereas adsorption based processes have limitations due to lower capacity of sulphur removal. The present work is an attempt to provide alternatives for the existing methodologies using a newer non-catalytic process based on hydrodynamic cavitation. The developed process requires appropriate combining of organic and aqueous phases under ambient conditions and passing through a cavitating device such as orifice, venturi or vortex diode. The implosion of vapour cavities formed in the cavitating device generates (in-situ) oxidizing species which react with the sulphur moiety resulting in the removal of sulphur from the organic phase. In this work, orifice was used as a cavitating device and deep desulphurization was demonstrated for removal of thiophene as a model sulphur compound from synthetic fuel of n-octane, toluene and n-octanol. The effect of concentration of sulphur (up to 300 ppm), nature of organic phase and effect of pressure drop (0.5 to 10 bar) was discussed. A very high removal of sulphur content of more than 90% was demonstrated. The process is easy to operate, essentially works at ambient conditions and the ratio of aqueous to organic phase can be easily adjusted to maximise sulphur removal. Experimental studies were also carried out using commercial diesel as a solvent and the results substantiate similar high sulphur removal. A comparison of the two cavitating devices- one with a linear flow and one using vortex flow for effecting pressure drop and cavitation indicates similar trends in terms of sulphur removal behaviour. The developed process is expected to provide an attractive environmental engineering alternative for deep desulphurization of transportation fuels.Keywords: cavitation, petroleum, separation, sulphur removal
Procedia PDF Downloads 379686 Treating Voxels as Words: Word-to-Vector Methods for fMRI Meta-Analyses
Authors: Matthew Baucum
Abstract:
With the increasing popularity of fMRI as an experimental method, psychology and neuroscience can greatly benefit from advanced techniques for summarizing and synthesizing large amounts of data from brain imaging studies. One promising avenue is automated meta-analyses, in which natural language processing methods are used to identify the brain regions consistently associated with certain semantic concepts (e.g. “social”, “reward’) across large corpora of studies. This study builds on this approach by demonstrating how, in fMRI meta-analyses, individual voxels can be treated as vectors in a semantic space and evaluated for their “proximity” to terms of interest. In this technique, a low-dimensional semantic space is built from brain imaging study texts, allowing words in each text to be represented as vectors (where words that frequently appear together are near each other in the semantic space). Consequently, each voxel in a brain mask can be represented as a normalized vector sum of all of the words in the studies that showed activation in that voxel. The entire brain mask can then be visualized in terms of each voxel’s proximity to a given term of interest (e.g., “vision”, “decision making”) or collection of terms (e.g., “theory of mind”, “social”, “agent”), as measured by the cosine similarity between the voxel’s vector and the term vector (or the average of multiple term vectors). Analysis can also proceed in the opposite direction, allowing word cloud visualizations of the nearest semantic neighbors for a given brain region. This approach allows for continuous, fine-grained metrics of voxel-term associations, and relies on state-of-the-art “open vocabulary” methods that go beyond mere word-counts. An analysis of over 11,000 neuroimaging studies from an existing meta-analytic fMRI database demonstrates that this technique can be used to recover known neural bases for multiple psychological functions, suggesting this method’s utility for efficient, high-level meta-analyses of localized brain function. While automated text analytic methods are no replacement for deliberate, manual meta-analyses, they seem to show promise for the efficient aggregation of large bodies of scientific knowledge, at least on a relatively general level.Keywords: FMRI, machine learning, meta-analysis, text analysis
Procedia PDF Downloads 448685 Automatic Differential Diagnosis of Melanocytic Skin Tumours Using Ultrasound and Spectrophotometric Data
Authors: Kristina Sakalauskiene, Renaldas Raisutis, Gintare Linkeviciute, Skaidra Valiukeviciene
Abstract:
Cutaneous melanoma is a melanocytic skin tumour, which has a very poor prognosis while is highly resistant to treatment and tends to metastasize. Thickness of melanoma is one of the most important biomarker for stage of disease, prognosis and surgery planning. In this study, we hypothesized that the automatic analysis of spectrophotometric images and high-frequency ultrasonic 2D data can improve differential diagnosis of cutaneous melanoma and provide additional information about tumour penetration depth. This paper presents the novel complex automatic system for non-invasive melanocytic skin tumour differential diagnosis and penetration depth evaluation. The system is composed of region of interest segmentation in spectrophotometric images and high-frequency ultrasound data, quantitative parameter evaluation, informative feature extraction and classification with linear regression classifier. The segmentation of melanocytic skin tumour region in ultrasound image is based on parametric integrated backscattering coefficient calculation. The segmentation of optical image is based on Otsu thresholding. In total 29 quantitative tissue characterization parameters were evaluated by using ultrasound data (11 acoustical, 4 shape and 15 textural parameters) and 55 quantitative features of dermatoscopic and spectrophotometric images (using total melanin, dermal melanin, blood and collagen SIAgraphs acquired using spectrophotometric imaging device SIAscope). In total 102 melanocytic skin lesions (including 43 cutaneous melanomas) were examined by using SIAscope and ultrasound system with 22 MHz center frequency single element transducer. The diagnosis and Breslow thickness (pT) of each MST were evaluated during routine histological examination after excision and used as a reference. The results of this study have shown that automatic analysis of spectrophotometric and high frequency ultrasound data can improve non-invasive classification accuracy of early-stage cutaneous melanoma and provide supplementary information about tumour penetration depth.Keywords: cutaneous melanoma, differential diagnosis, high-frequency ultrasound, melanocytic skin tumours, spectrophotometric imaging
Procedia PDF Downloads 270684 Microscale observations of a gas cell wall rupture in bread dough during baking and confrontation to 2/3D Finite Element simulations of stress concentration
Authors: Kossigan Bernard Dedey, David Grenier, Tiphaine Lucas
Abstract:
Bread dough is often described as a dispersion of gas cells in a continuous gluten/starch matrix. The final bread crumb structure is strongly related to gas cell walls (GCWs) rupture during baking. At the end of proofing and during baking, part of the thinnest GCWs between expanding gas cells is reduced to a gluten film of about the size of a starch granule. When such size is reached gluten and starch granules must be considered as interacting phases in order to account for heterogeneities and appropriately describe GCW rupture. Among experimental investigations carried out to assess GCW rupture, no experimental work was performed to observe the GCW rupture in the baking conditions at GCW scale. In addition, attempts to numerically understand GCW rupture are usually not performed at the GCW scale and often considered GCWs as continuous. The most relevant paper that accounted for heterogeneities dealt with the gluten/starch interactions and their impact on the mechanical behavior of dough film. However, stress concentration in GCW was not discussed. In this study, both experimental and numerical approaches were used to better understand GCW rupture in bread dough during baking. Experimentally, a macro-scope placed in front of a two-chamber device was used to observe the rupture of a real GCW of 200 micrometers in thickness. Special attention was paid in order to mimic baking conditions as far as possible (temperature, gas pressure and moisture). Various differences in pressure between both sides of GCW were applied and different modes of fracture initiation and propagation in GCWs were observed. Numerically, the impact of gluten/starch interactions (cohesion or non-cohesion) and rheological moduli ratio on the mechanical behavior of GCW under unidirectional extension was assessed in 2D/3D. A non-linear viscoelastic and hyperelastic approach was performed to match the finite strain involved in GCW during baking. Stress concentration within GCW was identified. Simulated stresses concentration was discussed at the light of GCW failure observed in the device. The gluten/starch granule interactions and rheological modulus ratio were found to have a great effect on the amount of stress possibly reached in the GCW.Keywords: dough, experimental, numerical, rupture
Procedia PDF Downloads 122683 Synthesis and Prediction of Activity Spectra of Substances-Assisted Evaluation of Heterocyclic Compounds Containing Hydroquinoline Scaffolds
Authors: Gizachew Mulugeta Manahelohe, Khidmet Safarovich Shikhaliev
Abstract:
There has been a significant surge in interest in the synthesis of heterocyclic compounds that contain hydroquinoline fragments. This surge can be attributed to the broad range of pharmaceutical and industrial applications that these compounds possess. The present study provides a comprehensive account of the synthesis of both linear and fused heterocyclic systems that incorporate hydroquinoline fragments. Furthermore, the pharmacological activity spectra of the synthesized compounds were assessed using the in silico method, employing the prediction of activity spectra of substances (PASS) program. Hydroquinoline nitriles 7 and 8 were prepared through the reaction of the corresponding hydroquinolinecarbaldehyde using a hydroxylammonium chloride/pyridine/toluene system and iodine in aqueous ammonia under ambient conditions, respectively. 2-Phenyl-1,3-oxazol-5(4H)-ones 9a,b and 10a,b were synthesized via the condensation of compounds 5a,b and 6a,b with hippuric acid in acetic acid in 30–60% yield. When activated, 7-methylazolopyrimidines 11a and b were reacted with N-alkyl-2,2,4-trimethyl-1,2,3,4-tetrahydroquinoline-6-carbaldehydes 6a and b, and triazolo/pyrazolo[1,5-a]pyrimidin-6-yl carboxylic acids 12a and b were obtained in 60–70% yield. The condensation of 7-hydroxy-1,2,3,4-tetramethyl-1,2-dihydroquinoline 3 h with dimethylacetylenedicarboxylate (DMAD) and ethyl acetoacetate afforded cyclic products 16 and 17, respectively. The condensation reaction of 6-formyl-7-hydroxy-1,2,2,4-tetramethyl-1,2-dihydroquinoline 5e with methylene-active compounds such as ethyl cyanoacetate/dimethyl-3-oxopentanedioate/ethyl acetoacetate/diethylmalonate/Meldrum’s acid afforded 3-substituted coumarins containing dihydroquinolines 19 and 21. Pentacyclic coumarin 22 was obtained via the random condensation of malononitrile with 5e in the presence of a catalytic amount of piperidine in ethanol. The biological activities of the synthesized compounds were assessed using the PASS program. Based on the prognosis, compounds 13a, b, and 14 exhibited a high likelihood of being active as inhibitors of gluconate 2-dehydrogenase, as well as possessing antiallergic, antiasthmatic, and antiarthritic properties, with a probability value (Pa) ranging from 0.849 to 0.870. Furthermore, it was discovered that hydroquinoline carbonitriles 7 and 8 tended to act as effective progesterone antagonists and displayed antiallergic, antiasthmatic, and antiarthritic effects (Pa = 0.276–0.827). Among the hydroquinolines containing coumarin moieties, compounds 17, 19a, and 19c were predicted to be potent progesterone antagonists, with Pa values of 0.710, 0.630, and 0.615, respectively.Keywords: heterocyclic compound, hydroquinoline, Vilsmeier–Haack formulation, quinolone
Procedia PDF Downloads 42682 Tribological Behavior Of 17-4PH Steel Produced Via Binder Jetting And Low Energy High Current Pulsed Electron Beam Surface Treated
Authors: Lorenza Fabiocchi, Marco Mariani, Andrea Lucchini Huspek, Matteo Pozzi, Massimiliano Bestetti, Serena Graziosi, Nora Lecis
Abstract:
Additive manufacturing of stainless steels is rapidly developing thanks to the ability to achieve complex designs effortlessly. Stainless steel 17-4PH is valued for its high strength and corrosion resistance, however intricate geometries are challenging to obtain due to rapid tool wear when machined. Binder jetting additive manufacturing was used to produce 17–4PH samples and pulsed electron beam surface treatment was investigated to enhance surface properties of components. The aim is to improve the tribological performance compared to the as-sintered condition and the H900 aging process, which optimizes hardness and wear resistance. Printed samples were sintered in a reducing atmosphere and superficially treated with an electron beam by varying the voltage (20 - 25 - 30 kV) and pulse count (20 – 40 pulses). Then, the surface was characterized from a microstructural and mechanical standpoint. Scratch tests were performed, and a reciprocating linear pin-on-disk wear test was conducted at 2 N and 10 Hz. Results showed that the voltage affects the roughness and thickness of the treated layer, whilst the number of pulses influences the hardening of the microstructure and consequently the wear resistance. Treated samples exhibited lower coefficients of friction compared to as-printed surfaces, though the values approached those of aged samples after the abrasion of the melted layer, indicating a deeper heat-affected zone formation. Different amounts of residual stress in the heat effected zone were individuated through the scratch tests. Still, the friction remained lower than that of as-printed specimens. This study demonstrates that optimizing electron beam parameters is vital for achieving surface performance comparable to bulk aging treatments, with significant implications for long-term wear resistance.Keywords: low energy high current pulsed electron beam, tribology, binder jetting 3D printing, 17-4PH stainless steel
Procedia PDF Downloads 9681 The Analyzer: Clustering Based System for Improving Business Productivity by Analyzing User Profiles to Enhance Human Computer Interaction
Authors: Dona Shaini Abhilasha Nanayakkara, Kurugamage Jude Pravinda Gregory Perera
Abstract:
E-commerce platforms have revolutionized the shopping experience, offering convenient ways for consumers to make purchases. To improve interactions with customers and optimize marketing strategies, it is essential for businesses to understand user behavior, preferences, and needs on these platforms. This paper focuses on recommending businesses to customize interactions with users based on their behavioral patterns, leveraging data-driven analysis and machine learning techniques. Businesses can improve engagement and boost the adoption of e-commerce platforms by aligning behavioral patterns with user goals of usability and satisfaction. We propose TheAnalyzer, a clustering-based system designed to enhance business productivity by analyzing user-profiles and improving human-computer interaction. The Analyzer seamlessly integrates with business applications, collecting relevant data points based on users' natural interactions without additional burdens such as questionnaires or surveys. It defines five key user analytics as features for its dataset, which are easily captured through users' interactions with e-commerce platforms. This research presents a study demonstrating the successful distinction of users into specific groups based on the five key analytics considered by TheAnalyzer. With the assistance of domain experts, customized business rules can be attached to each group, enabling The Analyzer to influence business applications and provide an enhanced personalized user experience. The outcomes are evaluated quantitatively and qualitatively, demonstrating that utilizing TheAnalyzer’s capabilities can optimize business outcomes, enhance customer satisfaction, and drive sustainable growth. The findings of this research contribute to the advancement of personalized interactions in e-commerce platforms. By leveraging user behavioral patterns and analyzing both new and existing users, businesses can effectively tailor their interactions to improve customer satisfaction, loyalty and ultimately drive sales.Keywords: data clustering, data standardization, dimensionality reduction, human computer interaction, user profiling
Procedia PDF Downloads 72680 Advancing Circular Economy Principles: Integrating AI Technology in Street Sanitation for Sustainable Urban Development
Authors: Xukai Fu
Abstract:
The concept of circular economy is interdisciplinary, intersecting environmental engineering, information technology, business, and social science domains. Over the course of its 15-year tenure in the sanitation industry, Jinkai has concentrated its efforts in the past five years on integrating artificial intelligence (AI) technology with street sanitation apparatus and systems. This endeavor has led to the development of various innovations, including the Intelligent Identification Sweeper Truck (Intelligent Waste Recognition and Energy-saving Control System), the Intelligent Identification Water Truck (Intelligent Flushing Control System), the intelligent food waste treatment machine, and the Intelligent City Road Sanitation Surveillance Platform. This study will commence with an examination of prevalent global challenges, elucidating how Jinkai effectively addresses each within the framework of circular economy principles. Utilizing a review and analysis of pertinent environmental management data, we will elucidate Jinkai's strategic approach. Following this, we will investigate how Jinkai utilizes the advantages of circular economy principles to guide the design of street sanitation machinery, with a focus on digitalization integration. Moreover, we will scrutinize Jinkai's sustainable practices throughout the invention and operation phases of street sanitation machinery, aligning with the triple bottom line theory. Finally, we will delve into the significance and enduring impact of corporate social responsibility (CSR) and environmental, social, and governance (ESG) initiatives. Special emphasis will be placed on Jinkai's contributions to community stakeholders, with a particular emphasis on human rights. Despite the widespread adoption of circular economy principles across various industries, achieving a harmonious equilibrium between environmental justice and social justice remains a formidable task. Jinkai acknowledges that the mere development of energy-saving technologies is insufficient for authentic circular economy implementation; rather, they serve as instrumental tools. To earnestly promote and embody circular economy principles, companies must consistently prioritize the UN Sustainable Development Goals and adapt their technologies to address the evolving exigencies of our world.Keywords: circular economy, core principles, benefits, the tripple bottom line, CSR, ESG, social justice, human rights, Jinkai
Procedia PDF Downloads 47679 Viability of EBT3 Film in Small Dimensions to Be Used for in-Vivo Dosimetry in Radiation Therapy
Authors: Abdul Qadir Jangda, Khadija Mariam, Usman Ahmed, Sharib Ahmed
Abstract:
The Gafchromic EBT3 film has the characteristics of high spatial resolution, weak energy dependence and near tissue equivalence, which make it viable for in-vivo dosimetry in External Beam and Brachytherapy applications. The aim of this study is to assess the smallest film dimension that may be feasible for use in in-vivo dosimetry. To evaluate the viability, film sizes from 3 x 3 mm to 20 x 20 mm were calibrated with 6 MV photon and 6 MeV electron beams. The Gafchromic EBT3 (Lot no. A05151201, Make: ISP) film was cut into five different sizes in order to establish the relationship between absorbed dose and film dimension. The film dimensions were 3 x 3, 5 x 5, 10 x 10, 15 x 15, and 20 x 20 mm. The films were irradiated on a Varian Clinac® 2100C linear accelerator for a dose range from 0 to 1000 cGy using a PTW solid water phantom. The irradiation was performed as per the clinical absolute dose rate calibration setup, i.e. 100 cm SAD, 5.0 cm depth and a field size of 10x10 cm2 for photons, and 100 cm SSD, 1.4 cm depth and a 15x15 cm2 applicator for electrons. The irradiated films were scanned in landscape orientation after a post-development time of 48 hours (minimum). Film scanning was accomplished using an Epson Expression 10000 XL flatbed scanner, and quantitative analysis was carried out with the ImageJ freeware software. Results show that the dose variation across film dimensions ranging from 3 x 3 mm to 20 x 20 mm is very minimal, with a maximum standard deviation of 0.0058 in optical density for a dose level of 3000 cGy, and the standard deviation increases with the increase in dose level. Precaution must therefore be taken when using small-dimension films for higher doses. Analysis shows that there is insignificant variation in the absorbed dose with a change in dimension of EBT3 film. The study concludes that film dimensions down to 3 x 3 mm can safely be used up to a dose level of 3000 cGy without the need for recalibration for the particular dimension in use for dosimetric application. However, for higher dose levels, one may need to calibrate the films for the particular dimension in use for higher accuracy. It was also noticed that the crystalline structure of the film got damaged at the edges while cutting the film, which can contribute to a wrong dose reading if the region of interest includes the damaged area of the film.Keywords: external beam radiotherapy, film calibration, film dosimetry, in-vivo dosimetry
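A minimal illustrative sketch of the film-analysis arithmetic implied above (scanner pixel value to net optical density, then a fitted dose–response calibration curve); the pixel values, channel choice and polynomial order are assumptions for illustration, not the authors' data or exact workflow:

```python
# Illustrative sketch: convert 16-bit scanner pixel values into net optical density
# and fit a simple dose-response calibration curve. All numbers below are made up.
import numpy as np

doses_cGy = np.array([0, 100, 200, 400, 600, 800, 1000], dtype=float)
pixel_values = np.array([52000, 43000, 37500, 30500, 26500, 24000, 22200], dtype=float)

od = np.log10(65535.0 / pixel_values)   # optical density of each film piece (16-bit scan)
net_od = od - od[0]                     # subtract the unexposed-film background

# A low-order polynomial dose = f(netOD) is a common choice for radiochromic film.
coeffs = np.polyfit(net_od, doses_cGy, 3)
estimate = np.polyval(coeffs, 0.15)     # dose corresponding to a measured netOD of 0.15
print(f"Estimated dose: {estimate:.1f} cGy")
```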
Procedia PDF Downloads 494678 Effects of Vegetable Oils Supplementation on in Vitro Rumen Fermentation and Methane Production in Buffaloes
Authors: Avijit Dey, Shyam S. Paul, Satbir S. Dahiya, Balbir S. Punia, Luciano A. Gonzalez
Abstract:
Methane emitted from ruminant livestock not only reduces the efficiency of feed energy utilization but also contributes to global warming. Vegetable oils, a source of polyunsaturated fatty acids, have the potential to reduce methane production and increase conjugated linoleic acid in the rumen. However, the characteristics of the oils, the level of inclusion and the composition of the basal diet influence their efficacy. Therefore, this study aimed to investigate the effects of sunflower (SFL) and cottonseed (CSL) oils on methanogenesis, volatile fatty acid composition and feed fermentation pattern by the in vitro gas production (IVGP) test. Four concentrations (0, 0.1, 0.2 and 0.4 ml per 30 ml buffered rumen fluid) of each oil were used. Fresh rumen fluid was collected before morning feeding from two rumen-cannulated buffalo steers fed a mixed ration. In vitro incubation was carried out with sorghum hay (200 ± 5 mg) as substrate in 100 ml calibrated glass syringes following the standard IVGP protocol. After 24 h of incubation, gas production was recorded by displacement of the piston. Methane in the gas phase and volatile fatty acids in the fermentation medium were estimated by gas chromatography. Addition of oils resulted in an increase (p<0.05) in total gas production and a decrease (p<0.05) in methane production, irrespective of type and concentration. Although the increase in gas production was similar, methane production (ml/g DM) and its concentration (%) in headspace gas were lower (p<0.01) with CSL than with SFL at corresponding doses. A linear decrease (p<0.001) in degradability of DM was evident with increasing doses of oils (0.2 ml onwards). However, these effects were more pronounced with SFL. Acetate production tended to decrease, but propionate and butyrate production increased (p<0.05) with the addition of oils, irrespective of type and dose. The ratio of acetate to propionate was reduced (p<0.01) with the addition of oils, but no difference between the oils was noted. It is concluded that both oils can reduce methane production. However, feed degradability was also affected at higher doses. Cottonseed oil at a small dose (0.1 ml/30 ml buffered rumen fluid) exerted greater inhibitory effects on methane production without impeding dry matter degradability. Further in vivo studies need to be carried out for practical application in animal rations.Keywords: buffalo, methanogenesis, rumen fermentation, vegetable oils
Procedia PDF Downloads 406677 Molecular Diagnosis of a Virus Associated with Red Tip Disease and Its Detection by Non Destructive Sensor in Pineapple (Ananas comosus)
Authors: A. K. Faizah, G. Vadamalai, S. K. Balasundram, W. L. Lim
Abstract:
Pineapple (Ananas comosus) is a common crop in tropical and subtropical areas of the world. Malaysia once ranked as one of the top three pineapple producers in the world in the 1960s and early 1970s, after Hawaii and Brazil. Moreover, the government recognized the pineapple crop as one of the priority commodities to be developed for the domestic and international markets in the National Agriculture Policy. However, the pineapple industry in Malaysia still faces numerous challenges, one of which is the management of disease and pests. Red tip disease on pineapple was first recognized about 20 years ago in a commercial pineapple stand located in Simpang Renggam, Johor, Peninsular Malaysia. Since its discovery, the causal agent of this disease has not been confirmed. The epidemiology of red tip disease is still not fully understood. Nevertheless, the disease symptoms and the spread within the field seem to point toward a viral infection. A bioassay test on nucleic acid extracted from red tip-affected pineapple was done on Nicotiana tabacum cv. Coker by rubbing the extracted sap. Localised lesions were observed 3 weeks after inoculation. Negative staining of the freshly inoculated Nicotiana tabacum cv. Coker showed the presence of membrane-bound spherical particles with an average diameter of 94.25 nm under the transmission electron microscope. The shape and size of the particles were similar to those of a tospovirus. SDS-PAGE analysis of partially purified virions from inoculated N. tabacum produced a strong and a faint protein band with molecular masses of approximately 29 kDa and 55 kDa. Partially purified virions from symptomatic pineapple leaves from the field showed bands with molecular masses of approximately 29 kDa, 39 kDa and 55 kDa. These bands may indicate the nucleocapsid protein identity of a tospovirus. Furthermore, a handheld sensor, GreenSeeker, was used to detect red tip symptoms on pineapple non-destructively based on spectral reflectance, measured as the Normalized Difference Vegetation Index (NDVI). Red tip severity was estimated and correlated with NDVI. Linear regression models were developed, calibrated and tested in order to estimate red tip disease severity based on NDVI. Results showed a strong positive relationship between red tip disease severity and NDVI (r = 0.84).Keywords: pineapple, diagnosis, virus, NDVI
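The NDVI computation and the severity-on-NDVI regression mentioned above can be sketched as follows; the reflectance readings and severity scores are synthetic placeholders chosen only to exercise the code, so the sign and strength of the fitted relationship do not reflect the study's reported result (r = 0.84):

```python
# Sketch of NDVI calculation and a simple linear regression of disease severity on NDVI.
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

# Synthetic reflectance readings and visually scored severity (%); not study data.
nir = np.array([0.62, 0.58, 0.55, 0.50, 0.47, 0.43])
red = np.array([0.08, 0.10, 0.12, 0.15, 0.18, 0.21])
severity = np.array([10, 20, 35, 50, 65, 80], dtype=float)

x = ndvi(nir, red)
slope, intercept = np.polyfit(x, severity, 1)   # calibrate a linear model
r = np.corrcoef(x, severity)[0, 1]              # correlation coefficient
print(f"severity ~ {slope:.1f} * NDVI + {intercept:.1f}, r = {r:.2f}")
```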
Procedia PDF Downloads 791676 Physico-Mechanical Properties of Wood-Plastic Composites Produced from Polyethylene Terephthalate Plastic Bottle Wastes and Sawdust of Three Tropical Hardwood Species
Authors: Amos Olajide Oluyege, Akpanobong Akpan Ekong, Emmanuel Uchechukwu Opara, Sunday Adeniyi Adedutan, Joseph Adeola Fuwape, Olawale John Olukunle
Abstract:
This study was carried out to evaluate the influence of wood species and wood/plastic ratio on the physical and mechanical properties of wood plastic composites (WPCs) produced from polyethylene terephthalate (PET) plastic bottle wastes and sawdust from three hardwood species, namely, Terminalia superba, Gmelina arborea, and Ceiba pentandra. The experimental WPCs were prepared from sawdust particle size classes of ≤ 0.5, 0.5 – 1.0, and 1.0 – 2.0 mm at wood/plastic ratios of 40:60, 50:50 and 60:40 (percentage by weight). The WPCs for each combination of study variables were prepared in 3 replicates and laid out in a randomized complete block design (RCBD). The physical properties investigated were water absorption (WA), linear expansion (LE) and thickness swelling (TS), while the mechanical properties evaluated were Modulus of Elasticity (MOE) and Modulus of Rupture (MOR). The mean values for WA, LE and TS ranged from 1.07 to 34.04, 0.11 to 1.76 and 0.11 to 4.05 %, respectively. The mean values of the three physical properties increased with an increase in wood/plastic ratio. A wood/plastic ratio of 40:60 at each particle size class generally resulted in the lowest values, while a wood/plastic ratio of 60:40 had the highest values for each of the three species. For each of the physical properties, T. superba had the lowest mean values, followed by G. arborea, while the highest values were observed in C. pentandra. The mean values for MOE and MOR ranged from 458.17 to 1875.67 and 2.64 to 18.39 N/mm2, respectively. The mean values of the two mechanical properties decreased with an increase in wood/plastic ratio. A wood/plastic ratio of 40:60 at each wood particle size class generally had the highest values, while a wood/plastic ratio of 60:40 had the lowest values for each of the three species. For each of the mechanical properties, C. pentandra had the highest mean values, followed by G. arborea, while the lowest values were observed in T. superba. There were improvements in both the physical and mechanical properties with a decrease in sawdust particle size class, with the particle size class of ≤ 0.5 mm giving the best results. The results of the analysis of variance revealed significant (P < 0.05) effects of the three study variables – wood species, sawdust particle size class and wood/plastic ratio – on all the physical and mechanical properties of the WPCs. It can be concluded from the results of this study that wood plastic composites from sawdust of particle size ≤ 0.5 mm and PET plastic bottle wastes with acceptable physical and mechanical properties are better produced using a 40:60 wood/plastic ratio, and that at this ratio, all three species are suitable for the production of wood plastic composites.Keywords: polyethylene terephthalate plastic bottle wastes, wood plastic composite, physical properties, mechanical properties
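The abstract does not state the test configurations, but the reported quantities are conventionally defined as below (assuming gravimetric water-soak tests for WA and TS, and three-point static bending of a rectangular specimen for MOR and MOE); this is a standard-definition sketch, not the authors' stated procedure:

```latex
\[
\mathrm{WA}\,(\%) = \frac{W_2 - W_1}{W_1}\times 100, \qquad
\mathrm{TS}\,(\%) = \frac{T_2 - T_1}{T_1}\times 100,
\]
\[
\mathrm{MOR} = \frac{3\,P_{\max}\,L}{2\,b\,h^{2}}, \qquad
\mathrm{MOE} = \frac{L^{3}\,m}{4\,b\,h^{3}},
\]
```

where W1 and T1 are the pre-soak weight and thickness, W2 and T2 the post-soak values, Pmax the maximum load, L the span, b and h the specimen width and depth, and m the slope of the elastic portion of the load–deflection curve.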
Procedia PDF Downloads 201675 Prevalence of ESBL E. coli Susceptibility to Oral Antibiotics in Outpatient Urine Culture: Multicentric, Analysis of Three Years Data (2019-2021)
Authors: Mazoun Nasser Rashid Al Kharusi, Nada Al Siyabi
Abstract:
Objectives: The main aim of this study is to find the rate of susceptibility of ESBL E. coli causing UTI to oral antibiotics. Secondary objectives: to determine the prevalence of ESBL E. coli in community urine samples, to identify the best empirical oral antibiotics with the lowest resistance rate for UTI, and to identify alternative oral antibiotics for testing and utilization. Methods: This study is a retrospective descriptive study of the last three years in five major hospitals in Oman (Khowla Hospital, AN'Nahdha Hospital, Rustaq Hospital, Nizwa Hospital, and Ibri Hospital), each staffed with a microbiologist. Inclusion criteria include all eligible outpatient urine culture isolates, excluding isolates from admitted patients with hospital-acquired urinary tract infections. Data was collected through the MOH database. The MOH hospitals use different types of testing, automated methods like Vitek2 as well as manual methods. The Vitek2 machine uses the principle of the fluorogenic method for organism identification and a turbidimetric method for susceptibility testing. The manual method is done by double-disc diffusion for identifying ESBL and by disc diffusion for antibiotic susceptibility testing. All laboratories follow the Clinical and Laboratory Standards Institute (CLSI) guidelines. Analysis was done with the SPSS statistical package. Results: Total urine cultures were 23048. E. coli grew in 11637 (49.6%) of the urine cultures, of which 2199 (18.8%) were confirmed as ESBL producers. As expected, the resistance rate to amoxicillin and cefuroxime is 100%. Moreover, the susceptibility of these ESBL-producing E. coli to nitrofurantoin, trimethoprim+sulfamethoxazole, ciprofloxacin and amoxicillin-clavulanate improved over the years; however, it remains low. ESBL E. coli predominated in females and in those aged 66-74 years throughout all the years. Other oral antibiotic options need to be explored and tested to expand the pool of oral antibiotics for ESBL E. coli causing UTI in the community. Conclusion: There is a high rate of ESBL E. coli in urine from the community. The high resistance rates to oral antibiotics highlight the need for alternative treatment options for UTIs caused by these bacteria. Further research is needed to identify new and effective treatments for UTIs caused by ESBL E. coli.Keywords: UTI, ESBL, oral antibiotics, E. coli, susceptibility
Procedia PDF Downloads 93674 Application of Complete Ensemble Empirical Mode Decomposition with Adaptive Noise and Multipoint Optimal Minimum Entropy Deconvolution in Railway Bearings Fault Diagnosis
Authors: Yao Cheng, Weihua Zhang
Abstract:
Although the measured vibration signal contains rich information on machine health conditions, white noise interference and the discrete harmonics coming from blades, shafts and gear mesh make the fault diagnosis of rolling element bearings difficult. In order to overcome the interference of useless signals, a new fault diagnosis method combining Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN) and Multipoint Optimal Minimum Entropy Deconvolution (MOMED) is proposed for the fault diagnosis of high-speed train bearings. Firstly, the CEEMDAN technique is applied to adaptively decompose the raw vibration signal into a series of finite intrinsic mode functions (IMFs) and a residue. Compared with Ensemble Empirical Mode Decomposition (EEMD), CEEMDAN provides an exact reconstruction of the original signal and a better spectral separation of the modes, which improves the accuracy of fault diagnosis. An effective sensitivity index based on the Pearson correlation coefficients between the IMFs and the raw signal is adopted to select sensitive IMFs that contain bearing fault information. The composite signal of the sensitive IMFs is used for further fault identification. Next, for the purpose of identifying the fault information precisely, MOMED is utilized to enhance the periodic impulses in the composite signal. As a non-iterative method, MOMED has better deconvolution performance than classical deconvolution methods such as Minimum Entropy Deconvolution (MED) and Maximum Correlated Kurtosis Deconvolution (MCKD). Third, envelope spectrum analysis is applied to detect the existence of a bearing fault. Simulated bearing fault signals with white noise and discrete harmonic interference are used to validate the effectiveness of the proposed method. Finally, the superiority of the proposed method is further demonstrated on high-speed train bearing fault datasets measured from a test rig. The analysis results indicate that the proposed method has strong practical applicability.Keywords: bearing, complete ensemble empirical mode decomposition with adaptive noise, fault diagnosis, multipoint optimal minimum entropy deconvolution
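A rough sketch of the CEEMDAN-decompose, correlate-and-select, envelope-spectrum chain described above, using the PyEMD package; the MOMED enhancement step is omitted because it is not available as a standard library routine, and the signal, correlation threshold and parameters are assumptions for illustration:

```python
# Sketch of CEEMDAN decomposition, sensitive-IMF selection by Pearson correlation,
# and envelope spectrum analysis on a synthetic bearing-like signal.
import numpy as np
from scipy.signal import hilbert
from PyEMD import CEEMDAN

fs = 10_000
t = np.arange(0, 1.0, 1 / fs)
# Synthetic impulsive fault signature buried in noise and a low-frequency harmonic.
signal = (np.sin(2 * np.pi * 35 * t)
          + 0.8 * np.sin(2 * np.pi * 2500 * t) * (np.sin(2 * np.pi * 120 * t) > 0.99)
          + 0.5 * np.random.randn(t.size))

imfs = CEEMDAN()(signal)  # adaptive decomposition into IMFs plus residue

# Keep the IMFs most correlated with the raw signal (the sensitivity-index idea).
corrs = np.array([abs(np.corrcoef(imf, signal)[0, 1]) for imf in imfs])
composite = imfs[corrs > 0.3].sum(axis=0)

# Envelope spectrum of the composite signal for fault-frequency inspection.
envelope = np.abs(hilbert(composite))
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(envelope.size, 1 / fs)
print(freqs[spectrum.argmax()], "Hz dominates the envelope spectrum")
```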
Procedia PDF Downloads 374673 35 MHz Coherent Plane Wave Compounding High Frequency Ultrasound Imaging
Authors: Chih-Chung Huang, Po-Hsun Peng
Abstract:
Ultrasound transient elastography has become a valuable tool for many clinical diagnoses, such as liver diseases and breast cancer. Pathological tissue can be distinguished by elastography because its stiffness differs from that of the surrounding normal tissue. An ultrafast frame rate of ultrasound imaging is needed for the transient elastography modality. Elastography obtained with an ultrafast system suffers from low resolution, which affects the robustness of the transient elastography. In order to overcome these problems, a coherent plane wave compounding technique has been proposed for conventional ultrasound systems operating at around 3-15 MHz. The purpose of this study is to develop a novel beamforming technique for high frequency ultrasound coherent plane-wave compounding imaging, and the simulated results will provide the standards for hardware development. Plane-wave compounding imaging fires all elements of an array transducer in one shot at different inclination angles, receives the echoes with conventional beamforming to produce a series of low-resolution images, and compounds them coherently. Simulations of plane-wave compounding images and focused transmit images were performed using Field II. All images were produced from point spread functions (PSFs) and cyst phantoms with a 64-element linear array working at 35 MHz center frequency, 55% bandwidth, and a pitch of 0.05 mm. The F-number is 1.55 in all the simulations. Simulated results of PSFs and cyst phantoms were obtained using single-angle, 17-angle and 43-angle plane-wave transmissions (the angle of each plane wave separated by 0.75 degree), as well as focused transmission. The resolution and contrast of the image improved with the number of plane-wave firing angles. The lateral resolutions for the different methods were measured by the -10 dB lateral beam width. Comparing the plane-wave compounding image and the focused transmit image, both images exhibited the same lateral resolution of 70 um when 37 angles were compounded. The lateral resolution reached 55 um when 47 angles were compounded. All the results show the potential of using high-frequency plane-wave compound imaging for realizing the elastic properties of microstructured tissues, such as the eye, skin and vessel walls, in the future.Keywords: plane wave imaging, high frequency ultrasound, elastography, beamforming
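A highly simplified sketch of the delay-and-sum geometry behind coherent plane-wave compounding for a linear array; the array parameters mirror the abstract (64 elements, 0.05 mm pitch), but the sampling rate, angle handling and RF data are assumptions for illustration, not the Field II setup used in the study:

```python
# Simplified coherent plane-wave compounding: delay-and-sum one image point per
# transmit angle, then sum the beamformed values coherently over all angles.
import numpy as np

c = 1540.0                       # speed of sound, m/s
fs = 250e6                       # RF sampling rate (assumed)
pitch = 0.05e-3                  # element pitch, m
n_elem = 64
elem_x = (np.arange(n_elem) - (n_elem - 1) / 2) * pitch

def beamform_point(rf, x, z, theta):
    """Delay-and-sum one image point (x, z) for a plane wave steered by theta."""
    t_tx = (z * np.cos(theta) + x * np.sin(theta)) / c   # transmit (plane-wave) delay
    t_rx = np.sqrt((x - elem_x) ** 2 + z ** 2) / c       # receive delays per element
    idx = np.clip(np.round((t_tx + t_rx) * fs).astype(int), 0, rf.shape[1] - 1)
    return rf[np.arange(n_elem), idx].sum()

def compound_point(rf_per_angle, x, z, angles):
    """Coherently sum the beamformed values over all transmit angles."""
    return sum(beamform_point(rf, x, z, th) for rf, th in zip(rf_per_angle, angles))

# Usage with synthetic RF data: 43 angles spaced 0.75 degrees apart.
angles = np.deg2rad(np.arange(-21, 22) * 0.75)
rf_per_angle = [np.random.randn(n_elem, 4096) for _ in angles]
print(compound_point(rf_per_angle, x=0.0, z=5e-3, angles=angles))
```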
Procedia PDF Downloads 538672 Experimental Evaluation of Contact Interface Stiffness and Damping to Sustain Transients and Resonances
Authors: Krystof Kryniski, Asa Kassman Rudolphi, Su Zhao, Per Lindholm
Abstract:
ABB offers a range of turbochargers for diesel and gas engines from 500 kW to 80+ MW. These operate on ships and in power stations, generator sets, diesel locomotives and large off-highway vehicles. The units need to sustain harsh operating conditions, exposure to high speeds and temperatures, and varying loads. They are expected to work at over-critical speeds, effectively damping any transients and encountered resonances. Components are often connected via friction joints. Designs of these interfaces need to account for surface roughness, texture, pre-stress, etc., to withstand fretting fatigue. Experience from the field contributed valuable input on component performance in harsh sea environments and on exposure to high temperature, speed and load conditions. A study of the tribological interactions of oxide formations provided insight into the dynamic activities occurring between the surfaces. Oxidation was recognized as the dominant wear factor. Microscopic inspections of fatigue cracks on a turbine indicated insufficient damping and unrestrained structural stress leading to catastrophic failure if not prevented in time. The contact interface exhibits a strongly non-linear mechanism, and a piecewise approach was used to describe it. A set of samples representing combinations of materials, texture, surface and heat treatment was tested on a friction rig under a range of loads, frequencies and excitation amplitudes. A numerical technique was developed to extract the friction coefficient, tangential contact stiffness and damping. The vast amount of experimental data was processed with the multi-harmonic balance (MHB) method to categorize the components subjected to the periodic excitations. At the pre-defined excitation level, both force and displacement formed semi-elliptical hysteresis curves having the same area and secant as the actual ones. By cross-correlating the in-phase and out-of-phase terms, respectively, it was possible to separate the elastic energy from dissipation and derive the stiffness and damping characteristics.Keywords: contact interface, fatigue, rotor-dynamics, torsional resonances
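As a rough illustration of separating in-phase (elastic) and out-of-phase (dissipative) first-harmonic terms from a measured force–displacement hysteresis loop — a single-harmonic simplification of the MHB idea described above, with entirely synthetic numbers rather than the rig data:

```python
# First-harmonic extraction of equivalent contact stiffness and damping from a
# synthetic hysteresis loop: project force onto the in-phase and quadrature
# components of the displacement, then convert to k, c and energy loss per cycle.
import numpy as np

omega = 2 * np.pi * 50.0                       # excitation frequency, rad/s
t = np.linspace(0, 1 / 50.0, 2048, endpoint=False)
X0 = 1e-5                                      # displacement amplitude, m
x = X0 * np.sin(omega * t)
# Synthetic measured force: stiffness term + viscous damping term + noise.
f = 2e7 * x + 150.0 * X0 * omega * np.cos(omega * t) + 0.05 * np.random.randn(t.size)

f_in = 2 * np.mean(f * np.sin(omega * t))      # in-phase (elastic) force amplitude
f_out = 2 * np.mean(f * np.cos(omega * t))     # out-of-phase (dissipative) amplitude

k_eq = f_in / X0                               # tangential contact stiffness estimate
c_eq = f_out / (omega * X0)                    # equivalent viscous damping estimate
energy_loss = np.pi * f_out * X0               # area of the hysteresis loop per cycle
print(f"k = {k_eq:.3e} N/m, c = {c_eq:.3e} N*s/m, dW = {energy_loss:.3e} J")
```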
Procedia PDF Downloads 375