Search results for: portfolio optimization task
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5278

118 Force Sensing Resistor Testing of Hand Forces and Grasps during Daily Functional Activities in the Covid-19 Pandemic

Authors: Monique M. Keller, Roline Barnes, Corlia Brandt

Abstract:

Introduction: Scientific evidence on hand forces and the types of grasps used during daily tasks is lacking, leaving a gap in the fields of hand rehabilitation and robotics. Measuring the grasp forces and grasp types produced by the individual fingers during daily functional tasks is valuable for informing and grading rehabilitation practices for second to fifth metacarpal fractures with robust scientific evidence. Feix et al. (2016) conducted the most extensive and complete grasp study to date, which resulted in the GRASP taxonomy. The Covid-19 pandemic changed data collection across the globe, and safety precautions in research are essential to ensure the health of participants and researchers. Methodology: A cross-sectional study investigated the hand forces of six healthy adult pilot participants, aged 20 to 59 years, during 105 tasks. The tasks were categorized into five sections: personal care, transport and moving around, home environment and inside, gardening and outside, and office. The predominant grasp of each task was identified, guided by the GRASP taxonomy. Grasp forces were measured with 13 mm force-sensing resistors (FSRs) glued onto a glove and attached to each individual finger of the dominant and non-dominant hands. Testing equipment included Flexiforce 13 mm (0.5") circular FSRs calibrated prior to testing, 10k 1/4 W resistors, an Arduino Pro Mini 5.0 V compatible board, an ESP-01 kit, an Arduino Uno R3 compatible board, a 1 m USB A-B cable, an FTDI FT232 mini USB-to-serial adapter, SIL 40 inline connectors, ribbon cable with male header pins (female-to-female and male-to-female), two gloves, glue to attach the FSRs to the gloves, and the Arduino software programme downloaded onto a laptop. Grip strength measurements with a Jamar dynamometer were taken prior to testing and after every 25 daily tasks to avoid fatigue and ensure reliability in testing.
Covid-19 precautions included wearing face masks at all times, screening questionnaires, temperature checks, wearing surgical gloves before putting on the testing gloves, and 1.5-metre-long wires attaching the FSRs to the Arduino to maintain social distance. Findings: The predominant grasps observed during the 105 tasks were adducted thumb (17), lateral tripod (10), prismatic three fingers (12), small diameter (9), prismatic two fingers (9), medium wrap (7), fixed hook (5), sphere four fingers (4), palmar (4), parallel extension (4), index finger extension (3), distal (3), power sphere (2), tripod (2), quadpod (2), prismatic four fingers (2), lateral (2), large diameter (2), ventral (2), precision sphere (1), palmar pinch (1), light tool (1), inferior pincher (1), and writing tripod (1). The ranges of forces applied per category were: personal care (1-25 N), transport and moving around (1-9 N), home environment and inside (1-41 N), gardening and outside (1-26.5 N), and office (1-20 N). Conclusion: Scientific measurement of finger forces, with careful consideration of the types of grasps used in daily tasks, should guide rehabilitation practices and robotic design to ensure a return to the full participation of the individual in the community.
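As a rough illustration of the measurement chain described above (not code from the study), the following Python sketch converts a 10-bit Arduino ADC reading from the FSR/10k voltage divider into an estimated force. The divider orientation and the calibration constant k are assumptions; in practice k would be fitted during the pre-test calibration against the Jamar dynamometer readings.

```python
VCC = 5.0         # Arduino Pro Mini supply voltage (V)
R_FIXED = 10_000  # fixed 10k divider resistor (ohms)

def adc_to_voltage(adc_count: int) -> float:
    """Convert a 10-bit ADC reading (0-1023) to volts."""
    return adc_count * VCC / 1023.0

def fsr_resistance(v_out: float) -> float:
    """Solve the voltage divider (VCC - FSR - node - 10k - GND) for the
    FSR resistance in ohms, given the voltage measured across the fixed
    resistor."""
    if v_out <= 0:
        return float("inf")  # no pressure: FSR is effectively open
    return R_FIXED * (VCC - v_out) / v_out

def force_newtons(adc_count: int, k: float = 5e5) -> float:
    """Map FSR conductance to force with a hypothetical linear calibration
    F = k / R. The constant k is a placeholder, not a value from the study."""
    r = fsr_resistance(adc_to_voltage(adc_count))
    return 0.0 if r == float("inf") else k / r
```

The assumed divider orientation means higher force (lower FSR resistance) yields a higher ADC reading; a real deployment would also need per-sensor calibration curves, since FSR response is typically non-linear.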

Keywords: activities of daily living (ADL), Covid-19, force-sensing resistors, grasps, hand forces

Procedia PDF Downloads 170
117 An E-Maintenance IoT Sensor Node Designed for Fleets of Diverse Heavy-Duty Vehicles

Authors: George Charkoftakis, Panagiotis Liosatos, Nicolas-Alexander Tatlas, Dimitrios Goustouridis, Stelios M. Potirakis

Abstract:

E-maintenance is a relatively new concept, generally referring to maintenance management by monitoring assets over the Internet. One of the key links in the chain of an e-maintenance system is data acquisition and transmission. Specifically, for the case of a fleet of heavy-duty vehicles, where the main challenge is the diversity of the vehicles and of the vehicle-embedded self-diagnostic/reporting technologies, the design of the data acquisition and transmission unit is a demanding task. This becomes clear if one takes into account that a heavy-vehicle fleet may range from vehicles with only a limited number of analog sensors monitored by dashboard light indicators and gauges to vehicles with a plethora of sensors monitored by a vehicle computer producing digital reports. The present work proposes an adaptable Internet of Things (IoT) sensor node that is capable of addressing this challenge. The proposed sensor node architecture is based on the increasingly popular single-board computer with expansion boards approach. In the proposed solution, the expansion boards undertake the tasks of position identification by means of a global navigation satellite system (GNSS) receiver, cellular connectivity by means of a 3G/long-term evolution (LTE) modem, connectivity to on-board diagnostics (OBD), and connectivity to analog and digital sensors by means of a novel expansion board design. Specifically, the latter provides eight analog plus three digital sensor channels, as well as one on-board temperature/relative humidity sensor. The device offers a number of adaptability features based on appropriate zero-ohm resistor placement and appropriate value selection for a limited number of passive components.
For example, although in the standard configuration four analog voltage channels with constant voltage sources for the power supply of the corresponding sensors are available, up to two of these voltage channels can be converted to power the connected sensors by means of corresponding constant current source circuits, and all parameters of the analog sensor power supply and matching circuits are fully configurable, offering the advantage of covering a wide variety of industrial sensors. Note that a key feature of the proposed sensor node, ensuring the reliable operation of the connected sensors, is the appropriate supply of external power to the connected sensors and their proper matching to the IoT sensor node. In standard mode, the IoT sensor node communicates with the data center through 3G/LTE, transmitting all digital/digitized sensor data, the IoT device identity, and the position. Moreover, the proposed IoT sensor node offers WiFi connectivity to mobile devices (smartphones, tablets) equipped with an appropriate application for the manual registration of vehicle- and driver-specific information; these data are also forwarded to the data center. All control and communication tasks of the IoT sensor node are performed by dedicated firmware, programmed in a high-level language (Python) on top of a modern operating system (Linux). Acknowledgment: This research has been co-financed by the European Union and Greek national funds through the Operational Program Competitiveness, Entrepreneurship, and Innovation, under the call RESEARCH—CREATE—INNOVATE (project code: T1EDK-01359, IntelligentLogger).
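A minimal sketch of the telemetry message such a node might transmit is shown below. The field names and schema are illustrative assumptions, not the IntelligentLogger project's actual protocol; they simply mirror the channel counts and data items listed above.

```python
import json
import time

def build_payload(device_id, lat, lon, analog, digital, temp_c, rh):
    """Assemble one telemetry message: device identity, GNSS position,
    eight analog channels, three digital channels, and the on-board
    temperature/relative-humidity reading. Schema is hypothetical."""
    assert len(analog) == 8 and len(digital) == 3
    return json.dumps({
        "device_id": device_id,
        "timestamp": int(time.time()),
        "position": {"lat": lat, "lon": lon},
        "analog": analog,          # digitized analog channel values
        "digital": digital,        # digital channel states (0/1)
        "env": {"temp_c": temp_c, "rh_pct": rh},
    })
```

The manually registered vehicle- and driver-specific information collected over WiFi could be forwarded in a similar message with its own record type.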

Keywords: IoT sensor nodes, e-maintenance, single-board computers, sensor expansion boards, on-board diagnostics

Procedia PDF Downloads 131
116 Application of the Standard Deviation in Regulating Design Variation of Urban Solutions Generated through Evolutionary Computation

Authors: Mohammed Makki, Milad Showkatbakhsh, Aiman Tabony

Abstract:

Computational applications of natural evolutionary processes as problem-solving tools have been well established since the mid-20th century. However, their application within architecture and design has only gained ground in recent years, with an increasing number of academics and professionals in the field electing to utilize evolutionary computation to address problems comprising multiple conflicting objectives with no single optimal solution. Recent advances in computer science, and their consequent constructive influence on the architectural discourse, have led to the emergence of multiple algorithmic processes capable of simulating the evolutionary process in nature within an efficient timescale. Many of the developed processes for generating a population of candidate solutions to a design problem through an evolutionary stochastic search are driven by both environmental and architectural parameters. These methods allow conflicting objectives to be simultaneously, independently, and objectively optimized. This is an essential approach for design problems whose final product must address the demands of a multitude of individuals with various requirements. However, one of the main challenges encountered in applying an evolutionary process as a design tool is the simulation's ability to maintain variation amongst design solutions in the population while simultaneously increasing their fitness. This is most commonly known as the 'golden rule' of balancing exploration and exploitation over time; the difficulty of achieving this balance lies in the tendency for either variation or optimization to be favored as the simulation progresses.
In such cases, the generated population of candidate solutions has either optimized very early in the simulation, or has continued to maintain high levels of variation from which an optimal set could not be discerned, thus providing the user with a solution set that has not evolved efficiently towards the objectives outlined in the problem at hand. As such, the experiments presented in this paper seek to achieve the 'golden rule' by incorporating a mathematical fitness criterion in the development of an urban tissue composed of the superblock as its primary architectural element. The mathematical value investigated in the experiments is the standard deviation. Traditionally, the standard deviation has been used as an analytical value rather than a generative one, conventionally measuring the distribution of variation within a population by calculating the degree to which the population deviates from the mean. A lower standard deviation indicates that the majority of the population is clustered around the mean and thus that variation within the population is limited, while a higher standard deviation reflects greater variation within the population and a lack of convergence towards an optimal solution. The results presented aim to clarify the extent to which utilizing the standard deviation as a fitness criterion can be advantageous for generating fitter individuals in a more efficient timeframe compared to conventional simulations that only incorporate architectural and environmental parameters.
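The role of the standard deviation as a generative fitness value can be illustrated with a short Python sketch. The single-feature "superblock height" populations below are hypothetical stand-ins for real candidate solutions; the point is only that the same statistic that analysts use descriptively can be returned as a fitness score to regulate variation during the search.

```python
import random
import statistics

def std_dev_fitness(population, feature):
    """Standard deviation of one design feature across the population.
    Used as an extra fitness criterion alongside the architectural and
    environmental objectives: rewarding or constraining this value lets
    the search regulate variation explicitly. `feature` extracts a
    numeric trait from a candidate solution."""
    values = [feature(ind) for ind in population]
    return statistics.pstdev(values)  # population standard deviation

# Hypothetical example: candidates characterized by one height value.
random.seed(1)
converged = [10.0 + random.gauss(0, 0.1) for _ in range(50)]  # low variation
diverse = [10.0 + random.gauss(0, 3.0) for _ in range(50)]    # high variation
```

A simulation could then penalize populations whose feature-wise standard deviation drifts outside a target band, steering between premature convergence and unproductive scatter.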

Keywords: architecture, computation, evolution, standard deviation, urban

Procedia PDF Downloads 111
115 Optimization of Multi-Disciplinary Expertise and Resource for End-Stage Renal Failure (ESRF) Patient Care

Authors: Mohamed Naser Zainol, P. P. Angeline Song

Abstract:

Over the years, the profile of end-stage renal patients placed under The National Kidney Foundation Singapore (NKFS) dialysis program has evolved, with a gradual increase in the number of patients with behavior-related issues. With these challenging profiles, social workers and counsellors are often expected to oversee behavior management through referrals from their partnering colleagues. Due to the segregation of tasks usually found in many hospital-based multi-disciplinary settings, social workers' and counsellors' interventions are often seen as an endpoint, limiting the involvement of other stakeholders who could otherwise be crucial in managing such patients. While patients' contact with local hospitals often ends in eventual discharge, NKFS patients are mostly long term. It is noteworthy that these patients are regularly seen at NKFS by a team of professionals that includes doctors, nurses, dietitians, and exercise specialists. The dynamism of these relationships presents an opportunity for any of these professionals to take ownership of their potential in leading interventions that can be helpful to patients. As such, it is important to have a framework that incorporates the strengths of these professionals and channels empowerment across the multi-disciplinary team in working towards holistic patient care. This paper proposes a new framework for NKFS's multi-disciplinary team, in which group synergy and dynamics are used to encourage ownership and promote empowerment. The social worker and counsellor use group work skills and their knowledge of the team members' strengths to generate constructive solutions centered on the patient's growth.
Using key ideas from Karl Tomm's interpersonal communication approach, the Coordinated Management of Meaning, and Motivational Interviewing, the social worker and counsellor, through a series of guided meetings with other colleagues, facilitate the transmission of understanding, the sharing of responsibility, and the tapping of team resources for patient care. As a result, the patient experiences a personal and concerted approach and begins to move in a direction that is helpful for him or her. Using seven case studies of identified patients with behavioral issues, the social worker and counsellor applied this framework over a period of six months. Each patient's overall improvement through interventions under this framework is recorded using an AB single-case design, with a baseline measured three months before referral. Interviews with patients and their families, as well as with colleagues who are not part of the multi-disciplinary team, are solicited at the mid-point and end-point to gather their experiences of the patient's progress as a by-product of this framework. Expert interviews will be conducted with each member of the multi-disciplinary team to study their observations and experience in using this new framework. This exploratory framework thus aims to identify its inherent usefulness in managing patients with behavior-related issues and to provide indicators for improving aspects of the framework when applied to a larger population.

Keywords: behavior management, end-stage renal failure, satellite dialysis, multi-disciplinary team

Procedia PDF Downloads 125
114 Image Segmentation with Deep Learning of Prostate Cancer Bone Metastases on Computed Tomography

Authors: Joseph M. Rich, Vinay A. Duddalwar, Assad A. Oberai

Abstract:

Prostate adenocarcinoma is the most common cancer in males, and bone is the commonest site of metastatic prostate carcinoma (mPC). Treatment monitoring is based on the evaluation and characterization of lesions on multiple imaging studies, including computed tomography (CT). Monitoring of the osseous disease burden, including follow-up of lesions and identification and characterization of new lesions, is a laborious task for radiologists. Deep learning algorithms are increasingly used to perform tasks such as identification and segmentation of osseous metastatic disease and to provide accurate information regarding metastatic burden. Here, nnUNet was used to produce a model which can segment vertebral bone metastatic lesions of prostate adenocarcinoma on CT images. nnUNet is an open-source Python package that adds optimizations to the deep learning-based UNet architecture, but it has not been extensively combined with transfer learning techniques due to the absence of readily available functionality for this. The IRB-approved study data set includes imaging studies from patients with mPC who were enrolled in clinical trials at the University of Southern California (USC) Health Science Campus and the Los Angeles County (LAC)/USC medical center. Manual segmentation of metastatic lesions was completed by an expert radiologist, Dr. Vinay Duddalwar (20+ years of experience in radiology and oncologic imaging), to serve as ground truth for the automated segmentation. Despite nnUNet's success on some medical segmentation tasks, it produced an average Dice Similarity Coefficient (DSC) of only 0.31 on the USC dataset. DSC results fell in a bimodal distribution, with most scores falling either above 0.66 (reasonably accurate) or at 0 (no lesion detected). Applying more aggressive data augmentation techniques dropped the DSC to 0.15, and reducing the number of epochs reduced the DSC to below 0.1.
Datasets have been identified for transfer learning, a choice which involves balancing the size and the similarity of the dataset. Identified datasets include the pancreas data from the Medical Segmentation Decathlon, the Pelvic Reference Data, and CT volumes with multiple organ segmentations (CT-ORG). The challenges of producing an accurate model from the USC dataset include the small dataset size (115 images), 2D data (nnUNet generally performs better on 3D data), and the limited amount of public data capturing annotated CT images of bone lesions. Optimizations and improvements will be made by applying transfer learning and generative methods, including incorporating generative adversarial networks and diffusion models, in order to augment the dataset. Performance with different libraries, including MONAI and custom architectures in PyTorch, will be compared. In the future, molecular correlations will be tracked against radiologic features for the purpose of multimodal composite biomarker identification. Once validated, these models will be incorporated into evaluation workflows to optimize radiologist evaluation. Our work demonstrates the challenges of applying automated image segmentation to small medical datasets and lays a foundation for techniques to improve performance. As machine learning models become increasingly incorporated into the workflow of radiologists, these findings will help improve the speed and accuracy of vertebral metastatic lesion detection.
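The Dice Similarity Coefficient used to score the segmentations above is simple to compute. This minimal sketch represents each mask as a set of voxel coordinates rather than nnUNet's actual array-based implementation; the empty-prediction case illustrates the "no lesion detected" mode of the bimodal score distribution.

```python
def dice_coefficient(pred, truth):
    """Dice Similarity Coefficient between two segmentations given as
    iterables of voxel coordinates: DSC = 2|A ∩ B| / (|A| + |B|).
    Returns 1.0 when both masks are empty (a common convention)."""
    a, b = set(pred), set(truth)
    if not a and not b:
        return 1.0
    return 2.0 * len(a & b) / (len(a) + len(b))
```

A model that outputs nothing scores 0 against any non-empty ground truth, while a partially overlapping mask is rewarded in proportion to twice the intersection over the total mask size.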

Keywords: deep learning, image segmentation, medicine, nnUNet, prostate carcinoma, radiomics

Procedia PDF Downloads 73
113 An Approach on Intelligent Tolerancing of Car Body Parts Based on Historical Measurement Data

Authors: Kai Warsoenke, Maik Mackiewicz

Abstract:

To achieve a high quality of assembled car body structures, tolerancing is used to ensure the geometric accuracy of the single car body parts. There are two main techniques for determining the required tolerances. The first is tolerance analysis, which describes the influence of individually toleranced input values on a required target value. The second is tolerance synthesis, which determines the allocation of individual tolerances to achieve a target value. Both techniques are based on classical statistical methods, which assume certain probability distributions. To ensure competitiveness in both saturated and dynamic markets, production processes in vehicle manufacturing must be flexible and efficient. The dimensional specifications selected for the individual body components and the resulting assemblies have a major influence on the quality of the process, for example in the manufacturing of forming tools as operating equipment or at the higher level of car body assembly. As part of metrological process monitoring, manufactured individual parts and assemblies are measured and the results are stored in databases. They serve as information for the temporary adjustment of the production processes and are interpreted by experts in order to derive suitable adjustment measures. In the production of forming tools, this means that time-consuming and costly changes to the tool surface have to be made, while in the body shop, uncertainties that are difficult to control result in cost-intensive rework. The stored measurement results are not yet used to intelligently design tolerances in future processes or to support temporary decisions based on real-world geometric data. They offer the potential to extend tolerancing methods through data analysis and machine learning models.
The purpose of this paper is to examine real-world measurement data from individual car body components, as well as assemblies, in order to develop an approach for using the data both in short-term actions and in future projects. To this end, the measurement data are first analyzed descriptively in order to characterize their behavior and to determine possible correlations. Subsequently, a database suitable for developing machine learning models is created. The objective is an intelligent way to determine the position and number of measurement points as well as the local tolerance range. For this purpose, a number of different model types are compared and evaluated. The best-performing models are used to optimize equally distributed measuring points on unknown car body part geometries and to assign tolerance ranges to them. This investigation is still in progress. However, some areas of the car body parts behave more sensitively than the part as a whole, indicating that intelligent tolerancing is useful here in order to design and control preceding and succeeding processes more efficiently.
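As a point of reference for the classical statistical baseline that the learned models above set out to extend, a local tolerance band for one measurement point can be derived from its historical deviations as mean ± k·sigma. The deviation values and the k = 3 choice below are illustrative, not data from the paper.

```python
import statistics

def local_tolerance(history, k=3.0):
    """Classical statistical tolerance band for one measurement point:
    mean ± k standard deviations of its historical deviations from
    nominal (values in mm). Assumes an approximately normal
    distribution, the kind of assumption the paper's data-driven
    approach aims to move beyond."""
    mu = statistics.mean(history)
    sigma = statistics.stdev(history)
    return (mu - k * sigma, mu + k * sigma)
```

A learned model would instead predict the band (and the measurement-point placement) from part geometry and historical behavior, capturing the locally sensitive regions the abstract mentions.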

Keywords: automotive production, machine learning, process optimization, smart tolerancing

Procedia PDF Downloads 92
112 Ternary Organic Blend for Semitransparent Solar Cells with Enhanced Short Circuit Current Density

Authors: Mohammed Makha, Jakob Heier, Frank Nüesch, Roland Hany

Abstract:

Organic solar cells (OSCs) have made rapid progress and currently achieve power conversion efficiencies (PCE) of over 10%. OSCs have several merits over other direct light-to-electricity generating cells and can be processed at low cost from solution on flexible substrates over large areas. Moreover, combining organic semiconductors with transparent and conductive electrodes allows for the fabrication of semitransparent OSCs (SM-OSCs). For SM-OSCs, the challenge is to achieve a high average visible transmission (AVT) while maintaining a high short-circuit current density (Jsc). Typically, the Jsc of SM-OSCs is smaller than when using an opaque metal top electrode, because light not absorbed during the first transit through the active layer and the transparent electrode is forward-transmitted out of the device. Recently, OSCs using a ternary blend of organic materials have received attention; this strategy is pursued to extend light harvesting over the visible range. However, it is a general challenge to manipulate the performance of ternary OSCs in a predictable way, because many key factors affect charge generation and extraction in ternary solar cells. Consequently, device performance is affected by the compatibility between the blend components and the resulting film morphology, the energy levels and bandgaps, and the concentration of the guest material and its location in the active layer. In this work, we report on a solvent-free lamination process for the fabrication of efficient, semitransparent ternary blend OSCs. The ternary blend was composed of PC70BM and the electron donors PBDTTT-C and a near-infrared (NIR) absorbing cyanine dye (Cy7T). Using an opaque metal top electrode, a PCE of 6% was achieved for the optimized binary polymer:fullerene blend (AVT = 56%). However, the PCE dropped to ~2% when the active film thickness was decreased (to 30 nm) to increase the AVT value (75%).
We therefore resorted to the ternary blend and measured, for non-transparent cells, a PCE of 5.5% when using an active polymer:dye:fullerene (0.7:0.3:1.5 wt:wt:wt) film of 95 nm thickness (AVT = 65% when omitting the top electrode). In a second step, the optimized ternary blend was used for the fabrication of SM-OSCs. We used a plastic/metal substrate with a light transmission of over 90% as a transparent electrode, applied via a lamination process. The interfacial layer between the active layer and the top electrode was optimized in order to improve charge collection and the contact with the laminated top electrode. We demonstrated a PCE of 3% with an AVT of 51%. The parameter space for ternary OSCs is large, and it is difficult to find the best concentration ratios by trial and error. A rational approach to device optimization is the construction of a ternary blend phase diagram. We discuss our attempts to construct such a phase diagram for the PBDTTT-C:Cy7T:PC70BM system via a combination of Cy7T-selective solvents and atomic force microscopy. From the ternary diagram, morphologies suitable for efficient light-to-current conversion can be identified. We compare experimental OSC data with these predictions.

Keywords: organic photovoltaics, ternary phase diagram, ternary organic solar cells, transparent solar cell, lamination

Procedia PDF Downloads 244
111 Detection of High Fructose Corn Syrup in Honey by Near Infrared Spectroscopy and Chemometrics

Authors: Mercedes Bertotto, Marcelo Bello, Hector Goicoechea, Veronica Fusca

Abstract:

The National Service of Agri-Food Health and Quality (SENASA) controls honey to detect contamination by synthetic or natural chemical substances and establishes and controls the traceability of the product. The utility of near-infrared spectroscopy for the detection of adulteration of honey with high fructose corn syrup (HFCS) was investigated. First, a pool of different authentic artisanal Argentinian honeys was prepared to cover as much heterogeneity as possible. Then, mixtures were prepared by adding different concentrations of HFCS to samples of the honey pool. 237 samples were used; 108 of them were authentic honey and 129 were honey adulterated with 1-10% HFCS. The samples were stored unrefrigerated from the time of production until scanning and were not filtered after receipt in the laboratory. Immediately prior to spectral collection, the honey was incubated at 40°C overnight to dissolve any crystalline material, manually stirred to achieve homogeneity, and adjusted to a standard solids content (70° Brix) with distilled water. Adulterant solutions were also adjusted to 70° Brix. Samples were measured by NIR spectroscopy in the range of 650 to 7000 cm⁻¹ using the specular reflectance technique, with a lens aperture of 150 mm. Pretreatment of the spectra was performed by Standard Normal Variate (SNV). The ant colony optimization genetic algorithm sample selection (ACOGASS) graphical interface, implemented in MATLAB version 5.3, was used to select the variables with the greatest discriminating power. The data set was divided into a validation set and a calibration set using the Kennard-Stone (KS) algorithm. A method combining Potential Functions (PF) with Partial Least Squares Discriminant Analysis (PLS-DA) was chosen.
Different estimators of the predictive capacity of the model were compared; these were obtained using a decreasing number of groups, which implies more demanding validation conditions. The optimal number of latent variables was selected as the number associated with the minimum error and the smallest number of unassigned samples. Once the optimal number of latent variables was defined, the model was applied to the training samples, and the calibrated model was then used to study the validation samples. The calibrated model combining Potential Functions with PLS-DA can be considered reliable and stable, since its performance on future samples is expected to be comparable to that achieved on the training samples. By use of Potential Functions (PF) and Partial Least Squares Discriminant Analysis (PLS-DA) classification, authentic honey and honey adulterated with HFCS could be distinguished with a correct classification rate of 97.9%. The results showed that NIR spectroscopy in combination with the PF and PLS-DA methods can be a simple, fast, and low-cost technique for the detection of HFCS in honey with high sensitivity and discriminating power.
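The SNV pretreatment step applied above is simple enough to sketch directly: each spectrum is centered on its own mean and scaled by its own standard deviation, which removes multiplicative scatter differences between samples before variable selection and classification.

```python
import statistics

def snv(spectrum):
    """Standard Normal Variate pretreatment: transform one spectrum so
    its absorbance values have zero mean and unit standard deviation.
    Applied per spectrum, not per wavelength."""
    mu = statistics.mean(spectrum)
    sigma = statistics.stdev(spectrum)
    return [(x - mu) / sigma for x in spectrum]
```

After SNV, every spectrum sits on a common scale, so the discriminant model responds to band shape (chemical information) rather than baseline offset or path-length effects.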

Keywords: adulteration, multivariate analysis, potential functions, regression

Procedia PDF Downloads 104
110 Rhizobium leguminosarum: Selecting Strain and Exploring Delivery Systems for White Clover

Authors: Laura Villamizar, David Wright, Claudia Baena, Marie Foxwell, Maureen O'Callaghan

Abstract:

Leguminous crops can be self-sufficient in their nitrogen requirements when their roots are nodulated with an effective Rhizobium strain; for this reason, seed or soil inoculation is practiced worldwide to ensure nodulation and nitrogen fixation in grain and forage legumes. The most widely used method of applying commercially available inoculants is peat cultures coated onto seeds prior to sowing. In general, rhizobia survive well in peat, but some species die rapidly after inoculation onto seeds. The development of improved formulation methodology is essential to achieve extended persistence of rhizobia on seeds and improved efficacy. Formulations can be solid or liquid. The most popular solid formulations, or delivery systems, are wettable powders (WP), water-dispersible granules (WG), and granules (DG); liquid formulations are generally suspension concentrates (SC) or emulsifiable concentrates (EC). In New Zealand, R. leguminosarum bv. trifolii strain TA1 has been used as a commercial inoculant for white clover over wide areas for many years. Seed inoculation is carried out by mixing the seeds with inoculated peat, adherents, and lime, but rhizobial populations on stored seeds decline over several weeks due to a number of factors, including desiccation and antibacterial compounds produced by the seeds. In order to develop a more stable and suitable delivery system for incorporating rhizobia into pastures, two strains of R. leguminosarum (TA1 and CC275e) and several formulations and processes were explored (peat granules, self-sticky peat for seed coating, emulsions, and a powder containing spray-dried microcapsules). Emulsions prepared with fresh broth of strain TA1 were very unstable in storage and after seed inoculation. Formulations in which inoculated peat was used as the active ingredient were significantly more stable than those prepared with fresh broth.
The strain CC275e was more tolerant of the stress conditions generated during formulation and seed storage. Peat granules and peat-inoculated seeds using strain CC275e maintained acceptable loadings of 10⁸ CFU/g of granules and 10⁵ CFU/g of seeds, respectively, during six months of storage at room temperature. Strain CC275e inoculated on peat was also microencapsulated with a natural biopolymer by spray drying; after operational conditions were optimized, microparticles containing 10⁷ CFU/g with a mean particle size between 10 and 30 micrometres were obtained. Survival of rhizobia during storage of the microcapsules is being assessed. The development of a stable product depends on selecting an active ingredient (microorganism) robust enough to tolerate the adverse conditions generated during formulation, storage, and commercialization, and after its use in the field. However, the design and development of an adequate formulation, with compatible ingredients, an optimized formulation process, and an appropriate delivery system, is possibly the best tool to overcome the poor survival of rhizobia and provide farmers with better-quality inoculants.

Keywords: formulation, Rhizobium leguminosarum, storage stability, white clover

Procedia PDF Downloads 132
109 Induction Machine Design Method for Aerospace Starter/Generator Applications and Parametric FE Analysis

Authors: Wang Shuai, Su Rong, K. J.Tseng, V. Viswanathan, S. Ramakrishna

Abstract:

The More-Electric-Aircraft concept in the aircraft industry places an increasing demand on embedded starter/generators (ESG). The high-speed, high-temperature environment within an engine poses great challenges to the operation of such machines. In view of these challenges, squirrel cage induction machines (SCIM) show advantages due to their simple rotor structure, absence of temperature-sensitive components, low torque ripple, etc. The tight operating constraints arising from typical ESG applications, together with the detailed operating principles of SCIMs, have been exploited to derive a mathematical interpretation of the ESG-SCIM design process. The resulting non-linear mathematical treatment yielded a unique solution to the SCIM design problem for each configuration of pole-pair number p, slots/pole/phase q, and conductors/slot zq, easily implemented via loop patterns. It was also found that not all configurations led to feasible solutions, and the corresponding observations are elaborated. The developed mathematical procedures also proved to be an effective framework for optimization among electromagnetic, thermal, and mechanical aspects by allocating corresponding degree-of-freedom variables. Detailed 3D FEM analysis was conducted to validate the resulting machine performance against the design specifications. To obtain higher power ratings, electrical machines often have to increase the slot areas to accommodate more windings. Since the space available for embedding such machines inside an engine is usually short in length, an axial air gap arrangement appears more appealing than its radial-gap counterpart. The approach described above was adopted in case studies designing series of axial-flux induction machines (AFIMs) and radial-flux induction machines (RFIMs) with increasing power ratings, with the following observations. Under the strict rotor diameter limitation, the AFIM extended axially to gain the increased slot areas, while the RFIM expanded radially at the same axial length.
Beyond certain power ratings, the AFIM led to a long cylindrical geometry, while the RFIM topology retained the desired short disk shape. Besides these different dimension growth patterns, AFIMs and RFIMs also exhibited dissimilar performance degradation in power factor, torque ripple, and rated slip as power ratings increased. Parametric response curves were plotted to better illustrate these influences of increased power ratings. The case studies may provide a basic guideline to assist potential users in deciding between AFIM and RFIM for relevant applications.
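The loop-pattern sweep over (p, q, zq) configurations described above can be sketched as follows. This is an illustrative sketch only, not the authors' code: the `solve_design` function and its feasibility cutoff are hypothetical stand-ins for the paper's non-linear design solution, included just to show the enumeration structure.

```python
# Illustrative sweep of SCIM design configurations (p, q, zq) via nested
# loops, keeping only those for which a (hypothetical) design solver
# returns a feasible solution -- the paper notes not all configurations do.

def solve_design(p, q, zq):
    """Hypothetical stand-in for the non-linear design solution.
    Returns a dict of derived machine parameters, or None if infeasible."""
    if p * q * zq > 60:  # placeholder infeasibility rule, for illustration only
        return None
    return {"p": p, "q": q, "zq": zq, "slots": 6 * p * q}

feasible = []
for p in range(1, 5):            # pole pair number
    for q in range(1, 4):        # slots per pole per phase
        for zq in range(2, 12, 2):  # conductors per slot
            design = solve_design(p, q, zq)
            if design is not None:
                feasible.append(design)

print(len(feasible), "feasible configurations found")
```

Each feasible configuration would then be scored against the electromagnetic, thermal, and mechanical criteria the abstract mentions.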

Keywords: axial flux induction machine, electrical starter/generator, finite element analysis, squirrel cage induction machine

Procedia PDF Downloads 436
108 Hardware Implementation for the Contact Force Reconstruction in Tactile Sensor Arrays

Authors: María-Luisa Pinto-Salamanca, Wilson-Javier Pérez-Holguín

Abstract:

Reconstruction of contact forces is a fundamental technique for analyzing the properties of a touched object and is essential for regulating grip force in slip-control loops. It is based on processing the distribution, intensity, and direction of the forces captured by the sensors. Efficient hardware alternatives are now used more frequently in different fields of application, allowing the implementation of computationally complex algorithms, as is the case with tactile signal processing. The use of hardware for smart tactile sensing systems is a research area that promises to improve the processing time and portability of applications such as artificial skin and robotics, among others. The literature review shows that hardware implementations are present today in almost all stages of smart tactile detection systems except the force reconstruction process, a stage to which they have been less applied. This work presents a hardware implementation of a model-driven method reported in the literature for contact force reconstruction in flat, rigid tactile sensor arrays from normal stress data. Starting from the analysis of a software implementation of that model, this implementation proposes the parallelization of tasks that facilitate the execution of matrix operations and of a two-dimensional optimization function that yields a force vector for each taxel in the array. The work seeks to exploit the parallelism of Field Programmable Gate Arrays (FPGAs) and to apply appropriate algorithm-parallelization techniques, guided by the rules of generalization, efficiency, and scalability in the tactile decoding process and with low latency, low power consumption, and real-time execution as the main design parameters. 
The results show a maximum estimation error of 32% for the tangential forces and 22% for the normal forces with respect to Finite Element Modeling (FEM) simulations of Hertzian and non-Hertzian contact events, over sensor arrays of 10×10 taxels of different sizes. The hardware implementation was carried out on a Xilinx® MPSoC XCZU9EG-2FFVB1156 platform, which allows the reconstruction of force vectors in a scalable way from information captured by tactile sensor arrays of up to 48×48 taxels using various transduction technologies. The proposed implementation demonstrates a reduction in estimation time of x/180 compared to software implementations. Despite the relatively high estimation errors, the information this implementation provides on the tangential and normal tractions and the triaxial reconstruction of forces adequately reconstructs the tactile properties of the touched object, which are similar to those obtained in the software implementation and in the two FEM simulations taken as reference. Although the errors could be reduced further, the proposed implementation is useful for decoding contact forces in portable tactile sensing systems, thus helping to expand electronic-skin applications in robotic and biomedical contexts.
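The per-taxel reconstruction from a normal-stress map to triaxial force vectors can be sketched as below. Everything here is an illustrative assumption rather than the authors' model: the calibration matrix `M`, the 3×3 stress neighborhood, and the 10×10 array size are placeholders. The point is structural: each taxel's computation is an independent small matrix operation, which is what makes the task attractive for FPGA parallelization.

```python
import numpy as np

# Hypothetical calibration matrix mapping a flattened 3x3 normal-stress
# patch to an (Fx, Fy, Fz) vector; in the paper this mapping comes from a
# model-driven fit, not random numbers.
rng = np.random.default_rng(0)
M = rng.standard_normal((3, 9))

stress = rng.random((10, 10))            # one normal-stress value per taxel
padded = np.pad(stress, 1, mode="edge")  # replicate borders so every taxel has a full patch

forces = np.empty((10, 10, 3))
for i in range(10):
    for j in range(10):
        patch = padded[i:i + 3, j:j + 3].ravel()  # 9 local stress samples
        forces[i, j] = M @ patch                  # independent per taxel -> parallel in hardware
```

On an FPGA, the inner matrix-vector products would be instantiated as parallel multiply-accumulate pipelines rather than a sequential double loop.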

Keywords: contact forces reconstruction, forces estimation, tactile sensor array, hardware implementation

Procedia PDF Downloads 171
107 Seismic Perimeter Surveillance System (Virtual Fence) for Threat Detection and Characterization Using Multiple ML Based Trained Models in Weighted Ensemble Voting

Authors: Vivek Mahadev, Manoj Kumar, Neelu Mathur, Brahm Dutt Pandey

Abstract:

Perimeter guarding and protection of critical installations require prompt intrusion detection and assessment so that effective countermeasures can be taken. Currently, visual and electronic surveillance are the primary methods used for perimeter guarding. These methods can be costly and complicated, requiring careful planning according to the location and terrain, and they often struggle to detect stealthy and camouflaged intruders. The objective of the present work is to devise a surveillance technique using seismic sensors that overcomes the limitations of existing systems and improves intrusion detection, assessment, and characterization. Most similar systems can detect only two intrusion categories, namely human or vehicle. In our work we categorize further, identifying types of intrusion activity such as walking, running, group walking, fence jumping, tunnel digging, and vehicular movement. A virtual fence of 60 meters at GCNEP, Bahadurgarh, Haryana, India, was created by installing four underground geophones 15 meters apart. The signals received from these geophones are processed to extract distinctive seismic signatures, called features. Various feature optimization and selection methodologies, such as LightGBM, Boruta, Random Forest, Logistic Regression, Recursive Feature Elimination, Chi-squared, and Pearson ratio, were used to identify the best features for training the machine learning models. The models were trained using algorithms such as the supervised support vector machine (SVM) classifier, kNN, Decision Tree, Logistic Regression, Naïve Bayes, and Artificial Neural Networks. These models were then used to predict the category of events, with weighted ensemble voting employed to analyze and combine their results. The models were trained with 1940 training events, and results were evaluated with 831 test events. 
Weighted ensemble voting was observed to increase the efficiency of the predictions. In this study we successfully developed and deployed a virtual fence using geophones. Since these sensors are passive, do not radiate any energy, and are installed underground, it is practically impossible for intruders to locate and nullify them. Their flexibility, quick and easy installation, low cost, hidden deployment, and unattended operation make such systems especially suitable for critical installations and remote facilities with difficult terrain. This work demonstrates the potential of seismic sensors for building better perimeter guarding and protection systems using multiple machine learning models in weighted ensemble voting. In this study the virtual fence achieved an intruder detection efficiency of over 97%.
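The weighted ensemble voting step can be illustrated with a minimal sketch. The class labels, model probabilities, and weights below are hypothetical, not the study's trained models; the sketch only shows how per-model class probabilities are combined by a weighted average before picking the winning class per event.

```python
import numpy as np

def weighted_ensemble_vote(prob_list, weights, classes):
    """Combine per-model class-probability matrices (n_events x n_classes)
    by a normalized weighted average; return the winning class per event."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                              # normalize weights
    stacked = np.stack(prob_list)                # (n_models, n_events, n_classes)
    combined = np.tensordot(w, stacked, axes=1)  # weighted average over models
    return [classes[k] for k in combined.argmax(axis=1)]

classes = ["walking", "running", "vehicle"]
# Hypothetical outputs of three trained models for two seismic events
svm_p  = np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]])
knn_p  = np.array([[0.5, 0.4, 0.1], [0.2, 0.2, 0.6]])
tree_p = np.array([[0.6, 0.3, 0.1], [0.3, 0.3, 0.4]])

labels = weighted_ensemble_vote([svm_p, knn_p, tree_p], weights=[2, 1, 1], classes=classes)
print(labels)  # -> ['walking', 'vehicle']
```

Giving a better-performing model a larger weight (here the hypothetical SVM gets weight 2) is what distinguishes weighted voting from a plain majority vote.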

Keywords: geophone, seismic perimeter surveillance, machine learning, weighted ensemble method

Procedia PDF Downloads 48
106 Criminal Attitude vs Transparency in the Arab World

Authors: Keroles Akram Saed Ghatas

Abstract:

The political violence that characterized 1992 continued into 1993, creating a major security crisis for President Hosni Mubarak's government as the death toll and human rights abuses soared. Increasingly sensitive to criticism of its human rights record, the government established human rights departments in key ministries, beginning with the Foreign Office in February. Similar offices were set up in the Justice and Agriculture Ministries, and plans to set up an office in the Home Office were announced. It turned out that the main task of these units was to rebut the conclusions of international human rights organizations. President Mubarak was elected in a national referendum on October 4 for a third six-year term, after being nominated on July 21 by the People's Assembly, an elected parliament overwhelmingly dominated by the ruling National Democratic Party; Mr. Mubarak ran unopposed. The Interior Ministry announced that nearly 16 million people cast their votes (84% of eligible voters), of which 96.28% voted for the president's re-election. In 1993, armed Islamic extremists escalated their attacks on Christian citizens, government officials, police officers, and senior security officials, causing casualties among the intended victims and bystanders. Sporadic attacks on buses, boats, and tourist attractions also occurred throughout the year. From March 1992 to October 28, 1993, a total of 222 people lost their lives in the violence: thirty-six Coptic Christians and thirty-eight other citizens; one foreigner; sixty-six members of the security forces; and seventy-six known or suspected militants. The latter were killed while resisting arrest, in raids and firefights with security forces, and at the sites of planned attacks. 
On March 9-10, a series of raids in Cairo, Giza, Qalyubiya province north of the capital, and Aswan killed fifteen suspected militants and five members of the security forces. One of the raids in Giza, part of Greater Cairo, killed the wife and son of Khalifa Mahmoud Ramadan, a suspected militant who was himself killed. The government's Middle East News Agency reported on March 10 that the raids were part of a "broad confrontational plan aimed at terrorist elements." The state of emergency declared in October 1981, after the assassination of President Anwar el-Sadat, was still in force in Egypt. Emergency law, previously in effect continuously from June 1967 to May 1980, continued to grant the executive branch exceptional legal powers that effectively overrode the human rights guarantees of the Egyptian constitution. These provisions included wide discretionary powers of arrest and detention, as well as the ability to try civilians in military courts. In a document sent to the United Nations Human Rights Committee in July 1993, the Cairo-based Independent Organization for Human Rights said that the continued imposition of the state of emergency had resulted in "another constitution for the country" and "led to widespread misconduct by the security apparatus".

Keywords: constitution, human rights, legal power, president anwar el-sadat, assassination, state of emergency, middle east news agency, confrontational plan, arrest, fugitive leaders, terrorist elements, armed islamic extremists

Procedia PDF Downloads 15
105 Translation, Cross-Cultural Adaption, and Validation of the Vividness of Movement Imagery Questionnaire 2 (VMIQ-2) to Classical Arabic Language

Authors: Majid Alenezi, Abdelbare Algamode, Amy Hayes, Gavin Lawrence, Nichola Callow

Abstract:

The purpose of this study was to translate and culturally adapt the Vividness of Movement Imagery Questionnaire-2 (VMIQ-2) from English to produce a new Arabic version (VMIQ-2A), and to evaluate the reliability and validity of the translated questionnaire. The questionnaire assesses how vividly and clearly individuals are able to imagine themselves performing everyday actions. Its purpose is to measure individuals' ability to conduct movement imagery, which can be defined as "the cognitive rehearsal of a task in the absence of overt physical movement." Movement imagery has been introduced in physiotherapy as a promising intervention technique, especially when physical exercise is not possible (e.g., pain, immobilisation). Considerable evidence indicates that movement imagery interventions improve physical function, but to maximize efficacy it is important to know the imagery abilities of the individuals being treated. Given the increase in the global sharing of knowledge, it is desirable to use standard measures of imagery ability across languages and cultures, which motivated this project. The translation procedure followed guidelines from the Translation and Cultural Adaptation group of the International Society for Pharmacoeconomics and Outcomes Research and involved the following phases. Preparation: the original VMIQ-2 was adapted slightly to provide additional information and simplified grammar. Forward translation: three native speakers resident in Saudi Arabia translated the original VMIQ-2 from English to Arabic, with instructions to preserve meaning (not literal translation) and cultural relevance. Reconciliation: the project manager (first author), the primary translator, and a physiotherapist reviewed the three independent translations to produce a reconciled first Arabic draft of the VMIQ-2A. Backward translation: a fourth translator (a native Arabic speaker fluent in English) literally translated the reconciled first Arabic draft back into English. 
The project manager and two study authors compared the English back-translation to the original VMIQ-2 and produced a second Arabic draft. Cognitive debriefing: to assess participants' understanding of the second Arabic draft, seven native Arabic speakers resident in the UK completed the questionnaire, rated the clarity of the questions, flagged difficult words or passages, and wrote their understanding of key terms in their own words. Following review of this feedback, a final Arabic version was created. 142 native Arabic speakers completed the questionnaire in community meeting places or at home; a subset of 44 participants completed the questionnaire a second time one week later. Results showed the translated questionnaire to be valid and reliable. Correlation coefficients indicated good test-retest reliability, and Cronbach's α indicated high internal consistency. Construct validity was tested in two ways. Imagery ability scores have been found to be invariant across gender; this result was replicated in the current study, assessed by independent-samples t-test. Additionally, experienced sports participants have higher imagery ability than those less experienced; this result was also replicated in the current study, assessed by analysis of variance, supporting construct validity. These results provide preliminary evidence that the VMIQ-2A is reliable and valid for use with a general population of native Arabic speakers. Future research will include validation of the VMIQ-2A in a larger sample and testing of validity in specific patient populations.
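The internal-consistency statistic used above, Cronbach's α, has a simple closed form: α = k/(k-1) · (1 - Σ item variances / variance of total scores). A minimal sketch, with hypothetical questionnaire data (the scores below are illustrative, not the study's):

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) matrix of questionnaire scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # sample variance of total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point ratings from six respondents on four imagery items
scores = [[4, 5, 4, 5],
          [3, 3, 4, 3],
          [5, 5, 5, 4],
          [2, 2, 3, 2],
          [4, 4, 4, 5],
          [3, 4, 3, 3]]
print(round(cronbach_alpha(scores), 2))  # -> 0.92
```

Values above roughly 0.7-0.8 are conventionally read as acceptable internal consistency, which is the kind of threshold the "high internal consistency" claim rests on.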

Keywords: motor imagery, physiotherapy, translation and validation, imagery ability

Procedia PDF Downloads 304
104 Technology Optimization of Compressed Natural Gas Home Fast Refueling Units

Authors: Szymon Kuczynski, Krystian Liszka, Mariusz Laciak, Andrii Oliinyk, Robert Strods, Adam Szurlej

Abstract:

Despite all global economic shifts, and the fact that natural gas is recognized worldwide as the main and leading alternative to oil products in the transportation sector, there is a huge barrier to switching the passenger vehicle segment to natural gas: the lack of refueling infrastructure for Natural Gas Vehicles (NGVs). While investments in public gas stations require an established NGV market in order to be cost effective, the market is not there due to the lack of refueling stations. The key to solving that problem, and a barrier-breaking refueling infrastructure solution for NGVs, is Home Fast Refueling Units. Such a unit operates using natural gas (methane) provided through gas pipelines at the client's home, together with an electricity connection point, and enables environmentally friendly home refueling of an NGV in minutes. The underlying technology is a patented one-stage hydraulic compressor (instead of the multistage mechanical compressor technology currently available on the market), which makes it possible to compress low-pressure gas from the residential gas grid to 200 bar for further use as fuel for NGVs in the most economically efficient and customer-convenient way. Description of the working algorithm: two high-pressure cylinders, with upper necks connected to the low-pressure gas source, are placed vertically. Initially, one of them is filled with liquid and the other with low-pressure gas. During the working process, liquid is transferred by a hydraulic pump from one cylinder to the other and back. The working liquid plays the role of a piston inside each cylinder. The movement of the working liquid simultaneously sucks a portion of low-pressure gas into one cylinder (where the liquid moves down) and forces gas of higher pressure out of the other cylinder (where the liquid moves up) into the fuel tank of the vehicle or a storage tank. 
Each cycle of forcing gas out of the two cylinders raises the pressure of the gas in the vehicle's fuel tank. The process is repeated until the pressure of the gas in the fuel tank reaches 200 bar. Mobility has become a necessity in people's everyday life, which has led to oil dependence. CNG Home Fast Refueling Units can become a part of the existing natural gas pipeline infrastructure and form the largest vehicle refueling infrastructure. Home Fast Refueling Unit owners will enjoy day-to-day time savings and convenience (home car refueling in minutes), month-to-month fuel cost economy, and year-to-year incentives and tax deductions on NG refueling systems as available per country, while reducing local CO₂ emissions and saving costs and money.
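The cycle-by-cycle pressure build-up can be approximated with an idealized isothermal, ideal-gas sketch: each stroke sucks in one cylinder-full of grid gas, and the liquid piston forces all of it into the tank, so the tank pressure grows by p_line · V_cyl / V_tank per stroke. All the volumes and pressures below are assumptions for illustration, not the unit's actual specifications.

```python
# Idealized (isothermal, ideal-gas) model of the two-cylinder hydraulic
# compressor filling a vehicle tank. Each stroke transfers the moles of one
# cylinder-full of grid gas into the tank, raising p_tank by
# P_LINE * V_CYL / V_TANK. All numbers are illustrative assumptions.

P_LINE = 1.02    # absolute residential grid pressure, bar (assumed)
V_CYL = 2.0      # working volume of one cylinder, litres (assumed)
V_TANK = 60.0    # vehicle fuel tank volume, litres (assumed)
TARGET = 200.0   # target tank pressure, bar

p_tank = 1.0     # tank initially at atmospheric pressure
strokes = 0
while p_tank < TARGET:
    p_tank += P_LINE * V_CYL / V_TANK  # one cylinder-full of gas enters the tank
    strokes += 1

print(f"{strokes} strokes to reach {p_tank:.1f} bar")
```

With these assumed numbers the fill takes thousands of strokes, which is why the economic case rests on unattended overnight refueling rather than forecourt-speed filling.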

Keywords: CNG (compressed natural gas), CNG stations, NGVs (natural gas vehicles), natural gas

Procedia PDF Downloads 186
103 Energy Efficiency of Secondary Refrigeration with Phase Change Materials and Impact on Greenhouse Gases Emissions

Authors: Michel Pons, Anthony Delahaye, Laurence Fournaison

Abstract:

Secondary refrigeration consists of splitting large direct-cooling units into volume-limited primary cooling units complemented by secondary loops that transport and distribute cold. Such a design reduces refrigerant leaks, which represent a source of greenhouse gases emitted into the atmosphere. However, inserting the secondary circuit between the primary unit and the users' heat exchangers (UHX) increases the energy consumption of the whole process, which induces an indirect emission of greenhouse gases. It is thus important to check whether that efficiency loss is sufficiently limited for the change to be globally beneficial to the environment. Among the likely secondary fluids, phase change slurries offer several advantages: they transport latent heat, they stabilize the heat exchange temperature, and the former evaporators can still be used as UHX. The temperature level can also be adapted to the desired cooling application. Herein, the slurry {ice in mono-propylene-glycol solution} (melting temperature Tₘ of 6°C) is considered for food preservation, and the slurry {mixed hydrate of CO₂ + tetra-n-butyl-phosphonium-bromide in an aqueous solution of this salt + CO₂} (melting temperature Tₘ of 13°C) is considered for air conditioning. For the sake of thermodynamic consistency, the analysis encompasses the whole process, primary cooling unit plus secondary slurry loop, and the various properties of the slurries, including their non-Newtonian viscosity. The design of the whole process is optimized according to the properties of the chosen slurry and under explicit constraints. As a first constraint, all the units must deliver the same cooling power to the user. The other constraints concern the heat exchange areas, which are prescribed, and the flow conditions, which must prevent deposition and agglomeration of the solid particles transported in the slurry. Minimizing the total energy consumption leads to the optimal design. 
In addition, the results are analyzed in terms of exergy losses, which highlights the couplings between the primary unit and the secondary loop. One important difference between the ice slurry and the mixed-hydrate slurry is the presence of gaseous carbon dioxide in the latter case. When the mixed-hydrate crystals melt in the UHX, CO₂ vapor is generated at a rate that depends on the phase change kinetics; the flow in the UHX and its heat and mass transfer properties are significantly modified. This effect has never been investigated before. Lastly, inserting the secondary loop between the primary unit and the users increases the temperature difference between the refrigerated space and the evaporator. This results in a loss of global energy efficiency, and therefore in increased energy consumption. The analysis shows that this efficiency loss is not critical in the first case (Tₘ = 6°C), while the second case leads to more ambiguous results, partially because of the higher melting temperature. The consequences in terms of greenhouse gas emissions are also analyzed.
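The efficiency penalty from the larger temperature lift can be illustrated with a back-of-envelope Carnot bound (this is a generic thermodynamic illustration, not the paper's model; the condenser temperature and the 5 K evaporator offset are assumptions): the primary evaporator must run colder than the user's heat exchanger to freeze the slurry, and the ideal COP drops accordingly.

```python
# Carnot COP illustration of why a secondary loop costs efficiency:
# the ideal COP T_cold / (T_hot - T_cold) falls as the evaporator is
# pushed below the melting temperature Tm of the slurry.

def carnot_cop(t_cold_c, t_hot_c):
    t_cold, t_hot = t_cold_c + 273.15, t_hot_c + 273.15
    return t_cold / (t_hot - t_cold)

direct = carnot_cop(t_cold_c=6.0, t_hot_c=35.0)    # evaporator directly at Tm = 6 C (assumed condenser 35 C)
indirect = carnot_cop(t_cold_c=1.0, t_hot_c=35.0)  # evaporator assumed 5 K below Tm to form the slurry

print(f"ideal COP: direct {direct:.2f} vs indirect {indirect:.2f}")
```

Real cycles sit well below these Carnot bounds, but the trend is the same: every extra kelvin of lift between the refrigerated space and the evaporator increases the compressor work for the same cooling duty.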

Keywords: exergy, hydrates, optimization, phase change material, thermodynamics

Procedia PDF Downloads 109
102 Geographic Information Systems and a Breath of Opportunities for Supply Chain Management: Results from a Systematic Literature Review

Authors: Anastasia Tsakiridi

Abstract:

Geographic information systems (GIS) have been utilized in numerous spatial problems, such as site research, land suitability, and demographic analysis. GIS has also been applied in scientific fields such as geography, health, and economics. In business studies, GIS has been used to provide insights and spatial perspectives on demographic trends, spending indicators, and network analysis. To date, however, information regarding the available usages of GIS in supply chain management (SCM) and how these analyses can benefit businesses is limited. A systematic literature review (SLR) of the last five years of peer-reviewed academic literature was conducted to explore the existing usages of GIS in SCM. The searches were performed in three databases (Web of Science, ProQuest, and Business Source Premier) and reported using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology. The analysis resulted in 79 papers. The results indicate that the existing GIS applications in SCM fall into the following domains: a) network/transportation analysis (53 papers), b) location-allocation site search/selection (multiple-criteria decision analysis) (45 papers), c) spatial analysis (demographic or physical) (34 papers), d) combinations of GIS and supply chain/network optimization tools (32 papers), and e) visualization/monitoring or building information modeling applications (8 papers). The literature was further categorized by examining the usage of GIS in the supply chain (SC) by business sector, as indicated by the volume of papers. The results showed that GIS is mainly applied in the SC of the biomass biofuel/wood industry (33 papers). 
Other industries currently utilizing GIS in their SC were the logistics industry (22 papers), the humanitarian/emergency/health care sector (10 papers), the food/agro-industry sector (5 papers), the petroleum/coal/shale gas sector (3 papers), the faecal sludge sector (2 papers), the recycling and product footprint industry (2 papers), and the construction sector (2 papers). The results were also presented by the geography of the included studies and the GIS software used, to provide critical business insights and suggestions for future research. Research case studies of GIS in SCM were conducted in 26 countries (mainly in the USA), and the most prominent GIS software was the Environmental Systems Research Institute's ArcGIS (51 papers). The results showed that GIS capabilities can offer substantial benefits in SCM decision-making by providing key insights into cost minimization, supplier selection, facility location, SC network configuration, and asset management. However, only eight industries/sectors are currently using GIS in their SCM activities. These findings may offer essential tools to SC managers who seek to optimize SC activities and/or minimize logistics costs, and to consultants and business owners who want to make strategic SC decisions. Furthermore, the findings may be of interest to researchers aiming to investigate unexplored research areas where GIS may improve SCM.

Keywords: supply chain management, logistics, systematic literature review, GIS

Procedia PDF Downloads 116
101 Numerical Analysis of NOₓ Emission in Staged Combustion for the Optimization of Once-Through-Steam-Generators

Authors: Adrien Chatel, Ehsan Askari Mahvelati, Laurent Fitschy

Abstract:

Once-Through-Steam-Generators (OTSGs) are commonly used in the oil-sand industry in the heavy fuel oil extraction process. They are composed of three main parts: the burner, the radiant section, and the convective section. Natural gas is burned in staged diffusive flames stabilized by the burner. The heat generated by the combustion is transferred to the water flowing through the piping system in the radiant and convective sections. The steam produced within the pipes is then directed into the ground to reduce the oil viscosity and allow its pumping. With the rapid development of the oil-sand industry, the number of OTSGs in operation has increased, as have the associated emissions of environmental pollutants, especially nitrogen oxides (NOₓ). To limit environmental degradation, various international environmental agencies have established regulations on pollutant discharge and pushed to reduce NOₓ releases. To meet these constraints, OTSG constructors have to rely on increasingly advanced tools to study and predict NOₓ emission. With the increase in computational resources, Computational Fluid Dynamics (CFD) has emerged as a flexible tool for analyzing the combustion and pollutant formation processes. To optimize the burner operating conditions with respect to NOₓ emission, field characterization and measurements are usually performed. However, such experimental campaigns are particularly time-consuming and sometimes even impossible for industrial plants with strict operation schedule constraints. Therefore, the application of CFD appears more adequate for providing guidelines on the NOₓ emission and reduction problem. In the present work, two different software packages are employed to simulate the combustion process in an OTSG: the commercial software ANSYS Fluent and the open-source software OpenFOAM. 
The RANS (Reynolds-Averaged Navier–Stokes) equations are solved, closed by the k-epsilon turbulence model and combined with the Eddy Dissipation Concept to model combustion. A mesh sensitivity analysis is performed to assess the independence of the solution from the mesh. In the first part, the results given by the two software packages are compared and confronted with experimental data as a means of assessing the numerical modelling. Flame temperatures and chemical composition are used as reference fields for this validation, and the results show fair agreement between experimental and numerical data. In the last part, OpenFOAM is employed to simulate several operating conditions, and an Emission Characteristic Map of the combustion system is generated. The sources of high NOₓ production inside the OTSG are identified and correlated to the physics of the flow. CFD is therefore a useful tool for providing insight into NOₓ emission phenomena in OTSGs: sources of high NOₓ production can be identified, and operating conditions can be adjusted accordingly. With the help of RANS simulations, an Emission Characteristic Map can be produced and then used as a guide for a field tune-up.
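A common way to carry out the mesh sensitivity analysis mentioned above is to estimate the observed order of convergence from a quantity of interest computed on three grids with a constant refinement ratio, then Richardson-extrapolate to a grid-independent value. The sketch below uses manufactured illustrative numbers (e.g. a peak flame temperature), not results from the paper:

```python
import math

# Observed order of convergence from three grids (fine, medium, coarse)
# with constant refinement ratio r, plus Richardson extrapolation.

def observed_order(f_fine, f_med, f_coarse, r):
    return math.log((f_coarse - f_med) / (f_med - f_fine)) / math.log(r)

r = 2.0
f_fine, f_med, f_coarse = 1850.0, 1856.0, 1880.0  # K, illustrative values only

p = observed_order(f_fine, f_med, f_coarse, r)
# Richardson extrapolation toward the grid-independent estimate
f_exact = f_fine + (f_fine - f_med) / (r**p - 1)

print(f"observed order {p:.2f}, extrapolated value {f_exact:.1f} K")
```

If the observed order is close to the scheme's formal order (second order for typical RANS discretizations), the solution can be declared effectively mesh-independent on the fine grid.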

Keywords: combustion, computational fluid dynamics, nitrogen oxides emission, once-through-steam-generators

Procedia PDF Downloads 92
100 Metagenomic analysis of Irish cattle faecal samples using Oxford Nanopore MinION Next Generation Sequencing

Authors: Niamh Higgins, Dawn Howard

Abstract:

The Irish agri-food sector is of major importance to Ireland's manufacturing sector and to the Irish economy, through employment and the worldwide export of animal products. Infectious diseases and parasites affect farm animal health, reducing profitability and productivity. For the sustainability of Irish dairy farming, the highest standard of animal health must be maintained. Culture-based methods of microbial identification can leave much of the complete microbial diversity in an environment unaccounted for, and they tend to overestimate the prevalence of species that grow easily on an agar surface. New technologies are needed to address these issues and assist with animal health. Metagenomic approaches provide information on both the whole genome and the transcriptome through DNA sequencing of total DNA from environmental samples, producing detailed functional and taxonomic information. Nanopore devices are powerful Next Generation Sequencing technologies: they provide high throughput and low material requirements and produce ultra-long reads, simplifying the experimental process. The aim of this study is to use a metagenomics approach to analyze dairy cattle faecal samples with the Oxford Nanopore MinION Next Generation Sequencer and to establish an in-house pipeline for metagenomic characterization of complex samples. Faecal samples will be obtained from Irish dairy farms, DNA will be extracted, and the MinION will be used for sequencing, followed by bioinformatics analysis. Of particular interest is the parasite Buxtonella sulcata, on which there has been little research and whose presence on Irish dairy farms has not been studied. Preliminary results have shown the ability of the MinION to produce hundreds of reads in a relatively short time frame of eight hours. The faecal samples were obtained from 90 dairy cows on a Galway farm. 
The results from Oxford Nanopore's 'What's in my Pot' (WIMP) analysis, using the EPI2ME workflow, show that of a total of 926 classified reads, 87% were from the kingdom Bacteria, 10% from the kingdom Eukaryota, 3% from the kingdom Archaea, and < 1% from the kingdom Viruses. The most prevalent bacteria were those of the genera Acholeplasma (71 reads), Bacteroides (35 reads), Clostridium (33 reads), and Acinetobacter (20 reads). The most prevalent species were those of the genus Acholeplasma, including Acholeplasma laidlawii (39 reads) and Acholeplasma brassicae (26 reads). These preliminary results show the ability of the MinION to identify microorganisms to species level from a complex sample. With ongoing optimization of the pipeline, the number of classified reads is likely to increase. Metagenomics has potential in animal health for the diagnosis of microorganisms present on farms. This would support a prevention-rather-than-cure approach, as outlined in the DAFM's National Farmed Animal Health Strategy 2017-2022.
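The kingdom-level breakdown reported above is a simple tally over classified reads. In the sketch below, the per-kingdom counts are inferred from the stated total (926) and percentages, so treat them as illustrative rather than the study's raw WIMP output:

```python
# Kingdom-level percentage breakdown from classified-read counts, in the
# style of a WIMP/EPI2ME summary. Counts are reconstructed from the
# reported total and percentages, not taken from the raw data.

counts = {"Bacteria": 806, "Eukaryota": 93, "Archaea": 25, "Viruses": 2}
total = sum(counts.values())  # 926 classified reads

for kingdom, n in counts.items():
    print(f"{kingdom}: {n} reads ({100 * n / total:.1f}%)")
```

Rounded to whole percentages, these counts reproduce the reported 87% / 10% / 3% / < 1% split.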

Keywords: animal health, buxtonella sulcata, infectious disease, irish dairy cattle, metagenomics, minION, next generation sequencing

Procedia PDF Downloads 129
99 Photoluminescence of Barium and Lithium Silicate Glasses and Glass Ceramics Doped with Rare Earth Ions

Authors: Augustas Vaitkevicius, Mikhail Korjik, Eugene Tretyak, Ekaterina Trusova, Gintautas Tamulaitis

Abstract:

Silicate materials are widely used as luminescent materials in both amorphous and crystalline phases. Lithium silicate glass is popular for making neutron-sensitive scintillation glasses, and cerium-doped single-crystalline silicates of rare earth elements and yttrium have been demonstrated to be good scintillation materials. Due to their high thermal stability and photostability, silicate glass ceramics are expected to be suitable materials for light converters in high-power white light emitting diodes. In this report, the influence of glass composition and crystallization on the photoluminescence (PL) of different silicate glasses was studied. Barium (BaO-2SiO₂) and lithium (Li₂O-2SiO₂) glasses were under study. Cerium, dysprosium, erbium, and europium ions, as well as their combinations, were used for doping. The influence of crystallization was studied after transforming the doped glasses into glass ceramics by heat treatment in the temperature range of 550-850 degrees Celsius for 1 hour. The study was carried out by comparing the PL spectra, the spatial distributions of the PL parameters, and the quantum efficiency of the samples under study. The PL spectra and the spatial distributions of their parameters were obtained using confocal PL microscopy. A WITec Alpha300 S confocal microscope coupled with an air-cooled CCD camera was used, with a CW laser diode emitting at 405 nm exploited for excitation. The spatial resolution was in the sub-micrometer domain in-plane and ~1 micrometer perpendicular to the sample surface. An integrating sphere with a xenon lamp coupled to a monochromator was used to measure the external quantum efficiency. All measurements were performed at room temperature. The chromatic properties of the light emission from the glasses and glass ceramics were evaluated. We observed that the quantum efficiency of the glass ceramics is higher than that of the corresponding glass. 
The investigation of spatial distributions of PL parameters revealed that heat treatment of the glasses leads to a decrease in sample homogeneity. In the case of BaO-2SiO₂:Eu, 10 micrometer long needle-like objects are formed when transforming the glass into glass ceramics. The comparison of PL spectra from within and outside the needle-like structure reveals that the ratio between the intensities of the PL bands associated with Eu²⁺ and Eu³⁺ ions is larger in the bright needle-like structures. This indicates a higher degree of crystallinity in the needle-like objects. We observed that the spectral positions of the PL bands are the same in the background and the needle-like areas, indicating that heat treatment imposes no significant change to the valence state of the europium ions. The evaluation of chromatic properties confirms the applicability of the glasses under study for the fabrication of white light sources with high thermal stability. The ability to combine barium and lithium glass matrices and doping by Eu, Ce, Dy, and Tb enables optimization of chromatic properties.
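The band-ratio comparison used above to gauge crystallinity can be sketched numerically. In this minimal Python sketch, the spectrum, band positions and window limits are all hypothetical stand-ins rather than values from the study: the broad band near 445 nm plays the role of Eu²⁺ emission and the narrow band near 615 nm that of Eu³⁺.

```python
import numpy as np

def band_ratio(wl, inten, band_a=(420, 470), band_b=(600, 630)):
    """Ratio of integrated PL intensity in band_a (e.g. Eu2+) to band_b
    (e.g. Eu3+), assuming a uniformly sampled wavelength axis in nm."""
    dw = wl[1] - wl[0]
    area_a = inten[(wl >= band_a[0]) & (wl <= band_a[1])].sum() * dw
    area_b = inten[(wl >= band_b[0]) & (wl <= band_b[1])].sum() * dw
    return area_a / area_b

# Synthetic spectrum: broad "Eu2+" band at 445 nm, narrow "Eu3+" band at 615 nm
wl = np.linspace(380, 700, 3201)
spectrum = 2.0 * np.exp(-((wl - 445) / 25) ** 2) + 1.0 * np.exp(-((wl - 615) / 5) ** 2)
ratio = band_ratio(wl, spectrum)
```

Under these assumptions, a ratio above 1 would flag an Eu²⁺-dominated, i.e. more crystalline, region of the map.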

Keywords: glass ceramics, luminescence, phosphor, silicate

Procedia PDF Downloads 294
98 The Proposal for a Framework to Face Opacity and Discrimination ‘Sins’ Caused by Consumer Creditworthiness Machines in the EU

Authors: Diogo José Morgado Rebelo, Francisco António Carneiro Pacheco de Andrade, Paulo Jorge Freitas de Oliveira Novais

Abstract:

Not everything in AI-powered consumer credit scoring turns out to be a wonder. When using AI in Creditworthiness Assessment (CWA), opacity and unfairness ‘sins’ must be addressed for the task to be deemed responsible. AI software is not always 100% accurate, which can lead to misclassification. Discrimination of some groups can be exponentiated. A hetero-personalized identity can be imposed on the individual(s) affected. Also, autonomous CWA sometimes lacks transparency when using black-box models. However, for this intended purpose, human analysts ‘on-the-loop’ might not be the best remedy consumers are looking for in credit. This study seeks to explore the legality of implementing a Multi-Agent System (MAS) framework in consumer CWA to ensure compliance with the regulation outlined in Article 14(4) of the Proposal for an Artificial Intelligence Act (AIA), dated 21 April 2021 (as per the last corrigendum by the European Parliament on 19 April 2024). Especially with the adoption of Art. 18(8)(9) of the EU Directive 2023/2225, of 18 October, which will go into effect on 20 November 2026, there should be more emphasis on the need for hybrid oversight in AI-driven scoring to ensure fairness and transparency. In fact, the range of EU regulations on AI-based consumer credit will soon impact the AI lending industry locally and globally, as shown by the broad territorial scope of the AIA’s Art. 2. Consequently, engineering the law of consumer CWA is imperative. Generally, the proposed MAS framework consists of several layers arranged in a specific sequence, as follows: firstly, the Data Layer gathers legitimate predictor sets from traditional sources; then, the Decision Support System Layer, whose Neural Network model is trained using k-fold Cross Validation, provides recommendations based on the feeder data; the eXplainability (XAI) multi-structure comprises Three-Step-Agents; and, lastly, the Oversight Layer has a 'Bottom Stop' for analysts to intervene in a timely manner. 
From the analysis, one can see that a vital component of this software is the XAI layer. It appears as a transparent curtain covering the AI’s decision-making process, enabling comprehension, reflection, and further feasible oversight. Local Interpretable Model-agnostic Explanations (LIME) might act as a pillar by offering counterfactual insights. SHapley Additive exPlanation (SHAP), another agent in the XAI layer, could address potential discrimination issues, identifying the contribution of each feature to the prediction. Alternatively, for thin-file or no-file consumers, the Suggestion Agent can promote financial inclusion. It uses lawful alternative sources, such as the share of wallet, among others, to search for more advantageous solutions to incomplete evaluation appraisals based on genetic programming. Overall, this research aspires to bring the concept of Machine-Centered Anthropocentrism to the table of EU policymaking. It acknowledges that, when put into service, credit analysts no longer exert full control over the data-driven entities programmers have given ‘birth’ to. With similar explanatory agents under supervision, AI itself can become self-accountable, prioritizing human concerns and values. AI decisions should not be vilified inherently. The issue lies in how they are integrated into decision-making and whether they align with non-discrimination principles and transparency rules.
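The training step of the Decision Support System Layer can be illustrated with a from-scratch k-fold cross-validation loop. This is only a hedged sketch: the nearest-centroid classifier, the synthetic "applicant" data and all names below are hypothetical substitutes for the framework's (unspecified) Neural Network model.

```python
import numpy as np

def kfold_indices(n, k, seed=0):
    """Shuffle sample indices and split them into k near-equal folds."""
    rng = np.random.default_rng(seed)
    return np.array_split(rng.permutation(n), k)

def nearest_centroid_accuracy(X, y, train, test):
    """Classify test points by nearest class centroid; return accuracy."""
    classes = np.unique(y[train])
    centroids = np.array([X[train][y[train] == c].mean(axis=0) for c in classes])
    d = ((X[test][:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    pred = classes[d.argmin(axis=1)]
    return float((pred == y[test]).mean())

def cross_validate(X, y, k=5):
    """Average held-out accuracy over k folds (the k-fold CV estimate)."""
    folds = kfold_indices(len(X), k)
    scores = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        scores.append(nearest_centroid_accuracy(X, y, train, test))
    return float(np.mean(scores))

# Synthetic credit data: two well-separated applicant groups (good/bad risk)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 3)), rng.normal(4, 1, (50, 3))])
y = np.array([0] * 50 + [1] * 50)
acc = cross_validate(X, y, k=5)
```

The point of the k folds is that every applicant is scored exactly once by a model that never saw them during training, which is what makes the accuracy estimate honest.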

Keywords: creditworthiness assessment, hybrid oversight, machine-centered anthropocentrism, EU policymaking

Procedia PDF Downloads 15
97 Parameter Selection and Monitoring for Water-Powered Percussive Drilling in Green-Fields Mineral Exploration

Authors: S. J. Addinell, T. Richard, B. Evans

Abstract:

The Deep Exploration Technologies Cooperative Research Centre (DET CRC) is researching and developing a new coiled tubing based greenfields mineral exploration drilling system utilising downhole water powered percussive drill tooling. This new drilling system is aimed at significantly reducing the costs associated with identifying mineral resource deposits beneath deep, barren cover. This system has shown superior rates of penetration in water-rich hard rock formations at depths exceeding 500 meters. Several key challenges exist regarding the deployment and use of these bottom hole assemblies for mineral exploration, and this paper discusses some of the main technical challenges. This paper presents experimental results obtained from the research program during laboratory and field testing of the prototype drilling system. A study of the morphological aspects of the cuttings generated during the percussive drilling process is presented and shows a strong power law relationship for particle size distributions. Several percussive drilling parameters such as RPM, applied fluid pressure and weight on bit have been shown to influence the particle size distributions of the cuttings generated. This has a direct influence on other drilling parameters such as flow loop performance, cuttings dewatering, and solids control. Real-time, accurate knowledge of percussive system operating parameters will assist the driller in maximising the efficiency of the drilling process. The applied fluid flow, fluid pressure, and rock properties are known to influence the natural oscillating frequency of the percussive hammer, but this paper also shows that drill bit design, drill bit wear and the applied weight on bit can influence the oscillation frequency. Due to the changing drilling conditions and therefore changing operating parameters, real-time understanding of the natural operating frequency is paramount to achieving system optimisation. 
Several techniques to understand the oscillating frequency have been investigated and presented. With a conventional top drive drilling rig, spectral analysis of applied fluid pressure, hydraulic feed force pressure, hold back pressure and drill string vibrations has shown the presence of the operating frequency of the bottom hole tooling. Unfortunately, with a coiled tubing drilling rig, which uses a positive displacement downhole motor to provide drill bit rotation, these signals are not available for interrogation at the surface, and therefore another method must be considered. The investigation and analysis of ground vibrations using geophone sensors, similar to seismic-while-drilling techniques, has indicated the presence of the natural oscillating frequency of the percussive hammer. This method is shown to provide a robust technique for the determination of the downhole percussive oscillation frequency when used with a coiled tubing drill rig.
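The spectral identification of the hammer's operating frequency described above amounts to locating the dominant peak in the spectrum of a vibration trace. A minimal sketch, using a synthetic geophone-like signal with a hypothetical 37 Hz hammer tone (not a measured value from the study):

```python
import numpy as np

def dominant_frequency(signal, fs):
    """Return the frequency (Hz) of the largest spectral peak, ignoring DC."""
    spec = np.abs(np.fft.rfft(signal - np.mean(signal)))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs[np.argmax(spec)]

fs = 2000.0                       # sample rate, Hz
t = np.arange(0, 2.0, 1.0 / fs)   # 2 s record -> 0.5 Hz frequency resolution
# Synthetic ground-vibration trace: 37 Hz hammer tone buried in noise
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 37.0 * t) + 0.8 * rng.normal(size=t.size)
peak = dominant_frequency(x, fs)
```

The same peak-picking step applies whether the input is pump pressure on a top drive rig or a geophone channel on a coiled tubing rig; only the sensor changes.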

Keywords: cuttings characterization, drilling optimization, oscillation frequency, percussive drilling, spectral analysis

Procedia PDF Downloads 213
96 The Role of Supply Chain Agility in Improving Manufacturing Resilience

Authors: Maryam Ziaee

Abstract:

This research proposes a new approach and provides an opportunity for manufacturing companies to produce large amounts of products that meet their prospective customers’ tastes, needs, and expectations and simultaneously enable manufacturers to increase their profit. Mass customization is the production of products or services to meet each individual customer’s desires to the greatest possible extent, in high quantities and at reasonable prices. This process takes place at different levels, such as the customization of goods’ design, assembly, sale, and delivery status, and falls into several categories. The main focus of this study is on one class of mass customization, called optional customization, in which companies try to provide their customers with as many options as possible to customize their products. These options could range from the design phase to the manufacturing phase, or even methods of delivery. Mass customization values customers’ tastes, but it is only one side of clients’ satisfaction; the other side is fast, responsive delivery by companies. This brings in the concept of agility, which is the ability of a company to respond rapidly to changes in volatile markets in terms of volume and variety. Indeed, mass customization is not effectively feasible without integrating the concept of agility. To gain customers’ satisfaction, companies need to be quick in responding to their customers’ demands, thus highlighting the significance of agility. This research offers a different method that successfully integrates mass customization and fast production in manufacturing industries. This research is built upon the hypothesis that the key to success in agile mass customization is to forecast demand, cooperate with suppliers, and control inventory. Therefore, the significance of the supply chain (SC) is most pertinent at this stage. 
Since SC behavior is dynamic and changes constantly, companies have to apply one of the predicting techniques to identify the changes associated with SC behavior to be able to respond properly to any unwelcome events. System dynamics, utilized in this research, is a simulation approach that provides a mathematical model among different variables to understand, control, and forecast SC behavior. The final stage is delayed differentiation, the production strategy considered in this research. In this approach, the main platform of products is produced and stocked, and when the company receives an order from a customer, a specific customized feature is assigned to this platform and the customized products are created. The main research question is to what extent applying system dynamics for the prediction of SC behavior improves the agility of mass customization. This research is built upon a qualitative approach to bring about richer, deeper, and more revealing results. The data is collected through interviews and analyzed through NVivo software. This proposed model offers numerous benefits such as a reduction in the number of product inventories and their storage costs, improvement in the resilience of companies’ responses to their clients’ needs and tastes, an increase in profits, and the optimization of productivity with a minimum level of lost sales.
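The kind of stock-and-flow model that system dynamics builds can be sketched in a few lines. All parameters here (target stock, adjustment and lead times, the demand step) are hypothetical illustrations, not values from the study:

```python
def simulate_inventory(demand, target=100.0, adjust_time=4.0, lead_time=2.0):
    """Minimal stock-and-flow model: an inventory stock drained by shipments,
    fed by a supply pipeline (first-order delay), with an ordering policy
    that closes the gap to the target level over adjust_time periods."""
    inventory, pipeline = target, 0.0
    history = []
    for d in demand:
        shipments = min(inventory, d)              # outflow limited by stock
        arrivals = pipeline / lead_time            # first-order supply delay
        orders = d + max(0.0, (target - inventory) / adjust_time)
        pipeline += orders - arrivals
        inventory += arrivals - shipments
        history.append(inventory)
    return history

# Step increase in demand at t=10: inventory dips, then recovers toward target
demand = [10.0] * 10 + [15.0] * 30
trace = simulate_inventory(demand)
```

Even this toy model reproduces the qualitative SC behavior the abstract relies on: a demand shock propagates through the delay, the stock undershoots, and the feedback loop restores it, which is exactly the dynamics one would tune before committing to delayed differentiation.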

Keywords: agility, manufacturing, resilience, supply chain

Procedia PDF Downloads 71
95 Alternative Energy and Carbon Source for Biosurfactant Production

Authors: Akram Abi, Mohammad Hossein Sarrafzadeh

Abstract:

Because of their several advantages over chemical surfactants, biosurfactants have attracted growing interest in the past decades. Advantages such as lower toxicity, higher biodegradability, higher selectivity and applicability at extreme temperatures and pH enable them to be used in a variety of applications, such as enhanced oil recovery and environmental and pharmaceutical applications. Bacillus subtilis produces a cyclic lipopeptide, called surfactin, which is one of the most powerful biosurfactants, with the ability to decrease the surface tension of water from 72 mN/m to 27 mN/m. In addition to its biosurfactant character, surfactin exhibits interesting biological activities such as inhibition of fibrin clot formation, lysis of erythrocytes and several bacterial spheroplasts, and antiviral, anti-tumoral and antibacterial properties. Surfactin is an antibiotic substance and has recently been shown to possess anti-HIV activity. However, the application of biosurfactants is limited by their high production cost. The cost can be reduced by optimizing biosurfactant production using cheap feedstock. Utilization of inexpensive substrates and unconventional carbon sources like urban or agro-industrial wastes is a promising strategy to decrease the production cost of biosurfactants. With suitable engineering optimization and microbiological modifications, these wastes can be used as substrates for large-scale production of biosurfactants. As an effort to fulfill this purpose, in this work we have tried to utilize olive oil as a second carbon source and yeast extract as a second nitrogen source to investigate the effect on both biomass and biosurfactant production in Bacillus subtilis cultures. Since the turbidity of the culture was affected by the presence of the oil, optical density was compromised and could no longer be used as an index of growth and biomass concentration. 
Therefore, cell dry weight measurements, with the necessary tactics applied to remove oil drops and prevent interference with the biomass weight, were carried out to monitor biomass concentration during the growth of the bacterium. The surface tension and critical micelle dilutions (CMD-1, CMD-2) were considered an indirect measurement of biosurfactant production. Distinctive and promising results were obtained in the cultures containing olive oil compared to cultures without it: a more than two-fold increase in biomass production (from 2 g/l to 5 g/l) and a considerable reduction in surface tension, down to 40 mN/m, at surprisingly early hours of culture time (only 5 h after inoculation). This early onset of biosurfactant production is especially interesting when compared to conventional cultures, in which this reduction in surface tension is not obtained until 30 hours of culture time. Reducing the production time is a very prominent result to be considered for large-scale process development. Furthermore, these results can be used to develop strategies for the utilization of agro-industrial wastes (such as olive oil mill residue, molasses, etc.) as cheap and easily accessible feedstocks to decrease the high costs of biosurfactant production.

Keywords: agro-industrial waste, bacillus subtilis, biosurfactant, fermentation, second carbon and nitrogen source, surfactin

Procedia PDF Downloads 271
94 The Impact of Efflux Pump Inhibitor on the Activity of Benzosiloxaboroles and Benzoxadiboroles against Gram-Negative Rods

Authors: Agnieszka E. Laudy, Karolina Stępien, Sergiusz Lulinski, Krzysztof Durka, Stefan Tyski

Abstract:

1,3-dihydro-1-hydroxy-2,1-benzoxaborole and its derivatives are a particularly interesting group of synthetic agents and have been successfully employed in supramolecular chemistry and medicine. The first important compounds, 5-fluoro-1,3-dihydro-1-hydroxy-2,1-benzoxaborole and 5-chloro-1,3-dihydro-1-hydroxy-2,1-benzoxaborole, were identified as potent antifungal agents. In contrast, (S)-3-(aminomethyl)-7-(3-hydroxypropoxy)-1-hydroxy-1,3-dihydro-2,1-benzoxaborole hydrochloride is in the second phase of clinical trials as a drug for the treatment of Gram-negative bacterial infections caused by the Enterobacteriaceae family and Pseudomonas aeruginosa. An equally important and difficult task is to search for compounds active against Gram-negative bacilli, which possess multi-drug-resistance efflux pumps that actively remove many antibiotics from bacterial cells. We have examined whether halogen-substituted benzoxaborole-based derivatives and their analogues possess antibacterial activity and are substrates for multi-drug-resistance efflux pumps. The antibacterial activity of 1,3-dihydro-3-hydroxy-1,1-dimethyl-1,2,3-benzosiloxaborole and 10 of its halogen-substituted derivatives, as well as 1,2-phenylenediboronic acid and 3 of its synthesised fluoro-substituted analogues, was evaluated. The activity against the reference strains of Gram-positive (n=5) and Gram-negative bacteria (n=10) was screened by the disc-diffusion test (0.4 mg of each tested compound was applied onto a paper disc). The minimal inhibitory concentration values and the minimal bactericidal concentration values were estimated according to the Clinical and Laboratory Standards Institute and the European Committee on Antimicrobial Susceptibility Testing recommendations. 
During the determination of the minimal inhibitory concentration values, with or without the efflux pump inhibitor phenylalanine-arginine beta-naphthylamide (50 mg/L), the concentrations of tested compounds ranged from 0.39 to 400 mg/L in the broth medium supplemented with 1 mM magnesium sulfate. Generally, the studied benzosiloxaboroles and benzoxadiboroles showed a higher activity against Gram-positive cocci than against Gram-negative rods. Moreover, the benzosiloxaboroles showed higher activity than the benzoxadiborole compounds. In this study, we demonstrated that substitution (mono-, di- or tetra-) of 1,3-dihydro-3-hydroxy-1,1-dimethyl-1,2,3-benzosiloxaborole with halogen groups resulted in an increase in antimicrobial activity as compared to the parent substance. Interestingly, the 6,7-dichloro-substituted derivative was found to be the most potent against Gram-positive cocci: Staphylococcus sp. (minimal inhibitory concentration 6.25 mg/L) and Enterococcus sp. (minimal inhibitory concentration 25 mg/L). On the other hand, the mono- and dichloro-substituted compounds were the most actively removed by efflux pumps present in Gram-negative bacteria, mainly from the Enterobacteriaceae family. In the presence of the efflux pump inhibitor, the minimal inhibitory concentration values of chloro-substituted benzosiloxaboroles decreased from 400 mg/L to 3.12 mg/L. Of note, the highest increase in bacterial susceptibility to the tested compounds in the presence of phenylalanine-arginine beta-naphthylamide was observed for the 6-chloro-, 6,7-dichloro- and 6,7-difluoro-substituted benzosiloxaboroles. In the case of Escherichia coli, Enterobacter cloacae and P. aeruginosa strains, at least a 32-fold decrease in the minimal inhibitory concentration values of these agents was observed. These data demonstrate structure-activity relationships of the tested derivatives and highlight the need for a further search for benzoxaboroles and related compounds with significant antimicrobial properties. 
Moreover, the influence of phenylalanine-arginine beta-naphthylamide on the susceptibility of Gram-negative rods to the studied benzosiloxaboroles indicates that some tested agents are substrates for efflux pumps in Gram-negative rods.
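The dilution series and fold-change arithmetic behind these MIC figures can be made explicit. A small sketch using the concentration range and the 400 to 3.12 mg/L example reported above:

```python
def doubling_dilutions(top, n):
    """Two-fold serial dilution series of the kind used for MIC determination."""
    return [top / 2 ** i for i in range(n)]

def fold_decrease(mic_without_epi, mic_with_epi):
    """Fold change in MIC after adding an efflux pump inhibitor (EPI)."""
    return mic_without_epi / mic_with_epi

# Eleven two-fold dilutions from 400 mg/L reach the 0.39 mg/L bottom of the
# tested range; the chloro-substituted example drops from 400 to 3.12 mg/L.
series = doubling_dilutions(400.0, 11)
fold = fold_decrease(400.0, 3.12)
```

The 400 to 3.12 mg/L shift corresponds to roughly seven doubling-dilution steps, comfortably above the "at least 32-fold" threshold stated in the abstract.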

Keywords: antibacterial activity, benzosiloxaboroles, efflux pumps, phenylalanine-arginine beta-naphthylamide

Procedia PDF Downloads 247
93 Effectiveness of Prehabilitation on Improving Emotional and Clinical Recovery of Patients Undergoing Open Heart Surgeries

Authors: Fatma Ahmed, Heba Mostafa, Bassem Ramdan, Azza El-Soussi

Abstract:

Background: The World Health Organization stated that by 2020 cardiac disease would be the number one cause of death worldwide and estimated that 25 million people per year would suffer from heart disease. Cardiac surgery is considered an effective treatment for severe forms of cardiovascular disease that cannot be treated by medical treatment or cardiac interventions. In spite of its benefits, cardiac surgery is considered a major stressful experience for patients who are candidates for surgery. Prehabilitation can decrease the incidence of postoperative complications, as it prepares patients for surgical stress by enhancing their defenses to meet the demands of surgery. When patients anticipate the postoperative sequence of events, they prepare themselves to adopt certain behaviors, identify their roles and actively participate in their own recovery; therefore, anxiety levels are decreased and functional capacity is enhanced. Prehabilitation programs can comprise interventions that include physical exercise, psychological prehabilitation, nutritional optimization and risk factor modification. Physical exercises are associated with improvements in the functioning of the various physiological systems, reflected in increased functional capacity and improved cardiac and respiratory functions, and make patients fit for surgical intervention. Prehabilitation programs should also prepare patients psychologically in order to cope with the stress, anxiety and depression associated with postoperative pain, fatigue, and the limited ability to perform the usual activities of daily living. Notwithstanding the benefits of psychological preparation, there are limited studies that have investigated the effect of psychological prehabilitation and confirmed its effect on the psychological, quality of life and physiological outcomes of patients who have undergone cardiac surgery. 
Aim of the study: The study aims to determine the effect of prehabilitation interventions on the outcomes of patients undergoing cardiac surgeries. Methods: A quasi-experimental study design was used to conduct this study. Sixty eligible and consenting patients were recruited and divided into two groups: a control and an intervention group (30 participants in each). One tool, namely the emotional, physiological, clinical, cognitive and functional capacity outcomes of prehabilitation intervention assessment tool, was utilized to collect the data of this study. Results: Data analysis showed a significant improvement in patients' emotional state and physiological and clinical outcomes (P < 0.001) with the use of prehabilitation interventions. Conclusions: Cardiac prehabilitation, in the form of providing information about surgery, circulation exercise, deep breathing exercise, incentive spirometer training and nutritional education, implemented daily by patients scheduled for elective open heart surgery during the week before surgery, has been shown to improve patients' emotional state and physiological and clinical outcomes.

Keywords: emotional recovery, clinical recovery, coronary artery bypass grafting patients, prehabilitation

Procedia PDF Downloads 182
92 Furnishing Ancillary Alternatives for High Speed Corridors and Pedestrian Crossing: Elevated Cycle Track, an Expedient to Urban Space Prototype in New Delhi

Authors: Suneet Jagdev, Hrishabh Amrodia, Siddharth Menon, Abhishek Singh, Mansi Shivhare

Abstract:

Delhi, the National Capital, has undergone a surge in development rate, consequently engendering an unprecedented increase in population. Over the years the city has transformed into a car-centric infrastructure with high-speed corridors, flyovers and fast lanes. A considerable section of the population is eager to return to the good old cycling days, in order to contribute towards a green environment as well as to maintain their physical well-being. Furthermore, an extant section of Delhi’s population relies on cycles as their primary means of commuting in the city. Delhi has the highest number of cyclists and the second highest number of pedestrians in the country. However, the tumultuous problems of unregulated traffic, inadequate space on roads and adverse weather conditions discourage them from opting for cycling. Lately, the city has been facing a conglomeration of problems such as haphazard traffic movement, clogged roads, congestion, pollution, accidents, safety issues, etc. In 1957, Delhi’s cyclists accounted for 36 per cent of trips, which dropped to a mere 4 per cent in 2008. The declining rate is due to unsafe roads and a lack of proper cycle lanes; currently, only 10 per cent of the city has cycle tracks. There is also a lack of public recreational activities in the city. These conundrums incite the need for a covered elevated cycling bridge track to facilitate safe and smooth cycle commutation in the city, which would also serve the purpose of an alternate urban public space over the cycle bridge, reducing the cost as well as the space requirement and developing a user-friendly transportation and public interaction system for urban areas in the city. Based on archival research methodologies, this research draws information and extracts records from the data accounts of the Delhi Metro Rail Corporation Ltd. as well as the Centre for Science and Environment, India. 
This research predominantly focuses on developing a prototype design for high-speed elevated bicycle lanes based on different road typologies, which can be replicated with minor variations in similar situations across the major cities of the country, including the proposed smart cities. Furthermore, it examines how these cycling lanes could be utilized in the place-making process, accommodating cycle parking and renting spaces, public recreational spaces, food courts and convenient shopping facilities with appropriate optimization, and how to preserve and increase the share of smooth and safe cycling commuting for the routine transportation of the urban community of the polluted capital, where cycling has been on a steady decline over the past few decades.

Keywords: bicycle track, prototype, road safety, urban spaces

Procedia PDF Downloads 134
91 Numerical Study of Leisure Home Chassis under Various Loads by Using Finite Element Analysis

Authors: Asem Alhnity, Nicholas Pickett

Abstract:

The leisure home industry is experiencing an increase in sales due to the rise in popularity of staycations. However, there is also a demand for improvements in thermal and structural behaviour from customers. Existing standards and codes of practice outline the requirements for leisure home design. However, there is a lack of expertise in applying Finite Element Analysis (FEA) to complex structures in this industry. As a result, manufacturers rely on standardized design approaches, which often lead to excessively engineered or inadequately designed products. This study aims to address this issue by comprehensively analysing the impact of the habitation structure on chassis performance in leisure homes. By employing FEA on the entire unit, including both the habitation structure and the chassis, the study seeks to develop a novel framework for designing and analysing leisure homes. The objectives include material reduction, enhancing structural stability, resolving existing design issues, and developing innovative modular and wooden chassis designs. The methodology used in this research is quantitative in nature: the study utilizes FEA to analyse the performance of leisure home chassis under various loads. The analysis procedures involve running FEA simulations on a numerical model of the leisure home chassis. Different load scenarios are applied to assess the stress and deflection performance of the chassis under various conditions. FEA is a numerical method that allows for accurate analysis of complex systems. The research utilizes flexible mesh sizing to calculate small deflections around doors and windows, with large meshes used for macro deflections. This approach aims to minimize run-time while providing meaningful stresses and deflections. 
Moreover, it aims to investigate the limitations and drawbacks of the popular approach of applying FEA only to the chassis and replacing the habitation structure with a distributed load. The findings of this study indicate that the popular approach of applying FEA only to the chassis and replacing the habitation structure with a distributed load overlooks the strengthening generated from the habitation structure. By employing FEA on the entire unit, it is possible to optimize stress and deflection performance while achieving material reduction and enhanced structural stability. The study also introduces innovative modular and wooden chassis designs, which show promising weight reduction compared to the existing heavily fabricated lattice chassis. In conclusion, this research provides valuable insights into the impact of the habitation structure on chassis performance in leisure homes. By employing FEA on the entire unit, the study demonstrates the importance of considering the strengthening generated from the habitation structure in chassis design. The research findings contribute to advancements in material reduction, structural stability, and overall performance optimization. The novel framework developed in this study promotes sustainability, cost-efficiency, and innovation in leisure home design.
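The deflection checks that FEA performs can be illustrated at toy scale with a one-dimensional cantilever assembled from 2-node Euler-Bernoulli beam elements and verified against the closed-form tip deflection PL³/3EI. The section properties below are hypothetical illustration values, not taken from any actual chassis:

```python
import numpy as np

def cantilever_tip_deflection(P, L, E, I, n_elem):
    """Tip deflection of a clamped cantilever under end load P, using 2-node
    Euler-Bernoulli beam elements (Hermite shape functions, exact here)."""
    le = L / n_elem
    # Element stiffness matrix for dofs (w1, theta1, w2, theta2)
    ke = (E * I / le**3) * np.array([
        [12,      6*le,   -12,     6*le],
        [6*le,  4*le**2, -6*le,  2*le**2],
        [-12,    -6*le,    12,    -6*le],
        [6*le,  2*le**2, -6*le,  4*le**2]])
    ndof = 2 * (n_elem + 1)
    K = np.zeros((ndof, ndof))
    for e in range(n_elem):          # assemble overlapping element blocks
        i = 2 * e
        K[i:i+4, i:i+4] += ke
    F = np.zeros(ndof)
    F[-2] = P                        # transverse point load at the free end
    free = np.arange(2, ndof)        # clamp deflection and rotation at node 0
    u = np.linalg.solve(K[np.ix_(free, free)], F[free])
    return u[-2]                     # tip transverse deflection

P, L, E, I = 1000.0, 4.0, 2.1e11, 8.0e-6   # hypothetical rail: N, m, Pa, m^4
fea = cantilever_tip_deflection(P, L, E, I, n_elem=8)
analytic = P * L**3 / (3 * E * I)
```

Agreement with the analytic solution is the standard sanity check before trusting the same machinery on a full chassis model, where no closed form exists.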

Keywords: static homes, caravans, motor homes, holiday homes, finite element analysis (FEA)

Procedia PDF Downloads 73
90 Potential for Massive Use of Biodiesel for Automotive in Italy

Authors: Domenico Carmelo Mongelli

Abstract:

The context of this research is that of the Italian reality, which, in order to adapt to the EU Directives that prohibit the production of internal combustion engines in favor of electric mobility from 2035, is extremely concerned about the significant loss of jobs resulting from the difficulty of the automotive industry in converting in such a short time and from the reticence of potential buyers in the face of such an epochal change. The aim of the research is to evaluate for Italy the potential of the most valid alternative to this transition to electric: leaving the current production of diesel engines unchanged, no longer powered by gasoil, which is imported and responsible for greenhouse gas emissions, but powered entirely by a nationally produced and eco-sustainable fuel such as biodiesel. Today in Italy, the percentage of biodiesel blended with gasoil for diesel engines is too low (around 10%); for this reason, this research aims to evaluate the functioning of current diesel engines powered 100% by biodiesel and the ability of the Italian production system to cope with this hypothesis. The research geographically identifies those abandoned lands in Italy, now out of the food market, that are best suited to an energy crop for the final production of biodiesel. The cultivation of oilseeds is identified, which for the Italian agro-industrial reality allows maximizing the agricultural and industrial yields of the transformation of the agricultural product into a final energy product and minimizing the production costs of the entire agro-industrial chain. To achieve this objective, specific databases are used, and energy and economic balances are prepared for the different agricultural product alternatives. Solutions are proposed and tested that allow the optimization of all production phases, both agronomic and industrial. 
The biodiesel obtained from the most feasible of the alternatives examined is analyzed, its compatibility with current diesel engines is identified, and, from the evaluation of its thermo-fluid-dynamic properties, the engineering measures that allow the perfect functioning of current internal combustion engines are examined. The results of experimental tests on the engine bench are used to assess the performance of different engines fueled with biodiesel alone, in terms of power, torque, specific consumption and useful thermal efficiency, compared with the performance of engines fueled with the current fuel mixture on the market, as well as to assess the polluting emissions of engines powered only by biodiesel, compared with current emissions. At this point, we proceed with the simulation of the total replacement of gasoil with biodiesel as a fuel for the current fleet of diesel vehicles in Italy, drawing the necessary conclusions in technological, energy, economic, and environmental terms and in terms of social and employment implications. The results allow us to evaluate the potential advantage of a total replacement of diesel fuel with biodiesel for powering road vehicles with diesel-cycle internal combustion engines, without significant changes to the current vehicle fleet and without requiring future changes to the automotive industry.

Keywords: biodiesel, economy, engines, environment

Procedia PDF Downloads 42
89 Covariate-Adjusted Response-Adaptive Designs for Semi-Parametric Survival Responses

Authors: Ayon Mukherjee

Abstract:

Covariate-adjusted response-adaptive (CARA) designs use the available responses to skew the treatment allocation in a clinical trial towards the treatment found, at an interim stage, to be best for a given patient's covariate profile. Extensive research has been done on various aspects of CARA designs with the patient responses assumed to follow a parametric model. However, the range of application of such designs is limited in real-life clinical trials, where the responses rarely fit a particular parametric form. On the other hand, it is the parametric assumption that yields robust estimates of the covariate-adjusted treatment effects. To balance these two requirements, designs are developed that are free from distributional assumptions about the survival responses, relying only on the assumption of proportional hazards for the two treatment arms. The proposed designs are developed by deriving two types of optimum allocation designs, and also by using a distribution function to link the past allocation, covariate, and response histories to the present allocation. The optimal designs are based on biased-coin procedures, with a bias towards the better treatment arm: the doubly-adaptive biased coin design (DBCD) and the efficient randomized adaptive design (ERADE). The treatment allocation proportions for these designs converge to the expected target values, which are functions of the sequentially estimated Cox regression coefficients. These expected targets are derived from constrained optimization problems and are updated as information accrues with the sequential arrival of patients. The design based on the link function is derived using the distribution function of a probit model whose parameters are adjusted according to the covariate profile of the incoming patient.
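For concreteness, the DBCD mentioned above is commonly implemented with the Hu-Zhang allocation function, which biases the randomization towards a target proportion estimated at each interim stage. A minimal sketch under that standard formulation; in the proposed designs the target rho would come from the sequentially estimated Cox regression coefficients, which are not reproduced here:

```python
def dbcd_allocation_prob(x: float, rho: float, gamma: float = 2.0) -> float:
    """Hu-Zhang allocation function g(x, rho) for the DBCD.

    x     : observed proportion of patients so far allocated to arm A
    rho   : current estimated target allocation proportion for arm A
    gamma : tuning parameter; gamma = 0 reduces to allocating with
            probability rho, while larger gamma corrects deviations
            from the target more aggressively

    Returns the probability of assigning the next patient to arm A.
    """
    if x <= 0.0:
        return 1.0
    if x >= 1.0:
        return 0.0
    num = rho * (rho / x) ** gamma
    den = num + (1.0 - rho) * ((1.0 - rho) / (1.0 - x)) ** gamma
    return num / den

# If the trial is under-allocating relative to the target (x < rho),
# the rule raises the next assignment probability above rho:
p = dbcd_allocation_prob(x=0.5, rho=0.6)  # > 0.6
```

When the observed proportion already equals the target, the rule simply randomizes with probability rho, which is the self-correcting behaviour that drives the convergence to the expected targets described above.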
To apply such designs, the treatment allocation probabilities are sequentially modified based on the treatment allocation history, the response history, previous patients' covariates, and the covariates of the incoming patient. Given this information, an expression is obtained for the conditional probability of allocating a patient to a treatment arm. Simulation studies show that the ERADE is preferable to the DBCD when the main aim is to minimize the variance of the observed allocation proportion and to maximize the power of the Wald test for a treatment difference; however, the former procedure, being discrete, tends to converge more slowly towards the expected target allocation proportion. The link-function-based design achieves the highest skewness of patient allocation to the better treatment arm and is thus, ethically, the best design. Other comparative merits of the proposed designs are highlighted, and their preferred areas of application are discussed. It is concluded that the proposed CARA designs can be considered suitable alternatives to traditional balanced randomization designs in survival trials in terms of the power of the Wald test, provided that response data are available during the recruitment phase of the trial to enable the design adaptations. Moreover, the proposed designs enable more patients to be treated with the better treatment during the trial, making them more ethically attractive to patients. An existing clinical trial has been redesigned using these methods.

Keywords: censored response, Cox regression, efficiency, ethics, optimal allocation, power, variability

Procedia PDF Downloads 145