Search results for: mechanical efficiency

212 Learning the Most Common Causes of Major Industrial Accidents and Applying Best Practices to Prevent Such Accidents

Authors: Rajender Dahiya

Abstract:

Investigation outcomes of major process incidents have been consistent for decades and validate that the causes and consequences are often identical. The question remains: why do we continue to experience similar process incidents despite the enormous development of new tools, technologies, industry standards, codes, regulations, and learning processes? The objective of this paper is to investigate the most common causes of major industrial incidents and reveal industry challenges and best practices to prevent such incidents. The author, in his current role, performs audits and inspections of a variety of high-hazard industries in North America, including petroleum refineries, chemicals, petrochemicals, manufacturing, etc. In this paper, he shares real-life scenarios, examples, and case studies from high-hazard operating facilities, including key challenges and best practices. This case study will provide a clear understanding of the importance of near-miss incident investigation; the incident examined was a safe-operating-limit excursion. The case describes deficiencies in management programs, the competency of employees, and the culture of the corporation, covering hazard identification and risk assessment, maintaining the integrity of safety-critical equipment, operating discipline, learning from process safety near misses, process safety competency, process safety culture, audits, and performance measurement. Failure to identify the hazards and manage the risks of highly hazardous materials and processes is one of the primary root causes of an incident, and failure to learn from past incidents is the leading cause of their recurrence. Several investigations of major incidents discovered that each showed several warning signs before occurring and, most importantly, that all were preventable. The author will discuss why preventable incidents were not prevented and review the common causes of failure to learn from past major incidents. The leading causes of past incidents are summarized below. The first is management failure to identify the hazard and/or mitigate the risk of hazardous processes or materials; this process starts early in the project stage and continues throughout the life cycle of the facility. For example, a poorly done hazard study such as a HAZID, PHA, or LOPA is one of the leading causes of this failure. If this step is performed correctly, the next potential cause is management failure to maintain the integrity of safety-critical systems and equipment: in most incidents, the mechanical integrity of critical equipment was not maintained, and safety barriers were bypassed, disabled, or left unmaintained. The third major cause is management failure to learn from and/or apply the learning of past incidents: there were several precursors before those incidents, and these precursors were either ignored altogether or not taken seriously. This paper will conclude by sharing how a well-implemented operating management system, a good process safety culture, and competent leaders and staff contributed to managing the risks to prevent major incidents.

Keywords: incident investigation, risk management, loss prevention, process safety, accident prevention

Procedia PDF Downloads 51
211 The Death of Ruan Lingyu: Leftist Aesthetics and Cinematic Reality in 1930s Shanghai

Authors: Chen Jin

Abstract:

This topic seeks to re-examine the New Women Incident of 1935 Shanghai from the perspective of the influence of leftist cinematic aesthetics on public discourse in 1930s Shanghai. Accordingly, an original means of interpreting the death of Ruan Lingyu will be provided. On 8 March 1935, Ruan Lingyu, the queen of Chinese silent film, committed suicide by overdosing on sleeping tablets. Her last words, ‘gossip is a fearful thing’, interlink her destiny with the protagonist she played in the film The New Women (Cai Chusheng, 1935). The coincidence was constantly questioned by the masses following her suicide, constituting the enduring question: ‘who killed Ruan Lingyu?’ Responding to this query, previous scholars have primarily analyzed the characters played by women, particularly new women as part of the leftist movement or the public discourse of 1930s Shanghai, as a means of approaching the truth. Nevertheless, alongside her status as a public celebrity, Ruan Lingyu also exists as a screen image of mechanical reproduction. The overlap between her screen image and her personal destiny has attracted limited academic attention in terms of the effect and implications of the leftist aesthetics of reality in relation to her death, which itself has provided the impetus for this research. With the reconfiguration of early Chinese film theory in the 1980s, early discourses on the relationship between cinematic reality and consciousness proposed by Hou Yao and Gu Kenfu in the 1920s were integrated into the category of Chinese film ontology, which constitutes a transcultural contrast with the Euro-American ontology that advocates the representation of reality. The discussion of Hou and Gu overlaps cinematic reality with effect, emphasizing the empathy of cinema that is directly reflected in the leftist aesthetics of the 1930s. As the main purpose of leftist cinema was to encourage revolution through truthfully depicting social reality, Ruan Lingyu became renowned for her natural and realistic acting proficiency, playing leading roles in several esteemed leftist films. The realistic reproduction and natural acting skill together constitute the empathy of leftist films, which establishes a dialogue with the virtuous female image within 1930s public discourse. On this basis, this research takes Chinese cinematic ontology and affect theory as the theoretical foundation for investigating the relationship between the screen image of Ruan Lingyu reproduced by the leftist film The New Women and the female image in 1930s public discourse. Through contextualizing Ruan Lingyu’s death within the Chinese leftist movement, the essay indicates that the empathy embodied within leftist cinematic reality limits viewers’ cognition of the actress: they project their sentiments for the perfect screen image onto Ruan Lingyu’s image in reality. Essentially, Ruan Lingyu is imprisoned in her own perfect replication. Consequently, this article argues that, alongside leftist anti-female consciousness, the leftist aesthetics of reality restricts women to a passive position within public discourse, which ultimately played a role in facilitating the death of Ruan Lingyu.

Keywords: cinematic reality, leftist aesthetics, Ruan Lingyu, The New Women

Procedia PDF Downloads 115
210 Mesenchymal Stem Cells on Fibrin Assemblies with Growth Factors

Authors: Elena Filova, Ondrej Kaplan, Marie Markova, Helena Dragounova, Roman Matejka, Eduard Brynda, Lucie Bacakova

Abstract:

Decellularized vessels have been evaluated as small-diameter vascular prostheses. Reseeding autologous cells onto decellularized tissue prior to implantation should prolong prosthesis function and make them living tissues. Suitable cell types for reseeding are both endothelial cells and bone-marrow-derived stem cells, which have a capacity for differentiation into smooth muscle cells upon mechanical loading. Endothelial cells ensure the antithrombogenicity of the vessels, while MSCs produce growth factors and, after their differentiation into smooth muscle cells, are contractile and produce extracellular matrix proteins as well. Fibrin is a natural scaffold that allows direct cell adhesion via integrin receptors, and it can be prepared autologously. Fibrin can be modified with bound growth factors, such as basic fibroblast growth factor (FGF-2) and vascular endothelial growth factor (VEGF). These modifications in turn make the scaffold more attractive for cell ingrowth. The aim of the study was to prepare thin surface-attached fibrin assemblies with bound FGF-2 and VEGF, and to evaluate the growth and differentiation of bone-marrow-derived mesenchymal stem cells on the fibrin (Fb) assemblies. The following thin surface-attached fibrin assemblies were prepared: Fb, Fb+VEGF, Fb+FGF2, Fb+heparin, Fb+heparin+VEGF, Fb+heparin+FGF2, Fb+heparin+FGF2+VEGF. Cell culture polystyrene and glass coverslips were used as controls. Human MSCs (passage 3) were seeded at a density of 8800 cells/1.5 mL alpha-MEM medium with 2.5% FS and 200 U/mL aprotinin per well of a 24-well cell culture plate. The cells were cultured on the samples for 6 days. Cell densities on days 1, 3, and 6 were analyzed after staining with a LIVE/DEAD cytotoxicity/viability assay kit. The differentiation of MSCs is being analyzed using qPCR. On day 1, the highest density of MSCs was observed on Fb+VEGF and Fb+FGF2. On days 3 and 6, the densities were similar on all samples. On day 1, cell morphology was polygonal and spread on all samples. On days 3 and 6, MSCs growing on Fb assemblies with FGF2 became apparently elongated. The evaluation of the expression of genes for von Willebrand factor and CD31 (endothelial cells), for alpha-actin (smooth muscle cells), and for alkaline phosphatase (osteoblasts) is in progress. We prepared fibrin assemblies with bound VEGF and FGF-2 that supported the attachment and growth of mesenchymal stem cells. The layers are promising for improving the ingrowth of MSCs into the biological scaffold. Supported by the Technology Agency of the Czech Republic (TA04011345), the Ministry of Health (NT11270-4/2010), and the BIOCEV – Biotechnology and Biomedicine Centre of the Academy of Sciences and Charles University project (CZ.1.05/1.1.00/02.0109), funded by the European Regional Development Fund.

Keywords: fibrin assemblies, FGF-2, mesenchymal stem cells, VEGF

Procedia PDF Downloads 323
209 Wheat Cluster Farming Approach: Challenges and Prospects for Smallholder Farmers in Ethiopia

Authors: Hanna Mamo Ergando

Abstract:

Climate change is already having a severe influence on agriculture, affecting crop yields, the nutritional content of main grains, and livestock productivity. Significant adaptation investments will be necessary to sustain existing yields and enhance production and food quality to fulfill demand. Climate-smart agriculture (CSA) provides numerous potentials in this regard, combining a focus on enhancing agricultural output and incomes while also strengthening resilience and responding to climate change. To improve agricultural production and productivity, the Ethiopian government has adopted and implemented a series of strategies, including the recent agricultural cluster farming that is practiced as an effort to change, improve, and transform subsistence farming into a modern, productive, market-oriented, and climate-smart approach through farmer production clusters. Besides, greater attention and focus have been given to wheat production and productivity by the government, and wheat is the major crop grown in cluster farming. Therefore, the objective of this assessment was to examine the various opportunities and challenges farmers face in a cluster farming system. A qualitative research approach was used to generate primary and secondary data. Respondents were chosen using the purposeful sampling technique. Accordingly, experts from the Federal Ministry of Agriculture, the Ethiopian Agricultural Transformation Institute, the Ethiopian Agricultural Research Institute, and the Ethiopian Environment Protection Authority were interviewed. The assessment revealed that farming in clusters is an economically viable technique for sustaining the agricultural businesses of small, resource-limited, and socially disadvantaged farmers. The method assists farmers in consolidating their products and delivering them in bulk to save on transportation costs while increasing income. Smallholders' negotiating power has improved as a result of cluster membership, as have knowledge and information spillover. The key challenges, on the other hand, were identified as a lack of timely provision of modern inputs, insufficient access to credit services, conflicts of interest in crop selection, and a lack of an output market for agro-processing firms. Furthermore, farmers in the cluster farming approach grow wheat year after year without crop rotation or diversification techniques. Mono-cropping has disadvantages because it raises the likelihood of disease and insect outbreaks. This practice may result in long-term consequences, including soil degradation, reduced biodiversity, and economic risk for farmers. Therefore, the government must devote more resources to addressing the issue of environmental sustainability. Farmers' access to complementary services that promote production and marketing efficiencies through infrastructure and institutional services has to be improved. In general, the assessment provides initial insights that call for a deeper study into the efficiency of the strategy's implementation, the upholding of existing policy, and the scaling-up of good practices in a sustainable and environmentally viable manner.

Keywords: cluster farming, smallholder farmers, wheat, challenges, opportunities

Procedia PDF Downloads 205
208 Identification of Failures Occurring on a System on Chip Exposed to a Neutron Beam for Safety Applications

Authors: S. Thomet, S. De-Paoli, F. Ghaffari, J. M. Daveau, P. Roche, O. Romain

Abstract:

In this paper, we present a hardware module dedicated to understanding the fail reason of a System on Chip (SoC) exposed to a particle beam. The impact of Single-Event Effects (SEE) on processor-based SoCs is a concern that has increased in the past decade, particularly for terrestrial applications with increasing automotive safety requirements, as well as in the consumer and industrial domains. The SEE created by the impact of a particle on an SoC may have consequences that can lead to instability or crashes. Specific hardening techniques for hardware and software have been developed to make such systems more reliable. The SoC is then qualified using cosmic-ray Accelerated Soft-Error Rate (ASER) testing to ensure the Soft-Error Rate (SER) remains within mission profiles. Understanding where errors occur is another challenge because of the complexity of operations performed in an SoC. Common techniques to monitor an SoC running under a beam are based on non-intrusive debug, consisting of recording the program counter and doing some consistency checking on the fly. To detect and understand SEE, we have developed a module embedded within the SoC that provides support for recording probes, hardware watchpoints, and a memory-mapped register bank dedicated to software usage. To identify CPU failure modes and the most important resources to probe, we carried out a fault-injection campaign on the RTL model of the SoC. Probes are placed on generic CPU registers and bus accesses. They highlight the propagation of errors and allow the failure modes to be identified. Typical resulting errors are bit-flips in resources creating bad addresses, illegal instructions, longer-than-expected loops, or incorrect bus accesses. Although our module is processor agnostic, it has been interfaced to a RISC-V core by probing some of the processor registers. Probes are then recorded in a ring buffer. Associated hardware watchpoints provide control, such as starting or stopping event recording or halting the processor. Finally, the module also provides a bank of registers where the firmware running on the SoC can log information; typical usage is operating-system context-switch recording. The module is connected to a dedicated debug bus and is interfaced to a remote controller via a debugger link. Thus, a remote controller can interact with the monitoring module without any intrusiveness on the SoC. Moreover, in case of CPU unresponsiveness or a system-bus stall, the recorded information can still be recovered, providing the fail reason. A preliminary version of the module has been integrated into a test chip currently being manufactured at ST in 28-nm FDSOI technology. The module has been triplicated to provide reliable information on the SoC behavior. As the primary application domain is automotive and safety, the efficiency of the module will be evaluated by exposing the test chip to a fast-neutron beam by the end of the year. In the meantime, it will be tested with alpha particles and electromagnetic fault injection (EMFI). We will report in the paper on fault-injection results as well as irradiation results.
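
For readers who want a concrete picture of the monitoring flow, here is a minimal behavioral sketch (in Python, not RTL) of the module described above: probes recorded into a ring buffer, watchpoints that can stop recording or request a halt, and a small firmware-writable register bank. All names, sizes, and the example watchpoint condition are illustrative assumptions, not the actual design.

```python
# Minimal behavioral sketch (not RTL) of the monitoring module: probes go into
# a fixed-depth ring buffer, hardware watchpoints can start/stop recording or
# request a processor halt, and a small register bank is exposed for firmware
# logging (e.g., OS context-switch IDs). Names and sizes are assumptions.
from collections import deque

class SocMonitor:
    def __init__(self, depth=256, reg_bank_size=16):
        self.ring = deque(maxlen=depth)        # probe trace survives CPU hangs
        self.reg_bank = [0] * reg_bank_size    # memory-mapped, firmware-writable
        self.recording = True
        self.halt_requested = False
        self.watchpoints = []                  # list of (predicate, action)

    def add_watchpoint(self, predicate, action):
        """action is one of 'start', 'stop', 'halt'."""
        self.watchpoints.append((predicate, action))

    def record_probe(self, cycle, probe_id, value):
        if self.recording:
            self.ring.append((cycle, probe_id, value))
        for predicate, action in self.watchpoints:
            if predicate(probe_id, value):
                if action == "start":
                    self.recording = True
                elif action == "stop":
                    self.recording = False     # freeze the trace around the event
                elif action == "halt":
                    self.halt_requested = True

    def firmware_write(self, index, value):
        self.reg_bank[index] = value           # e.g., log a context-switch ID

    def dump(self):
        """What the remote controller would read back over the debug link."""
        return list(self.ring), list(self.reg_bank)

# Hypothetical usage: stop recording when the program counter leaves a valid range
mon = SocMonitor()
mon.add_watchpoint(lambda pid, val: pid == "pc" and val > 0x8000_FFFF, "stop")
for cycle, pc in enumerate([0x8000_0000, 0x8000_0004, 0x9000_0000]):
    mon.record_probe(cycle, "pc", pc)
print(mon.dump()[0])
```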

Keywords: fault injection, SoC fail reason, SoC soft error rate, terrestrial application

Procedia PDF Downloads 226
207 Li2O Loss of Lithium Niobate Nanocrystals during High-Energy Ball-Milling

Authors: Laura Kocsor, Laszlo Peter, Laszlo Kovacs, Zsolt Kis

Abstract:

The aim of our research is to prepare rare-earth-doped lithium niobate (LiNbO3) nanocrystals having only a few dopant ions in the focal point of an exciting laser beam. These samples will be used to achieve individual addressing of the dopant ions by light beams in a confocal microscope setup. One method for the preparation of nanocrystalline materials is to reduce the particle size by mechanical grinding. High-energy ball-milling has been used in several works to produce nano lithium niobate. Previously, it was reported that dry high-energy ball-milling of lithium niobate in a shaker mill results in partial reduction of the material, which leads to a balanced formation of bipolarons and polarons yielding a gray color, together with oxygen release and Li2O segregation on the open surfaces. In the present work, we focus on preparing LiNbO3 nanocrystals by high-energy ball-milling using a Fritsch Pulverisette 7 planetary mill. Every ball-milling process was carried out in a zirconia vial with zirconia balls of different sizes (from 3 mm to 0.1 mm), using wet grinding with water and grinding times of less than an hour. By gradually decreasing the ball size to 0.1 mm, an average particle size of about 10 nm could be obtained, as determined by dynamic light scattering and verified by scanning electron microscopy. High-energy ball-milling resulted in sample darkening, evidenced by optical absorption spectroscopy measurements, indicating that the material underwent partial reduction. The unwanted lithium oxide loss decreases the Li/Nb ratio in the crystal, strongly influencing the spectroscopic properties of lithium niobate. Zirconia contamination was found in the ground samples, as proved by energy-dispersive X-ray spectroscopy measurements; however, it cannot be explained based on the hardness properties of the materials involved in the ball-milling process. It can be understood by taking into account the presence of lithium hydroxide, formed from the segregated lithium oxide and water during the ball-milling process, which causes chemically induced abrasion. The quantity of the segregated Li2O was measured by coulometric titration. During the wet-milling process in the planetary mill, it was found that the lithium oxide loss increases linearly in the early phase of the milling process, after which the Li2O loss saturates. This change goes along with the disappearance of the relatively large particles until a relatively narrow size distribution is achieved, in accordance with the dynamic light scattering measurements. With a 3 mm ball size and a rotation rate of 1100 rpm, the mean particle size achieved is 100 nm, and the total Li2O loss is about 1.2 wt.% of the original LiNbO3. Further investigations have been done to minimize the Li2O segregation during the ball-milling process. Since the Li2O loss was observed to increase with the growing total surface area of the particles, the influence of the ball-milling parameters on its quantity has also been studied.
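
The described "linear rise followed by saturation" of the Li2O loss can be captured by a simple saturating-exponential model; the sketch below fits such a model to hypothetical titration data. Only the ~1.2 wt.% total-loss figure comes from the abstract; the model form and the data points are assumptions for illustration.

```python
# Minimal sketch of fitting L(t) = L_max * (1 - exp(-t / tau)) to Li2O-loss
# versus milling-time data. Data points below are illustrative placeholders.
import numpy as np
from scipy.optimize import curve_fit

def li2o_loss(t_min, l_max, tau_min):
    """Li2O loss [wt.% of the original LiNbO3] after t_min minutes of milling."""
    return l_max * (1.0 - np.exp(-t_min / tau_min))

# Hypothetical coulometric-titration results during a <1 h wet-milling run
t = np.array([5, 10, 20, 30, 45, 60], dtype=float)
loss = np.array([0.28, 0.52, 0.85, 1.02, 1.15, 1.19])

(l_max, tau), _ = curve_fit(li2o_loss, t, loss, p0=[1.2, 15.0])
print(f"saturation level ~ {l_max:.2f} wt.%, time constant ~ {tau:.0f} min")
```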

Keywords: high-energy ball-milling, lithium niobate, mechanochemical reaction, nanocrystals

Procedia PDF Downloads 127
206 Satellite Connectivity for Sustainable Mobility

Authors: Roberta Mugellesi Dow

Abstract:

As the climate crisis becomes unignorable, it is imperative that new services are developed addressing not only the needs of customers but also their impact on the environment. The Telecommunication and Integrated Application (TIA) Directorate of ESA is supporting the green transition with particular attention to sustainable mobility. “Accelerating the shift to sustainable and smart mobility” is at the core of the European Green Deal strategy, which seeks a 90% reduction in related emissions by 2050. Transforming the way that people and goods move is essential to increasing mobility while decreasing environmental impact, and transport must be considered holistically to produce a shared vision of green intermodal mobility. The use of space technologies, integrated with terrestrial technologies, is an enabler of smarter traffic management and increased transport efficiency for automated and connected multimodal mobility. Satellite connectivity, including future 5G networks, and digital technologies such as Digital Twin, AI, Machine Learning, and cloud-based applications are key enablers of sustainable mobility. SatCom is essential to ensure that connectivity is ubiquitously available, even in remote and rural areas or in case of a failure, through the convergence of terrestrial and SatCom connectivity networks. This is especially crucial when there are risks of network failures or cyber-attacks targeting terrestrial communication; SatCom ensures communication network robustness and resilience. The combination of terrestrial and satellite communication networks is making possible intelligent and ubiquitous V2X systems and PNT services with significantly enhanced reliability and security, hyper-fast wireless access, as well as more seamless communication coverage. SatNav is essential in providing accurate tracking and tracing capabilities for automated vehicles and in guiding them to target locations. SatNav can also enable location-based services like car-sharing applications, parking assistance, and fare payment. In addition to GNSS receivers, wireless connections, radar, lidar, and other installed sensors can enable automated vehicles to monitor their surroundings, to ‘talk to each other’ and with infrastructure in real time, and to respond to changes instantaneously. SatEO can be used to provide the maps required for traffic management, as well as to evaluate conditions on the ground, assess changes, and provide key data for monitoring and forecasting air pollution and other important parameters. Earth Observation derived data are used to provide meteorological information such as wind speed and direction, humidity, and other parameters that must be incorporated into models contributing to traffic management services. The paper will provide examples of services and applications that have been developed with the aim of identifying innovative solutions and new business models enabled by new digital technologies, engaging the space and non-space ecosystems together to deliver value and provide innovative, greener solutions in the mobility sector. Examples include Connected Autonomous Vehicles, electric vehicles, green logistics, and others. The relevant technologies include hybrid SatCom and 5G providing ubiquitous coverage, IoT integration with non-space technologies, as well as navigation and PNT technology, and other space data.

Keywords: sustainability, connectivity, mobility, satellites

Procedia PDF Downloads 126
205 Investigation of Pu-238 Heat Source Modifications to Increase Power Output through (α,n) Reaction-Induced Fission

Authors: Alex B. Cusick

Abstract:

The objective of this study is to improve upon current ²³⁸PuO₂ fuel technology for space and defense applications. Modern RTGs (radioisotope thermoelectric generators) utilize the heat generated from the radioactive decay of ²³⁸Pu to create heat and electricity for long-term and remote missions. Application of RTG technology is limited by the scarcity and expense of producing the isotope, as well as the power output, which is limited to only a few hundred watts. The scarcity and expense make the efficient use of ²³⁸Pu absolutely necessary. By utilizing the decay of ²³⁸Pu not only to produce heat directly but also to indirectly induce fission in ²³⁹Pu (which is already present within currently used fuel), it is possible to see large increases in temperature, which allows for a more efficient conversion to electricity and a higher power-to-weight ratio. This concept can reduce the quantity of ²³⁸Pu necessary for these missions, potentially saving millions in investment, while yielding higher power output. Current work investigating radioisotope power systems has focused on improving the efficiency of the thermoelectric components and on replacing systems that produce heat by virtue of natural decay with fission reactors. The technical feasibility of utilizing (α,n) reactions to induce fission within current radioisotopic fuels has not been investigated in any appreciable detail, and our study aims to thoroughly investigate the performance of many such designs, develop those with the highest capabilities, and facilitate experimental testing of these designs. In order to determine the specific design parameters that maximize power output and the efficient use of ²³⁸Pu for future RTG units, MCNP6 simulations have been used to characterize the effects of modifying fuel composition, geometry, and porosity, as well as introducing neutron-moderating, reflecting, and shielding materials to the system. Although this project is currently in the preliminary stages, the final deliverables will include sophisticated designs and simulation models that define all characteristics of multiple novel RTG fuels, detailed enough to allow immediate fabrication and testing. Preliminary work has consisted of developing a benchmark model to accurately represent the ²³⁸PuO₂ pellets currently in use by NASA; this model utilizes the alpha transport capabilities of MCNP6 and agrees well with experimental data. In addition, several models have been developed by varying specific parameters to investigate their effect on (α,n) and (n,fission) reaction rates. Current practice in fuel processing is to exchange out the small portion of naturally occurring ¹⁸O and ¹⁷O to limit (α,n) reactions and avoid unnecessary neutron production. However, we have shown that enriching the oxide in ¹⁸O introduces a sufficient (α,n) reaction rate to support significant fission rates. For example, subcritical fission rates above 10⁸ f/cm³-s are easily achievable in cylindrical ²³⁸PuO₂ fuel pellets with an ¹⁸O enrichment of 100%, given an increase in size and a ⁹Be clad. Many viable designs exist, and our intent is to discuss current results and future endeavors on this project.
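
For orientation, the following back-of-envelope sketch relates an intrinsic (α,n) neutron source strength to the subcritical fission rate it sustains, using the standard subcritical multiplication relation. The numerical inputs are illustrative assumptions, not values from the study.

```python
# Back-of-envelope estimate of the (alpha,n)-driven subcritical fission rate.
# All numerical inputs below are illustrative assumptions, not values from the paper.

NU = 2.88          # average neutrons per Pu-239 fission (approximate)

def fission_rate_per_cm3(s_alpha_n, k_eff, nu=NU):
    """Fission rate density [fissions/cm^3/s] sustained by an intrinsic
    (alpha,n) neutron source of strength s_alpha_n [n/cm^3/s] in a
    subcritical medium with multiplication factor k_eff < 1.

    Subcritical multiplication: total neutron production = S / (1 - k),
    of which S*k/(1-k) neutrons are born in fission; dividing by nu
    gives the fission rate."""
    if not 0 <= k_eff < 1:
        raise ValueError("k_eff must be in [0, 1) for a subcritical system")
    return s_alpha_n * k_eff / (nu * (1.0 - k_eff))

# Example: a hypothetical 1e7 n/cm^3/s (alpha,n) source in a pellet with k_eff = 0.97
print(f"{fission_rate_per_cm3(1e7, 0.97):.2e} fissions/cm^3/s")
```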

Keywords: radioisotope thermoelectric generators (RTG), Pu-238, subcritical reactors, (alpha, n) reactions

Procedia PDF Downloads 167
204 Caged Compounds as Light-Dependent Initiators for Enzyme Catalysis Reactions

Authors: Emma Castiglioni, Nigel Scrutton, Derren Heyes, Alistair Fielding

Abstract:

By using light as a trigger, it is possible to study many biological processes, such as the activity of genes, proteins, and other molecules, with precise spatiotemporal control. Caged compounds, where biologically active molecules are generated from an inert precursor upon laser photolysis, offer the potential to initiate such biological reactions with high temporal resolution. As light acts as the trigger for cleaving the protecting group, the ‘caging’ technique provides a number of advantages, as it can be intracellular, rapid, and controlled in a quantitative manner. We are developing caging strategies to study the catalytic cycle of a number of enzyme systems, such as nitric oxide synthase and ethanolamine ammonia lyase. These include the use of caged substrates, caged electrons, and the possibility of caging the enzyme itself. In addition, we are developing a novel freeze-quench instrument to study these reactions, which combines rapid mixing and flashing capabilities. Reaction intermediates will be trapped at low temperatures and analysed using electron paramagnetic resonance (EPR) spectroscopy to identify the involvement of any radical species during catalysis. EPR techniques typically require relatively long measurement times and, very often, low temperatures to fully characterise these short-lived species. Therefore, common rapid mixing techniques, such as stopped-flow or quench-flow, are not directly suitable. However, the combination of rapid freeze-quench (RFQ) followed by EPR analysis provides the ideal approach to kinetically trap and spectroscopically characterise these transient radical species. In a typical RFQ experiment, two reagent solutions are delivered to the mixer via two syringes driven by a pneumatic actuator or stepper motor. The newly mixed solution is then sprayed into a cryogenic liquid or onto a cold surface, and the frozen sample is collected and packed into an EPR tube for analysis. The earliest RFQ instruments consisted of a hydraulic ram unit as the drive unit, with direct spraying of the sample into a cryogenic liquid (nitrogen, isopentane, or petroleum). Improvements to the RFQ technique have arisen from the design of new mixers in order to reduce both the volume and the mixing time. In addition, the cryogenic isopentane bath has been coupled to a filtering system or replaced by spraying the solution onto a surface that is frozen via thermal conductivity with a cryogenic liquid. In our work, we are developing a novel RFQ instrument which combines freeze-quench technology with flashing capabilities to enable studies of both thermally activated and light-activated biological reactions. This instrument also uses a new rotating-plate design based on magnetic couplings and removes the need for mechanical motorised rotation, which can otherwise be problematic at cryogenic temperatures.

Keywords: caged compounds, freeze-quench apparatus, photolysis, radicals

Procedia PDF Downloads 206
203 A Lung Cancer Patient with Septic Shock: A Nursing Experience

Authors: Syue-Wen Lin

Abstract:

Objective: This article explores the nursing experience of an 84-year-old male lung cancer patient who underwent a thoracoscopic right lower lobectomy and subsequent treatment. The patient had multiple comorbidities, including hypertension and diabetes. The nursing process involved cancer treatment, postoperative pain management, and wound care and healing. Methods: The nursing period was from February 10 to February 17, 2024. During the nursing process, pain management strategies were implemented, including morphine and non-pharmacological methods; music therapy, essential oil massage, and extended reception time were used to make the patient feel physically and mentally comfortable, so as to reduce postoperative pain and encourage active participation in rehabilitation. Strict sterile wound-dressing procedures and advanced wound care techniques were used to promote wound healing and prevent infection. Due to septic shock, dialysis was used to relieve worsening symptoms. Taking into account the patient's cancer status, the nursing team provided comprehensive cancer care based on the patient's physical and psychological needs. Given the complexity of the patient's condition, including advanced cancer, palliative care was also incorporated throughout the care process to relieve discomfort and provide psychological support. Results: Through a comprehensive health assessment, the nursing team fully understood the patient's condition and developed a personalized care plan. The interprofessional critical care team provided respiratory therapy and lung expansion exercises to reduce muscle loss while addressing the patient's psychological status, pain management, and vital-sign stabilization needs, resulting in a comprehensive approach to care. Lung expansion exercises and the use of a high-frequency chest wall oscillation vest successfully improved sputum drainage and facilitated weaning from mechanical ventilation. In addition, stabilizing the patient's vital signs and integrating cancer care, pain management, wound care, and palliative care ensured that the patient was fully supported throughout the recovery process, ultimately improving his quality of life. Conclusion: Lung cancer and septic shock present significant challenges to patients, and the nursing team not only provided critical care but also addressed the unique needs of the patient through comprehensive infection control, cancer care, pain management, wound care, and palliative care interventions. These measures effectively improve patients' quality of life, promote recovery, and provide compassionate palliative care for terminally ill patients. Nursing staff worked closely with family members to develop a comprehensive care plan to ensure that the patient received high-quality medical care as well as psychological support and a comfortable recovery environment.

Keywords: septic shock, lung cancer, palliative care, nursing experience

Procedia PDF Downloads 13
202 Multi-Objective Genetic Algorithm for Optimizing Machining Process Parameters

Authors: Dylan Santos De Pinho, Nabil Ouerhani

Abstract:

Energy consumption of machine-tools is becoming critical for machine-tool builders and end-users for economic, ecological, and legislation-related reasons. Many machine-tool builders are seeking solutions that allow the reduction of the energy consumption of machine-tools while preserving the same productivity rate and the same quality of machined parts. In this paper, we present the first results of a project conducted jointly by academic and industrial partners to reduce the energy consumption of a Swiss-type lathe. We employ genetic algorithms to find optimal machining parameters – the set of parameters that leads to the best trade-off between energy consumption, part quality, and tool lifetime. Three main machining process parameters are considered in our optimization technique, namely depth of cut, spindle rotation speed, and material feed rate. These machining process parameters have been identified as the most influential ones in the configuration of the Swiss-type machining process. A state-of-the-art multi-objective genetic algorithm has been used. The algorithm combines three fitness functions, which are objective functions that permit the evaluation of a set of parameters against the three objectives: energy consumption, quality of the machined parts, and tool lifetime. In this paper, we focus on the investigation of the fitness function related to energy consumption. Four different energy-consumption-related fitness functions have been investigated and compared. The first fitness function refers to the Kienzle cutting force model. The second fitness function uses the Material Removal Rate (MRR) as an indicator of energy consumption. The two other fitness functions are non-deterministic, learning-based functions. One fitness function uses a simple neural network to learn the relation between the process parameters and the energy consumption from experimental data. Another fitness function uses Lasso regression to determine the same relation. The goal is, then, to find out which fitness function best predicts the energy consumption of a Swiss-type machining process for a given set of machining process parameters. Once determined, these functions may be used for optimization purposes – to determine the optimal machining process parameters leading to minimum energy consumption. The performance of the four fitness functions has been evaluated. The Tornos DT13 Swiss-type lathe has been used to carry out the experiments. A mechanical part including various Swiss-type machining operations has been selected for the experiments. The evaluation process starts with generating a set of CNC (Computer Numerical Control) programs for machining the part at hand. Each CNC program considers a different set of machining process parameters. During the machining process, the power consumption of the spindle is measured. All collected data are assigned to the appropriate CNC program and thus to the set of machining process parameters. The evaluation approach consists in calculating the correlation between the normalized measured power consumption and the normalized power consumption prediction for each of the four fitness functions. The evaluation shows that the Lasso and neural network fitness functions have the highest correlation coefficient, at 97%. The “Material Removal Rate” (MRR) fitness function has a correlation coefficient of 90%, whereas the Kienzle-based fitness function has a correlation coefficient of 80%.
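
As a concrete illustration of the evaluation described above, the following minimal sketch shows an MRR-based power prediction and the correlation check between normalized predictions and normalized measured spindle power. The MRR proxy, the constants, and the dummy data are illustrative assumptions, not the project's actual models or measurements.

```python
# Minimal sketch of the MRR-based energy indicator and of the correlation check
# used to compare a fitness function's predictions against measured spindle
# power. The MRR proxy, constants, and data are illustrative assumptions.
import numpy as np

def material_removal_rate(depth_of_cut_mm, feed_mm_per_rev, spindle_rpm):
    """Crude turning-style MRR proxy [mm^3/min]: depth of cut x feed x speed."""
    return depth_of_cut_mm * feed_mm_per_rev * spindle_rpm

def mrr_power_prediction(params, specific_energy=0.05):
    """Predicted spindle power, taken as proportional to MRR (assumed constant)."""
    a_p, f, n = params
    return specific_energy * material_removal_rate(a_p, f, n)

def normalized_correlation(predicted, measured):
    """Pearson correlation between normalized prediction and measurement,
    the criterion used to rank the candidate fitness functions."""
    p = (predicted - np.mean(predicted)) / np.std(predicted)
    m = (measured - np.mean(measured)) / np.std(measured)
    return float(np.corrcoef(p, m)[0, 1])

# Hypothetical parameter sets (depth of cut, feed, spindle speed) and measured power
param_sets = [(1.0, 0.05, 8000), (1.5, 0.08, 6000), (0.5, 0.04, 10000), (1.2, 0.06, 7000)]
predicted = np.array([mrr_power_prediction(p) for p in param_sets])
measured = np.array([420.0, 510.0, 360.0, 455.0])    # spindle power [W], dummy values
print(f"correlation = {normalized_correlation(predicted, measured):.2f}")
```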

Keywords: adaptive machining, genetic algorithms, smart manufacturing, parameter optimization

Procedia PDF Downloads 143
201 Temperature Distribution Inside Hybrid Photovoltaic-Thermoelectric Generator Systems and Their Dependency on Exposure Angles

Authors: Slawomir Wnuk

Abstract:

Due to the widespread implementation of renewable energy development programs, solar energy use is increasing constantly across the world. According to REN21, in 2020 the installed capacity of both on-grid and off-grid solar photovoltaic systems reached 760 GWDC, an increase of 139 GWDC compared to the previous year. However, the photovoltaic solar cells used for primary solar energy conversion into electrical energy have exhibited significant drawbacks. The fundamental downside is unstable and low conversion efficiency, with the energy conversion being negatively affected by a range of factors. To neutralise or minimise the impact of the factors causing energy losses, researchers have come up with varied ideas. One of the promising technological solutions offered by researchers is the PV-MTEG multilayer hybrid system, which combines the advantages of both photovoltaic cells and thermoelectric generators. A series of experiments was performed in the Glasgow Caledonian University laboratory to investigate such a system in operation. In the experiments, a Sol3A series solar simulator was employed as a stable solar irradiation source, and multichannel voltage and temperature data loggers were utilised for measurements. A simulation model of the proposed two-layer hybrid system was built and tested for its energy conversion capability under a variety of exposure angles to the solar irradiation, with a concurrent examination of the temperature distribution inside the proposed PV-MTEG structure. The same series of laboratory tests was carried out for a range of loads, with the temperature and generated voltage being measured and recorded for each exposure angle and load combination. It was found that an increase in the exposure angle of the PV-MTEG structure to an irradiation source causes a decrease in the temperature gradient ΔT between the system layers as well as a reduction in overall system heating. The reduction of the temperature gradient negatively influences the voltage generation process. The experiments showed that, for exposure angles in the range from 0° to 45°, the ‘generated voltage – exposure angle’ dependence is closely reflected by a linear characteristic. It was also found that the voltage generated by MTEG structures working with the determined and applied optimal load drops by approximately 0.82% per 1° increase of the exposure angle. This voltage drop also occurs at higher loads, becoming steeper as the load increases beyond the optimal value; however, the difference is not significant. Despite the linear character of the MTEG voltage-angle dependence, the temperature reduction between the system structure layers and at tested points on its surface was not linear. In conclusion, the PV-MTEG exposure angle appears to be an important parameter affecting the efficiency of energy generation by the thermo-electrical generators incorporated inside these hybrid structures. The research revealed great potential for the proposed hybrid system. The experiments indicated interesting behaviour of the tested structures, and the results appear to provide a valuable contribution to the development and technological design process for large energy conversion systems utilising similar structural solutions.
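
The reported linear dependence can be written as a simple model, V(θ) ≈ V₀(1 − 0.0082·θ) for 0° ≤ θ ≤ 45° at the optimal load; the short sketch below encodes it, with V₀ (the voltage at normal incidence) left as an assumed placeholder.

```python
# Minimal sketch of the reported linear voltage-angle dependence for 0-45 deg:
# generated voltage drops by ~0.82% per 1 degree of exposure angle at the
# optimal load. v0 is an assumed placeholder, not a measured value.

DROP_PER_DEGREE = 0.0082   # 0.82% per degree, from the reported measurements

def mteg_voltage(angle_deg, v0=1.0):
    """Estimated MTEG output voltage at a given exposure angle (0-45 deg),
    relative to the voltage v0 at normal incidence."""
    if not 0 <= angle_deg <= 45:
        raise ValueError("linear model reported only for 0-45 degrees")
    return v0 * (1.0 - DROP_PER_DEGREE * angle_deg)

for theta in (0, 15, 30, 45):
    print(theta, round(mteg_voltage(theta), 3))
```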

Keywords: photovoltaic solar systems, hybrid systems, thermo-electrical generators, renewable energy

Procedia PDF Downloads 85
200 Metal-Semiconductor Transition in Ultra-Thin Titanium Oxynitride Films Deposited by ALD

Authors: Farzan Gity, Lida Ansari, Ian M. Povey, Roger E. Nagle, James C. Greer

Abstract:

Titanium nitride (TiN) films have been widely used in a variety of fields due to their unique electrical, chemical, physical, and mechanical properties, including low electrical resistivity, chemical stability, and high thermal conductivity. In microelectronic devices, thin continuous TiN films are commonly used as a diffusion barrier and metal gate material. However, as the film thickness decreases below a few nanometers, the electrical properties of the film alter considerably. In this study, the physical and electrical characteristics of 1.5 nm to 22 nm thin films deposited by Plasma-Enhanced Atomic Layer Deposition (PE-ALD) using Tetrakis(dimethylamino)titanium(IV) (TDMAT) chemistry and Ar/N2 plasma on 80 nm SiO2, capped in-situ by 2 nm Al2O3, are investigated. The ALD technique allows uniformly thick films at the monolayer level in a highly controlled manner. The chemistry incorporates a low level of oxygen into the TiN films, forming titanium oxynitride (TiON). The thickness of the films is characterized by Transmission Electron Microscopy (TEM), which confirms the uniformity of the films. The surface morphology of the films is investigated by Atomic Force Microscopy (AFM), indicating sub-nanometer surface roughness. Hall measurements are performed to determine parameters such as carrier mobility, type, and concentration, as well as resistivity. The >5 nm-thick films exhibit metallic behavior; however, we have observed that the thin-film resistivity is modulated significantly by film thickness, such that there is an increase of more than five orders of magnitude in the sheet resistance at room temperature when comparing the 5 nm and 1.5 nm films. Scattering effects at interfaces and grain boundaries could play a role in the thickness-dependent resistivity, in addition to the quantum confinement effect that could occur in ultra-thin films: based on our measurements, the carrier concentration decreases from 1.5E22 1/cm3 to 5.5E17 1/cm3, while the mobility increases from < 0.1 cm2/V.s to ~4 cm2/V.s for the 5 nm and 1.5 nm films, respectively. Also, measurements at different temperatures indicate that the resistivity is relatively constant for the 5 nm film, while for the 1.5 nm film a reduction of more than two orders of magnitude has been observed over the range of 220 K to 400 K. The activation energies of the 2.5 nm and 1.5 nm films are 30 meV and 125 meV, respectively, indicating that the ultra-thin TiON films exhibit semiconducting behaviour; we attribute this effect to a metal-semiconductor transition. By the same token, the contact is no longer Ohmic for the thinnest film (i.e., the 1.5 nm-thick film); hence, a modified lift-off process was developed to selectively deposit thicker films, allowing us to perform electrical measurements with low contact resistance on the raised contact regions. Our atomic-scale simulations, based on molecular-dynamics-generated amorphous TiON structures with low oxygen content, confirm our experimental observations, indicating highly n-type thin films.
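
To make the activation-energy analysis concrete, the sketch below shows how an activation energy can be extracted from resistance-versus-temperature data via an Arrhenius fit, R ∝ exp(Ea/kBT). The data points are placeholders for illustration only, not measurements from this work.

```python
# Minimal sketch of extracting an activation energy from resistance (or
# resistivity) vs. temperature data via an Arrhenius fit, ln R = ln R0 + Ea/(kB*T).
# The data points below are illustrative placeholders, not measured values.
import numpy as np

KB_EV = 8.617e-5   # Boltzmann constant [eV/K]

def activation_energy_ev(temps_k, resistance_ohm):
    """Fit ln(R) = ln(R0) + Ea/(kB*T) and return Ea in eV (the slope)."""
    x = 1.0 / (KB_EV * np.asarray(temps_k, dtype=float))
    y = np.log(np.asarray(resistance_ohm, dtype=float))
    slope, _ = np.polyfit(x, y, 1)
    return slope

# Hypothetical thin-film data over 220-400 K (illustrative only)
T = [220, 260, 300, 350, 400]
R = [2.0e8, 8.0e7, 4.0e7, 2.0e7, 1.0e7]
print(f"Ea ~ {activation_energy_ev(T, R) * 1000:.0f} meV")
```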

Keywords: activation energy, ALD, metal-semiconductor transition, resistivity, titanium oxynitride, ultra-thin film

Procedia PDF Downloads 286
199 Application of Alumina-Aerogel in Post-Combustion CO₂ Capture: Optimization by Response Surface Methodology

Authors: S. Toufigh Bararpour, Davood Karami, Nader Mahinpey

Abstract:

The dependence of the global economy on fossil fuels has led to a large growth in the emission of greenhouse gases (GHGs). Among the various GHGs, carbon dioxide is the main contributor to the greenhouse effect due to the huge amount emitted. To mitigate the threatening effect of CO₂, carbon capture and sequestration (CCS) technologies have been studied widely in recent years. For combustion processes, three main CO₂ capture techniques have been proposed: post-combustion, pre-combustion, and oxyfuel combustion. Post-combustion is the most commonly used CO₂ capture process, as it can be readily retrofitted into existing power plants. Multiple advantages have been reported for post-combustion capture by solid sorbents, such as high CO₂ selectivity, high adsorption capacity, and low required regeneration energy. Chemical adsorption of CO₂ over alkali-metal-based solid sorbents such as K₂CO₃ is a promising method for the selective capture of diluted CO₂ from the huge amount of nitrogen present in the flue gas. To improve the CO₂ capture performance, K₂CO₃ is supported by a stable and porous material. Al₂O₃ has commonly been employed as the support and has enhanced the cyclic CO₂ capture efficiency of K₂CO₃. Different phases of alumina can be obtained by setting the calcination temperature of boehmite at 300, 600 (γ-alumina), 950 (δ-alumina), or 1200 °C (α-alumina). By increasing the calcination temperature, the regeneration capacity of alumina increases, while the surface area is reduced. However, sorbents with lower surface areas have lower CO₂ capture capacity as well (except for sorbents prepared with hydrophilic support materials). To resolve this issue, a highly efficient alumina-aerogel support was synthesized, with a BET surface area of over 2000 m²/g, and then calcined at a high temperature. The synthesized alumina-aerogel was impregnated on K₂CO₃ at 50 wt% support/K₂CO₃, which resulted in the preparation of a sorbent with remarkable CO₂ capture performance. The effect of synthesis conditions, such as the type of alcohol, solvent-to-co-solvent ratio, and aging time, on the performance of the support was investigated. The best support was synthesized using methanol as the solvent, after five days of aging time, and at a solvent-to-co-solvent (methanol-to-toluene) ratio (v/v) of 1/5. Response surface methodology was used to investigate the effect of operating parameters such as carbonation temperature and H₂O-to-CO₂ flowrate ratio on the CO₂ capture capacity. The maximum CO₂ capture capacity, at the optimum values of the operating parameters, was 7.2 mmol CO₂ per gram of K₂CO₃. The cyclic behavior of the sorbent was examined over 20 carbonation and regeneration cycles. The alumina-aerogel-supported K₂CO₃ showed great performance compared to unsupported K₂CO₃ and γ-alumina-supported K₂CO₃. Fundamental performance analyses and long-term thermal and chemical stability tests will be performed on the sorbent in the future. The applicability of the sorbent for a bench-scale process will be evaluated, and a corresponding process model will be established. The fundamental material knowledge and the respective process development will be delivered to industrial partners for the design of a pilot-scale testing unit, thereby facilitating the industrial application of the alumina-aerogel.
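
As an illustration of the response-surface step, the following minimal sketch fits a full second-order (quadratic) model of capture capacity in carbonation temperature and H₂O-to-CO₂ ratio and locates the stationary point of the fitted surface. The design points, responses, and factor ranges are illustrative placeholders, not the experimental data of this study.

```python
# Minimal sketch of a two-factor response-surface (quadratic) model for CO2
# capture capacity vs. carbonation temperature and H2O-to-CO2 flow-rate ratio.
# Design points and responses below are illustrative placeholders.
import numpy as np

def quadratic_design_matrix(temp, ratio):
    """Columns: 1, T, R, T*R, T^2, R^2 (full second-order RSM model)."""
    t, r = np.asarray(temp, float), np.asarray(ratio, float)
    return np.column_stack([np.ones_like(t), t, r, t * r, t**2, r**2])

# Hypothetical 3-level factorial design (T in deg C, ratio v/v) and
# measured capacities (mmol CO2 / g K2CO3)
T     = np.array([60, 60, 80, 80, 60, 80, 70, 70, 70])
ratio = np.array([0.5, 1.5, 0.5, 1.5, 1.0, 1.0, 0.5, 1.5, 1.0])
q     = np.array([5.0, 5.4, 5.2, 5.6, 6.1, 6.3, 6.0, 6.4, 7.0])

coef, *_ = np.linalg.lstsq(quadratic_design_matrix(T, ratio), q, rcond=None)

# Stationary point of the fitted surface (candidate optimum of the model)
b1, b2, b12, b11, b22 = coef[1:]
A = np.array([[2 * b11, b12], [b12, 2 * b22]])
t_opt, r_opt = np.linalg.solve(A, [-b1, -b2])
print(f"Stationary point near T = {t_opt:.1f} C, H2O/CO2 ratio = {r_opt:.2f}")
```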

Keywords: alumina-aerogel, CO₂ capture, K₂CO₃, optimization

Procedia PDF Downloads 111
198 Superoleophobic Nanocellulose Aerogel Membrane as Bioinspired Cargo Carrier on Oil by Sol-Gel Method

Authors: Zulkifli, I. W. Eltara, Anawati

Abstract:

Understanding the complementary roles of surface energy and roughness on natural non-wetting surfaces has led to the development of a number of biomimetic superhydrophobic surfaces, which exhibit apparent contact angles with water greater than 150 degrees and low contact angle hysteresis. However, superoleophobic surfaces—those that display contact angles greater than 150 degrees with organic liquids having appreciably lower surface tensions than that of water—are extremely rare. In addition to chemical composition and roughened texture, a third parameter is essential to achieve superoleophobicity, namely re-entrant surface curvature in the form of overhang structures. The overhangs can be realized as fibers. Superoleophobic surfaces are appealing for applications such as antifouling, since purely superhydrophobic surfaces are easily contaminated by oily substances in practical applications, which in turn impairs the liquid repellency. Other studies have demonstrated that such aqueous nanofibrillar gels are unexpectedly robust, allowing the formation of highly porous aerogels by direct water removal through freeze-drying; they are flexible, unlike most aerogels, which suffer from brittleness, and they provide flexible, hierarchically porous templates for functionalities, e.g., electrical conductivity. No crosslinking, solvent exchange, or supercritical drying is required to suppress collapse during the aerogel preparation, unlike in typical aerogel preparations. The aerogel used in the current work is an ultra-lightweight solid material composed of native cellulose nanofibers. The native cellulose nanofibers are cleaved from the self-assembled hierarchy of macroscopic cellulose fibers. They have become highly topical, as they are proposed to show extraordinary mechanical properties due to their parallel and extensively hydrogen-bonded polysaccharide chains. We demonstrate that, with a superoleophobic coating applied by the sol-gel method, the nanocellulose aerogel is capable of supporting a weight nearly three orders of magnitude larger than the weight of the aerogel itself. The load support is achieved by surface tension acting at different length scales: at the macroscopic scale along the perimeter of the carrier, and at the microscopic scale along the cellulose nanofibers, preventing soaking of the aerogel and thus ensuring buoyancy. Superoleophobic nanocellulose aerogels have recently been achieved using unmodified cellulose nanofibers and using carboxymethylated, negatively charged cellulose nanofibers as starting materials. In this work, the aerogels made from unmodified cellulose nanofibers were subsequently treated with fluorosilanes. To complement previous work on superoleophobic aerogels, we demonstrate their application as cargo carriers on oil, gas permeability, plastrons, and drag reduction, and we show that fluorinated nanocellulose aerogels are high-adhesion superoleophobic surfaces. We foresee applications including buoyant, gas-permeable, dirt-repellent coatings for miniature sensors and other devices floating on generic liquid surfaces.
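
As a rough illustration of the load-support argument, the sketch below estimates the maximum load a floating carrier can hold as the buoyancy of the displaced oil plus the vertical surface-tension force along its perimeter. All numbers (oil properties, carrier size, immersion depth) are assumed for illustration and are not taken from the paper.

```python
# Back-of-envelope sketch of the load a floating superoleophobic carrier can
# support: buoyancy of the displaced oil plus the vertical component of the
# surface-tension force acting along the carrier perimeter. All input values
# are illustrative assumptions.

def max_supported_load_g(perimeter_m, displaced_volume_m3,
                         surface_tension_n_per_m=0.027,   # typical oil, assumed
                         oil_density_kg_m3=900.0,         # assumed
                         g=9.81):
    """Upper-bound supported mass [grams] before the meniscus collapses,
    taking the surface-tension force as fully vertical (best case)."""
    buoyancy_n = oil_density_kg_m3 * g * displaced_volume_m3
    tension_n = surface_tension_n_per_m * perimeter_m
    return (buoyancy_n + tension_n) / g * 1000.0

# Hypothetical 2 cm x 2 cm carrier pressed 2 mm into the oil surface
print(f"{max_supported_load_g(4 * 0.02, 0.02 * 0.02 * 0.002):.2f} g")
```

With these assumed values the estimate is of order one gram, which is large compared with the few-milligram weight of an aerogel piece of that size, consistent with the multiple-orders-of-magnitude load support described in the abstract.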

Keywords: superoleophobic, nanocellulose, aerogel, sol-gel

Procedia PDF Downloads 346
197 Induction Machine Design Method for Aerospace Starter/Generator Applications and Parametric FE Analysis

Authors: Wang Shuai, Su Rong, K. J.Tseng, V. Viswanathan, S. Ramakrishna

Abstract:

The More-Electric-Aircraft concept in the aircraft industry places an increasing demand on embedded starter/generators (ESGs). The high-speed and high-temperature environment within an engine poses great challenges to the operation of such machines. In view of these challenges, squirrel cage induction machines (SCIMs) have shown advantages due to their simple rotor structure, absence of temperature-sensitive components, and low torque ripple. The tight operating constraints arising from typical ESG applications, together with the detailed operating principles of SCIMs, have been exploited to derive a mathematical interpretation of the ESG-SCIM design process. The resultant non-linear mathematical treatment yielded a unique solution to the SCIM design problem for each configuration of pole pair number p, slots/pole/phase q, and conductors/slot zq, easily implemented via loop patterns. It was also found that not all configurations lead to feasible solutions, and the corresponding observations have been elaborated. The developed mathematical procedures also proved an effective framework for optimization among electromagnetic, thermal, and mechanical aspects by allocating corresponding degree-of-freedom variables. Detailed 3D FEM analysis has been conducted to validate the resultant machine performance against the design specifications. To obtain higher power ratings, electrical machines often have to increase the slot areas to accommodate more windings. Since the available space for embedding such machines inside an engine is usually short in length, an axial air-gap arrangement appears more appealing than its radial-gap counterpart. The aforementioned approach has been adopted in case studies designing series of AFIMs and RFIMs, respectively, with increasing power ratings. The following observations were obtained. Under the strict rotor-diameter limitation, the AFIM extended axially to gain the increased slot areas, while the RFIM expanded radially with the same axial length. Beyond certain power ratings, the AFIM led to a long cylinder geometry, while the RFIM topology resulted in the desired short disk shape. Besides the different dimension-growth patterns, AFIMs and RFIMs also exhibited dissimilar performance degradation regarding power factor, torque ripple, and rated slip along with increased power ratings. Parametric response curves were plotted to better illustrate the above influences of increased power ratings. The case studies may provide a basic guideline that could assist potential users in making decisions between AFIM and RFIM for relevant applications.
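
A minimal sketch of the "loop pattern" enumeration mentioned above is given below: it iterates over candidate (p, q, zq) configurations and keeps those for which a design solver returns a feasible solution. The solver and its limits here are toy placeholders standing in for the paper's full non-linear design treatment.

```python
# Minimal sketch of the loop-pattern enumeration over winding configurations
# (pole pairs p, slots/pole/phase q, conductors/slot zq). The feasibility check
# (slot count and slot fill limits) is a simplified stand-in; all limit values
# and the solver are illustrative assumptions.

def enumerate_configurations(p_range, q_range, zq_range, solve_design):
    """Return the feasible (p, q, zq) designs found by the supplied solver."""
    feasible = []
    for p in p_range:
        for q in q_range:
            for zq in zq_range:
                design = solve_design(p, q, zq)   # unique solution per config
                if design is not None:
                    feasible.append((p, q, zq, design))
    return feasible

def dummy_solver(p, q, zq, phases=3):
    """Toy solver: compute the slot count and reject configurations that
    violate simple placeholder constraints."""
    slots = 2 * p * q * phases       # slots = poles x phases x q
    slot_fill = zq / 20.0            # pretend 20 conductors/slot is the limit
    if slots > 72 or slot_fill > 1.0:
        return None                  # infeasible configuration
    return {"slots": slots, "slot_fill": slot_fill}

designs = enumerate_configurations(range(1, 5), range(1, 5), range(2, 24, 2), dummy_solver)
print(f"{len(designs)} feasible configurations, e.g. {designs[0][:3]}")
```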

Keywords: axial flux induction machine, electrical starter/generator, finite element analysis, squirrel cage induction machine

Procedia PDF Downloads 452
196 Speech and Swallowing Function after Tonsillo-Lingual Sulcus Resection with PMMC Flap Reconstruction: A Case Study

Authors: K. Rhea Devaiah, B. S. Premalatha

Abstract:

Background: The tonsillar lingual sulcus is the area between the tonsils and the base of the tongue. Surgical resection of lesions in the head and neck results in changes in speech and swallowing functions. The severity of the speech and swallowing problem depends upon the site and extent of the lesion, the type and extent of surgery, and the flexibility of the remaining structures. Need for the study: This paper focuses on the importance of speech and swallowing rehabilitation in an individual with a lesion in the tonsillar lingual sulcus and on post-operative function. Aim: To evaluate speech and swallowing functions after intensive speech and swallowing rehabilitation. The objectives are to evaluate speech intelligibility and swallowing functions after intensive therapy and to assess quality of life. Method: The present study describes the case of a 47-year-old male with a diagnosis of basaloid squamous cell carcinoma of the left tonsillar lingual sulcus (pT2N2M0), who underwent wide local excision with left radical neck dissection and PMMC flap reconstruction. Post-surgery, the patient presented with complaints of reduced speech intelligibility and difficulty in opening the mouth and swallowing. A detailed evaluation of speech and swallowing functions was carried out, including OPME, an articulation test, speech intelligibility, the different phases of swallowing, and trismus evaluation. Self-reported questionnaires such as the SHI-E (Speech Handicap Index – Indian English), DHI (Dysphagia Handicap Index), and SESEQ-K (Self Evaluation of Swallowing Efficiency in Kannada) were also administered to capture the patient's own perception of his problem. Based on the evaluation, the patient was diagnosed with pharyngeal-phase dysphagia associated with trismus and reduced speech intelligibility. Intensive speech and swallowing therapy was advised twice weekly for a duration of one hour. Results: In total, the patient attended 10 intensive speech and swallowing therapy sessions. Results indicated misarticulation of speech sounds such as lingua-palatal sounds. Mouth opening was restricted to one finger width, with difficulty chewing, masticating, and swallowing the bolus. Intervention strategies included oro-motor exercises, indirect swallowing therapy, use of a trismus device to facilitate mouth opening, and a change in food consistency to help swallowing. Practice sessions were held with articulation drills to improve the production of speech sounds and speech intelligibility. Significant changes in articulatory production, speech intelligibility, and swallowing abilities were observed. The self-rated quality-of-life measures such as the DHI, SHI, and SESEQ-K revealed no speech handicap and near-normal swallowing ability, indicating improved QOL after the intensive speech and swallowing therapy. Conclusion: Speech and swallowing therapy after carcinoma in the tonsillar lingual sulcus is crucial, as the tongue plays an important role in both speech and swallowing. The role of speech-language and swallowing therapists in oral cancer should be highlighted in treating these patients and improving their overall quality of life. With intensive speech-language and swallowing therapy after surgery for oral cancer, there can be a significant change in speech outcomes and swallowing functions, depending on the site and extent of the lesion, which will thereby improve the individual's QOL.

Keywords: oral cancer, speech and swallowing therapy, speech intelligibility, trismus, quality of life

Procedia PDF Downloads 104
195 Dietary Factors Contributing to Osteoporosis among Postmenopausal Women in Riyadh Armed Forces Hospital

Authors: Rabab Makki

Abstract:

Bone mineral density and bone metabolism are affected by various factors such as genetic, endocrine, mechanical and nutritional factors. Our understanding of nutritional influences on bone health is limited because most studies have focused on calcium. This study investigated the dietary factors that are likely to contribute to osteoporosis in Saudi post-menopausal women and correlated them with BMD. This case-control study involved 36 postmenopausal Saudi females selected from the orthopedics and osteoporosis outpatient clinics, and 25 postmenopausal Saudi females as controls from the primary clinic of the Military Hospital in Riyadh. The women were diagnosed as osteoporotic based on the BMD measurement at any site (left femur neck, right femur neck, left total hip, right total hip or spine). Both the controls and the osteoporotic subjects were over 50 years of age, had a BMI between 31-34 kg/m² (second-degree obesity), and were not free from other conditions such as diabetes and hypertension. Subjects (osteoporotics and controls) were interviewed to collect data on demographic characteristics, medical history, dietary intake, anthropometry (height and weight) and bone mineral density. Blood samples were collected from all subjects. Analyses of serum calcium, vitamin D and phosphate were done at the main laboratory of the Military Hospital Riyadh by the laboratory technician, while BMD was determined at the Department of Nuclear Medicine by an expert technician and the results were interpreted by a radiologist. Data on the frequency of consumption of animal foods (meat, eggs, poultry and fish) and dairy foods (milk, yogurt, cheese) showed lower intake among the osteoporotic group than among the controls. In spite of the low intake, there was no association with BMD. In general, vegetables and fruits were consumed less by the osteoporotics than by the controls. The only fruit that showed a significant positive correlation was banana, with total right and left hip BMD, probably due to its high potassium and mineral content, which is likely to prevent bone resorption. Mataziz, a vegetable and wheat combination dish, showed a significant positive correlation with the same sites (total right and left hip). Both osteoporotics and controls consumed table sugar, but sweet intake showed a significant negative correlation with left femur neck BMD, suggesting that sucrose increases urinary calcium loss. Both osteoporotics and controls consumed Arabic coffee, and a significant negative correlation between Arabic coffee intake and right femur neck BMD was observed in the osteoporotic patients. It could be suggested that an increased intake of fruits and vegetables might promote bone density, while high intakes of coffee and sugars might reduce it; no significant correlation was observed between BMD at any site and dairy products. Inadequate nutrition therefore appears to be a major risk factor. Further studies are needed among the Saudi population to confirm these results.
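
The correlation analysis described above pairs food-frequency data with site-specific BMD values. A minimal sketch of such an analysis is given below; the intake values, BMD values, and the choice of Pearson correlation are illustrative assumptions and are not taken from the study.

```python
# Illustrative sketch of correlating dietary intake frequency with site-specific BMD.
# All data below are made-up placeholders, not the study's measurements.
import numpy as np
from scipy import stats

# weekly intake frequency of one food item (e.g., banana) per subject
intake = np.array([2, 5, 1, 7, 3, 0, 4, 6, 2, 5])
# bone mineral density at one site (g/cm^2) for the same subjects
bmd_total_hip = np.array([0.71, 0.84, 0.69, 0.88, 0.75, 0.66, 0.79, 0.86, 0.72, 0.83])

r, p_value = stats.pearsonr(intake, bmd_total_hip)
print(f"Pearson r = {r:.2f}, p = {p_value:.3f}")
# A positive r with p < 0.05 would be reported as a significant positive
# correlation between that food item and BMD at this site.
```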

Keywords: osteoporosis, Saudi Arabia, Riyadh Armed Forces, postmenopausal women

Procedia PDF Downloads 402
194 A Magnetic Hydrochar Nanocomposite as a Potential Adsorbent of Emerging Pollutants

Authors: Aura Alejandra Burbano Patino, Mariela Agotegaray, Veronica Lassalle, Fernanda Horst

Abstract:

Water pollution is of worldwide concern due to the importance of water as an essential resource for life. Industrial and urban growth are anthropogenic activities that have caused an increase of undesirable compounds in water. In the last decade, emerging pollutants have become of great interest since, at very low concentrations (µg/L and ng/L), they exhibit hazardous effects on wildlife, aquatic ecosystems, and human organisms. One group of emerging pollutants that is a matter of study is pharmaceuticals. Their high consumption rate and their inappropriate disposal have led to their detection in wastewater treatment plant influent, effluent, surface water, and drinking water. In consequence, numerous technologies have been developed to treat these pollutants efficiently. Adsorption appears to be an easy and cost-effective technology. Among the most used adsorbents for emerging pollutant removal are carbon-based materials such as hydrochars. This study aims to use a magnetic hydrochar nanocomposite as an adsorbent for diclofenac (DCF) removal. Kinetic models and the adsorption efficiency in real water samples were analyzed. For this purpose, a magnetic hydrochar nanocomposite was synthesized through the hydrothermal carbonization (HTC) technique hybridized with co-precipitation to add the magnetic component, based on iron oxide nanoparticles, into the hydrochar. The hydrochar was obtained from sunflower husk residue as the precursor. TEM, TGA, FTIR, zeta potential as a function of pH, DLS, the BET technique, and elemental analysis were employed to characterize the material in terms of composition and chemical structure. Adsorption kinetics were carried out in distilled water and real water at room temperature, at pH 5.5 for distilled water and natural pH for real water samples, with a 1:1 adsorbent:adsorbate dosage ratio, contact times from 10-120 minutes, and a 50% dosage concentration of DCF. Results demonstrated that the magnetic hydrochar presents superparamagnetic properties with a saturation magnetization value of 55.28 emu/g. Besides, it is mesoporous with a surface area of 55.52 m²/g. It is composed of magnetite nanoparticles incorporated into the hydrochar matrix, as proven by TEM micrographs, FTIR spectra, and zeta potential. On the other hand, kinetic studies of DCF adsorption were carried out, finding percent removal efficiencies of up to 85.34% after 80 minutes of contact time. In addition, after 120 minutes of contact time, desorption of the pollutant from the active sites took place, which indicated that the material became saturated after that time. In real water samples, the percent removal efficiency decreased to 57.39%, ascribable to possible competitive adsorption of organic or inorganic compounds and ions for the active sites of the magnetic hydrochar. The main suggested adsorption mechanisms between the magnetic hydrochar and diclofenac include hydrophobic and electrostatic interactions as well as hydrogen bonds. It can be concluded that the sunflower husk by-product can be valorized into a magnetic hydrochar nanocomposite that appears to be an efficient adsorbent for DCF removal as a model emerging pollutant. These results are being complemented by modifying experimental variables such as the pollutant’s initial concentration, the adsorbent:adsorbate dosage ratio, and temperature. Currently, adsorption assays of other emerging pollutants are being carried out.
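
As an illustration of the kinetic analysis mentioned above, the following sketch fits a pseudo-second-order model, one of the models commonly applied to such data, to removal-versus-time measurements. The data points, initial guesses, and the choice of model are assumptions for demonstration only, not values from the study.

```python
# Hedged sketch: fitting a pseudo-second-order adsorption kinetic model.
# The data points and parameter values are invented for illustration only.
import numpy as np
from scipy.optimize import curve_fit

def pseudo_second_order(t, qe, k2):
    """q(t) = k2*qe^2*t / (1 + k2*qe*t), with qe in mg/g and k2 in g/(mg*min)."""
    return k2 * qe**2 * t / (1.0 + k2 * qe * t)

t_min = np.array([10, 20, 40, 60, 80, 100, 120], dtype=float)   # contact time (min)
q_t   = np.array([12.0, 18.5, 24.0, 27.5, 29.2, 28.8, 27.9])    # adsorbed DCF (mg/g)

params, _ = curve_fit(pseudo_second_order, t_min, q_t, p0=[30.0, 0.01])
qe_fit, k2_fit = params
print(f"qe = {qe_fit:.1f} mg/g, k2 = {k2_fit:.4f} g/(mg*min)")
# The slight drop after 80-120 min in the raw data (desorption once the surface
# saturates) is not captured by this model, which assumes monotonic uptake.
```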

Keywords: environmental remediation, emerging pollutants, hydrochar, magnetite nanoparticles

Procedia PDF Downloads 185
193 Understanding New Zealand’s 19th Century Timber Churches: Techniques in Extracting and Applying Underlying Procedural Rules

Authors: Samuel McLennan, Tane Moleta, Andre Brown, Marc Aurel Schnabel

Abstract:

The development of ecclesiastical buildings within New Zealand has produced some unique design characteristics that take influence from both international styles and local building methods. This research looks at how procedural modelling can be used to define such common characteristics and to understand how they are shared and developed within different examples of a similar architectural style. This will be achieved through the creation of procedural digital reconstructions of the various timber Gothic churches built during the 19th century in the city of Wellington, New Zealand. ‘Procedural modelling’ is a digital modelling technique that has been growing in popularity, particularly within the game and film industries, as well as in other fields such as industrial design and architecture. Such a design method entails the creation of a parametric ‘ruleset’ that can be easily adjusted to produce many variations of geometry, rather than the single geometry typically found in traditional CAD software. Key precedents within this area of digital heritage include work by Haegler, Müller, and Gool, Nicholas Webb and Andre Brown, and most notably Mark Burry. What these precedents all share is that the forms of the reconstructed architecture have been generated using computational rules and an understanding of the architects’ geometric reasoning. This is also true within this research, as Gothic architecture makes use of only a select range of forms (such as the pointed arch) that can be accurately replicated using the same standard geometric techniques originally used by the architect. The methodology of this research involves firstly establishing a sample group of similar buildings, documenting the existing samples, researching any lost samples to find evidence such as architectural plans, photos, and written descriptions, and then consolidating all the findings into a single 3D procedural asset within the software ‘Houdini’. The end result will be an adjustable digital model that contains all the architectural components of the sample group, such as the various naves, buttresses, and windows. These components can then be selected and arranged to create visualisations of the sample group. Because timber Gothic churches in New Zealand share many details between designs, the created collection of architectural components can also be used to approximate similar designs not included in the sample group, such as designs found beyond the Wellington region. This creates an initial library of architectural components that can be further expanded to encapsulate as wide a sample size as desired. Such a methodology greatly improves upon the efficiency and adjustability of digital modelling compared to current practices found in digital heritage reconstruction. It also gives greater accuracy to speculative design, as a lack of evidence for lost structures can be approximated using components from still-existing or better-documented examples. This research will also bring attention to the cultural significance these types of buildings have within the local area, addressing the public’s general unawareness of architectural history that is identified in the Wellington-based research ‘Moving Images in Digital Heritage’ by Serdar Aydin et al.
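
To make the idea of a parametric ruleset concrete, the sketch below generates the profile of a two-centred Gothic pointed arch from a small set of parameters. The construction and parameter names are a generic geometric illustration, not the rules extracted from the Wellington churches or the authors' Houdini asset; in practice the same construction would sit inside a Houdini node with exposed parameters.

```python
# Minimal procedural "ruleset" sketch: a two-centred Gothic pointed arch profile.
# Parameters (span, offset, resolution) are illustrative only.
import math

def pointed_arch(span, offset, n=24):
    """Return (x, y) points along a two-centred pointed arch profile.

    span   -- clear width of the opening
    offset -- distance of each arc centre from its nearer springing point:
              0 gives an equilateral (sharply pointed) arch,
              span/2 gives a semicircular (round) arch
    n      -- points per half-arc
    """
    radius = span - offset
    half = span / 2.0
    apex_y = math.sqrt(radius ** 2 - (half - offset) ** 2)
    sweep = math.atan2(apex_y, half - offset)   # angle swept by each half-arc
    left = []                                   # left half, centred at (span - offset, 0)
    for i in range(n + 1):
        a = math.pi - sweep * (i / n)
        left.append((span - offset + radius * math.cos(a), radius * math.sin(a)))
    right = [(span - x, y) for x, y in reversed(left[:-1])]  # mirror about x = span/2
    return left + right

# Varying one parameter of the ruleset yields a family of arch variations.
for c in (0.0, 0.5, 1.0):
    pts = pointed_arch(span=2.0, offset=c, n=8)
    apex = max(y for _, y in pts)
    print(f"offset={c}: apex height = {apex:.2f}")
```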

Keywords: digital forensics, digital heritage, gothic architecture, Houdini, procedural modelling

Procedia PDF Downloads 124
192 Improving Junior Doctor Induction Through the Use of Simple In-House Mobile Application

Authors: Dmitriy Chernov, Maria Karavassilis, Suhyoun Youn, Amna Izhar, Devasenan Devendra

Abstract:

Introduction and Background: A well-structured and comprehensive departmental induction improves patient safety and job satisfaction amongst doctors. The aims of our project were as follows: 1. Assess the perceived preparedness of junior doctors starting their rotation in Acute Medicine at Watford General Hospital. 2. Develop a supplemental induction guide and pocket reference in the form of an iOS mobile application. 3. Collect feedback after implementing the mobile application following a trial period of 8 weeks with a small cohort of junior doctors. Materials and Methods: A questionnaire was distributed to all new junior trainees starting in the Department of Acute Medicine to assess their experience of the current induction. A mobile induction application was developed and trialled over a period of 8 weeks, distributed in addition to the existing didactic induction session. After the trial period, the same questionnaire was distributed to assess improvement in the induction experience. Analytics data were collected with users’ consent to gauge user engagement and identify areas of improvement for the application. A feedback survey about the app was also distributed. Results: A total of 32 doctors used the application during the 8-week trial period. The application was accessed 7259 times in total, with the average user spending a cumulative 37 minutes and 22 seconds on the app. The most used section was Clinical Guidelines, accessed 1490 times. The app feedback survey revealed positive reviews: 100% of participants (n=15/15) responded that the app improved their overall induction experience compared to other placements; 93% (n=14/15) responded that the app improved overall efficiency in completing daily ward jobs compared to previous rotations; and 93% (n=14/15) responded that the app improved patient safety overall. In the pre-app and post-app induction surveys, participants reported a 48% improvement in awareness of practical aspects of the job, a 26% improvement in awareness of where to locate pathways and clinical guidelines, and a 40% reduction in feelings of being overwhelmed. Conclusions and recommendations: This study demonstrates the importance of technology in medical education and clinical induction. The application’s average engagement time equates to roughly 20 cumulative hours of on-the-job training delivered across its users within the 8-week period. The most used and referred-to section was Clinical Guidelines, which shows that there is high demand for an accessible pocket guide to this type of material. This simple mobile application resulted in a significant improvement in feedback about induction in our Department of Acute Medicine and will likely improve workplace satisfaction. Limitations of the application include the small number of participants in the post-app surveys; availability for iPhone users only; some useful sections being nested deep within the app; the lack of deep search functionality across all sections; the lack of real-time user feedback; and the need for regular review and updates. Future steps for the app include developing a web app with an admin dashboard to simplify uploading and editing content, comprehensive search functionality, and a user feedback and peer-rating system.

Keywords: mobile app, doctor induction, medical education, acute medicine

Procedia PDF Downloads 79
191 Functional Plasma-Spray Ceramic Coatings for Corrosion Protection of RAFM Steels in Fusion Energy Systems

Authors: Chen Jiang, Eric Jordan, Maurice Gell, Balakrishnan Nair

Abstract:

Nuclear fusion, one of the most promising options for reliably generating large amounts of carbon-free energy in the future, has seen a plethora of ground-breaking technological advances in recent years. An efficient and durable “breeding blanket”, needed to ensure a reactor’s self-sufficiency by maintaining the optimal coolant temperature as well as by minimizing the radiation dosage behind the blanket, still remains a technological challenge for the various reactor designs for commercial fusion power plants. A relatively new dual-coolant lead-lithium (DCLL) breeder design has exhibited great potential for high-temperature (>700°C), high-thermal-efficiency (>40%) fusion reactor operation. However, the structural material, namely reduced-activation ferritic-martensitic (RAFM) steel, is not chemically stable in contact with the molten Pb-17%Li coolant. Thus, to utilize this promising reactor design, effective corrosion-resistant coatings on RAFM steels represent a pressing need. Solution Spray Technologies LLC (SST) is developing a double-layer ceramic coating design to address the corrosion protection of RAFM steels, using a novel solution and solution/suspension plasma spray technology through a US Department of Energy-funded project. Plasma spray is a coating deposition method widely used in many energy applications. Novel derivatives of the conventional powder plasma spray process, known as the solution-precursor and solution/suspension-hybrid plasma spray processes, are powerful methods for fabricating thin, dense ceramic coatings with the complex compositions necessary for corrosion protection in DCLL breeders. These processes can be used to produce ultra-fine molten splats and allow fine adjustment of the coating chemistry. Thin, dense ceramic coatings with a chemistry chosen for superior chemical stability in molten Pb-Li, low-activation properties, and good radiation tolerance are ideal for the corrosion protection of RAFM steels. A key challenge is to accommodate the CTE mismatch with the RAFM substrate through the selection and incorporation of appropriate bond layers, thus allowing for enhanced coating durability and robustness. Systematic process optimization is being used to define the optimal plasma spray conditions for both the topcoat and the bond layer, and X-ray diffraction and SEM-EDS are applied to validate the chemistry and phase composition of the coatings. The plasma-sprayed double-layer corrosion-resistant coatings were also deposited onto simulated RAFM steel substrates, which are being tested separately under thermal cycling, high-temperature moist-air oxidation, and molten Pb-Li capsule corrosion conditions. Results from this testing on coated samples and comparisons with bare RAFM reference samples will be presented, and conclusions will be drawn assessing the viability of the new ceramic coatings as corrosion prevention systems for DCLL breeders in commercial nuclear fusion reactors.

Keywords: breeding blanket, corrosion protection, coating, plasma spray

Procedia PDF Downloads 303
190 Defining a Framework for Holistic Life Cycle Assessment of Building Components by Considering Parameters Such as Circularity, Material Health, Biodiversity, Pollution Control, Cost, Social Impacts, and Uncertainty

Authors: Naomi Grigoryan, Alexandros Loutsioli Daskalakis, Anna Elisse Uy, Yihe Huang, Aude Laurent (Webanck)

Abstract:

In response to the building and construction sectors accounting for a third of all energy demand and emissions, the European Union has introduced new laws and regulations in the construction sector that emphasize material circularity, energy efficiency, biodiversity, and social impact. Existing design tools assess sustainability in early-stage design for products or buildings; however, there is no standardized methodology for measuring the circularity performance of building components. Existing assessment methods for building components focus primarily on carbon footprint but lack the comprehensive analysis required to design for circularity. The research presented in this paper covers the parameters needed to assess sustainability in the design process of architectural products such as doors, windows, and facades. It maps a framework for a tool that assists designers with real-time sustainability metrics. Considering the life cycle of building components such as facades, windows, and doors involves the life cycle stages applied to product design and many of the methods used in the life cycle analysis of buildings. The current industry standards of sustainability assessment for metal building components follow cradle-to-grave life cycle assessment (LCA), track Global Warming Potential (GWP), and document the parameters used for an Environmental Product Declaration (EPD). Developed by the Ellen MacArthur Foundation, the Material Circularity Indicator (MCI) is a methodology that uses the data from LCA and EPDs to rate circularity, with a value between 0 and 1 where higher values indicate higher circularity. Expanding on the MCI with additional indicators such as the Water Circularity Index (WCI), the Energy Circularity Index (ECI), the Social Circularity Index (SCI), and Life Cycle Economic Value (EV), and calculating biodiversity risk and uncertainty, the assessment of an architectural product's impact can be targeted more specifically based on product requirements, performance, and lifespan. Broadening the scope of LCA calculation for products to incorporate aspects of building design allows product designers to account for the disassembly of architectural components. For example, the Material Circularity Indicator for architectural products such as windows and facades is typically low due to the impact of glass, as 70% of glass ends up in landfills because of damage in the disassembly process. The low MCI can be counteracted by expanding beyond cradle-to-grave assessment and focusing the design process on disassembly, recycling, and repurposing with the help of real-time assessment tools. Design for Disassembly and Urban Mining have been integrated within the construction field only at small scales, as project-based exercises that do not address the entire supply chain of architectural products. By adopting more comprehensive sustainability metrics and incorporating uncertainty calculations, the sustainability of building components can be assessed more accurately with decarbonization and disassembly in mind, addressing the large-scale commercial markets within construction, some of the most significant contributors to climate change.
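
A simplified sketch of the MCI calculation, following the publicly documented Ellen MacArthur Foundation methodology as we understand it, is given below. The variable names and the worked window example are our own illustrative assumptions, and a single recycling efficiency is used for both the feedstock and end-of-life streams.

```python
# Simplified Material Circularity Indicator (MCI) sketch; mass values and the
# window example are illustrative assumptions, not data from the paper.
def material_circularity_indicator(mass, frac_recycled_feedstock, frac_reused_feedstock,
                                   frac_collected_for_recycling, frac_reused_at_eol,
                                   recycling_efficiency, lifetime_ratio=1.0, intensity_ratio=1.0):
    """Return an MCI between 0 and 1 (higher = more circular)."""
    # Virgin material entering the product
    virgin = mass * (1 - frac_recycled_feedstock - frac_reused_feedstock)
    # Unrecoverable waste: material not collected, plus losses in the recycling processes
    waste_uncollected = mass * (1 - frac_collected_for_recycling - frac_reused_at_eol)
    waste_recycling_eol = mass * frac_collected_for_recycling * (1 - recycling_efficiency)
    waste_recycling_feed = mass * frac_recycled_feedstock * (1 - recycling_efficiency) / recycling_efficiency
    waste = waste_uncollected + (waste_recycling_eol + waste_recycling_feed) / 2
    # Linear Flow Index and utility factor F(X) = 0.9 / X
    lfi = (virgin + waste) / (2 * mass + (waste_recycling_feed - waste_recycling_eol) / 2)
    utility = lifetime_ratio * intensity_ratio
    return max(0.0, 1 - lfi * (0.9 / utility))

# Example: a window assembly with partly recycled feedstock and partial collection.
print(round(material_circularity_indicator(
    mass=40.0, frac_recycled_feedstock=0.3, frac_reused_feedstock=0.0,
    frac_collected_for_recycling=0.4, frac_reused_at_eol=0.0,
    recycling_efficiency=0.75), 2))   # prints a low value, consistent with the text
```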

Keywords: architectural products, early-stage design, life cycle assessment, material circularity indicator

Procedia PDF Downloads 82
189 Bio-Functionalized Silk Nanofibers for Peripheral Nerve Regeneration

Authors: Kayla Belanger, Pascale Vigneron, Guy Schlatter, Bernard Devauchelle, Christophe Egles

Abstract:

A severe injury to a peripheral nerve leads to its degeneration and the loss of sensory and motor function. To this day, there is still no more effective alternative to the autograft, which has long been considered the gold standard for nerve repair. In order to overcome the numerous drawbacks of the autograft, tissue-engineered biomaterials may be effective alternatives. Silk fibroin is a favorable biomaterial due to its many advantageous properties, such as its biocompatibility, its biodegradability, and its robust mechanical properties. In this study, bio-mimicking multi-channeled nerve guidance conduits made of aligned nanofibers produced by electrospinning were functionalized with signaling biomolecules and were tested in vitro and in vivo for nerve regeneration support. Silk fibroin (SF) extracted directly from silkworm cocoons was put into solution at a concentration of 10 wt%. Poly(ethylene oxide) (PEO) was added to the resulting SF solution to increase solution viscosity, and the following three electrospinning solutions were made: (1) an SF/PEO solution, (2) an SF/PEO solution with nerve growth factor and ciliary neurotrophic factor, and (3) an SF/PEO solution with nerve growth factor and neurotrophin-3. Each of these solutions was electrospun into a multi-layer architecture to obtain mechanically optimized aligned nanofibrous mats. For in vitro studies, the aligned fibers were treated to induce β-sheet formation and thoroughly rinsed to eliminate the presence of PEO. Each material was tested using rat embryo neuron cultures to evaluate neurite extension and the interaction with bio-functionalized or non-functionalized aligned fibers. For in vivo studies, the mats were rolled into 5 mm long conduits with multiple micro-channels, which were then treated and thoroughly rinsed. Each conduit was subsequently implanted to bridge a severed rat sciatic nerve. The effectiveness of nerve repair over a period of 8 months was extensively evaluated by cross-referencing electrophysiological, histological, and movement analysis results to comprehensively assess the progression of nerve repair. In vitro results show a more favorable interaction between growing neurons and bio-functionalized silk fibers compared to pure silk fibers. Neurites were also seen to have extended unidirectionally along the alignment of the nanofibers, which confirms the guidance effect of the electrospun material. The in vivo study has produced positive results for the regeneration of the sciatic nerve over the length of the study, showing contrasts between the bio-functionalized material and the non-functionalized material along with comparisons to the experimental control. Nerve regeneration has been evaluated not only by histological analysis but also by electrophysiological assessment and motion analysis of two separate natural movements. By studying these three components in parallel, the most comprehensive evaluation of nerve repair for the conduit designs can be made, which can therefore more accurately depict their overall effectiveness. This work was supported by La Région Picardie and FEDER.

Keywords: electrospinning, nerve guidance conduit, peripheral nerve regeneration, silk fibroin

Procedia PDF Downloads 240
188 Temporal and Spacial Adaptation Strategies in Aerodynamic Simulation of Bluff Bodies Using Vortex Particle Methods

Authors: Dario Milani, Guido Morgenthal

Abstract:

Fluid dynamic computation of wind-induced forces on bluff bodies, e.g., light flexible civil structures or airplane wings at high incidence approaching the ground, is one of the major criteria governing their design. For such structures a significant dynamic response may result, requiring the use of small-scale devices such as guide vanes in bridge design to control these effects. The focus of this paper is on the numerical simulation of the bluff body problem involving multiscale phenomena induced by small-scale devices. One of the solution methods for CFD simulation that is relatively successful in this class of applications is the Vortex Particle Method (VPM). The method is based on a grid-free Lagrangian formulation of the Navier-Stokes equations, where the velocity field is modeled by particles representing local vorticity. These vortices are convected by the free-stream and mutually induced velocities and are also diffused. This representation yields the main advantages of low numerical diffusion, compact discretization (as the vorticity is strongly localized), an implicit accounting for the free-space boundary conditions typical for this class of FSI problems, and a natural representation of the vortex creation process inherent in bluff body flows. When the particle resolution reaches the Kolmogorov dissipation length, the method becomes a Direct Numerical Simulation (DNS). However, it is crucial to note that any solution method aims at balancing the computational cost against the achievable accuracy. In the classical VPM, if the fluid domain is discretized by Np particles, the computational cost is O(Np²). For the coupled FSI problem of interest, for example large structures such as long-span bridges, the aerodynamic behavior may be influenced or even dominated by small structural details such as barriers, handrails or fairings. For such geometrically complex and dimensionally large structures, resolving the complete domain with the conventional VPM particle discretization may become prohibitively expensive to compute even for moderate numbers of particles. It is possible to reduce this cost either by reducing the number of particles or by controlling their local distribution. It is also possible to increase the accuracy of the solution without substantially increasing the global computational cost by computing a correction of the particle-particle interaction in some regions of interest. In this paper different strategies are presented in order to extend the conventional VPM to reduce the computational cost whilst resolving the required details of the flow. The methods include temporal sub-stepping to increase the accuracy of the particle convection in certain regions, as well as dynamically re-discretizing the particle map to control both the global and the local number of particles. Finally, these methods are applied to a test case, and the improvements in efficiency as well as in accuracy of the proposed extensions to the method are presented. The benefits in terms of accuracy and computational cost of the combination of these methods will thus be presented along with their relevant applications.
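
A toy sketch of the core operation described above, the O(Np²) particle-particle interaction, is given below: a regularized 2D Biot-Savart velocity evaluation followed by explicit convection with optional temporal sub-stepping. It is an illustration only (inviscid, unbounded, no remeshing or boundary treatment), not the authors' solver.

```python
# Toy 2D vortex particle step: O(N^2) regularized Biot-Savart velocities plus
# explicit Euler convection with optional temporal sub-stepping.
import numpy as np

def induced_velocity(pos, gamma, u_inf=(1.0, 0.0), delta=0.05):
    """Velocity at every particle induced by all particles (Krasny-regularized kernel)."""
    dx = pos[:, 0][:, None] - pos[:, 0][None, :]
    dy = pos[:, 1][:, None] - pos[:, 1][None, :]
    r2 = dx**2 + dy**2 + delta**2            # smoothing removes the point-vortex singularity
    u = u_inf[0] + np.sum(-gamma[None, :] * dy / (2 * np.pi * r2), axis=1)
    v = u_inf[1] + np.sum( gamma[None, :] * dx / (2 * np.pi * r2), axis=1)
    return np.stack([u, v], axis=1)

def convect(pos, gamma, dt, substeps=1):
    """Advance particle positions; substeps > 1 refines the convection locally in time."""
    h = dt / substeps
    for _ in range(substeps):
        pos = pos + h * induced_velocity(pos, gamma)
    return pos

rng = np.random.default_rng(0)
pos = rng.uniform(-0.5, 0.5, size=(200, 2))   # particle positions
gamma = rng.normal(0.0, 0.01, size=200)       # particle circulations
pos = convect(pos, gamma, dt=0.02, substeps=4)
print(pos[:3])
```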

Keywords: adaptation, fluid dynamic, remeshing, substepping, vortex particle method

Procedia PDF Downloads 257
187 Deep Learning for SAR Images Restoration

Authors: Hossein Aghababaei, Sergio Vitale, Giampaolo Ferraioli

Abstract:

In the context of Synthetic Aperture Radar (SAR) data, polarization is an important source of information for Earth-surface monitoring. SAR systems are often designed to transmit only one polarization. This constraint leads to either single or dual polarimetric SAR imaging modalities. Single polarimetric systems operate with a fixed single polarization of both the transmitted and received electromagnetic (EM) waves, resulting in a single acquisition channel. Dual polarimetric systems, on the other hand, transmit in one fixed polarization and receive in two orthogonal polarizations, resulting in two acquisition channels. Dual polarimetric systems are obviously more informative than single polarimetric systems and are increasingly being used for a variety of remote sensing applications. In dual polarimetric systems, the choice of polarizations for the transmitter and the receiver is open. The choice of circular transmit polarization and coherent dual linear receive polarizations forms a special dual polarimetric system called hybrid polarimetry, which brings the property of rotational invariance to the geometrical orientations of features in the scene and optimizes the design of the radar in terms of reliability, mass, and power constraints. The complete characterization of target scattering, however, requires fully polarimetric data, which can be acquired with systems that transmit two orthogonal polarizations. This adds further complexity to data acquisition and shortens the coverage area or swath of fully polarimetric images compared to the swath of dual or hybrid polarimetric images. The search for solutions to augment dual polarimetric data to full polarimetric data therefore aims at the full characterization and exploitation of the backscattered field over a wider coverage with less system complexity. Several methods for reconstructing fully polarimetric images using hybrid polarimetric data can be found in the literature. Although the improvements achieved by the newly investigated and experimentally tested reconstruction techniques are undeniable, the existing methods are mostly based upon model assumptions (especially the assumption of reflection symmetry), which may limit their reliability and applicability to vegetation and forest scenarios. To overcome the problems of these techniques, this paper proposes a new framework for reconstructing fully polarimetric information from hybrid polarimetric data. The framework uses deep learning to augment hybrid polarimetric data without relying on model assumptions. A convolutional neural network (CNN) with a specific architecture and loss function is defined for this augmentation problem by focusing on different scattering properties of the polarimetric data. In particular, the method controls the CNN training process with respect to several characteristic features of polarimetric images, defined by the combination of different terms in the cost or loss function. The proposed method is experimentally validated with real data sets and compared with a well-known and standard approach from the literature. In the experiments, the reconstruction performance of the proposed framework is superior to that of conventional reconstruction methods. The pseudo fully polarimetric data reconstructed by the proposed method also agree well with the actual fully polarimetric images acquired by radar systems, confirming the reliability and efficiency of the proposed method.
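
A minimal sketch of the kind of convolutional mapping described, from a two-channel hybrid-polarimetric input to a multi-channel pseudo full-polarimetric output trained with a loss that combines several terms, is given below. The architecture, channel counts, and the specific loss terms are our assumptions, not the authors' network.

```python
# Minimal sketch (assumptions, not the authors' architecture): a CNN mapping
# 2-channel hybrid-pol intensities to a 4-channel pseudo full-pol output,
# trained with a loss built as a weighted combination of terms.
import torch
import torch.nn as nn

class PolAugmentCNN(nn.Module):
    def __init__(self, in_ch=2, out_ch=4, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, out_ch, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def combined_loss(pred, target, w_pix=1.0, w_span=0.1):
    """Weighted sum of a per-channel term and a total-power (span) term."""
    pixel_term = nn.functional.l1_loss(pred, target)
    span_term = nn.functional.l1_loss(pred.sum(dim=1), target.sum(dim=1))
    return w_pix * pixel_term + w_span * span_term

model = PolAugmentCNN()
x = torch.randn(4, 2, 64, 64)       # batch of hybrid-pol patches (placeholder data)
y = torch.randn(4, 4, 64, 64)       # corresponding full-pol reference patches
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss = combined_loss(model(x), y)
loss.backward()
opt.step()
print(float(loss))
```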

Keywords: SAR image, polarimetric SAR image, convolutional neural network, deep learning, deep neural network

Procedia PDF Downloads 65
186 Simulation-based Decision Making on Intra-hospital Patient Referral in a Collaborative Medical Alliance

Authors: Yuguang Gao, Mingtao Deng

Abstract:

The integration of independently operating hospitals into a unified healthcare service system has become a strategic imperative in the pursuit of hospitals’ high-quality development. Central to the concept of group governance over such transformation, exemplified by a collaborative medical alliance, is the delineation of shared value, vision, and goals. Given the inherent disparity in capabilities among hospitals within the alliance, particularly in the treatment of different diseases characterized by Disease Related Groups (DRG) in terms of effectiveness, efficiency and resource utilization, this study aims to address the centralized decision-making of intra-hospital patient referral within the medical alliance to enhance the overall production and quality of service provided. We first introduce the notion of production utility, where a higher production utility for a hospital implies better performance in treating patients diagnosed with that specific DRG group of diseases. Then, a Discrete-Event Simulation (DES) framework is established for patient referral among hospitals, where patient flow modeling incorporates a queueing system with fixed capacities for each hospital. The simulation study begins with a two-member alliance. The pivotal strategy examined is a "whether-to-refer" decision triggered when the bed usage rate surpasses a predefined threshold for either hospital. Then, the decision encompasses referring patients to the other hospital based on DRG groups’ production utility differentials as well as bed availability. The objective is to maximize the total production utility of the alliance while minimizing patients’ average length of stay and turnover rate. Thus the parameter under scrutiny is the bed usage rate threshold, influencing the efficacy of the referral strategy. Extending the study to a three-member alliance, which could readily be generalized to multi-member alliances, we maintain the core setup while introducing an additional “which-to-refer" decision that involves referring patients with specific DRG groups to the member hospital according to their respective production utility rankings. The overarching goal remains consistent, for which the bed usage rate threshold is once again a focal point for analysis. For the two-member alliance scenario, our simulation results indicate that the optimal bed usage rate threshold hinges on the discrepancy in the number of beds between member hospitals, the distribution of DRG groups among incoming patients, and variations in production utilities across hospitals. Transitioning to the three-member alliance, we observe similar dependencies on these parameters. Additionally, it becomes evident that an imbalanced distribution of DRG diagnoses and further disparity in production utilities among member hospitals may lead to an increase in the turnover rate. In general, it was found that the intra-hospital referral mechanism enhances the overall production utility of the medical alliance compared to individual hospitals without partnership. Patients’ average length of stay is also reduced, showcasing the positive impact of the collaborative approach. However, the turnover rate exhibits variability based on parameter setups, particularly when patients are redirected within the alliance. 
In conclusion, the re-structuring of diagnostic disease groups within the medical alliance proves instrumental in improving overall healthcare service outcomes, providing a compelling rationale for the government's promotion of patient referrals within collaborative medical alliances.
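
A minimal sketch of the "whether-to-refer" rule described above is given below: two hospitals with fixed bed capacities and per-DRG production utilities, with a referral triggered when bed usage exceeds a threshold and the partner hospital offers a higher utility for the patient's DRG group. All parameter values are illustrative placeholders, not the study's calibrated inputs.

```python
# Minimal discrete-event sketch of a threshold-based "whether-to-refer" rule
# in a two-member alliance; parameters are illustrative only.
import heapq
import random

CAPACITY = {"A": 30, "B": 20}
UTILITY = {"A": {"DRG1": 1.0, "DRG2": 0.6}, "B": {"DRG1": 0.7, "DRG2": 0.9}}
ARRIVAL_RATE = {"A": 4.0, "B": 2.5}   # patients per day
MEAN_LOS = 6.0                        # mean length of stay, days
THRESHOLD = 0.85                      # bed-usage rate triggering the referral decision

def simulate(horizon=365.0, seed=1):
    rng = random.Random(seed)
    occupied = {"A": 0, "B": 0}
    stats = {"treated": 0, "referred": 0, "turned_away": 0, "utility": 0.0}
    events, counter = [], 0
    for h in CAPACITY:                # schedule the first arrival at each hospital
        heapq.heappush(events, (rng.expovariate(ARRIVAL_RATE[h]), counter, "arrival", h))
        counter += 1
    while events:
        t, _, kind, hosp = heapq.heappop(events)
        if t > horizon:
            break
        if kind == "discharge":
            occupied[hosp] -= 1       # a discharge frees one bed
            continue
        # arrival: draw a DRG group and apply the whether-to-refer rule
        origin, drg = hosp, rng.choice(list(UTILITY[hosp]))
        other = "B" if hosp == "A" else "A"
        if (occupied[hosp] / CAPACITY[hosp] > THRESHOLD
                and occupied[other] < CAPACITY[other]
                and UTILITY[other][drg] > UTILITY[hosp][drg]):
            hosp = other
            stats["referred"] += 1
        if occupied[hosp] < CAPACITY[hosp]:
            occupied[hosp] += 1
            stats["treated"] += 1
            stats["utility"] += UTILITY[hosp][drg]
            heapq.heappush(events, (t + rng.expovariate(1 / MEAN_LOS), counter, "discharge", hosp))
            counter += 1
        else:
            stats["turned_away"] += 1
        # schedule the next arrival at the originating hospital
        heapq.heappush(events, (t + rng.expovariate(ARRIVAL_RATE[origin]), counter, "arrival", origin))
        counter += 1
    return stats

print(simulate())
```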

Keywords: collaborative medical alliance, disease related group, patient referral, simulation

Procedia PDF Downloads 49
185 Conceptualizing a Biomimetic Fablab Based on the Makerspace Concept and Biomimetics Design Research

Authors: Petra Gruber, Ariana Rupp, Peter Niewiarowski

Abstract:

This paper presents a concept for a biomimetic fablab as a physical space for education, research and development of innovation inspired by nature. Biomimetics as a discipline is finding increasing recognition in academia and has started to be institutionalized at universities in programs and centers. The Biomimicry Research and Innovation Center (BRIC) was founded in 2012 at the University of Akron as an interdisciplinary venture for the advancement of innovation inspired by nature and is part of a larger community fostering the biomimicry approach in the Great Lakes region of the US. With 30 faculty members, the center has representatives from the Colleges of Arts and Sciences (e.g., biology, chemistry, geoscience, and philosophy), Engineering (e.g., mechanical, civil, and biomedical), Polymer Science, and the Myers School of Art. A platform for training PhDs in biomimicry (17 students currently enrolled) is co-funded by educational institutions and industry partners. Research at the center touches on many areas but is currently weighted towards materials and structures, with highlights being materials based on principles found in spider silk and gecko attachment mechanisms. As biomimetics is a young scientific discipline, there is little standardisation in the programming and equipment of research facilities. As a field targeting innovation, design and prototyping processes are fundamental parts of its developments. For experimental design and prototyping, MIT's maker space concept seems to fit the requirements well, but facilities need to be more specialised in terms of access to biological systems and knowledge, and in terms of specific research, production or conservation requirements. For the education and research facility BRIC, we develop the concept of a biomimicry fablab that ties into the existing maker space concept and creates the setting for the interdisciplinary research and development carried out in the program. The concept takes the process of biomimetics as a guideline to define core activities that shall be enhanced by the allocation of specific spaces and tools. The limitations of such a facility and the intersections with further specialised labs housed in the classical departments are of special interest. As a preliminary proof of concept, two biomimetic design courses carried out in 2016 are investigated in terms of the tools and infrastructure needed. The spring course was a problem-based biomimetic design challenge in collaboration with an innovation company interested in product design for assisted living and medical devices. The fall course was a solution-based biomimetic design course focusing on order and hierarchy in nature, with the goal of finding meaningful translations into art and technology. The paper describes the background of the BRIC center, identifies and discusses the process of biomimetics, evaluates the classical maker space concept, and explores how these elements can shape the proposed research facility of a biomimetic fablab by examining the two design courses held in 2016.

Keywords: biomimetics, biomimicry, design, biomimetic fablab

Procedia PDF Downloads 273
184 Polymer Composites Containing Gold Nanoparticles for Biomedical Use

Authors: Bozena Tyliszczak, Anna Drabczyk, Sonia Kudlacik-Kramarczyk, Agnieszka Sobczak-Kupiec

Abstract:

Introduction: Nanomaterials have become one of the leading materials in the synthesis of various compounds. This is due to the fact that nano-sized materials exhibit different properties compared to their macroscopic equivalents. Such a change in size is reflected in a change in optical, electric or mechanical properties. Among nanomaterials, particular attention is currently directed toward gold nanoparticles. They find application in a wide range of areas, including cosmetology and pharmacy. Additionally, nanogold may be a component of modern wound dressings, whose antibacterial activity is beneficial from the viewpoint of the wound healing process. The specific properties of this type of nanomaterial mean that it may also be applied in cancer treatment. Studies on the development of new techniques of drug delivery are currently an important research subject for many scientists. This is due to the fact that, along with the development of such fields of science as medicine and pharmacy, the need for better and more effective methods of administering drugs is constantly growing. The solution may be the use of drug carriers. These are materials that combine with the active substance and lead it directly to the desired place. The role of such a carrier may be played by gold nanoparticles, which are able to covalently bond with many organic substances. This allows the combination of nanoparticles with active substances. Therefore, gold nanoparticles are widely used in the preparation of nanocomposites that may be used for medical purposes, with special emphasis on drug delivery. Methodology: As part of the presented research, composites consisting of a polymer matrix with gold nanoparticles introduced into the polymer network were synthesized. The synthesis was conducted with the use of a crosslinking agent and a photoinitiator, and the materials were obtained by means of a photopolymerization process. Next, incubation studies were conducted using selected liquids that simulate fluids occurring in the human body. These studies allow the biocompatibility of the tested composites to be determined in relation to the selected environments. Next, the chemical structure of the composites was characterized, as well as their sorption properties. Conclusions: The conducted research allowed for a preliminary characterization of the prepared polymer composites containing gold nanoparticles from the viewpoint of their biomedical use. The tested materials were characterized by biocompatibility in the tested environments. What is more, the synthesized composites exhibited relatively high swelling capacity, which is essential in view of their potential application as drug carriers. During such an application, the composite swells and at the same time releases the active substance introduced into its interior; therefore, it is important to evaluate the swelling ability of such a material. Acknowledgements: The authors would like to thank The National Science Centre (Grant no: UMO - 2016/21/D/ST8/01697) for providing financial support to this project. This paper is based upon work from COST Action (CA18113), supported by COST (European Cooperation in Science and Technology).

Keywords: nanocomposites, gold nanoparticles, drug carriers, swelling properties

Procedia PDF Downloads 106
183 Optical Imaging Based Detection of Solder Paste in Printed Circuit Board Jet-Printing Inspection

Authors: D. Heinemann, S. Schramm, S. Knabner, D. Baumgarten

Abstract:

Purpose: Applying solder paste to printed circuit boards (PCBs) with stencils has been the method of choice over the past years. A newer method uses a jet printer to deposit tiny droplets of solder paste through an ejector mechanism onto the board. This allows for more flexible PCB layouts with smaller components. Due to the viscosity of the solder paste, air blisters can be trapped in the cartridge. This can lead to missing solder joints or deviations in the applied solder volume. Therefore, a built-in, real-time inspection of the printing process is needed to minimize uncertainties and increase the efficiency of the process through immediate correction. The objective of the current study is the design of an optimal imaging system and the development of an automatic algorithm for the detection of applied solder joints from the captured optical images. Methods: In a first approach, a camera module connected to a microcomputer and LED strips were employed to capture images of the printed circuit board under four different illuminations (white, red, green and blue). Subsequently, an improved system including a ring light, an objective lens, and a monochromatic camera was set up to acquire higher-quality images. The obtained images can be divided into three main components: the PCB itself (i.e., the background), the reflections induced by unsoldered positions or screw holes, and the solder joints. Non-uniform illumination is corrected by estimating the background using a morphological opening and subtracting it from the input image. Image sharpening is applied in order to prevent error pixels in the subsequent segmentation. The intensity thresholds that divide the main components are obtained from the multimodal histogram using three probability density functions; determining their intersections delivers proper thresholds for the segmentation. Remaining edge gradients produce small error areas, which are removed by another morphological opening. For quantitative analysis of the segmentation results, the Dice coefficient is used. Results: The obtained PCB images show a significant gradient in all RGB channels, resulting from ambient light. Using the different lightings and color channels, 12 images of a single PCB are available. A visual inspection and the investigation of 27 specific points show the best differentiation between those points using red lighting and the green color channel. Estimating two thresholds from the multimodal histogram of the corrected images and using them for segmentation precisely extracts the solder joints. The comparison of the results to manually segmented images yields high sensitivity and specificity values. The overall result delivers a Dice coefficient of 0.89, which varies for single-object segmentations between 0.96 for well-segmented solder joints and 0.25 for single negative outliers. Conclusion: Our results demonstrate that the presented optical imaging system and the developed algorithm can robustly detect solder joints on printed circuit boards. Future work will comprise a modified lighting system that allows for more precise segmentation results using structure analysis.
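
A sketch of the described processing chain on a single image (green channel under red lighting) is given below: background estimation by a large morphological opening, subtraction, unsharp-mask sharpening, three-class thresholding, a cleanup opening, and the Dice coefficient. The kernel sizes and file names are assumptions, and multi-Otsu thresholding stands in for the paper's probability-density-intersection method.

```python
# Sketch of the described segmentation pipeline; kernel sizes are assumptions,
# and multi-Otsu replaces the paper's PDF-intersection thresholding.
import cv2
import numpy as np
from skimage.filters import threshold_multiotsu

def segment_solder(gray):
    # 1) estimate the slowly varying background and remove it
    big = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (51, 51))
    background = cv2.morphologyEx(gray, cv2.MORPH_OPEN, big)
    corrected = cv2.subtract(gray, background)
    # 2) unsharp masking to avoid error pixels at region borders
    blurred = cv2.GaussianBlur(corrected, (0, 0), 3)
    sharp = cv2.addWeighted(corrected, 1.5, blurred, -0.5, 0)
    # 3) two thresholds separating background, reflections, and solder joints
    t_low, t_high = threshold_multiotsu(sharp, classes=3)
    solder = (sharp > t_high).astype(np.uint8)
    # 4) remove small error areas left by residual edge gradients
    small = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(solder, cv2.MORPH_OPEN, small)

def dice(pred, truth):
    """Dice coefficient between two binary masks."""
    inter = np.logical_and(pred > 0, truth > 0).sum()
    return 2.0 * inter / ((pred > 0).sum() + (truth > 0).sum())

# usage (file names are placeholders):
# gray = cv2.imread("pcb_red_light_green_channel.png", cv2.IMREAD_GRAYSCALE)
# mask = segment_solder(gray)
# print(dice(mask, cv2.imread("manual_mask.png", cv2.IMREAD_GRAYSCALE)))
```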

Keywords: printed circuit board jet-printing, inspection, segmentation, solder paste detection

Procedia PDF Downloads 331