Search results for: mixture rules
278 Solid State Drive End to End Reliability Prediction, Characterization and Control
Authors: Mohd Azman Abdul Latif, Erwan Basiron
Abstract:
A flaw or drift from expected operational performance in one component (NAND, PMIC, controller, DRAM, etc.) may affect the reliability of the entire Solid State Drive (SSD) system. It is therefore important to ensure the required quality of each individual component through qualification testing specified by standards or user requirements. Qualification testing is time-consuming and comes at a substantial cost for product manufacturers. A highly technical team drawn from all the key stakeholders embarks on reliability prediction from the beginning of new product development, identifies critical-to-reliability parameters, performs full-blown characterization to embed margin into product reliability, and establishes controls to ensure that product reliability is sustained in mass production. This paper discusses a comprehensive development framework that covers the SSD end to end, from design to assembly, in-line inspection, and in-line testing, and that is able to predict and validate product reliability at the early stage of new product development. During the design stage, the SSD undergoes intense reliability margin investigation with a focus on assembly process attributes, process equipment control, in-process metrology, and the forward-looking product roadmap. Once these pillars are completed, the next step is to perform process characterization and build a reliability prediction model. Next, for design validation, the reliability prediction tool, specifically a solder joint simulator, is established. The SSDs are stratified into non-operating and operating tests with a focus on solder joint reliability and connectivity/component latent failures, prevented through design intervention and contained through the Temperature Cycle Test (TCT). Some of the SSDs are subjected to physical solder joint analyses, namely Dye and Pry (DP) and cross-section analysis. The results are fed back to the simulation team so that any corrective actions required to further improve the design can be taken. Once the SSD is validated and proven to work, it enters the monitoring phase, in which Design for Assembly (DFA) rules are updated. At this stage, the design changes, process, and equipment parameters are under control. Predicting product reliability early in product development enables on-time delivery of qualification samples to the customer, optimizes product development validation and the use of development resources, and avoids forced late investment to patch end-of-life product failures. Understanding the critical-to-reliability parameters earlier allows the focus to be placed on increasing the product margin, which increases customer confidence in product reliability.
Keywords: e2e reliability prediction, SSD, TCT, solder joint reliability, NUDD, connectivity issues, qualifications, characterization and control
Procedia PDF Downloads 174
277 Understanding Everyday Insecurities Emerging from Fragmented Territorial Control in Post-Accord Colombia
Authors: Clara Voyvodic
Abstract:
Transitions from conflict to peace are by no means smooth nor linear, particularly from the perspective of those living through them. Over the last few decades, the changing focus in peacebuilding studies has come to appreciate the everyday experience of communities and how that provides a lens through which the relative success or efficacy of these transitions can be understood. In particular, the demobilization of a significant conflict actor is not without consequences, not just for the macro-view of state stabilization and peace, but for the communities who find themselves without a clear authority of territorial control. In Colombia, the demobilization and disarmament of the FARC guerilla group provided a brief respite to the conflict and a major political win for President Manuel Santos. However, this victory has proven short-lived. Drawing from extensive field research in Colombia within the last year, including interviews with local communities and actors operating in these regions, field observations, and other primary resources, this paper examines the post-accord transitions in Colombia and the everyday security experiences of local communities in regions formerly controlled by the FARC. In order to do so, the research focused on a semi-ethnographic approach in the northern region of the department of Antioquia and the coastal area of the border department of Nariño that documented how individuals within these marginalized communities have come to understand and negotiate their security in the years following the accord and the demobilization of the FARC. This presentation will argue that the removal of the FARC as an informal governance actor opened a space for multiple actors to attempt to control the same territory, including the state. This shift has had a clear impact on the everyday security experiences of the local communities. With an exploration of the dynamics of local governance and its impact on lived security experiences, this research seeks to demonstrate how distinct patterns of armed group behavior are emerging not only from a vacuum of control left by the FARC but from an increase in state presence that nonetheless remains inconsistent and unpersuasive as a monopoly of force in the region. The increased multiplicity of actors, particularly the state, has meant that the normal (informal) rules for communities to navigate these territories are no longer in play as the identities, actions, and intentions of different competing groups have become frustratingly opaque. This research provides a prescient analysis on how the shifting dynamics of territorial control in a post-peace accord landscape produce uncertain realities that affect the daily lives of the local communities and endanger the long-term prospect of human-centered security.Keywords: armed actors, conflict transitions, informal governance, post-accord, security experiences
Procedia PDF Downloads 132
276 New Teaching Tools for a Modern Representation of Chemical Bond in the Course of Food Science
Authors: Nicola G. G. Cecca
Abstract:
In Italian IPSSEOAs, the high schools that give vocational education to students who will work in the field of enogastronomy and hotel management, the course of Food Science allows students to start seeing food as a mixture of substances that they will transform in their profession. These substances are characterized not only by a chemical composition but also by a molecular structure that makes them nutritionally active. However, the increasing number of new products proposed by the food industry, the modern techniques of production and transformation, and the innovative preparations required by customers have made much of the information reported in the most widespread Food Science textbooks outdated or too limited for people who will work in the catering sector. Authors often present information dating back to Bohr's atomic model and to the 'octet rule' proposed by G. N. Lewis to describe the chemical bond, without any reference to newer models such as the atomic orbital model and Molecular Orbital Theory, which in the meantime are themselves starting to age. Furthermore, this antiquated information precludes an easy understanding of a wide range of properties of nutritive substances and of many reactions in which food constituents are involved. In this paper, attention is focused on the use of GEOMAG™ to represent the dynamics with which the chemical bond is formed during the synthesis of molecules. GEOMAG™ is a toy produced by the Swiss company Geomagword S.A., intended to stimulate the imagination and handling ability of children aged between 6 and 10 years, and consisting of metallic spheres and magnetic metal bars coated with coloured plastic materials. The simulation carried out with GEOMAG™ is based on the similarity between the Coulomb force and the magnetic attraction force, and in particular between the formulae with which they are calculated. The electrostatic force (F, in newtons) that allows the formation of the chemical bond can be calculated as Fc = kc·q1·q2/d², where q1 and q2 are the charges of the particles [C], d is the distance between the particles [m], and kc is the Coulomb constant. It is striking to observe that the attraction force (Fm) acting between the magnetic extremities of the GEOMAG™ elements used to simulate the chemical bond can be calculated in the same way, using the formula Fm = km·m1·m2/d², where m1 and m2 represent the strengths of the poles [A·m], d is the distance between the poles [m], and km = μ/4π, in which μ is the magnetic permeability of the medium [N·A⁻²]. Students can test the magnetic attraction by trying to pull the magnetic elements of GEOMAG™ apart by hand, or by measuring it with an appropriate dynamometric system. Furthermore, by using a dynamometric system to measure the magnetic attraction between the GEOMAG™ elements, it is possible to draw a graph F = f(d) and verify that the curve obtained during the simulation is very similar to the one hypothesized around the 1920s by Linus Pauling to describe the formation of H2+ in accordance with Molecular Orbital Theory.
Keywords: chemical bond, molecular orbital theory, magnetic attraction force, GEOMAG™
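For reference, the two force laws invoked in this abstract can be written side by side in consistent notation; this is a transcription of the formulas quoted above, with the standard explicit form of the Coulomb constant added here (it is not part of the original abstract).

```latex
% Electrostatic force between two point charges (Coulomb's law), as quoted above;
% the explicit expression for k_c is standard and is added only for reference.
\[ F_c = k_c \,\frac{q_1 q_2}{d^2}, \qquad k_c = \frac{1}{4\pi\varepsilon_0} \]
% Attraction between the magnetic poles of the GEOMAG elements (the analogous form).
\[ F_m = k_m \,\frac{m_1 m_2}{d^2}, \qquad k_m = \frac{\mu}{4\pi} \]
```

Both expressions share the same inverse-square dependence on the separation d, which is the basis of the analogy exploited in the classroom simulation.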
Procedia PDF Downloads 270
275 Management Problems in a Patient With Long-term Undiagnosed Permanent Hypoparathyroidism
Authors: Babarina Maria, Andropova Margarita
Abstract:
Introduction: Hypoparathyroidism (HypoPT) is a rare endocrine disorder with an estimated prevalence of 0.25 per 1000 individuals. The most common cause of HypoPT is the loss of active parathyroid tissue following thyroid or parathyroid surgery. Sometimes permanent postoperative HypoPT occurs, manifested by hypocalcemia in combination with low levels of PTH persisting for 6 months or more after surgery. Cognitive impairments are observed in patients with hypocalcemia due to chronic HypoPT, and these can lead to problems and challenges in everyday living, such as memory loss and impaired concentration, which may be the cause of poor compliance. Clinical case: Patient K., 66 years old, underwent thyroidectomy in 2013 (at the age of 55) because of papillary thyroid cancer T1NxMx; histopathology findings confirmed the diagnosis. For 5 years after the surgery, she was followed up on an outpatient basis: only TSH levels were monitored, and the dose of levothyroxine was adjusted. In 2018, because of increasing complaints including tingling and cramps in the arms and legs, memory loss, sleep disorder, fatigue, anxiety, hair loss, muscle pain, and tachycardia, she was examined: positive Chvostek and Trousseau signs were found, and blood analyses showed total Ca 1.86 mmol/l (2.15-2.55), Ca++ 0.96 mmol/l (1.12-1.3), P 1.55 mmol/l (0.74-1.52), and Mg 0.79 mmol/l (0.66-1.07); chronic postoperative HypoPT was diagnosed. Therapy was initiated: alfacalcidol 0.5 mcg per day, calcium carbonate 2000 mg per day, cholecalciferol 1000 IU per day, and magnesium orotate 3000 mg per day. During follow-up, hypocalcemia and hyperphosphatemia persisted, and hypercalciuria of 15.7 mmol/day (2.5-6.5) was diagnosed. Dietary recommendations were given regarding phosphorus-rich foods, and the therapy was adjusted: the dose of alfacalcidol was increased to 2.5 mcg per day, and the dose of calcium carbonate was reduced to 1500 mg per day. As part of the screening for complications of HypoPT, no evidence of cataracts, Fahr syndrome, nephrocalcinosis, or kidney stone disease was found. However, compensation of HypoPT was not achieved, and therefore hydrochlorothiazide 25 mg was initiated, the dose of alfacalcidol was increased to 3 mcg per day and calcium carbonate to 3000 mg per day, and magnesium orotate and cholecalciferol were continued at the same doses. Therapeutic goals were achieved: the calcium-phosphate product was <4.4 mmol²/l², there were no episodes of hypercalcemia, and twenty-four-hour urinary calcium excretion was significantly reduced. Conclusion: Timely prescription, careful explanation of the rules for taking the drugs, and monitoring and maintenance of blood and urine parameters within target ranges contribute to preventing the development of HypoPT complications and life-threatening events.
Keywords: hypoparathyroidism, hypocalcemia, hyperphosphatemia, hypercalciuria
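As a worked illustration (added here, not part of the original case report), the calcium-phosphate product implied by the 2018 laboratory values quoted above can be computed directly:

```latex
% Calcium-phosphate product from the 2018 laboratory values quoted above.
\[ \mathrm{Ca}\times\mathrm{P} = 1.86\ \mathrm{mmol/l} \times 1.55\ \mathrm{mmol/l} \approx 2.9\ \mathrm{mmol^2/l^2} \]
```

This is already below the 4.4 mmol²/l² threshold cited above as a therapeutic goal; the treatment aims to keep the product below that threshold while correcting the hypocalcemia.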
Procedia PDF Downloads 109
274 Simple and Effective Method of Lubrication and Wear Protection
Authors: Buddha Ratna Shrestha, Jimmy Faivre, Xavier Banquy
Abstract:
By precisely controlling the molecular interactions between anti-wear macromolecules and bottle-brush lubricating molecules in the solution state, we obtained a fluid with excellent lubricating and wear protection capabilities. The reason for this synergistic behavior lies in the subtle interaction forces between the fluid components, which allow the confined macromolecules to sustain high loads under shear without rupture. The lowest friction coefficient obtained is 5×10⁻³, and the maximum pressure the fluid can sustain is 2.5 MPa, which is close to physiological pressure. Our results provide rational guidelines for designing such fluids for virtually any type of surface. Lubricating and protecting surfaces against wear using liquid lubricants is a great technological challenge: until now, wear protection was usually imparted by surface coatings involving complex chemical modifications of the surface, while lubrication was provided by a lubricating fluid. Here, we therefore sought a simple, effective, and applicable solution to this problem using the surface force apparatus (SFA). The SFA is a powerful technique with sub-angstrom resolution in distance and 10 nN/m resolution in interaction force during friction experiments; it thus gives direct insight into the interaction forces, the materials, and the friction at the interface, and the exact contact area is always known. Most importantly, this is a simple, effective, and applicable method of lubrication and wear protection, avoiding the complex chemical surface modifications on which wear protection has so far relied. The frictional data obtained while sliding flat mica surfaces have been compared, and a particular solution mixture was found to surpass all other combinations. As further work, we would like to confirm that the lubrication and anti-wear protection remain the same by performing friction experiments on synthetic cartilage.
Keywords: bottle brush polymer, hyaluronic acid, lubrication, tribology
Procedia PDF Downloads 264
273 Cardiac Arrest after Cardiac Surgery
Authors: Ravshan A. Ibadov, Sardor Kh. Ibragimov
Abstract:
Objective. The aim of the study was to optimize the protocol of cardiopulmonary resuscitation (CPR) after cardiovascular surgical interventions. Methods. The experience of CPR conducted on patients after cardiovascular surgical interventions in the Department of Intensive Care and Resuscitation (DIR) of the Republican Specialized Scientific-Practical Medical Center of Surgery named after Academician V. Vakhidov is presented. The key to the new approach is the rapid elimination of reversible causes of cardiac arrest, followed by either defibrillation or electrical cardioversion (depending on the situation) before external chest compression, which may damage the sternotomy. Careful use of adrenaline is emphasized because of the potential for rebound hypertension, and timely resternotomy (within 5 minutes) is performed to ensure optimal cerebral perfusion through direct massage. Out of 32 patients, cardiac arrest in the form of asystole was observed in 16 (50%), with hypoxemia as the cause, while the remaining 16 (50%) experienced ventricular fibrillation caused by arrhythmogenic reactions. The age of the patients ranged from 6 to 60 years. All patients were evaluated before the operation using the ASA and EuroSCORE scales, falling into the moderate-risk group (3-5 points). CPR for the restoration of cardiac activity was conducted according to the American Heart Association and European Resuscitation Council guidelines (Ley SJ. Standards for Resuscitation After Cardiac Surgery. Critical Care Nurse. 2015;35(2):30-38). The duration of CPR ranged from 8 to 50 minutes. The APACHE II scale was used to assess the severity of patients' conditions after CPR, and the Glasgow Coma Scale was employed to evaluate patients' consciousness after the restoration of cardiac activity and withdrawal of sedation. Results. In all patients, immediate chest compressions of the necessary depth (4-5 cm) at a frequency of 100-120 compressions per minute were initiated upon detection of cardiac arrest. Regardless of the type of cardiac arrest, defibrillation with a manual defibrillator was performed 3-5 minutes later, and adrenaline was administered in doses ranging from 100 to 300 mcg. Persistent ventricular fibrillation was also treated with antiarrhythmic therapy (amiodarone, lidocaine). If necessary, infusion of inotropes and vasopressors was used, and for the prevention of brain edema and the restoration of an adequate neurological status within 1-3 days, sedation, a magnesium-lidocaine mixture, mechanical intranasal cooling of the brain stem, and neuroprotective drugs were employed. A coordinated effort by the resuscitation team and proper role allocation within the team were essential for effective CPR. All these measures contributed to the improvement of CPR outcomes. Conclusion. Successful CPR following cardiac surgical interventions requires interdisciplinary collaboration. The application of an optimized CPR standard leads to a reduction in mortality rates and favorable neurological outcomes.
Keywords: cardiac surgery, cardiac arrest, resuscitation, critically ill patients
Procedia PDF Downloads 55
272 The Legal Nature of Grading Decisions and the Implications for Handling of Academic Complaints in or out of Court: A Comparative Legal Analysis of Academic Litigation in Europe
Authors: Kurt Willems
Abstract:
This research examines complaints against grading in higher education institutions in four different European regions: England and Wales, Flanders, the Netherlands, and France. The aim of the research is to examine the correlation between the applicable type of complaint handling on the one hand, and selected qualities of the higher education landscape and of public law on the other hand. All selected regions report a rising number of complaints against grading decisions, not only as to internal complaint handling within the institution but also judicially if the dispute persists. Some regions deem their administrative court system appropriate to deal with grading disputes (France) or have even erected a specialty administrative court to facilitate access (Flanders, the Netherlands). However, at the same time, different types of (governmental) dispute resolution bodies have been established outside of the judicial court system (England and Wales, and to lesser extent France and the Netherlands). Those dispute procedures do not seem coincidental. Public law issues such as the underlying legal nature of the education institution and, eventually, the grading decision itself, have an impact on the way the academic complaint procedures are developed. Indeed, in most of the selected regions, contractual disputes enjoy different legal protection than administrative decisions, making the legal qualification of the relationship between student and higher education institution highly relevant. At the same time, the scope of competence of government over different types of higher education institutions; albeit direct or indirect (o.a. through financing and quality control) is relevant as well to comprehend why certain dispute handling procedures have been established for students. To answer the above questions, the doctrinal and comparative legal method is used. The normative framework is distilled from the relevant national legislative rules and their preparatory texts, the legal literature, the (published) case law of academic complaints and the available governmental reports. The research is mainly theoretical in nature, examining different topics of public law (mainly administrative law) and procedural law in the context of grading decisions. The internal appeal procedure within the education institution is largely left out of the scope of the research, as well as different types of non-governmental-imposed cooperation between education institutions, given the public law angle of the research questions. The research results in the categorization of different academic complaint systems, and an analysis of the possibility to introduce each of those systems in different countries, depending on their public law system and higher education system. By doing so, the research also adds to the debate on the public-private divide in higher education systems, and its effect on academic complaints handling.Keywords: higher education, legal qualification of education institution, legal qualification of grading decisions, legal protection of students, academic litigation
Procedia PDF Downloads 233
271 Grain Size Statistics and Depositional Pattern of the Ecca Group Sandstones, Karoo Supergroup in the Eastern Cape Province, South Africa
Authors: Christopher Baiyegunhi, Kuiwu Liu, Oswald Gwavava
Abstract:
Grain size analysis is a vital sedimentological tool used to unravel the hydrodynamic conditions, mode of transportation, and deposition of detrital sediments. In this study, detailed grain-size analysis was carried out on thirty-five sandstone samples from the Ecca Group in the Eastern Cape Province of South Africa. Grain-size statistical parameters, bivariate analysis, linear discriminant functions, Passega diagrams, and log-probability curves were used to reveal the depositional processes, sedimentation mechanisms, and hydrodynamic energy conditions and to discriminate between different depositional environments. The grain-size parameters show that most of the sandstones are very fine to fine grained, moderately well sorted, mostly near-symmetrical, and mesokurtic in nature. The abundance of very fine to fine grained sandstones indicates the dominance of a low energy environment. The bivariate plots show that the samples are mostly grouped, except for the Prince Albert samples, which show a scattered trend due either to a mixture of two modes in equal proportion in bimodal sediments or to good sorting in unimodal sediments. The linear discriminant function (LDF) analysis is dominantly indicative of turbidity current deposits under shallow marine environments for samples from the Prince Albert, Collingham and Ripon Formations, while the samples from the Fort Brown Formation are fluvial (deltaic) deposits. The graphic mean values show the dominance of fine sand-size particles, which points to relatively low energy conditions of deposition. In addition, the LDF results point to low energy conditions during the deposition of the Prince Albert, Collingham and part of the Ripon Formation (Pluto Vale and Wonderfontein Shale Members), whereas the Trumpeters Member of the Ripon Formation and the overlying Fort Brown Formation accumulated under high energy conditions. The CM pattern shows a clustered distribution of sediments in the PQ and QR segments, indicating that the sediments were deposited mostly by suspension and rolling/saltation, and by graded suspension. Furthermore, the plots also show that the sediments were mainly deposited by turbidity currents. Visher diagrams show the variability of the hydraulic depositional conditions for the Permian Ecca Group sandstones. Saltation is the major process of transportation, although suspension and traction also played some role during deposition of the sediments. The sediments were mainly in saltation and suspension before being deposited.
Keywords: grain size analysis, hydrodynamic condition, depositional environment, Ecca Group, South Africa
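The abstract does not state which formulation of the graphic statistical parameters was used; a common choice in such studies is the Folk and Ward (1957) graphic measures, and the sketch below (an illustration added here, not taken from the paper) shows how they are typically computed from cumulative grain-size data expressed in phi units. The sieve data in the example are hypothetical.

```python
import numpy as np

# Folk & Ward (1957) graphic measures computed from percentiles of the
# cumulative grain-size curve (phi scale). Illustrative only -- the paper
# does not specify its exact formulation.

def percentile_phi(phi, cum_pct, p):
    """Interpolate the phi value at cumulative percentage p.
    cum_pct must increase monotonically with phi."""
    return float(np.interp(p, cum_pct, phi))

def folk_ward(phi, cum_pct):
    q = {p: percentile_phi(phi, cum_pct, p) for p in (5, 16, 25, 50, 75, 84, 95)}
    mean = (q[16] + q[50] + q[84]) / 3.0                      # graphic mean, Mz
    sorting = (q[84] - q[16]) / 4.0 + (q[95] - q[5]) / 6.6    # inclusive graphic standard deviation
    skewness = ((q[16] + q[84] - 2 * q[50]) / (2 * (q[84] - q[16]))
                + (q[5] + q[95] - 2 * q[50]) / (2 * (q[95] - q[5])))
    kurtosis = (q[95] - q[5]) / (2.44 * (q[75] - q[25]))      # graphic kurtosis, KG
    return {"mean_phi": mean, "sorting": sorting,
            "skewness": skewness, "kurtosis": kurtosis}

# Example with hypothetical sieve data (phi values, cumulative weight %):
phi = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
cum = np.array([2.0, 15.0, 55.0, 90.0, 100.0])
print(folk_ward(phi, cum))
```

The descriptive classes quoted in the abstract ("moderately well sorted", "near-symmetrical", "mesokurtic") correspond to standard ranges of the sorting, skewness, and kurtosis values returned by such a calculation.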
Procedia PDF Downloads 483
270 From Stigma to Solutions: Harnessing Innovation and Local Wisdom to Tackle Harms Associated with Menstrual Seclusion (Chhaupadi) in Nepal
Authors: Sara E. Baumann, Megan A. Rabin, Mary Hawk, Bhimsen Devkota, Kajol Upadhyaya, Guna Raj Shrestha, Brigit Joseph, Annika Agarwal, Jessica G. Burke
Abstract:
In Nepal, prevailing sociocultural norms associated with menstruation prompt adherence to stringent rules that limit participation in daily activities. Chhaupadi is a specific menstrual tradition in Nepal in which women and girls segregate themselves and follow a series of restrictions during menstruation. Despite having numerous physical and mental health implications, extant interventions have yet to sustainably address the harms associated with chhaupadi. In this study, the authors describe insights garnered from a collaboration with community members in Dailekh district, who formulated their own approaches to mitigate the adverse facets of chhaupadi. Envisaged as an entry point to improve women’s menstrual health experiences, this investigation employed an approach that uses Human-centered Design and a community-engaged approach. The authors conducted a four-day design workshop which unfolded in two phases: The Discovery Phase, to uncover chhaupadi context and key stakeholders, and the Design Phase, to design contextually relevant interventions. Diverse community-members, including those with lived experience practicing chhaupadi, developed five intervention concepts: 1) harnessing Female Community Health Volunteers as role models, for counseling, and raising awareness; 2) focusing on mothers and mother’s groups to instigate behavioral shifts; 3) engaging the broader community in behavior change efforts; 4) empowering fathers to effect change in their homes through counseling and education; and 5) training and emboldening youth to advocate for positive change through advocacy in their schools and homes. This research underscores the importance of employing multi-level approaches tailored to specific stakeholder groups, given Nepal’s rich cultural diversity. The engagement of Female Community Health Volunteers emerged as a promising yet underexplored intervention concept for chhaupadi, warranting broader implementation. Crucially, it is also imperative for interventions to prioritize tackling deleterious aspects of the chhaupadi tradition, emphasizing safety considerations, all while acknowledging chhaupadi’s entrenched cultural history; for some, there are positive aspects of the tradition that women and girls wish to preserve.Keywords: human-centered design, menstrual health, Nepal, community-engagement, intervention development, women's health, rural health
Procedia PDF Downloads 70
269 Enhancing Institutional Roles and Managerial Instruments for Irrigation Modernization in Sudan: The Case of Gezira Scheme
Authors: Mohamed Ahmed Abdelmawla
Abstract:
To achieve the Millennium Development Goals (MDGs) linked to agriculture, i.e. the poverty alleviation targets, the human resources involved in the agricultural sector, with special emphasis on irrigation, must receive a wealth of practical experience and training. Increased food production, including staple food, is needed to overcome present and future threats to food security. This should happen within a framework of sustainable management of natural resources, elimination of unsustainable methods of production, and poverty reduction (i.e. the axes of modernization). Sound management and accurate measurement are major requisites for the modernization process and didactic tools for securing wise and maximum utilization of water. The key component of modernization as a warranted goal is paying close attention to management and measurement issues through capacity building. As such, this paper stresses the issues of discharge management and measurement at Field Outlet Pipes (FOPs) within the Gezira Scheme, where nine FOPs were randomly selected as representative locations. These FOPs extend along the Gezira Main Canal from the Kilo 57 area in the south up to Kilo 194 in the north. The following steps were followed during the field data collection and measurements: for each selected FOP, a 90° V-notch thin-plate weir was placed in such a way that the water was directed to pass only through the notch. An optical survey level was used to measure the water head over the notch and at the FOP. Both the calculated discharge rates measured with the V-notch, denoted Qc, and the adopted discharges given by the Ministry of Irrigation and Water Resources (MOIWR), denoted Qa, were taken as the average of three replicated readings at each location. The study revealed that the FOP overestimates, and sometimes underestimates, the discharges. This is attributed to the fact that the original design specifications are not met under present conditions, where water is allowed to flow day and night with high head fluctuation, and to the fact that the FOP is a non-modular structure, i.e. the flow depends on both the upstream and downstream levels, as confirmed by the results of this study. It is therefore convenient and informative to quantify the discharge at FOPs with weirs or Parshall flumes. The cropping calendar should be clearly determined and agreed upon before the beginning of the season, in accordance and consistency with the Sudan Gezira Board (SGB) and the Ministry of Irrigation and Water Resources. As such, water indenting should be based on actual Crop Water Requirements (CWRs), not on rules of thumb (420 m³/feddan, irrespective of crop or time of season).
Keywords: management, measurement, MDGs, modernization
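For context, the discharge through a thin-plate V-notch weir of the kind described above is normally obtained from the measured head with the standard sharp-crested weir relation. The sketch below is an illustration added here; the coefficient of discharge and the example head are assumed typical values, not figures from the study.

```python
import math

def v_notch_discharge(head_m, notch_angle_deg=90.0, cd=0.58):
    """Discharge (m^3/s) over a sharp-crested V-notch weir.

    Q = (8/15) * Cd * sqrt(2g) * tan(theta/2) * H^(5/2)
    head_m          : measured head over the notch vertex [m]
    notch_angle_deg : total notch angle [degrees], 90 for the weirs used here
    cd              : coefficient of discharge (assumed ~0.58 for a 90-degree notch)
    """
    g = 9.81
    theta = math.radians(notch_angle_deg)
    return (8.0 / 15.0) * cd * math.sqrt(2.0 * g) * math.tan(theta / 2.0) * head_m ** 2.5

# Example: a head of 0.20 m over a 90-degree notch
qc = v_notch_discharge(0.20)          # calculated discharge, Qc
print(f"Qc = {qc:.4f} m^3/s")         # to be compared against the adopted discharge Qa
```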
Procedia PDF Downloads 252
268 A Text in Movement in the Totonac Flyers’ Dance: A Performance-Linguistic Theory
Authors: Luisa Villani
Abstract:
The proposal aims to address the connection between mind, body, society, and environment in the Flyers' dance, a very well-known rotatory dance in Mexico, in creating meanings and making the apprehension of the world possible. The interaction among brain, mind, body, and environment, and the intersubjective relation among them, means that the world creates and recreates social interaction. The purpose of this methodology, based on embodied cognition theory and named "A Performance-Embodied Theory", is to find the principles and patterns that organize the culture and the rules of the apprehension of the environment by Totonac people while the dance is being performed. The analysis started by questioning how anthropologists can interpret how Totonacs transform their unconscious knowledge into conscious knowledge, and how the scheme formation of imagination and their collective imagery is understood in the context of public-facing rituals such as the Flyers' dance. The problem is that, most of the time, researchers interpret elements separately and not as a complex ritual dancing whole, which is the original contribution of this study. This theory, which accepts the fact that people are body-mind agents, seeks to interpret the dance as a whole, where the different elements are joined in an integral interpretation. To understand incorporation, data were collected during prolonged periods of fieldwork, with participant observation and linguistic and extralinguistic data analysis. Laban notation was first used for the description and analysis of gestures and movements in space, but the analysis was later transformed and went beyond this method, which remains a linear and compositional one. Performance in a ritual is the actualization of a potential complex of meanings or cognitive domains among many others in a culture: one potential dimension becomes probable and then real because of the activation of specific meanings in a context. Only what a language permits can be thought, and the lexicon that is used depends on the individual culture. Only some parts of this knowledge can be activated at once, and these parts of knowledge are connected; only in this way can the world be understood. It can be recognized that, just as languages geometrize the physical world through the body, so does ritual. In conclusion, the ritual behaves as an embodied grammar or a text in movement which, depending on the ritual phases and the words and sentences pronounced in the ritual, activates bits of the encyclopedic knowledge that people have about the world. Gestures are not simply given by the performer but emerge from intentional perception, in which gestures are "understood" by the audio-spectator in an inter-corporeal way. The impact of this study lies in the possibility not only of disseminating knowledge effectively but also of generating a balance between the different parts of the world where knowledge is shared, rather than its being received by academic institutions alone. This knowledge can be exchanged, so that indigenous communities and academia can come together in activating and sharing this knowledge with the world.
Keywords: dance, flyers, performance, embodied, cognition
Procedia PDF Downloads 59
267 Numerical Investigation on Transient Heat Conduction through Brine-Spongy Ice
Authors: S. R. Dehghani, Y. S. Muzychka, G. F. Naterer
Abstract:
The ice accretion of salt water on cold substrates creates brine-spongy ice. This type of ice is a mixture of pure ice and liquid brine. A real-world case of the creation of this type of ice is superstructure icing, which occurs on marine vessels and offshore structures in cold and harsh conditions. Transient heat transfer through this medium causes phase changes between brine pockets and pure ice. Salt rejection during transient heat conduction increases the salinity of the brine pockets until they reach a local equilibrium state. In this process, changing the sensible heat of the ice and brine pockets is not the only effect of passing heat through the medium; latent heat also plays an important role and affects the mechanism of heat transfer. In this study, a new analytical model for evaluating heat transfer through brine-spongy ice is suggested. This model considers heat transfer together with partial solidification and melting. The properties of brine-spongy ice are obtained from the properties of liquid brine and pure ice. A numerical solution using the Method of Lines discretizes the medium to obtain a set of ordinary differential equations. The boundary conditions are chosen from one of the applicable cases for this type of ice: one side is treated as a thermally insulated surface, and the other side is assumed to be suddenly subjected to a constant-temperature boundary. All cases are evaluated at temperatures between −20 °C and the freezing point of brine-spongy ice. Solutions are computed for different salinities from 5 to 60 ppt. Time steps and space intervals are chosen to maintain a stable and fast solution. The variations of temperature, brine volume fraction, and brine salinity versus time are the most important outputs of this study. Results show that transient heat conduction through brine-spongy ice can produce a wide range of brine pocket salinities, from the initial salinity up to 180 ppt. The rate of variation of temperature is found to be slower for high salinity cases. The maximum rate of heat transfer occurs at the start of the simulation, and this rate decreases as time passes. Brine pockets are smaller in the portions closer to the colder side than in those closer to the warmer side. At the start of the solution, the numerical scheme tends to develop instabilities because of the sharp variation of temperature at the start of the process; adjusting the intervals improves this situation. The analytical model combined with the numerical scheme is capable of predicting the thermal behavior of brine-spongy ice. This model and its numerical solutions are important for modeling the freezing of salt water and ice accretion on cold structures.
Keywords: method of lines, brine-spongy ice, heat conduction, salt water
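To illustrate the Method of Lines discretization described above, the sketch below solves a simplified constant-property 1D conduction problem with the same boundary conditions (one insulated face, one face suddenly held at a fixed temperature). It is an illustration added here, with assumed material values, and it omits the latent-heat and phase-change coupling that is central to the actual brine-spongy ice model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Method of Lines for 1D transient conduction, dT/dt = alpha * d2T/dx2,
# with an insulated boundary at x = 0 and a suddenly imposed constant
# temperature at x = L. Constant properties only -- no brine-pocket
# phase change or salt rejection is modelled here.

L = 0.05          # slab thickness [m] (illustrative value)
alpha = 1.2e-6    # thermal diffusivity [m^2/s] (illustrative value)
N = 51            # number of grid nodes
dx = L / (N - 1)

T_init = -2.0     # initial temperature [deg C]
T_cold = -20.0    # suddenly imposed boundary temperature at x = L [deg C]

def rhs(t, T):
    dTdt = np.zeros_like(T)
    # insulated face (dT/dx = 0) handled with a mirror node at x = 0
    dTdt[0] = alpha * 2.0 * (T[1] - T[0]) / dx**2
    # interior nodes: standard central difference
    dTdt[1:-1] = alpha * (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
    # fixed-temperature face: T is held constant, so dT/dt = 0 there
    dTdt[-1] = 0.0
    return dTdt

T0 = np.full(N, T_init)
T0[-1] = T_cold                      # apply the sudden boundary condition
sol = solve_ivp(rhs, (0.0, 600.0), T0, method="BDF", t_eval=[60, 300, 600])

for t, profile in zip(sol.t, sol.y.T):
    print(f"t = {t:5.0f} s, T at insulated face = {profile[0]:6.2f} deg C")
```

A stiff integrator (BDF) is used because, as noted in the abstract, the sharp temperature variation at the start of the process is the point at which the discretized system is most prone to instability.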
Procedia PDF Downloads 217
266 Thermal Method Production of the Hydroxyapatite from Bone By-Products from Meat Industry
Authors: Agnieszka Sobczak-Kupiec, Dagmara Malina, Klaudia Pluta, Wioletta Florkiewicz, Bozena Tyliszczak
Abstract:
Introduction: Demand for phosphorus compounds grows continuously; thus, alternative sources of this element are being sought. One of these sources could be by-products of the meat industry, which contain a considerable quantity of phosphorus compounds. Hydroxyapatite, a natural component of animal and human bones, is a leading material applied in bone surgery and also in stomatology. It is biocompatible, bioactive, and osteoinductive. Methodology: Hydroxyapatite preparation: The raw material was deproteinized and defatted bone pulp, called bone sludge, formed as a waste in the deproteinization of bones, in which a protein hydrolysate was the main product. Hydroxyapatite was obtained by calcining in an electrically heated chamber kiln in an air atmosphere, in two stages. In the first stage, the material was calcined at 600°C for 3 hours. In the next stage, the homogenized material was calcined at three different temperatures (750°C, 850°C, and 950°C), holding the material at the maximum temperature for 3.0 hours. Bone sludge: Bone sludge was formed as a waste in the deproteinization of bones, in which a protein hydrolysate was the main product. Pork bones from meat cutting were used as the raw material for the production of the protein hydrolysate. After disintegration, a mixture of bone pulp and water with a small amount of lactic acid was boiled at 130-135°C under a pressure of 4 bar. After 3-3.5 hours, the boiled-out bones were separated on a sieve, and the protein-fat hydrolysate solution passed into a decanter, where the bone sludge was separated from it. Results of the study: The phase composition was analyzed by the roentgenographic (XRD) method. Hydroxyapatite was the only crystalline phase observed in all the calcining products. The XRD investigation showed that the degree of crystallization of hydroxyapatite increased with calcining temperature. Conclusion: The research showed that the phosphorus content is around 12%, whereas the calcium content amounts to 28% on average. The calcining experiments on bone waste at temperatures of 750-950°C confirmed that thermal utilization of deproteinized bone waste is possible. X-ray investigations confirmed that hydroxyapatite is the main component of the calcining products and that its degree of crystallization increased with calcining temperature. The contents of calcium and phosphorus increased distinctly with calcining temperature, whereas the content of acid-soluble phosphorus decreased. This may be connected with the higher degree of crystallization of the material obtained at higher temperatures and its more stable structure. Acknowledgements: The authors would like to thank the National Centre for Research and Development (Grant no: LIDER//037/481/L-5/13/NCBR/2014) for providing financial support to this project.
Keywords: bone by-products, bone sludge, calcination, hydroxyapatite
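As a quick illustrative check (added here, not part of the original study), the reported average elemental contents can be converted to a molar Ca/P ratio and compared with the stoichiometric value for hydroxyapatite, Ca10(PO4)6(OH)2, of about 1.67:

```python
# Molar Ca/P ratio implied by the reported average contents (28% Ca, 12% P by mass).
M_CA, M_P = 40.08, 30.97          # molar masses [g/mol]
ca_wt, p_wt = 28.0, 12.0          # reported mass percentages

ca_mol = ca_wt / M_CA
p_mol = p_wt / M_P
ratio = ca_mol / p_mol
print(f"Ca/P molar ratio = {ratio:.2f}  (stoichiometric hydroxyapatite: ~1.67)")
# -> roughly 1.8, i.e. slightly calcium-rich relative to stoichiometric hydroxyapatite
```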
Procedia PDF Downloads 286
265 Paradigms of Sustainability: Roles and Impact of Communication in the Fashion System
Authors: Elena Pucci, Margherita Tufarelli, Leonardo Giliberti
Abstract:
As central for human and social development of the future, sustainability is becoming a recurring theme also in the fashion industry, where the need to explore new possible directions aimed at achieving sustainability goals and their communication is rising. Scholars have been devoted to the overall environmental impact of the textile and fashion industry, which, emerging as one of the world’s most polluting, today concretely assumes the need to take the path of sustainability in both products and production processes. Every day we witness the impact of our consumption, showing that the sustainability concept is as vast as complex: with a sometimes ambiguous definition, sustainability can concern projects, products, companies, sales, packagings, supply chains in relation to the actors proximity as well as traceability, raw materials procurement, and disposal. However, in its primary meaning, sustainability is the ability to maintain specific values and resources for future generations. The contribution aims to address sustainability in the fashion system as a layered problem that requires substantial changes at different levels: in the fashion product (materials, production processes, timing, distribution, and disposal), in the functioning of the system (life cycle, impact, needs, communication) and last but not least in the practice of fashion design which should conceive durable, low obsolescence and possibly demountable products. Moreover, consumers play a central role for the growing awareness, together with an increasingly strong sensitivity towards the environment and sustainable clothing. Since it is also a market demand, undertaking significant efforts to achieve total transparency and sustainability in all production and distribution processes is becoming fundamental for the fashion system. Sustainability is not to be understood as purely environmental but as the pursuit of collective well-being in relation to conscious production, human rights, and social dignity with the aim to achieve intelligent, resource, and environmentally friendly production and consumption patterns. Assuming sustainability as a layered problem makes the role of communication crucial to convey scientific or production specific content so that people can obtain and interpret information to make related decisions. Hence, if it is true that “what designers make becomes the future we inhabit'', design is facing great and challenging responsibility. The fashion industry needs a system of rules able to assess the sustainability of products, which is transparent and easily interpreted by consumers, identifying and enhancing virtuous practices. There are still complex and fragmented value chains that make it extremely difficult for brands and manufacturers to know the history of their products, to identify exactly where the risks lie, and to respond to the growing demand from consumers and civil society for responsible and sustainable production practices in the fashion industry.Keywords: fashion design, fashion system, sustainability, communication, complexity
Procedia PDF Downloads 122
264 Design, Construction, Validation And Use Of A Novel Portable Fire Effluent Sampling Analyser
Authors: Gabrielle Peck, Ryan Hayes
Abstract:
Current large scale fire tests focus on flammability and heat release measurements. Smoke toxicity isn’t considered despite it being a leading cause of death and injury in unwanted fires. A key reason could be that the practical difficulties associated with quantifying individual toxic components present in a fire effluent often require specialist equipment and expertise. Fire effluent contains a mixture of unreactive and reactive gases, water, organic vapours and particulate matter, which interact with each other. This interferes with the operation of the analytical instrumentation and must be removed without changing the concentration of the target analyte. To mitigate the need for expensive equipment and time-consuming analysis, a portable gas analysis system was designed, constructed and tested for use in large-scale fire tests as a simpler and more robust alternative to online FTIR measurements. The novel equipment aimed to be easily portable and able to run on battery or mains electricity; be able to be calibrated at the test site; be capable of quantifying CO, CO2, O2, HCN, HBr, HCl, NOx and SO2 accurately and reliably; be capable of independent data logging; be capable of automated switchover of 7 bubblers; be able to withstand fire effluents; be simple to operate; allow individual bubbler times to be pre-set; be capable of being controlled remotely. To test the analysers functionality, it was used alongside the ISO/TS 19700 Steady State Tube Furnace (SSTF). A series of tests were conducted to assess the validity of the box analyser measurements and the data logging abilities of the apparatus. PMMA and PA 6.6 were used to assess the validity of the box analyser measurements. The data obtained from the bench-scale assessments showed excellent agreement. Following this, the portable analyser was used to monitor gas concentrations during large-scale testing using the ISO 9705 room corner test. The analyser was set up, calibrated and set to record smoke toxicity measurements in the doorway of the test room. The analyser was successful in operating without manual interference and successfully recorded data for 12 of the 12 tests conducted in the ISO room tests. At the end of each test, the analyser created a data file (formatted as .csv) containing the measured gas concentrations throughout the test, which do not require specialist knowledge to interpret. This validated the portable analyser’s ability to monitor fire effluent without operator intervention on both a bench and large-scale. The portable analyser is a validated and significantly more practical alternative to FTIR, proven to work for large-scale fire testing for quantification of smoke toxicity. The analyser is a cheaper, more accessible option to assess smoke toxicity, mitigating the need for expensive equipment and specialist operators.Keywords: smoke toxicity, large-scale tests, iso 9705, analyser, novel equipment
Procedia PDF Downloads 78
263 An Artificially Intelligent Teaching-Agent to Enhance Learning Interactions in Virtual Settings
Authors: Abdulwakeel B. Raji
Abstract:
This paper introduces the concept of an intelligent virtual learning environment that involves communication between learners and an artificially intelligent teaching agent in an attempt to replicate classroom learning interactions. The benefit of this technology over current e-learning practices is that it creates a virtual classroom in which real-time adaptive learning interactions are made possible. This is a move away from the static learning practices currently adopted by e-learning systems. Over the years, artificial intelligence has been applied to various fields, including, but not limited to, medicine, military applications, psychology, and marketing. The purpose of e-learning applications is to ensure users are able to learn outside of the classroom, but a major limitation has been the inability to fully replicate classroom interactions between teacher and students. This study used comparative surveys to gain an understanding of current learning practices in Nigerian universities and how these practices compare with the use of the developed e-learning system. The study was conducted by attending several lectures and noting the interactions between lecturers and tutors; subsequently, software was developed that deploys an artificially intelligent teaching agent alongside an e-learning system to enhance the user learning experience and to create learning interactions similar to those found in classroom and lecture hall settings. Dialogflow was used to implement the teaching agent, configured in JSON, which serves as a virtual teacher. Course content was created using HTML, CSS, PHP, and JavaScript as a web-based application. This technology can run on handheld devices and Google-based home devices to give learners access to the teaching agent at any time. The system also uses definite clause grammars and natural language processing to match user inputs and requests against defined rules in order to replicate learning interactions. The developed system covers familiar classroom scenarios such as answering users' questions, asking 'do you understand?' at regular intervals and answering the subsequent requests, and taking more advanced user queries to give feedback at other points. The software uses deep learning techniques to learn user interactions and patterns and subsequently enhance the user learning experience. System testing was carried out with undergraduate students in the UK and Nigeria on the course 'Introduction to Database Development'. Test results and user feedback show that the developed software is a significant improvement on existing e-learning systems. Further experiments are to be run using the software with different students and more course content.
Keywords: virtual learning, natural language processing, definite clause grammars, deep learning, artificial intelligence
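As a rough illustration of the rule-matching idea described above (added here; the actual system uses Dialogflow intents and definite clause grammars rather than the hypothetical patterns shown), a minimal agent can map user inputs to predefined rules and fall back to a clarification prompt when nothing matches:

```python
import re

# Minimal rule-based matcher illustrating how user inputs can be mapped to
# predefined responses. The patterns and replies below are hypothetical
# placeholders, not taken from the actual Dialogflow agent.
RULES = [
    (re.compile(r"\bwhat is (a |an )?primary key\b", re.I),
     "A primary key uniquely identifies each row in a table."),
    (re.compile(r"\b(yes|i understand|understood)\b", re.I),
     "Great - let's move on to the next topic."),
    (re.compile(r"\b(no|i don't understand|confused)\b", re.I),
     "No problem. Let's go over that section again with an example."),
]

CHECK_IN = "Do you understand so far?"   # asked at regular intervals, as described above

def respond(user_input: str) -> str:
    for pattern, reply in RULES:
        if pattern.search(user_input):
            return reply
    return "Could you rephrase that? I want to make sure I answer correctly."

if __name__ == "__main__":
    print(respond("What is a primary key?"))
    print(CHECK_IN)
    print(respond("I don't understand"))
```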
Procedia PDF Downloads 135
262 Properties Optimization of Keratin Films Produced by Film Casting and Compression Moulding
Authors: Mahamad Yousif, Eoin Cunningham, Beatrice Smyth
Abstract:
Every year ~6 million tonnes of feathers are produced globally. Due to feathers’ low density and possible contamination with pathogens, their disposal causes health and environmental problems. The extraction of keratin, which represents >90% of feathers’ dry weight, could offer a solution due to its wide range of applications in the food, medical, cosmetics, and biopolymer industries. One of these applications is the production of biofilms which can be used for packaging, edible films, drug delivery, wound healing etc. Several studies in the last two decades investigated keratin film production and its properties. However, the effects of many parameters on the properties of the films remain to be investigated including the extraction method, crosslinker type and concentration, and the film production method. These parameters were investigated in this study. Keratin was extracted from chicken feathers using two methods, alkaline extraction with 0.5 M NaOH at 80 °C or sulphitolysis extraction with 0.5 M sodium sulphite, 8 M urea, and 0.25-1 g sodium dodecyl sulphate (SDS) at 100 °C. The extracted keratin was mixed with different types and concentrations of plasticizers (glycerol and polyethylene glycol) and crosslinkers (formaldehyde (FA), glutaraldehyde, cinnamaldehyde, glyoxal, and 1,4-Butanediol diglycidyl ether (BDE)). The mixtures were either cast in a mould or compression moulded to produce films. For casting, keratin powder was initially dissolved in water to form a 5% keratin solution and the mixture was dried in an oven at 60 °C. For compression moulding, 10% water was added and the compression moulding temperature and pressure were in the range of 60-120 °C and 10-30 bar. Finally, the tensile properties, solubility, and transparency of the films were analysed. The films prepared using the sulphitolysis keratin had superior tensile properties to the alkaline keratin and formed successfully with lower plasticizer concentrations. Lowering the SDS concentration from 1 to 0.25 g/g feathers improved all the tensile properties. All the films prepared without crosslinkers were 100% water soluble but adding crosslinkers reduced solubility to as low as 21%. FA and BDE were found to be the best crosslinkers increasing the tensile strength and elongation at break of the films. Higher compression moulding temperature and pressure lowered the tensile properties of the films; therefore, 80 °C and 10 bar were considered to be the optimal compression moulding temperature and pressure. Nevertheless, the films prepared by casting had higher tensile properties than compression moulding but were less transparent. Two optimal films, prepared by film casting, were identified and their compositions were: (a) Sulphitolysis keratin, 20% glycerol, 10% FA, and 10% BDE. (b) Sulphitolysis keratin, 20% glycerol, and 10% BDE. Their tensile strength, elongation at break, Young’s modulus, solubility, and transparency were: (a) 4.275±0.467 MPa, 86.12±4.24%, 22.227±2.711 MPa, 21.34±1.11%, and 8.57±0.94* respectively. (b) 3.024±0.231 MPa, 113.65±14.61%, 10±1.948 MPa, 25.03±5.3%, and 4.8±0.15 respectively. A higher value indicates that the film is less transparent. The extraction method, film composition, and production method had significant influence on the properties of keratin films and should therefore be tailored to meet the desired properties and applications.Keywords: compression moulding, crosslinker, film casting, keratin, plasticizer, solubility, tensile properties, transparency
Procedia PDF Downloads 36
261 Insect Manure (Frass) as a Complementary Fertilizer to Enhance Soil Mineralization Function: Application to Cranberry and Field Crops
Authors: Joël Passicousset, David Gilbert, Chloé Chervier-Legourd, Emmanuel Caron-Garant, Didier Labarre
Abstract:
Living-soil agriculture tries to reconcile food production with improving soil health, soil biodiversity, and soil fertility, and more generally with attenuating the environmental drawbacks inherent in modern agriculture. Using appropriate organic materials as soil amendments has a role to play in increasing soil organic matter, improving soil fertility, sequestering carbon, and reducing dependence on both mineral fertilizers and pesticides. Insect farming consists of producing insects that can be used as a protein-rich, insect-based food. Usually, detritivores are chosen so that they can be fed with food waste, which contributes to the circular economy while producing low-carbon food. This process also produces frass, made of insect feces, exuvial material, and non-digested fibrous material, which has valuable fertilizing and biostimulant properties. However, frass used as the sole fertilizer on a crop may not completely meet the plants' needs. This is why this project considers black soldier fly (termed BSF, one of the three main insect species grown commercially) frass as a complementary fertilizer, in both organic and conventional contexts. Three kinds of experiments were carried out to understand the behaviour of fertilizer treatments based on frass incorporation. Lab-scale mineralization experiments suggest that BSF frass alone mineralizes more slowly than chicken manure (CM) alone, but at a ratio of 90% CM-10% BSF frass, the mineralization rate of the mixture is higher than that of either frass or CM individually. For example, with the same amount of nitrogen introduced in each treatment, in the 7 days following fertilization around 80% of the nitrogen supplied through the 90% CM-10% BSF frass treatment is present in the soil in mineral forms, compared with roughly 60% for commercial CM fertilization and 45% for BSF frass. This suggests that BSF frass contains a more recalcitrant form of organic nitrogen than CM, but also that BSF frass has a highly active microbiota that can increase the mineralization rate of CM. Consequently, when progressive mineralization is needed, pure BSF frass may be a sound option from an agronomic standpoint, whereas for crops that require spikes of readily available nitrogen (like cranberry), the fast-release 90% CM-10% BSF frass biofertilizer is more appropriate. Field experiments on cranberry suggest that, indeed, 90% CM-10% BSF frass is a strong candidate for organic cranberry production: organic growers currently rely solely on CM, whose mineralization kinetics are known to match the plant's needs imperfectly, a factor known to sustain the current yield gap between the conventional and organic cranberry sectors.
Keywords: soil mineralization, biofertilizer, BSF-frass, chicken manure, soil functions, nitrogen, soil microbiota
Procedia PDF Downloads 70
260 Isosorbide Bis-Methyl Carbonate: Opportunities for an Industrial Model Based on Biomass
Authors: Olga Gomez De Miranda, Jose R. Ochoa-Gomez, Stefaan De Wildeman, Luciano Monsegue, Soraya Prieto, Leire Lorenzo, Cristina Dineiro
Abstract:
The chemical industry is facing a new revolution. Just as processes based on the exploitation of fossil resources emerged forcefully in the nineteenth century, society now demands a radical change that will lead to the complete and irreversible implementation of a circular, sustainable economic model. The implementation of biorefineries will be essential for this. There, renewable raw materials such as sugars and other biomass resources are exploited for the development of new materials that will partially replace their petroleum-derived homologues in a safer and environmentally more benign approach. Isosorbide (1,4:3,6-dianhydro-D-glucitol) is a primary bio-based derivative obtained from plant (poly)saccharides and a very interesting example of a useful chemical produced in biorefineries. It can, in turn, be converted into secondary monomers such as isosorbide bis-methyl carbonate (IBMC), whose main fields of application are as a key biodegradable intermediate substituting for bisphenol-A in the manufacture of polycarbonates, or as an alternative to toxic isocyanates in the synthesis of new (non-isocyanate) polyurethanes, both of which have a huge application market. The new products present advantageous mechanical or optical properties, as well as improved non-toxicity and biodegradability, in comparison with their petroleum-derived alternatives. A robust production process for IBMC, a biomass-derived chemical, is presented here. It can be used with different raw material qualities, using dimethyl carbonate (DMC) as both co-reactant and solvent. It consists of the transesterification of isosorbide with DMC under mild operating conditions, using different basic catalysts that remain active regardless of the isosorbide characteristics and purity. Appropriate isolation processes have also been developed to obtain crude IBMC yields higher than 90%, with oligomer production lower than 10%, independently of the quality of the isosorbide considered. All of the products are suitable for use in polycondensation reactions to obtain polymers. If higher IBMC purity is needed, a purification treatment based on nanofiltration membranes has also been developed. The IBMC reaction and isolation conditions established in the laboratory have been successfully modeled using appropriate software programs and transferred to pilot scale (production of 100 kg of IBMC). It has been demonstrated that a highly efficient IBMC production process, able to be scaled up under suitable market conditions, has been obtained. The operating conditions involved in the production of IBMC feature mild temperatures and low energy needs, no additional solvents, and high operational efficiency. All of them are in accordance with green manufacturing principles.
Keywords: biomass, catalyst, isosorbide bis-methyl carbonate, polycarbonate, polyurethane, transesterification
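For clarity, the overall transesterification described above can be summarized by the following idealized stoichiometry (added here as an illustration; it neglects the oligomeric carbonate by-products mentioned in the abstract):

```latex
% Idealized overall stoichiometry of the transesterification of isosorbide with
% dimethyl carbonate (DMC); oligomeric carbonate by-products are neglected.
\[ \text{isosorbide} \;+\; 2\,\text{DMC} \;\xrightarrow{\ \text{basic catalyst}\ }\; \text{IBMC} \;+\; 2\,\text{CH}_3\text{OH} \]
```

Each of the two hydroxyl groups of isosorbide exchanges with one molecule of DMC, releasing methanol; running the reaction with DMC as both co-reactant and solvent, as described above, keeps DMC in large excess and favours the bis-carbonate product.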
Procedia PDF Downloads 133259 Healthcare Fire Disasters: Readiness, Response and Resilience Strategies: A Real-Time Experience of a Healthcare Organization of North India
Authors: Raman Sharma, Ashok Kumar, Vipin Koushal
Abstract:
Healthcare facilities are usually seen as havens and places of protection for managing external incidents, but the situation becomes more difficult and challenging when such facilities are themselves affected by internal hazards. Such internal hazards are arguably more disruptive than external incidents because they affect vulnerable occupants: patients are dependent on supportive measures and are neither in a position to respond to such a crisis nor do they know how to respond. The situation becomes even more arduous and exigent to manage when critical care areas like Intensive Care Units (ICUs) and Operating Rooms (ORs) are involved, because the critically ill patients housed there are difficult to move at short notice. Healthcare organisations use different types of electrical equipment, inflammable liquids, and medical gases, often at a single point of use; hence, any sort of error can spark a fire. Even though healthcare facilities face many fire hazards, damage caused by smoke rather than flames is often more severe. Besides burns, smoke inhalation is the primary cause of fatality in fire-related incidents. The greatest cause of illness and mortality in fire victims, particularly in enclosed places, appears to be the inhalation of fire smoke, which contains a complex mixture of gases in addition to carbon monoxide. Therefore, healthcare organizations are required to have a well-planned disaster mitigation strategy and proactive, well-prepared manpower to handle all types of exigencies resulting from internal as well as external hazards. This case report delineates a real OR fire incident in the Emergency Operation Theatre (OT) of a tertiary care multispecialty hospital and details the challenges encountered by the OR staff in preserving both life and property. No adverse event was reported during or after this fire commotion; nevertheless, this case report aims to collate the lessons identified from the incident in a sequential and logical manner. Timely smoke evacuation, and prevention of the spread of smoke to adjoining patient care areas through appropriate measures, viz. compartmentation, pressurisation, dilution, ventilation, buoyancy, and airflow, helped to reduce smoke-related fatalities. Henceforth, precautionary measures may be implemented to mitigate such incidents. Careful coordination, continuous training, and fire drill exercises can improve the overall outcomes and minimize the possibility of these potentially fatal problems, thereby making the healthcare environment safer for every worker and patient.Keywords: healthcare, fires, smoke, management, strategies
Procedia PDF Downloads 68258 Optimising Apparel Digital Production in Industrial Clusters
Authors: Minji Seo
Abstract:
Fashion stakeholders are becoming increasingly aware of technological innovation in manufacturing. In 2020, the COVID-19 pandemic caused transformations in working patterns, such as working remotely rather than commuting. To enable smooth remote working, 3D fashion design software is being adopted as the latest trend in design and production. The majority of fashion designers, however, are still resistant to this change. Previous studies on 3D fashion design software solely highlighted the beneficial and detrimental factors of adopting design innovations. They lacked research on the relationship between resistance factors and the adoption of innovation, and they fell short of exploring the perspectives of the users of these innovations. This paper aims to investigate the key drivers of and barriers to employing 3D fashion design software, as well as to explore the challenges faced by designers. It also touches on governmental support for digital manufacturing in Seoul, South Korea, and London, the United Kingdom. By conceptualising local support, this study aims to provide a new path for industrial clusters to optimise digital apparel manufacturing. The study uses a mixture of quantitative and qualitative approaches. It first draws on a survey of 350 fashion designers covering innovation resistance factors for 3D fashion design software and the effectiveness of local support. In-depth interviews with 30 participants provide a better understanding of designers’ views on the benefits of and obstacles to employing 3D fashion design software. The key findings of this research concern the main barriers to employing 3D fashion design software in fashion production. Cultural characteristics and the interview results are used to interpret the survey results. The quantitative findings identify the main resistance factors to adopting design innovations. The dominant obstacles are: the cost of the software and its complexity; lack of customer interest in innovation; lack of qualified personnel; and lack of knowledge. The main difference between Seoul and London lies in attitudes towards government support. Compared to fashion designers in the UK, South Korean designers emphasise that government support is highly relevant to employing 3D fashion design software. The top-down versus bottom-up policy implementation approach distinguishes the perception of government support: in contrast to the top-down approach in South Korea, British fashion designers, accustomed to bottom-up approaches, are reluctant to receive government support. The findings of this research will contribute to generating solutions for local government and to optimising the use of 3D fashion design software in fashion industrial clusters.Keywords: digital apparel production, industrial clusters, innovation resistance, 3D fashion design software, manufacturing, innovation, technology, digital manufacturing, innovative fashion design process
Procedia PDF Downloads 102257 Finding the Association Rule between Nursing Interventions and Early Evaluation Results of In-Hospital Cardiac Arrest to Improve Patient Safety
Authors: Wei-Chih Huang, Pei-Lung Chung, Ching-Heng Lin, Hsuan-Chia Yang, Der-Ming Liou
Abstract:
Background: In-Hospital Cardiac Arrest (IHCA) threatens the lives of inpatients and seriously affects patient safety, the quality of inpatient care, and hospital services. Health providers must identify the signs of IHCA early to avoid its occurrence. This study considers the potential association between early signs of IHCA and the care provided by nurses and other professionals before an IHCA occurs. The aim of this study is to identify significant associations between nursing interventions and abnormal early evaluation results of IHCA that can assist health care providers in monitoring inpatients at risk of IHCA, increasing the opportunities for early detection and prevention of IHCA. Materials and Methods: This study used association rule mining, a data mining technique, to compute associations between nursing interventions and abnormal early evaluation results of IHCA. A nursing intervention and an abnormal early evaluation result of IHCA were considered to co-occur if the nursing intervention was provided within 24 hours of the abnormal early evaluation result last being observed. The rule-based method was applied to 23.6 million electronic medical records (EMRs) from a medical center in Taipei, Taiwan. This dataset includes 733 concepts of nursing interventions coded with Clinical Care Classification (CCC) codes and 13 early evaluation results of IHCA with binary codes. The values of interestingness and lift were computed as Q values to measure the co-occurrence and strength of the associations between all in-hospital patient care measures and abnormal early evaluation results of IHCA. The associations were evaluated by comparing the Q values and were verified by medical experts. Results and Conclusions: The results show 4195 pairs of associations between nursing interventions and abnormal early evaluation results of IHCA, together with their Q values; 203 pairs with Q values greater than 5 indicate positive associations. Inpatients with a high blood sugar level (hyperglycemia) show a positive association with a heart rate lower than 50 beats per minute or higher than 120 beats per minute (Q value 6.636). Inpatients with a temporary pacemaker (TPM) show a significant association with a high risk of IHCA (Q value 47.403). There is a significant positive correlation between hypovolemia and the occurrence of abnormal heart rhythms (arrhythmias) (Q value 127.49). The results of this study can help prevent IHCA by enabling health care providers to recognize inpatients at risk of IHCA early, assist in monitoring patients to provide quality care, and improve IHCA surveillance and the quality of in-hospital care.Keywords: in-hospital cardiac arrest, patient safety, nursing intervention, association rule mining
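As an illustration of the association measure described in this abstract, the following minimal Python sketch shows how co-occurrence counts between a nursing intervention and an abnormal early evaluation result could be turned into a lift value. The 24-hour window bookkeeping, the exact Q-value formula, and all counts shown are assumptions for illustration, not the study’s actual implementation or data.

```python
# Minimal sketch (not the authors' implementation) of turning co-occurrence
# counts into a lift-style association measure between a nursing intervention
# and an abnormal early evaluation result of IHCA.
from dataclasses import dataclass

@dataclass
class PairCounts:
    both: int          # records where intervention and abnormal result co-occur (within 24 h)
    intervention: int  # records containing the nursing intervention
    abnormal: int      # records containing the abnormal early evaluation result
    total: int         # total number of records considered

def lift(c: PairCounts) -> float:
    """Lift = P(intervention and abnormal) / (P(intervention) * P(abnormal))."""
    p_both = c.both / c.total
    p_a = c.intervention / c.total
    p_b = c.abnormal / c.total
    return p_both / (p_a * p_b) if p_a > 0 and p_b > 0 else 0.0

if __name__ == "__main__":
    # Hypothetical counts, not taken from the study's 23.6 million EMRs.
    counts = PairCounts(both=120, intervention=4_000, abnormal=900, total=1_000_000)
    print(f"lift = {lift(counts):.2f}")  # values well above 1 suggest a positive association
```

A lift well above 1 indicates that the intervention and the abnormal result co-occur more often than expected by chance, which is the kind of signal the study's Q values are meant to capture.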
Procedia PDF Downloads 271256 Intellectual Property Law as a Tool to Enhance and Sustain Museums in Digital Era
Authors: Nayira Ahmed Galal Elden Hassan, Amr Mostafa Awad Kassem
Abstract:
The management of Intellectual Property (IP) in museums presents a multifaceted challenge, requiring a balance between granting access to cultural assets and maintaining control over them. In the digital age, IP has emerged as a critical aspect of museum operations, encompassing valuable assets within collections and museum-generated content. Effective IP management enables museums to generate revenue, protect rights, and promote cultural heritage while leveraging digital technologies. Opportunities such as e-commerce and licensing can drive economic growth, but they also introduce complexities related to IP protection and regulation. This study explores the dual nature of IP assets (collection-based and museum-generated), highlighting their implications for sustainability and cultural preservation. The analysis includes examples such as the German State Museum’s management of replicas of the Nefertiti bust, showcasing the challenges museums face when navigating IP frameworks. The research underscores the importance of a comprehensive understanding of IP laws to prevent legal disputes, reputational risks, and revenue loss. By adopting an analytical and comparative methodology, this paper examines museums that have effectively implemented IP rules to enhance their operations and sustain their resources. It explores how museums can effectively fulfill their mission of community engagement, education, and outreach while ensuring long-term sustainability, and examines the extent to which IP management can contribute to achieving these objectives, focusing on the benefits and challenges associated with adopting IP management strategies. Additionally, the study addresses the question of ownership by investigating who holds the rights to cultural assets and how these rights can be managed effectively to align with both institutional goals and the preservation of cultural heritage. The findings demonstrate that balanced IP strategies are essential for securing financial stability, safeguarding cultural heritage, and adapting to the demands of the digital era, and they underscore the pivotal role of effective IP management in empowering museums to navigate the digital landscape and maximize revenue streams. The study emphasizes the necessity of a balanced approach to IP management that aligns institutional goals with the ethical and legal considerations of cultural heritage preservation.Keywords: intellectual property, museums, IP management, digital technologies, sustainability, cultural heritage
Procedia PDF Downloads 12255 Sensory Characteristics of White Chocolate Enriched with Encapsulated Raspberry Juice
Authors: Ivana Loncarevic, Biljana Pajin, Jovana Petrovic, Danica Zaric, Vesna Tumbas Saponjac, Aleksandar Fistes
Abstract:
Chocolate is a food that activates pleasure centers in the human brain. In comparison to dark and milk chocolate, white chocolate does not contain fat-free cocoa solids and thus lacks bioactive components. The aim of this study was to examine the sensory characteristics of white chocolate enriched with 10% raspberry juice encapsulated in maltodextrins (denoted as encapsulate). Chocolate is primarily intended for enjoyment, and therefore sensory expectation is a critical factor for consumers when selecting a new type of chocolate. Consumer acceptance of chocolate depends primarily on appearance and taste, but also very much on mouthfeel, which mainly depends on the particle size of the chocolate. The chocolate samples were evaluated by a panel of 8 trained panelists, food technologists trained according to ISO 8586 (2012). The panelists developed the list of attributes used in this study: intensity of red color (light to dark); gloss on the surface (matte to shiny); texture on snap (appearance of cavities or holes visible on the snap surface: even to gritty); hardness (hardness felt when first biting the chocolate sample in half with the incisors: soft to hard); melting (time needed to convert the solid chocolate into a liquid state: slow to quick); smoothness (perception of the evenness of the chocolate during melting: very even to very granular); fruitiness (impression of fruity taste: light fruity notes to distinct fruity notes); sweetness (organoleptic characteristic of a pure substance or mixture giving a sweet taste: lightly sweet to very sweet). The chocolate evaluation was carried out 24 h after sample preparation in the sensory laboratory, in partitioned booths illuminated with fluorescent lights (ISO 8589, 2007). Samples were served on white plastic plates labeled with three-digit codes from a random number table. Panelists scored the perceived intensity of each attribute using a 7-point scale (1 = the least intense and 7 = the most intense) (ISO 4121, 2002). The addition of 10% encapsulate had a strong influence on chocolate color, giving the enriched chocolate an attractive reddish hue. At the same time, the enriched chocolate sample showed a lower intensity of gloss on the surface. The panelists noted that the addition of encapsulate reduced the time needed to convert the solid chocolate into a liquid state and increased its hardness. The addition of encapsulate had a significant impact on chocolate flavor: it reduced the sweetness of the white chocolate and contributed a fruity raspberry flavor.Keywords: white chocolate, encapsulated raspberry juice, color, sensory characteristics
Procedia PDF Downloads 160254 Degradation of the Cu-DOM Complex by Bacteria: A Way to Increase Phytoextraction of Copper in a Vineyard Soil
Authors: Justine Garraud, Hervé Capiaux, Cécile Le Guern, Pierre Gaudin, Clémentine Lapie, Samuel Chaffron, Erwan Delage, Thierry Lebeau
Abstract:
The repeated use of Bordeaux mixture (copper sulphate) and other chemical forms of copper (Cu) has led to its accumulation in wine-growing soils for more than a century, to the point of modifying the ecosystem of these soils. Phytoextraction of copper could progressively reduce the Cu load in these soils and even allow copper to be recycled (e.g. as a micronutrient in animal nutrition) by cultivating the extracting plants in the inter-rows of the vineyards. Soil clean-up usually requires several years because the chemical speciation of Cu in solution is dominated by forms complexed with dissolved organic matter (DOM), which are not phytoavailable, unlike the "free" forms (Cu2+). Indeed, more than 98% of the Cu in solution is bound to DOM. The selection and inoculation into vineyard soils of bacteria (bioaugmentation) able to degrade Cu-DOM complexes could increase the phytoavailable pool of Cu2+ in the soil solution (in addition to bacteria that first mobilize Cu in solution from the soil bearing phases), in order to increase phytoextraction performance. In this study, seven Cu-accumulating plants potentially usable in the inter-rows were tested for their Cu phytoextraction capacity in hydroponics (ryegrass, brown mustard, buckwheat, hemp, sunflower, oats, and chicory). A bacterial consortium was also tested: Pseudomonas sp., previously studied for its ability to mobilize Cu through the pyoverdine siderophore (a complexing agent) and potentially to degrade Cu-DOM complexes, and a second bacterium (to be selected) able to promote the survival of Pseudomonas sp. following its inoculation into the soil. An interaction network method was used, based on the notions of co-occurrence and, therefore, of bacterial abundance found in the same soils. Bacteria from the EcoVitiSol project (Alsace, France) were targeted. The final step consisted in coupling the bacterial consortium with the chosen plant in soil pots. The degradation of Cu-DOM complexes is measured on the basis of the absorption index at 254 nm, which gives insight into the aromaticity of the DOM. The "free" Cu in solution (from the mobilization of Cu and/or the degradation of Cu-DOM complexes) is assessed by measuring pCu. Finally, Cu accumulation in plants is measured by ICP-AES. The selection of the plant is currently being finalized. The interaction network method highlighted the best positive interactions of Flavobacterium sp. with Pseudomonas sp. These bacteria are both PGPR (plant growth-promoting rhizobacteria) with the ability to improve plant growth and to mobilize Cu from the soil bearing phases (siderophores). They are also known to degrade phenolic groups, which are highly present in DOM, and could therefore contribute to the degradation of Cu-DOM complexes. The results of the upcoming bacteria-plant coupling tests in pots will also be presented.Keywords: complexes Cu-DOM, bioaugmentation, phytoavailability, phytoextraction
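The co-occurrence-based interaction network step mentioned in this abstract can be illustrated with a minimal sketch: pairwise correlation of bacterial abundances across soil samples, keeping only strongly positive links as candidate consortium partners. The taxa, abundance values, correlation measure (Spearman) and threshold below are illustrative assumptions, not the EcoVitiSol pipeline.

```python
# Minimal sketch of a co-occurrence interaction network: taxa are linked when
# their abundances across soil samples are strongly and positively correlated.
from itertools import combinations
from scipy.stats import spearmanr

# rows: taxa, columns: abundance in each soil sample (hypothetical values)
abundances = {
    "Pseudomonas":    [12, 30, 25, 8, 40, 22],
    "Flavobacterium": [10, 28, 20, 9, 35, 18],
    "TaxonX":         [50, 5, 12, 44, 3, 30],
}

THRESHOLD = 0.7  # minimum positive correlation to draw an edge

edges = []
for (name_a, abund_a), (name_b, abund_b) in combinations(abundances.items(), 2):
    rho, p_value = spearmanr(abund_a, abund_b)
    if rho >= THRESHOLD:
        edges.append((name_a, name_b, round(float(rho), 2), round(float(p_value), 3)))

# Edges with a high positive rho identify candidate consortium partners,
# e.g. a Flavobacterium sp. co-occurring with Pseudomonas sp.
print(edges)
```

In practice such a network would be built from many more taxa and samples, and edge strength would guide which partner bacterium is paired with Pseudomonas sp. for the pot experiments.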
Procedia PDF Downloads 82253 Repair of Thermoplastic Composites for Structural Applications
Authors: Philippe Castaing, Thomas Jollivet
Abstract:
As a result of their advantages, i.e. recyclability, weldability and environmental compatibility, long (continuous) fiber thermoplastic composites (LFTPC) are increasingly used in many industrial sectors (mainly automotive and aeronautic) for structural applications. Indeed, in the next ten years, environmental rules will put pressure on the use of new structural materials like composites. In aerospace, more than 50% of damage is due to impact, and 85% of damage is repaired on the fuselage (fuselage skin panels and around doors). With the arrival of airplanes made mainly of composite materials, replacement of sections or panels seems economically difficult, and repair becomes essential. The objective of the present study is to propose a repair solution that avoids replacing the damaged thermoplastic composite part, in order to recover the initial mechanical properties. The classification of impact damage is not so easy: speaking of low-energy impact (less than 35 J) can be quite misleading when high speeds, small thicknesses, or thermoplastic resins are considered. Crash and perforation at higher energy create extensive damage, and the structures are replaced without repair, so we consider here only damage due to low-energy impacts, which for laminates consists of transverse cracking, delamination, and fiber rupture. At low energy, the damage is barely visible but can nevertheless significantly reduce the mechanical strength of the part due to resin cracks, while little fiber rupture is observed. The patch repair solution remains the standard one but may lead to fiber rupture and consequently create more damage. That is the reason why we investigate the repair of thermoplastic composites impacted at low energy. Indeed, thermoplastic resins are interesting as they absorb impact energy through plastic strain. The methodology is as follows: impact tests at low energy on thermoplastic composites; identification of the damage by micrographic observations; evaluation of the harmfulness of the damage; repair by reconsolidation according to the extent of the damage; validation of the repair by mechanical characterization (compression). In this study, impact tests are performed at various energy levels on thermoplastic composites (PA/C, PEEK/C and PPS/C, woven 50/50 and unidirectional) to determine the level of impact energy that creates damage in the resin without fiber rupture. We identify the extent of the damage by ultrasonic (US) inspection and micrographic observations through the part thickness. The samples were additionally characterized in compression to evaluate the loss of mechanical properties. The repair strategy then consists in reconsolidating the damaged parts by thermoforming; after reconsolidation, the laminates are characterized in compression for validation. To conclude, the study demonstrates the feasibility of the repair for low-energy impacts on thermoplastic composites, as the samples recover their properties. In this first step of the study, the "repair" is made by reconsolidation on a thermoforming press, but an in situ process to reconsolidate the damaged parts could be imagined.Keywords: aerospace, automotive, composites, compression, damages, repair, structural applications, thermoplastic
Procedia PDF Downloads 305252 A Sustainable Pt/BaCe₁₋ₓ₋ᵧZrₓGdᵧO₃ Catalyst for Dry Reforming of Methane-Derived from Recycled Primary Pt
Authors: Alessio Varotto, Lorenzo Freschi, Umberto Pasqual Laverdura, Anastasia Moschovi, Davide Pumiglia, Iakovos Yakoumis, Marta Feroci, Maria Luisa Grilli
Abstract:
Dry reforming of methane (DRM) is considered one of the most valuable technologies for greenhouse gas valorization, since through this reaction it is possible to obtain syngas, a mixture of H₂ and CO in an H₂/CO ratio suitable for use in the Fischer-Tropsch synthesis of high value-added chemicals and fuels. The challenges of the DRM process are the reduction of costs due to the high process temperature and the high cost of the precious metals in the catalyst, the sintering of metal particles, and carbon deposition on the catalyst surface. The aim of this study is to demonstrate the feasibility of synthesizing catalysts using a leachate solution containing Pt that comes directly from the recovery of spent diesel oxidation catalysts (DOCs), without further purification. An unusual perovskite support for DRM, BaCe₁₋ₓ₋ᵧZrₓGdᵧO₃ (BCZG), has been chosen as the catalyst support because of its high thermal stability and its capability to produce oxygen vacancies, which suppress carbon deposition and enhance the catalytic activity. The BCZG perovskite was synthesized by a sol-gel modified Pechini process and calcined in air at 1100 °C. The BCZG supports were impregnated with a Pt-containing leachate solution of DOC, obtained by a mild hydrometallurgical recovery process, as reported elsewhere by some of the authors of this manuscript. For comparison, a synthetic solution obtained by digesting commercial Pt-black powder in aqua regia was used for BCZG support impregnation. The nominal Pt content was 2% in both BCZG-based catalysts, formed from the real and synthetic solutions. The structure and morphology of the catalysts were characterized by X-ray diffraction (XRD) and scanning electron microscopy (SEM). Thermogravimetric analysis (TGA) was used to study the thermal stability of the catalyst samples. Brunauer-Emmett-Teller (BET) analysis revealed a high surface area of the catalysts. H₂-TPR (temperature-programmed reduction) analysis was used to study hydrogen consumption and reducibility, and it was combined with H₂-TPD characterization to study the dispersion of Pt on the surface of the support and to calculate the number of active sites provided by the precious metal. The dry reforming of methane (DRM) reaction, carried out in a fixed-bed reactor, showed high conversion efficiencies of CO₂ and CH₄. At 850 °C, CO₂ and CH₄ conversions were close to 100% for the catalyst obtained with the aqua regia-based solution of commercial Pt-black, and ~70% (for CH₄) and ~80% (for CO₂) in the case of the real HCl-based leachate solution. The H₂/CO ratios were ~0.9 and ~0.70 in the former and latter cases, respectively. As far as we know, this is the first pioneering work in which a BCZG catalyst and a real Pt-containing leachate solution have been successfully employed for the DRM reaction.Keywords: dry reforming of methane, perovskite, PGM, recycled Pt, syngas
Procedia PDF Downloads 40251 VISMA: A Method for System Analysis in Early Lifecycle Phases
Authors: Walter Sebron, Hans Tschürtz, Peter Krebs
Abstract:
The choice of applicable analysis methods in safety or systems engineering depends on the depth of knowledge about a system and on the respective lifecycle phase. However, the analysis method chain still shows gaps, as it should support system analysis throughout the lifecycle of a system, from a rough concept in the pre-project phase until end-of-life. This paper’s goal is to discuss an analysis method, the VISSE Shell Model Analysis (VISMA) method, which aims at closing the gap in the early system lifecycle phases, such as the conceptual or pre-project phase, or the project start phase. It was originally developed to aid in the definition of the system boundary of electronic system parts, e.g. a control unit for a pump motor, but it can also be applied to non-electronic system parts. The VISMA method is a graphical, sketch-like method that stratifies a system and its parts into inner and outer shells, like the layers of an onion. It analyses a system in a two-step approach, from the innermost to the outermost components, followed by the reverse direction. To ensure a complete view of a system and its environment, the VISMA should be performed by (multifunctional) development teams. To introduce the method, a set of rules and guidelines has been defined in order to enable a proper shell build-up. In the first step, the innermost system, named the system under consideration (SUC), is selected; it is the focus of the subsequent analysis. Then, its directly adjacent components, responsible for providing input to and receiving output from the SUC, are identified. These components form the first shell around the SUC. Next, the input and output components of the components in the first shell are identified and form the second shell around the first one. Continuing this way, shell after shell is added with its respective parts until the border of the complete system (external border) is reached. Finally, two external shells, the environment shell and the use case shell, are added to complete the system view. This system view is also stored for future use. In the second step, the shells are examined in the reverse direction (outside to inside) in order to remove superfluous components or subsystems. Input chains to the SUC, as well as output chains from the SUC, are described graphically via arrows to highlight functional chains through the system. As a result, this method offers a clear graphical description and overview of a system, its main parts and its environment, while the focus remains on a specific SUC. It helps to identify the interfaces and interfacing components of the SUC, as well as important external interfaces of the overall system. It supports the identification of the first internal and external hazard causes and causal chains. Additionally, the method promotes a holistic picture and cross-functional understanding of a system, its contributing parts, internal relationships and possible dangers within a multidisciplinary development team.Keywords: analysis methods, functional safety, hazard identification, system and safety engineering, system boundary definition, system safety
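The shell build-up described in this abstract (step one, inner to outer) can be read as a breadth-first traversal over input/output relations between components. The sketch below illustrates this reading under assumed component names for a pump-motor control example; it is not tooling from the VISMA method itself.

```python
# Minimal sketch of the VISMA-style shell build-up: starting from the system
# under consideration (SUC), each shell collects the components that directly
# exchange input or output with the previous shell, until no new ones remain.
from typing import Dict, List, Set

# Undirected adjacency over input/output relations between components
# (hypothetical pump-motor control example, not from the paper).
links: Dict[str, Set[str]] = {
    "control_unit": {"motor_driver", "sensor_interface"},
    "motor_driver": {"control_unit", "pump_motor"},
    "sensor_interface": {"control_unit", "pressure_sensor"},
    "pump_motor": {"motor_driver"},
    "pressure_sensor": {"sensor_interface"},
}

def build_shells(suc: str) -> List[Set[str]]:
    """Return shells[0] = {SUC}, shells[1] = first shell, and so on."""
    shells = [{suc}]
    visited = {suc}
    frontier = {suc}
    while frontier:
        next_shell = {nbr for comp in frontier for nbr in links.get(comp, set())} - visited
        if not next_shell:
            break
        shells.append(next_shell)
        visited |= next_shell
        frontier = next_shell
    return shells

# e.g. [{'control_unit'}, {'motor_driver', 'sensor_interface'}, {'pump_motor', 'pressure_sensor'}]
print(build_shells("control_unit"))
```

The environment and use case shells, as well as the reverse (outside-to-inside) pruning step, would be added on top of this traversal by the development team.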
Procedia PDF Downloads 225250 The Use of Brachytherapy in the Treatment of Liver Metastases: A Systematic Review
Authors: Mateusz Bilski, Jakub Klas, Emilia Kowalczyk, Sylwia Koziej, Katarzyna Kulszo, Ludmiła Grzybowska-Szatkowska
Abstract:
Background: Liver metastases are a common complication of primary solid tumors and significantly reduce patient survival. In the era of increasing diagnosis of oligometastatic disease and oligoprogression, methods of local treatment of metastases, i.e. metastasis-directed therapy (MDT), are becoming more important, and such treatment can be considered for liver metastases. To date, the mainstay of treatment for oligometastatic disease has been surgical resection, but not all patients qualify for the procedure. As an alternative to surgical resection, radiotherapy techniques have become available, including stereotactic body radiation therapy (SBRT) and high-dose interstitial brachytherapy (iBT). iBT is an invasive method that delivers very high doses of radiation from the inside of the tumor outwards. This technique provides better tumor coverage than SBRT while having little impact on surrounding healthy tissue, and it eliminates some concerns involving respiratory motion. Methods: We conducted a systematic review of the scientific literature on the use of brachytherapy in the treatment of liver metastases from 2018 to 2023, using the PubMed and ResearchGate search engines, according to PRISMA rules. Results: From 111 articles, 18 publications containing information on 729 patients with liver metastases were selected. iBT has been shown to provide high rates of tumor control. Among 14 patients with 54 unresectable RCC liver metastases, local tumor control (LTC) after iBT was 92.6% during a median follow-up of 10.2 months, and PFS was 3.4 months. In an analysis of 167 patients treated with a single fractional brachytherapy dose of 15-25 Gy, LRFS rates at 6- and 12-month follow-up were 88.4-88.7% and 70.7-71.5%, PFS was 78.1% and 53.8%, and OS was 92.3-96.7% and 76.3-79.6%, respectively. No serious complications were observed in any of the patients. Distant intrahepatic progression occurred later in patients with unresectable liver metastases after brachytherapy (PFS: 19.80 months) than in HCC patients (PFS: 13.50 months). A significant difference in LRFS between CRC patients (84.1% vs. 50.6%) and other histologies (92.4% vs. 92.4%) was noted, suggesting that a higher treatment dose is necessary for CRC patients. The average target dose for metastatic colorectal cancer was 40-60 Gy (compared to 100-250 Gy for HCC). To better assess sensitivity to therapy and predict side effects, it has been suggested that humoral mediators be evaluated. It was also shown that baseline levels of TNF-α, MCP-1 and VEGF, as well as NGF and CX3CL, correlated with both tumor volume and radiation-induced liver damage, one of the most serious complications of iBT, indicating their potential role as biomarkers of therapy outcome. Conclusions: The use of brachytherapy in the treatment of liver metastases of various cancers appears to be an interesting and relatively safe therapeutic alternative to surgery. An important challenge remains the selection of an appropriate brachytherapy method and radiation dose for the primary tumor type from which the metastasis originated.Keywords: liver metastases, brachytherapy, CT-HDRBT, iBT
Procedia PDF Downloads 114249 Novel Framework for MIMO-Enhanced Robust Selection of Critical Control Factors in Auto Plastic Injection Moulding Quality Optimization
Authors: Seyed Esmail Seyedi Bariran, Khairul Salleh Mohamed Sahari
Abstract:
Apparent quality defects such as warpage, shrinkage, weld lines, etc. are an almost unavoidable phenomenon in the mass production of auto plastic appearance parts (APAP). These frequently occurring manufacturing defects should be addressed concurrently so as to achieve a final product with acceptable quality standards. Determining the significant control factors that simultaneously affect multiple quality characteristics can significantly improve the optimization results by eliminating the deviating effect of so-called ineffective outliers. Hence, a robust quantitative approach needs to be developed upon which the major control factors and their levels can be effectively determined, to help improve the reliability of the optimal processing parameter design. Accordingly, the primary objective of the current study was to develop a systematic methodology for the selection of significant control factors (SCF) relevant to the multiple-quality optimization of auto plastic appearance parts. An auto bumper was used as the specimen, having quality and production characteristics closest to the APAP group. A preliminary failure modes and effects analysis (FMEA) was conducted to nominate a database of pseudo-significant control factors prior to the optimization phase. Then, CAE simulation (Moldflow) analysis was implemented to investigate four prevalent plastic injection quality defects affecting the APAP group: warpage deflection, volumetric shrinkage, sink marks and weld lines. Furthermore, a step-backward elimination searching method (SESME) was developed for the systematic pre-optimization selection of SCF, based on hierarchical orthogonal array design and priority-based one-way analysis of variance (ANOVA). The development of the robust parameter design in the second phase was based on the DOE module of Minitab v.16 statistical software. Based on the one-way ANOVA F-test (F 0.05, 2, 14) results, it was concluded that for warpage deflection, the material mixture percentage was the most significant control factor, yielding a 58.34% contribution, while for the other three quality defects, melt temperature was the most significant control factor, with contributions of 25.32%, 84.25%, and 34.57% for sink mark, shrinkage and weld line strength control, respectively. The results on the least significant control factors revealed injection fill time as the least significant factor for both warpage and sink mark, with respective contributions of 1.69% and 6.12%. On the other hand, for the shrinkage and weld line defects, the least significant control factors were holding pressure and mold temperature, with overall contributions of 0.23% and 4.05%, respectively.Keywords: plastic injection moulding, quality optimization, FMEA, ANOVA, SESME, APAP
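The contribution percentages reported in this abstract come from one-way ANOVA; the following minimal sketch illustrates how such a percentage contribution can be computed for a single control factor from sums of squares. The factor levels and response values are hypothetical, not the study’s Moldflow results.

```python
# Minimal sketch of a percentage contribution for one control factor,
# computed as SS_between / SS_total from a one-way ANOVA decomposition.
from typing import Dict, List

def contribution(responses_by_level: Dict[str, List[float]]) -> float:
    """Percent contribution = SS_between / SS_total * 100 for one factor."""
    all_values = [v for level in responses_by_level.values() for v in level]
    grand_mean = sum(all_values) / len(all_values)
    ss_total = sum((v - grand_mean) ** 2 for v in all_values)
    ss_between = sum(
        len(level) * (sum(level) / len(level) - grand_mean) ** 2
        for level in responses_by_level.values()
    )
    return 100.0 * ss_between / ss_total

# Warpage deflection (mm) grouped by three levels of melt temperature
# (illustrative numbers only).
melt_temperature = {
    "220C": [1.10, 1.05, 1.12],
    "240C": [0.95, 0.98, 0.96],
    "260C": [0.80, 0.83, 0.79],
}
print(f"melt temperature contribution: {contribution(melt_temperature):.1f}%")
```

Ranking factors by this contribution, together with the F-test for significance, is the kind of calculation that underlies the 58.34% and 84.25% figures quoted above.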
Procedia PDF Downloads 349