Search results for: CASE technology
792 Interdigitated Flexible Li-Ion Battery by Aerosol Jet Printing
Authors: Yohann R. J. Thomas, Sébastien Solan
Abstract:
Conventional battery technology involves assembling the electrode/separator/electrode stack by standard techniques such as stacking or winding, depending on the format size. In this type of battery, coating or pasting techniques are used only for the electrode process. These processes are suited to large-scale production of batteries and well adapted to many application requirements. Nevertheless, demand is rising for easier and more cost-efficient production modes and for flexible, custom-shaped, efficient small-sized batteries. Thin-film, printable batteries are one of the key areas of printed electronics. In the frame of the European BASMATI project, we are investigating the feasibility of a new lithium-ion battery design: an interdigitated planar core design. A polymer substrate is used to produce bendable and flexible rechargeable accumulators. Directly and fully printing the battery makes it possible to interconnect the accumulator with other electronic functions, for example organic solar cells (harvesting function), printed sensors (autonomous sensors) or RFID (communication function), on a common substrate to produce fully integrated, thin and flexible new devices. To fulfill these specifications, a high-resolution printing process has been selected: aerosol jet printing. To fit the parameters of this process, we worked on nanomaterial formulations for current collectors and electrodes. In addition, an advanced printed polymer electrolyte is being developed to be implemented directly in the printing process, in order to avoid the liquid-electrolyte filling step and to improve safety and flexibility. Results: Three different current collectors have been studied and printed successfully. An ink of commercial copper nanoparticles was formulated and printed, then flash sintering was applied to the interdigitated design. A gold ink was also printed; the resulting material was partially self-sintered and did not require any high-temperature post-treatment.
Finally, carbon nanotubes were also printed with high resolution and well-defined patterns. Different electrode materials were formulated and printed according to the interdigitated design. For cathodes, NMC and LFP were printed successfully. For anodes, LTO and graphite have been shown to be good candidates for the fully printed battery. The electrochemical performance of these materials has been evaluated in a standard coin cell with a lithium-metal counter electrode, and the results are similar to those of a traditional ink formulation and process. A jellified plastic-crystal solid-state electrolyte has been developed and showed performance comparable to classical liquid carbonate electrolytes with two different materials. In our future developments, focus will be put on several tasks. First, we will synthesize and formulate new specific nanomaterials based on metal oxides. Then a fully printed device will be produced and its electrochemical performance will be evaluated.
Keywords: high resolution digital printing, lithium-ion battery, nanomaterials, solid-state electrolytes
Procedia PDF Downloads 251
791 Sustainable Pavements with Reflective and Photoluminescent Properties
Authors: A.H. Martínez, T. López-Montero, R. Miró, R. Puig, R. Villar
Abstract:
An alternative to mitigate the heat island effect is to pave streets and sidewalks with pavements that reflect incident solar energy, keeping their surface temperature lower than that of conventional pavements. The “Heat island mitigation to prevent global warming by designing sustainable pavements with reflective and photoluminescent properties (RELUM) Project” has been carried out with this intention in mind. Its objective has been to develop bituminous mixtures for urban pavements that help in the fight against global warming and climate change, while improving the quality of life of citizens. The technology employed has focused on the use of reflective pavements, using bituminous mixes made with synthetic bitumens and light pigments that provide high solar reflectance. In addition to this advantage, the light surface colour achieved with these mixes can improve visibility, especially at night. In parallel, and following the latter approach, an appropriate type of treatment has also been developed for bituminous mixtures to make them capable of illuminating at night, giving rise to photoluminescent applications, which can reduce energy consumption and increase road safety due to improved night-time visibility. The work carried out consisted of designing different bituminous mixtures in which the nature of the aggregate (porphyry, granite and limestone) was varied, as well as the colour of the mixture, which was lightened by adding pigments (titanium dioxide and iron oxide). The reflectance of each of these mixtures was measured, as well as the temperatures recorded throughout the day at different times of the year. The results obtained make it possible to propose bituminous mixtures whose characteristics can contribute to the reduction of urban heat islands.
Among the most outstanding results is the mixture made with synthetic bitumen, white limestone aggregate and a small percentage of titanium dioxide, which would be the most suitable for urban surfaces without road traffic, given its high reflectance and the greater temperature reduction it offers. With this solution, a surface temperature reduction of 9.7°C is achieved at the beginning of the night in summer, the season with the highest radiation. As for luminescent pavements, paints with different contents of strontium aluminate and glass microspheres have been applied to asphalt mixtures, and the luminance of all the applications designed has been measured by exciting them with electric bulbs that simulate the effect of sunlight. The results obtained at this stage confirm the ability of all the designed dosages to emit light for a certain time, varying according to the proportions used. Not only the effect of the strontium aluminate and microsphere content has been observed, but also the influence of the colour of the base on which the paint is applied: the lighter the base, the higher the luminance. Ongoing studies are focusing on the evaluation of the durability of the designed solutions in order to determine their lifetime.
Keywords: heat island, luminescent paints, reflective pavement, temperature reduction
Procedia PDF Downloads 34
790 Techno-Economic Analysis (TEA) of Circular Economy Approach in the Valorisation of Pig Meat Processing Wastes
Authors: Ribeiro A., Vilarinho C., Luisa A., Carvalho J
Abstract:
The pig meat industry generates large volumes of by-products and co-products, such as blood, bones, skin, trimmings, organs, viscera and skulls, during slaughtering and meat processing, and these must be treated and disposed of ecologically. The yield of these by-products has been reported to account for about 10% to 15% of the value of the live animal in developed countries, although animal by-products account for about two-thirds of the animal after slaughter. The principal wastes produced throughout the pig meat value chain were selected for further valorization: pig manure, pig bones, fats, skins, pig hair, wastewater, wastewater sludges, and other category III animal by-products. According to the potential valorization options, these wastes will be converted into biomethane, fertilizers (phosphorus and digestate), hydroxyapatite, and protein hydrolysates (keratin and collagen). This work includes a comprehensive techno-economic analysis (TEA) for each valorization route or applied technology. Metrics such as Net Present Value (NPV), Internal Rate of Return (IRR) and payback period were used to evaluate economic feasibility. From this analysis, it can be concluded that, for biogas production, the scenarios using pig manure, wastewater sludges and mixed grass and leguminous wastes presented remarkably high economic feasibility, with a positive payback period, NPV and IRR. The optimal scenario, combining pig manure with mixed grass and leguminous wastes, had a payback period of 1.2 years and produced 427,6269 m³ of biomethane annually. Regarding the chemical extraction of phosphorus and nitrogen, the results proved that the process is economically unviable due to negative cash flows, despite high recovery rates.
The TEA of hydrolysis and extraction of keratin hydrolysates indicates that a unit processing and valorizing 10 tons of pig hair per year for the production of keratin hydrolysate has an NPV of 907,940 €, an IRR of 13.07%, and a payback period of 5.41 years. All of these indicators suggest a highly promising project to explore in the future. In contrast, the results for hydrolysis and extraction of collagen hydrolysates showed a process that is economically unviable, with negative cash flows in all scenarios due to the high fat content of the raw materials; the valorization of 10 tons of pig skin had a negative cash flow of 453,743.88 €. The TEA results for extraction and purification of hydroxyapatite from pig bones with pyrolysis indicate that a unit processing and valorizing 10 tons of pig bones per year for the production of hydroxyapatite has an NPV of 1,274,819.00 €, an IRR of 65.43%, and a payback period of 1.5 years over a timeline of 10 years with a discount rate of 10%. These valorization routes and the circular-economy and biorefinery approach offer significant contributions to sustainable bio-based operations within the agri-food industry. This approach transforms waste into valuable resources, enhancing both environmental and economic outcomes and contributing to a more sustainable and circular bioeconomy.
Keywords: techno-economic analysis (TEA), pig meat processing wastes, circular economy, bio-refinery
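The three feasibility metrics used throughout this abstract (NPV, IRR and payback period) can be illustrated with a short sketch. The cash flows, discount rate and helper functions below are invented for demonstration and are not the study's data:

```python
# Hedged illustration of the TEA metrics named above; all figures are invented.

def npv(rate, cash_flows):
    """Net Present Value: cash flows discounted back to year 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0):
    """Internal Rate of Return: the rate at which NPV = 0, found by bisection."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid  # NPV still positive: the break-even rate is higher
        else:
            hi = mid
    return (lo + hi) / 2

def payback_period(cash_flows):
    """Years until cumulative (undiscounted) cash flow turns non-negative."""
    cumulative = 0.0
    for t, cf in enumerate(cash_flows):
        prev = cumulative
        cumulative += cf
        if t > 0 and cumulative >= 0:
            return t - 1 - prev / cf  # interpolate within the crossing year
    return None  # investment never recovered

# Hypothetical project: 100 k-euro up front, 30 k-euro/year for 10 years
flows = [-100_000] + [30_000] * 10
print(f"NPV @ 10%: {npv(0.10, flows):,.0f}")
print(f"IRR: {irr(flows):.1%}")
print(f"Payback: {payback_period(flows):.2f} years")
```

A payback shorter than the project horizon together with a positive NPV and an IRR above the discount rate is the pattern the abstract reports for the biogas, keratin and hydroxyapatite routes.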
Procedia PDF Downloads 17
789 3D Interactions in Under Water Acoustic Simulations
Authors: Prabu Duplex
Abstract:
Due to stringent emission-regulation targets, the large-scale transition to renewable energy sources is a global challenge, and wind power plays a significant role in the solution. This scenario has led to the construction of offshore wind farms, and several wind farms are planned in shallow waters where marine habitats exist. This raises concerns over the impacts of underwater noise on marine species, for example from bridge construction in ocean straits. Environmental organisations warn that such construction could be devastating to aquatic life, since ocean straits are important transit corridors for marine mammals, and some of the highest concentrations of biodiversity in the world are found in these areas. The investigation of ship noise and the piling noise that may occur during bridge construction and operation is therefore vital. Once the source levels are known, the receiver levels can be modelled. With this objective, this work investigates the key requirement for software that can model transmission loss at the high frequencies that may occur during the construction or operation phases. Most propagation models are 2D solutions, calculating the propagation loss along a transect, which does not include horizontal refraction, reflection or diffraction. In many cases, such models provide sufficient accuracy and can produce three-dimensional maps by combining, through interpolation, several two-dimensional (distance and depth) transects. However, in some instances the use of 2D models may not be sufficient to accurately model the sound propagation. A possible example is a scenario where an island or land mass is situated between the source and receiver: the 2D model will produce a shadow behind the land mass where the modelled transects intersect it, whereas in reality diffraction will bend the sound around the land mass.
In such cases, it may be necessary to use a 3D model, which accounts for horizontal diffraction, to accurately represent the sound field. Other scenarios where 2D models may not provide sufficient accuracy include environments characterised by a strongly up-sloping or down-sloping seabed, such as propagation around continental shelves. In line with these objectives, and by means of a case study, this work addresses the importance of 3D interactions in underwater acoustics. The methodology used in this study can also be applied to other 3D underwater sound propagation studies. This work assumes special significance given the increasing interest in using underwater acoustic modelling for environmental impact assessments. Future work includes inter-model comparison in shallow-water environments, considering more of the physical processes known to influence sound propagation, such as scattering from the sea surface. Passive acoustic monitoring of the underwater soundscape with distributed hydrophone arrays is also suggested to investigate the 3D propagation effects discussed in this article.
Keywords: underwater acoustics, naval, maritime, cetaceans
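The N-by-2D approach described above (several 2D transects combined by interpolation) and its shadow-zone limitation can be sketched minimally. The spreading law, the 0.05 dB/km absorption coefficient and the 25 dB "island shadow" penalty below are invented placeholders, not the study's model; a real study would compute each transect with a 2D propagation engine such as a parabolic-equation or ray model:

```python
import math

def tl_transect(r_m, shadowed=False):
    """Toy 2D transmission loss: spherical spreading plus linear absorption
    (0.05 dB/km assumed), with an assumed 25 dB penalty behind a land mass."""
    tl = 20 * math.log10(max(r_m, 1.0)) + 0.05 * r_m / 1000
    return tl + (25.0 if shadowed else 0.0)

def tl_n2d(bearing_deg, r_m, bearings, shadow_flags):
    """N-by-2D estimate: linear interpolation in bearing between the two
    transects that bracket the receiver's azimuth."""
    pairs = list(zip(bearings, shadow_flags))
    for (b0, s0), (b1, s1) in zip(pairs, pairs[1:]):
        if b0 <= bearing_deg <= b1:
            w = (bearing_deg - b0) / (b1 - b0)
            return (1 - w) * tl_transect(r_m, s0) + w * tl_transect(r_m, s1)
    raise ValueError("bearing outside the transect fan")

# Transect fan every 10 degrees; the 20-degree transect passes behind an island
bearings = [0, 10, 20, 30]
shadows = [False, False, True, False]
print(round(tl_n2d(15, 5000, bearings, shadows), 1))
```

The interpolated value at 15 degrees inherits half of the island's 25 dB shadow even though no land lies on that bearing, while inside the true shadow the loss is overestimated because horizontal diffraction is ignored; both artefacts are the motivation for a genuine 3D model.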
Procedia PDF Downloads 20
788 Liquefaction Phenomenon in the Kathmandu Valley during the 2015 Earthquake of Nepal
Authors: Kalpana Adhikari, Mandip Subedi, Keshab Sharma, Indra P. Acharya
Abstract:
The Gorkha, Nepal earthquake of moment magnitude (Mw) 7.8 struck the central region of Nepal on April 25, 2015, with its epicenter about 77 km northwest of the Kathmandu Valley. The peak ground acceleration observed during the earthquake was 0.18 g. This motion induced several geotechnical effects such as landslides, foundation failures, liquefaction, lateral spreading and settlement, and local amplification. An aftershock of moment magnitude (Mw) 7.3 struck northeast of Kathmandu on May 12, 17 days after the main shock, causing additional damage. Kathmandu is the largest city in Nepal, with a population of over four million. As the Kathmandu Valley deposits are composed mainly of sand, silt and clay layers with a shallow groundwater table, liquefaction is highly anticipated. Extensive liquefaction was also observed in the Kathmandu Valley during the 1934 Nepal-Bihar earthquake. Field investigations were carried out in the Kathmandu Valley immediately after the Mw 7.8 April 25 main shock and the Mw 7.3 May 12 aftershock. Geotechnical investigations of both liquefied and non-liquefied sites were conducted after the earthquake. This paper presents observations of liquefaction and liquefaction-induced damage, and a liquefaction potential assessment based on Standard Penetration Tests (SPT) for liquefied and non-liquefied sites. An SPT-based semi-empirical approach has been used for evaluating the liquefaction potential of the soil, and the Liquefaction Potential Index (LPI) has been used to determine liquefaction probability. Recorded ground motions from the event are presented. The geological setting of the Kathmandu Valley and local site effects on the occurrence of liquefaction are described briefly, as are the observed liquefaction case studies. Typically, these are sand boils formed by freshly ejected sand forced out of over-pressurized substrata. At most sites, sand was ejected onto agricultural fields, forming deposits that varied from millimetres to a few centimetres thick.
Liquefaction-induced damage to structures in these areas was not significant, except that buildings in some places tilted slightly. Boiled soils at liquefied sites were collected, and the particle-size distributions of the ejected soils were analyzed. SPT blow counts and soil profiles were obtained at ten liquefied and non-liquefied sites. The factor of safety against liquefaction with depth and the liquefaction potential index of the ten sites were estimated and compared with the liquefaction observed after the 2015 Gorkha earthquake. The liquefaction potential indices obtained from the analysis were found to be consistent with the field observations. The field observations, along with the results of the liquefaction assessment, were compared with the existing liquefaction hazard map. It was found that the existing hazard maps are unrepresentative and underestimate the liquefaction susceptibility in the Kathmandu Valley. The lessons learned from the liquefaction during this earthquake are also summarized in this paper, and some recommendations are made for seismic liquefaction mitigation in the Kathmandu Valley.
Keywords: factor of safety, geotechnical investigation, liquefaction, Nepal earthquake
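As a sketch of how an LPI like the one used above is typically computed (following Iwasaki's commonly cited formulation; the factor-of-safety profile below is invented, not one of the ten study sites):

```python
def lpi(depths_m, fs_values, dz=1.0):
    """Liquefaction Potential Index: integrate (1 - FS) over the top 20 m,
    weighted by w(z) = 10 - 0.5*z so shallow liquefiable layers count more."""
    total = 0.0
    for z, fs in zip(depths_m, fs_values):
        if z > 20.0:
            continue  # only the top 20 m contributes
        severity = max(0.0, 1.0 - fs)  # zero wherever FS >= 1
        weight = 10.0 - 0.5 * z
        total += severity * weight * dz
    return total

# Hypothetical profile: loose, liquefiable sand between about 2 m and 6 m depth
depths = [0.5, 1.5, 2.5, 3.5, 4.5, 5.5, 6.5, 7.5, 8.5, 9.5]  # layer midpoints
fs     = [1.4, 1.2, 0.7, 0.6, 0.65, 0.8, 1.1, 1.3, 1.5, 1.6]
print(round(lpi(depths, fs), 2))
```

In Iwasaki's commonly used classification, an LPI above about 5 indicates high liquefaction potential and above 15 very high, which is how an index computed this way can be compared against field observations of sand boils.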
Procedia PDF Downloads 325
787 Impact of Customer Experience Quality on Loyalty of Mobile and Fixed Broadband Services: Case Study of Telecom Egypt Group
Authors: Nawal Alawad, Passent Ibrahim Tantawi, Mohamed Abdel Salam Ragheb
Abstract:
Providing customers with quality experiences has been confirmed to be a sustainable competitive advantage with a distinct financial impact for companies. The success of service providers now relies on their ability to provide customer-centric services. The importance of perceived service quality and customer experience is widely recognized. The focus of this research is the area of mobile and fixed broadband services. This study is of dual importance, both academically and practically. Academically, this research applies a new model investigating the impact of customer experience quality on loyalty, based on modifying the multiple-item scale for measuring customers’ service experience in a new area rather than depending on the traditional models. The integrated scale embraces four dimensions: service experience, outcome focus, moments of truth and peace of mind. In addition, it gives a scientific explanation for this relationship, filling a gap in the literature: no previous work has correlated or explained these relations using such an integrated model, and this is the first time such a modified and integrated model has been applied in the telecom field. Practically, this research gives insights to marketers and practitioners for improving customer loyalty by improving the experience quality of broadband customers, which is interpreted through suggested outcomes: purchase, commitment, repeat purchase and word of mouth; this approach is one of the emerging topics in service marketing. Data were collected through 412 questionnaires and analyzed using structural equation modeling. Findings revealed that both outcome focus and moments of truth have a significant impact on loyalty, while both service experience and peace of mind have an insignificant impact on loyalty. In addition, it was found that 72% of the variation in loyalty is explained by the model. The researcher also measured the Net Promoter Score and gave an explanation for the results.
Furthermore, customers’ priorities for broadband services were assessed. The researcher recommends that the findings of this research be considered in the future plans of Telecom Egypt Group, and that the model be applied in the same industry, especially in developing countries with similar circumstances and service settings. This research is a positive contribution to service marketing, particularly in the telecom industry, making marketing more reliable, as managers can relate investments in service experience directly to the performance measures closest to income, for instance repurchase behavior, positive word of mouth and commitment. Finally, the researcher recommends that future studies consider this model to explain significant marketing outcomes such as share of wallet and, ultimately, profitability.
Keywords: broadband services, customer experience quality, loyalty, net promoter score
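For reference, the Net Promoter Score measured in the study follows a standard formula; the 0-10 ratings below are invented to illustrate it:

```python
def nps(ratings):
    """Net Promoter Score: % promoters (ratings 9-10) minus % detractors (0-6)."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

sample = [10, 9, 9, 8, 8, 7, 7, 6, 5, 3]  # 3 promoters, 4 passives, 3 detractors
print(nps(sample))  # -> 0.0
```

A score above zero means promoters outnumber detractors; passives (7-8) dilute the score but do not enter the numerator.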
Procedia PDF Downloads 268
786 Functionalizing Gold Nanostars with Ninhydrin as Vehicle Molecule for Biomedical Applications
Authors: Swati Mishra
Abstract:
In recent years, there has been an explosion in gold nanoparticle (GNP) research, with a rapid increase in publications in diverse fields, including imaging, bioengineering, and molecular biology. GNPs exhibit unique physicochemical properties, including surface plasmon resonance (SPR), and bind amine and thiol groups, allowing surface modification and use in biomedical applications. Nanoparticle functionalization is the subject of intense research at present, with rapid progress being made towards developing biocompatible, multi-functional particles. In the present study, a photochemical method has been used to functionalize variously shaped GNPs, such as nanostars, with molecules such as ninhydrin. Ninhydrin is bactericidal, virucidal, fungicidal and antigen-antibody reactive, and is used in fingerprint technology in forensics. GNPs efficiently functionalized with ninhydrin will bind to the amino acids on a target protein, which is of eminent importance during the pandemic, especially where long-term treatments of COVID-19 bring many drug side effects. The photochemical method was adopted as it provides low thermal load, selective reactivity, selective activation, and radiation controlled in time, space, and energy. The GNPs exhibit their characteristic spectrum, but a distinct blue- or red-shift in the peak will be observed after UV irradiation, confirming efficient ninhydrin binding. The bound ninhydrin in the GNP carrier, upon chemically reacting with any amino acid, will lead to the formation of Ruhemann's purple. A common method of GNP production is the citrate reduction of Au[III] derivatives such as chloroauric acid (HAuCl₄) in water to Au[0] through a one-step synthesis of size-tunable GNPs. The following reagents were prepared to validate the approach.
- Reagent A: solution 1, 0.0175 g ninhydrin in 5 ml Millipore water
- Reagent B: 30 µl of HAuCl₄·3H₂O in 3 ml of solution 1
- Reagent C: 1 µl of gold nanostars in 3 ml of solution 1
- Reagent D: 6 µl of cetrimonium bromide (CTAB) in 3 ml of solution 1
- Reagent E: 1 µl of gold nanostars in 3 ml of ethanol
- Reagent F: 30 µl of HAuCl₄·3H₂O in 3 ml of ethanol
- Reagent G: 30 µl of HAuCl₄·3H₂O in 3 ml of solution 2
- Reagent H: solution 2, 0.0087 g ninhydrin in 5 ml Millipore water
- Reagent I: 30 µl of HAuCl₄·3H₂O in 3 ml of water
The reagents were irradiated at 254 nm for 15 minutes, followed by UV-visible spectroscopy. The wavelength was selected based on the one reported for excitation of the similar molecule phthalimide. It was observed that solutions B and G deviate around 600 nm, while C peaks distinctively at 567.25 nm and 983.9 nm. Though it is difficult to say exactly what chemical reaction is occurring, ATR-FTIR of the reagents will confirm that ninhydrin does not form Ruhemann's purple in the absence of amino acids. Therefore, in these experiments we achieved the functionalization of gold nanostars with ninhydrin, corroborated by the deviation in the spectrum obtained from a mixture of GNPs and ninhydrin irradiated with UV light. This prepares them as carrier molecules to take up amino acids for targeted delivery or germicidal action.
Keywords: gold nanostars, ninhydrin, photochemical method, UV-visible spectroscopy
Procedia PDF Downloads 149
785 An Exploration of the Emergency Staff’s Perceptions and Experiences of Teamwork and the Skills Required in the Emergency Department in Saudi Arabia
Authors: Sami Alanazi
Abstract:
Teamwork practices have been recognized as a significant strategy to improve patient safety, quality of care, and staff and patient satisfaction in healthcare settings, particularly within the emergency department (ED). EDs depend heavily on teams of interdisciplinary healthcare staff to carry out their operational goals and core business of providing care to the seriously ill and injured. The ED is also recognized as a high-risk area in relation to service demand and the potential for human error. Few studies have considered the perceptions and experiences of ED staff (physicians, nurses, allied health professionals, and administration staff) about the practice of teamwork, especially in Saudi Arabia (SA), and no studies have been conducted to explore teamwork practices in its EDs. Aim: To explore the practices of teamwork from the perspectives and experiences of staff (physicians, nurses, allied health professionals, and administration staff) when interacting with each other in the admission areas of the ED of a public hospital in the Northern Border region of SA. Method: A qualitative case study design was utilized, drawing on two data-collection methods. The first comprised semi-structured interviews (n=22) with physicians (6), nurses (10), allied health professionals (3), and administrative members (3) working in the ED of a hospital in the Northern Border region of SA; the second was non-participant direct observation. All data were analyzed using thematic analysis. Findings: The main themes that emerged from the analysis were: the meaning of teamwork, reasons for teamwork, ED environmental factors, organizational factors, the value of communication, leadership, teamwork skills in the ED, team members' behaviors, multicultural teamwork, and patients' and families' behaviors. Discussion: Working in the ED environment played a major role in affecting work performance as well as team dynamics.
Communication, time management, fast-paced performance, multitasking, motivation, leadership, and stress management were highlighted by the participants as fundamental skills with a major impact on team members and patients in the ED. It was found that team members' behaviors affected team dynamics as well as ED health services; these included disputes among team members, conflict, cooperation, uncooperative members, neglect, and members' emotions. The behaviors of patients and their companions also had a direct impact on the team and the quality of the services. In addition, cultural differences separated team members and created undesirable gaps, such as gender segregation, national-origin discrimination, and similarities and differences in interests. Conclusion: Effective teamwork, in the context of the emergency department, was recognized as an essential element in achieving quality of care as well as improving staff satisfaction.
Keywords: teamwork, barrier, facilitator, emergency department
Procedia PDF Downloads 142
784 Understanding the Impact of Out-of-Sequence Thrust Dynamics on Earthquake Mitigation: Implications for Hazard Assessment and Disaster Planning
Authors: Rajkumar Ghosh
Abstract:
Earthquakes pose significant risks to human life and infrastructure, highlighting the importance of effective earthquake mitigation strategies. Traditional earthquake modelling and mitigation efforts have largely focused on the primary fault segments and their slip behaviour. However, earthquakes can exhibit complex rupture dynamics, including out-of-sequence thrust (OOST) events, which occur on secondary or subsidiary faults. This abstract examines the impact of OOST dynamics on earthquake mitigation strategies and their implications for hazard assessment and disaster planning. OOST events challenge conventional seismic hazard assessments by introducing additional fault segments and potential rupture scenarios that were previously unrecognized or underestimated. Consequently, these events may increase the overall seismic hazard in affected regions. The study reviews recent case studies and research findings that illustrate the occurrence and characteristics of OOST events. It explores the factors contributing to OOST dynamics, such as stress interactions between fault segments, fault geometry, and mechanical properties of fault materials. Moreover, it investigates the potential triggers and precursory signals associated with OOST events to enhance early warning systems and emergency response preparedness. The abstract also highlights the significance of incorporating OOST dynamics into seismic hazard assessment methodologies. It discusses the challenges associated with accurately modelling OOST events, including the need for improved understanding of fault interactions, stress transfer mechanisms, and rupture propagation patterns. Additionally, the abstract explores the potential for advanced geophysical techniques, such as high-resolution imaging and seismic monitoring networks, to detect and characterize OOST events. Furthermore, the abstract emphasizes the practical implications of OOST dynamics for earthquake mitigation strategies and urban planning. 
It addresses the need to revise building codes, land-use regulations, and infrastructure designs to account for the increased seismic hazard associated with OOST events. It also underscores the importance of public awareness campaigns to educate communities about the potential risks and safety measures specific to OOST-induced earthquakes. This work sheds light on the impact of out-of-sequence thrust dynamics on earthquake mitigation. By recognizing and understanding OOST events, researchers, engineers, and policymakers can improve hazard assessment methodologies, enhance early warning systems, and implement effective mitigation measures. By integrating knowledge of OOST dynamics into urban planning and infrastructure development, societies can strive for greater resilience in the face of earthquakes, ultimately minimizing the potential for loss of life and infrastructure damage.
Keywords: earthquake mitigation, out-of-sequence thrust, seismic, satellite imagery
Procedia PDF Downloads 90
783 Identification of Three Strategies to Enhance University Students’ Professional Identity, Using Hierarchical Regression Analysis
Authors: Alba Barbara-i-Molinero, Rosalia Cascon-Pereira, Ana Beatriz Hernandez
Abstract:
Students’ transitions from high school to university are challenged by the lack of continuity between the two contexts. This mismatch directly affects students by generating feelings of anxiety and uncertainty, which increases dropout rates and reduces students’ academic success. This discontinuity arises because ‘transitions concern a restructuring of what the person does and who the person perceives him or herself to be’. Hence, identity becomes essential in these transitions. Generally, identity is the answer to questions such as ‘who am I?’ or ‘who are we?’. It integrates personal identity and as many social identities as there are groups the individual feels part of. A case in point for constructing a social identity is identification with a profession. For this reason, one way to ease the tension generated during transitions is to apply strategies oriented towards enhancing students’ professional identity at their point of entry to the higher education institution. That would create a sense of continuity between the high school and higher education contexts, increasing their professional identity strength. To develop strategies oriented towards enhancing students’ professional identity, it is important to analyze what influences it. Several factors influence professional identity (e.g., professional status, the recommendations of family and peers, the academic environment, or the chosen bachelor degree), but there is a gap in the literature analyzing the impact of these factors across more than one bachelor degree. In this regard, our study takes an additional step, with the aim of evaluating the influence of several factors on professional identity using a cohort of university students from multiple degrees, aged between 17 and 19 years.
To do so, we used hierarchical regression analyses to assess the impact of the following factors: external motivation conditionals (EMC), educational experience conditionals (EEC) and personal motivation conditionals (PMC). After conducting the analyses, we found that the assessed factors influenced students’ professional identity differently according to their bachelor degree and discipline. For example, PMC and EMC positively affected science students, while architecture, law and economics, and engineering students were influenced only by PMC. Based on these influences, we propose three different strategies aimed at enhancing students’ professional identity in the short and long term: to enhance students’ professional identity before entry to university through campus visits and icebreaker activities; to apply recruitment strategies that provide realistic information about the bachelor degree; and to incorporate different activities, such as in-vitro, in-situ and self-directed activities, aimed at enhancing students’ professional identity longitudinally from within the university. From these results, theoretical contributions and practical implications arise. First, we contribute to the literature by identifying which factors influence students from different bachelor degrees, for which there is still no evidence. Second, using the obtained results as a benchmark, we contribute from a practical perspective by proposing several alternative strategies to increase students’ professional identity strength, aiming to ease their transition from high school to higher education.
Keywords: professional identity, higher education, educational strategies, students
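The hierarchical regression described above enters predictor blocks stepwise and reads each block's unique contribution from the increment in R². A minimal sketch with synthetic data follows; the coefficients, sample size and block contents are invented, not the study's:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
emc = rng.normal(size=(n, 2))  # block 1: external motivation conditionals
eec = rng.normal(size=(n, 2))  # block 2: educational experience conditionals
pmc = rng.normal(size=(n, 2))  # block 3: personal motivation conditionals
# Synthetic outcome: professional identity strength driven mostly by PMC
y = 0.2 * emc[:, 0] + 0.1 * eec[:, 0] + 0.8 * pmc[:, 0] + rng.normal(size=n)

def r_squared(X, y):
    """R-squared of an OLS fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

# Enter the blocks stepwise; the delta at each step is that block's contribution
prev = 0.0
for name, X in [("EMC", emc),
                ("EMC+EEC", np.hstack([emc, eec])),
                ("EMC+EEC+PMC", np.hstack([emc, eec, pmc]))]:
    r2 = r_squared(X, y)
    print(f"{name}: R2 = {r2:.3f} (delta = {r2 - prev:.3f})")
    prev = r2
```

With this synthetic outcome, the largest R² increment appears when the PMC block enters, mirroring the pattern the abstract reports for several disciplines.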
Procedia PDF Downloads 145
782 Using Machine Learning to Extract Patient Data from Non-standardized Sports Medicine Physician Notes
Authors: Thomas Q. Pan, Anika Basu, Chamith S. Rajapakse
Abstract:
Machine learning requires data that is categorized into features that models train on. This topic is important to the field of sports medicine due to the many tools it provides to physicians, such as diagnosis support and risk assessment. Physician notes that healthcare professionals take are usually unclean and not suitable for model training. The objective of this study was to develop and evaluate an advanced approach for extracting key features from sports medicine data without the need for extensive model training or data labeling. An LLM (Large Language Model) was given a narrative (physician’s notes) and prompted to extract four features (details about the patient). The narrative was found in a datasheet that contained six columns: Case Number, Validation Age, Validation Gender, Validation Diagnosis, Validation Body Part, and Narrative. The validation columns represent the accurate responses that the LLM attempts to output. Given the narrative, the LLM would output its response and extract the age, gender, diagnosis, and injured body part, with each category taking up one line. The output would then be cleaned, matched, and added to new columns containing the extracted responses. Five ways of checking the accuracy were used: unclear count, substring comparison, LLM comparison, LLM re-check, and hand-evaluation. The unclear count essentially represented the extractions the LLM missed. This can also be understood as the recall score ([total - false negatives] over total). The rest correspond to the precision score ([total - false positives] over total). Substring comparison evaluated the likeness of the validation (X) and extracted (Y) columns by checking if X’s results were a substring of Y’s findings and vice versa. LLM comparison directly asked an LLM if X’s and Y’s results were similar. LLM re-check prompted the LLM to see if the extracted results can be found in the narrative. 
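A minimal sketch of the two simplest checks described above, the unclear-count recall and the substring-comparison precision; function names and the example labels are illustrative, not taken from the study's pipeline:

```python
# Sketch of the unclear-count recall and substring-comparison precision
# checks. Column values and the "unclear" marker are illustrative.

def substring_match(validated: str, extracted: str) -> bool:
    """True if either value contains the other (case-insensitive)."""
    v, e = validated.strip().lower(), extracted.strip().lower()
    return bool(v) and bool(e) and (v in e or e in v)

def precision_by_substring(validated, extracted):
    """Share of non-missing extractions that match the validation column."""
    pairs = [(v, e) for v, e in zip(validated, extracted) if e != "unclear"]
    if not pairs:
        return 0.0
    return sum(substring_match(v, e) for v, e in pairs) / len(pairs)

def recall_from_unclear(extracted):
    """(total - misses) / total, counting 'unclear' outputs as misses."""
    misses = sum(1 for e in extracted if e == "unclear")
    return (len(extracted) - misses) / len(extracted)

validated = ["chest", "ankle", "concussion", "wrist"]
extracted = ["upper trunk", "ankle", "concussion", "unclear"]
print(precision_by_substring(validated, extracted))  # 'chest' vs 'upper trunk' fails
print(recall_from_unclear(extracted))
```

Note how the 'chest' / 'upper trunk' pair counts as a miss under substring comparison even though the answers are compatible, which is exactly the failure mode the abstract describes.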
Lastly, a random selection of 1,000 narratives was hand-evaluated to give an estimate of how well the LLM-based feature extraction model performed. With a selection of 10,000 narratives, the LLM-based approach had a recall score of roughly 98%. However, the precision scores of the substring comparison and LLM comparison models were around 72% and 76%, respectively. The reason for these low figures is the minute differences between answers. For example, the ‘chest’ is part of the ‘upper trunk’; however, these models cannot detect that. On the other hand, the LLM re-check and the subset of hand-tested narratives showed precision scores of 96% and 95%. If this subset is used to extrapolate the possible outcome for all 10,000 narratives, the LLM-based approach would be strong in both precision and recall. These results indicate that an LLM-based feature extraction model could be a useful way for medical data in sports to be collected and analyzed by machine learning models. Wide use of this method could potentially increase the availability of data, thus improving machine learning algorithms and supporting doctors with more enhanced tools.
Procedia PDF Downloads 12
781 Enhancing the Effectiveness of Witness Examination through Deposition System in Korean Criminal Trials: Insights from the U.S. Evidence Discovery Process
Authors: Qi Wang
Abstract:
With the expansion of trial-centered principles, the importance of witness examination in Korean criminal proceedings has been increasingly emphasized. However, several practical challenges have emerged in courtroom examinations, including concerns about witnesses’ memory deterioration due to prolonged trial periods, the possibility of inaccurate testimony due to courtroom anxiety and tension, risks of testimony retraction, and witnesses’ refusal to appear. These issues have led to a decline in the effective utilization of witness testimony. This study analyzes the deposition system, which is widely used in the U.S. evidence discovery process, and examines its potential implementation within the Korean criminal procedure framework. Furthermore, it explores the scope of application, procedural design, and measures to prevent potential abuse if the system were to be adopted. Under the adversarial litigation structure that has evolved through several amendments to the Criminal Procedure Act, the deposition system, although conducted pre-trial, serves as a preliminary procedure to facilitate efficient and effective witness examination during trial. This system not only aligns with the goal of discovering substantive truth but also upholds the practical ideals of trial-centered principles while promoting judicial economy. Furthermore, with the legal foundation established by Article 266 of the Criminal Procedure Act and related provisions, this study concludes that the implementation of the deposition system is both feasible and appropriate for the Korean criminal justice system. The specific functions of depositions include providing case-related information to refresh witnesses’ memory as a preliminary to courtroom examination, pre-reviewing existing statement documents to enhance trial efficiency, and conducting preliminary examinations on key issues and anticipated questions. 
The subsequent courtroom witness examination focuses on verifying testimony through public and cross-examination, identifying and analyzing contradictions in testimony, and conducting double verification of testimony credibility under judicial supervision. Regarding operational aspects, both prosecution and defense may request depositions, subject to court approval. The deposition process involves video or audio recording, complete documentation by court reporters, and the preparation of transcripts, with copies provided to all parties and the original included in court records. The admissibility of deposition transcripts is recognized under Article 311 of the Criminal Procedure Act. Given prosecutors’ advantageous position in evidence collection, which may lead to indifference or avoidance of depositions, the study emphasizes the need to reinforce prosecutors’ public interest status and objective duties. Additionally, it recommends strengthening pre-employment ethics education and post-violation disciplinary measures for prosecutors.
Keywords: witness examination, deposition system, Korean criminal procedure, evidence discovery, trial-centered principle
Procedia PDF Downloads 13
780 Predicting Food Waste and Losses Reduction for Fresh Products in Modified Atmosphere Packaging
Authors: Matar Celine, Gaucel Sebastien, Gontard Nathalie, Guilbert Stephane, Guillard Valerie
Abstract:
To increase the very short shelf life of fresh fruits and vegetables, Modified Atmosphere Packaging (MAP) allows an optimal atmosphere composition to be maintained around the product and thus prevents its decay. This technology relies on the modification of the internal packaging atmosphere due to the equilibrium between production/consumption of gases by the respiring product and gas permeation through the packaging material. While, to the best of our knowledge, the benefit of MAP for fresh fruits and vegetables has been widely demonstrated in the literature, its effect on shelf life increase has never been quantified and formalized in a clear and simple manner, making it difficult to anticipate its economic and environmental benefit, notably through the decrease of food losses. Mathematical modelling of mass transfers in the food/packaging system is the basis for a better design and dimensioning of the food packaging system. But up to now, existing models have not permitted estimation of food quality or of the shelf life gain reached by using MAP. However, shelf life prediction is an indispensable prerequisite for quantifying the effect of MAP on food losses reduction. The objective of this work is to propose an innovative approach to predict the shelf life of MAP food products and then to link it to a reduction of food losses and wastes. For this purpose, a ‘Virtual MAP modeling tool’ was developed by coupling a new predictive deterioration model (based on visual surface prediction of deterioration encompassing colour, texture and spoilage development) with models from the literature for respiration and permeation. A major input of this modelling tool is the maximal percentage of deterioration (MAD), which was assessed from dedicated consumer studies. Strawberries of the variety Charlotte were selected as the model food for their high perishability and high respiration rate (50-100 ml CO₂/h/kg produced at 20°C), making them a good representative of challenging post-harvest storage. 
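The respiration/permeation balance that the coupled models capture can be sketched as a single mass-balance equation for headspace O₂: permeation through the film brings O₂ in, respiration consumes it, and an equilibrium-modified atmosphere results. All parameter values below are illustrative placeholders, not those of the study's validated tool:

```python
# Toy mass balance for headspace O2 in a MAP pack: film permeation in,
# Michaelis-Menten-type respiratory consumption out. All parameters are
# illustrative placeholders, not values from the study.

def simulate_headspace_o2(hours=48.0, dt=0.01):
    y_out = 0.21          # ambient O2 fraction outside the pack
    y = 0.21              # initial headspace O2 fraction
    headspace = 0.5       # free volume in the pack (L)
    perm = 0.002          # film permeation coefficient (L/h per unit fraction gap)
    r_max = 0.05          # maximal O2 consumption by the product (L/h)
    km = 0.02             # half-saturation constant (O2 fraction)
    t = 0.0
    while t < hours:      # simple explicit Euler integration
        permeation = perm * (y_out - y)        # O2 flux through the film
        respiration = r_max * y / (km + y)     # O2 uptake by the produce
        y += (permeation - respiration) * dt / headspace
        t += dt
    return y

print(simulate_headspace_o2())  # headspace O2 settles well below ambient
```

With these placeholder values respiration dominates permeation, so the simulated headspace O₂ falls far below the ambient 21%; tuning film permeance against the product's respiration rate is precisely the dimensioning problem the modelling tool addresses.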
A value of 13% was determined as the limit of acceptability for consumers, permitting products’ shelf life to be defined. The ‘Virtual MAP modeling tool’ was validated in isothermal conditions (5, 10 and 20°C) and in dynamic temperature conditions mimicking commercial post-harvest storage of strawberries. RMSE values were systematically lower than 3% for the O₂, CO₂ and deterioration profiles as a function of time, confirming the goodness of model fitting. For the investigated temperature profile, a shelf life gain of 0.33 days was obtained in MAP compared to the conventional storage situation (no MAP condition). A shelf life gain of more than 1 day could be obtained for optimized post-harvest conditions, as numerically investigated. Such a shelf life gain permitted the anticipation of a significant reduction of food losses at the distribution and consumer steps. This reduction in food losses as a function of shelf life gain was quantified using a dedicated mathematical equation developed for this purpose.
Keywords: food losses and wastes, modified atmosphere packaging, mathematical modeling, shelf life prediction
Procedia PDF Downloads 183
779 Technological Challenges for First Responders in Civil Protection; the RESPOND-A Solution
Authors: Georgios Boustras, Cleo Varianou Mikellidou, Christos Argyropoulos
Abstract:
Summer 2021 was marked by a number of prolific fires in the EU (Greece, Cyprus, France) as well as outside the EU (USA, Turkey, Israel). This series of dramatic events has stretched national civil protection systems and first responders in particular. Despite the introduction of national, regional and international frameworks (e.g. rescEU), a number of challenges have arisen, not only related to climate change. RESPOND-A (funded by the European Commission under Horizon 2020, Contract Number 883371) introduces a unique five-tier project architecture for best associating modern telecommunications technology with novel practices that help First Responders save lives, while safeguarding themselves, more effectively and efficiently. The introduced architecture includes Perception, Network, Processing, Comprehension, and User Interface layers, which can be flexibly elaborated to support multiple levels and types of customization, so that the intended technologies and practices can adapt to any European Environment Agency (EEA)-type disaster scenario. During the preparation of the RESPOND-A proposal, some of our First Responder partners expressed the need for an information management system that could boost existing emergency response tools, while others envisioned a complete end-to-end network management system that would offer high Situational Awareness, Early Warning and Risk Mitigation capabilities. The intuition behind these needs and visions rests on the long-term experience of these responders, as well as their smoldering worry that the evolving threat of climate change and the consequences of industrial accidents will become more frequent and severe. Three large-scale pilot studies are planned in order to illustrate the capabilities of the RESPOND-A system. 
The first pilot study will focus on the deployment and operation of all available technologies for continuous communications, enhanced Situational Awareness and improved health and safety conditions for First Responders, according to a big fire scenario in a Wildland Urban Interface (WUI) zone. An important issue will be examined during the second pilot study. Unobstructed communication, in the form of the flow of information, is severely affected during a crisis: the flow of information within the wider public, from the first responders to the public and vice versa. Call centers are flooded with requests and communication is compromised or breaks down on many occasions, which in turn affects the effort to build a common operations picture for all first responders. At the same time, the information that reaches the operational centers from the public is scarce, especially in the aftermath of an incident. Understandably, if traffic is disrupted, aerial means are the only way left to observe and to perform rapid area surveys. Results and work in progress will be presented in detail, and challenges in relation to civil protection will be discussed.
Keywords: first responders, safety, civil protection, new technologies
Procedia PDF Downloads 143
778 Introduction of Acute Paediatric Services in Primary Care: Evaluating the Impact on GP Education
Authors: Salman Imran, Chris Healey
Abstract:
Traditionally, medical care of children in England and Wales starts in primary care, with a referral to secondary care paediatricians who may not investigate further. Many primary care doctors do not undergo a paediatric rotation/exposure in training. As a result, there are many who have not acquired the necessary skills to manage children, hence increasing hospital referrals. With the current demand on hospitals in the National Health Service, managing more problems in the community is needed. One way of handling this is to set up clinics, meetings and huddles in GP surgeries where the professionals involved (general practitioner, paediatrician, health visitor, community nurse, dietician, school nurse) come together and share information, which can help improve communication and care. The increased awareness and education that paediatricians can impart in this way will help boost the confidence of primary care professionals to be more self-sufficient. This has been tried successfully in other regions, e.g., at St. Mary’s Hospital in London, but is crucial for a more rural setting like ours. The primary aim of this project was to educate GPs specifically and all other involved health professionals generally. Additional benefits would be providing care nearer home, increasing patients’ confidence in their local surgery, improving communication and reducing unnecessary patient flow to already stretched hospital resources. Methods: This was done as a Plan-Do-Study-Act (PDSA) cycle. Three clinics were delivered in different practices over six months, where feedback from staff and patients was collected. Designated time for teaching/discussion was used, which involved some cases from the actual clinics. Both new and follow-up patients were included. Two clinics were conducted by a paediatrician and a nurse, whilst the third involved a paediatrician and a local doctor. The distance from the hospital to the clinics varied from approximately two miles to 22 miles. 
All equipment used was provided by primary care. Results: A total of 30 patients were seen. All patients found the location convenient as it was nearer than the hospital. 70-90% clearly understood the reason for the change in venue. 95% agreed on the importance of their local doctor being involved in their care. 20% needed to be seen in the hospital for further investigations. Patients felt this to be a more personalised, in-depth, friendly and polite experience. Local physicians felt this to be a more relaxed, familiar and local experience for their patients, and they managed to get immediate feedback regarding their own clinical management. 90% felt they gained important learning from the discussion time, and the paediatrician also learned about their understanding and gaps in knowledge/focus areas. 80% felt this time was valuable for targeted learning. Equipment, information technology, and office space could be improved for the smooth running of any future clinics. Conclusion: An acute paediatric outpatient clinic can be successfully established in primary care facilities. Careful patient selection and adequate facilities are important. We have demonstrated a further step in reducing patient flow to hospitals and upskilling primary care health professionals. This service is expected to become more efficient with experience.
Keywords: clinics, education, paediatricians, primary care
Procedia PDF Downloads 164
777 Threats to the Business Value: The Case of Mechanical Engineering Companies in the Czech Republic
Authors: Maria Reznakova, Michala Strnadova, Lukas Reznak
Abstract:
Successful achievement of strategic goals requires an effective performance management system, i.e. determining the appropriate indicators measuring the rate of goal achievement. Assuming that the goal of the owners is to grow the assets they have invested, it is vital to identify the key performance indicators which contribute to value creation. These indicators are known as value drivers. Based on the literature search undertaken, a value driver is defined as any factor that affects the value of an enterprise. The important factors are then monitored by both financial and non-financial indicators. Financial performance indicators are most useful in strategic management, since they indicate whether a company's strategy implementation and execution are contributing to bottom-line improvement. Non-financial indicators are mainly used for short-term decisions. The identification of value drivers, however, is problematic for companies which are not publicly traded. Therefore financial ratios continue to be used to measure the performance of companies, despite considerable criticism. The main drawback of such indicators is the fact that they are calculated based on accounting data, while accounting rules may differ considerably across different environments. For successful enterprise performance management it is vital to avoid factors that may reduce (or even destroy) its value. Among the known factors reducing enterprise value are a lack of capital, the lack of a strategic management system and poor quality of production. In order to gain further insight into the topic, the paper presents the results of research identifying factors that adversely affect the performance of mechanical engineering enterprises in the Czech Republic. The research methodology covers both the qualitative and the quantitative aspects of the topic. 
The qualitative data were obtained from a questionnaire survey of the enterprises’ senior management, while the quantitative financial data were obtained from the Analysis Major Database for European Sources (AMADEUS). The questionnaire prompted managers to list factors which negatively affect the business performance of their enterprises. The range of potential factors was based on secondary research – an analysis of previously undertaken questionnaire surveys and of studies published in the scientific literature. The results of the survey were evaluated both in general, by average scores, and by detailed sub-analyses of additional criteria. These include company-specific characteristics, such as size and ownership structure. The evaluation also included a comparison of the managers’ opinions and the performance of their enterprises – measured by return on equity and return on assets ratios. The comparisons were tested by a series of non-parametric tests of statistical significance. The results of the analyses show that the factors most detrimental to enterprise performance include the incompetence of responsible employees and disregard of customers’ requirements.
Keywords: business value, financial ratios, performance measurement, value drivers
Procedia PDF Downloads 224
776 Planckian Dissipation in Bi₂Sr₂Ca₂Cu₃O₁₀₋δ
Authors: Lalita, Niladri Sarkar, Subhasis Ghosh
Abstract:
Since the discovery of high temperature superconductivity (HTSC) in cuprates, several aspects of this phenomenon have fascinated the physics community. The most debated one is the linear temperature dependence of the normal state resistivity over a wide range of temperatures, in violation of Fermi liquid theory. The linear-in-T resistivity (LITR) is an indication of a strongly correlated metallic state, known as a “strange metal”, attributed to non-Fermi liquid (NFL) behavior. The proximity of superconductivity to LITR suggests that there may be a common underlying origin. The LITR has been shown to be due to an unknown dissipative phenomenon, restricted by quantum mechanics and commonly known as “Planckian dissipation”, a term first coined by Zaanen; the associated inelastic scattering time τ is given by 1/τ = αkBT/ℏ, where ℏ, kB and α are the reduced Planck constant, the Boltzmann constant and a dimensionless constant of order unity, respectively. Since the first report, experimental support for α ~ 1 has been appearing in the literature. There are several striking issues which remain to be resolved if we desire to find out, or at least get a clue towards, the microscopic origin of maximal dissipation in cuprates. (i) The universality of α ~ 1: doubts have recently been raised in some cases. (ii) So far, Planckian dissipation has been demonstrated in overdoped cuprates, but if the proximity to quantum criticality is important, then Planckian dissipation should also be observed in optimally doped and marginally underdoped cuprates. The link between Planckian dissipation and quantum criticality still remains an open problem. (iii) The validity of Planckian dissipation in all cuprates is an important issue. Here, we report a reversible change in the superconducting behavior of the high temperature superconductor Bi2Sr2Ca2Cu3O10+δ (Bi-2223) under dynamic doping induced by photo-excitation. Two doped Bi-2223 samples, x = 0.16 (optimally doped) and x = 0.145 (marginally doped), have been used for this investigation. 
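For orientation, the quoted formula fixes a concrete timescale. Taking α = 1, the dissipation-limited scattering time at, for example, T = 100 K is

```latex
\tau = \frac{\hbar}{\alpha k_B T}
     \approx \frac{1.055\times10^{-34}\ \mathrm{J\,s}}
                  {\left(1.381\times10^{-23}\ \mathrm{J/K}\right)\times 100\ \mathrm{K}}
     \approx 7.6\times10^{-14}\ \mathrm{s},
```

i.e. on the order of tens of femtoseconds; the experimental question is whether the measured transport scattering rate tracks this bound with α of order unity.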
It is realized that steady state photo-excitation converts magnetic Cu²⁺ ions to nonmagnetic Cu¹⁺ ions, which reduces the superconducting transition temperature (Tc) by suppressing the superfluid density. In Bi-2223, one would expect the maximum suppression of Tc to occur at the charge transfer gap. We have observed that the suppression of Tc starts at 2 eV, which is the charge transfer gap in Bi-2223. We attribute this to the transition from Cu-3d⁹ (Cu²⁺) to Cu-3d¹⁰ (Cu⁺), known as the d⁹ − d¹⁰L transition; photoexcitation turns some Cu ions in the CuO₂ planes into spinless, non-magnetic potential perturbations, as Zn²⁺ does in the CuO₂ plane of Zn-doped cuprates. The resistivity varies linearly with temperature with or without photo-excitation. Tc can be varied by almost 40 K by photoexcitation. Superconductivity can be destroyed completely by introducing ≈ 2% of Cu¹⁺ ions for this range of doping. With this controlled variation of Tc and resistivity, a detailed investigation has been carried out to reveal Planckian dissipation in underdoped to optimally doped Bi-2223. The most important aspect of this investigation is that we could vary Tc dynamically and reversibly, so that LITR and the associated Planckian dissipation can be studied over wide ranges of Tc without changing the doping chemically.
Keywords: linear resistivity, HTSC, Planckian dissipation, strange metal
Procedia PDF Downloads 62
775 Contextual Toxicity Detection with Data Augmentation
Authors: Julia Ive, Lucia Specia
Abstract:
Understanding and detecting toxicity is an important problem in supporting safer human interactions online. Our work focuses on contextual toxicity detection, where automated classifiers are tasked with determining whether a short textual segment (usually a sentence) is toxic within its conversational context. We use “toxicity” as an umbrella term to denote a number of variants commonly named in the literature, including hate, abuse and offence, among others. Detecting toxicity in context is a non-trivial problem and has been addressed by very few previous studies. These studies have analysed the influence of conversational context on human perception of toxicity in controlled experiments and concluded that humans rarely change their judgements in the presence of context. They have also evaluated contextual detection models based on state-of-the-art Deep Learning and Natural Language Processing (NLP) techniques. Counterintuitively, they reached the general conclusion that computational models tend to suffer performance degradation in the presence of context. We challenge these empirical observations by devising better contextual predictive models that also rely on NLP data augmentation techniques to create larger and better datasets. In our study, we start by further analysing the human perception of toxicity in conversational data (i.e., tweets), in the absence versus presence of context, in this case, previous tweets in the same conversational thread. We observed that the conclusions of previous work on human perception are mainly due to data issues: the contextual data available does not provide sufficient evidence that context is indeed important (even for humans). The data problem is common in current toxicity datasets: cases labelled as toxic are either obviously toxic (i.e., overt toxicity with swear, racist, etc. 
words), and thus context is not needed for a decision, or are ambiguous, vague or unclear even in the presence of context; in addition, the data contains labelling inconsistencies. To address this problem, we propose to automatically generate contextual samples where toxicity is not obvious (i.e., covert cases) without context, or where different contexts can lead to different toxicity judgements for the same tweet. We generate toxic and non-toxic utterances conditioned on the context or on target tweets using a range of techniques for controlled text generation (e.g., Generative Adversarial Networks and steering techniques). On the contextual detection models, we posit that their poor performance is due to limitations both of the data they are trained on (the problems stated above) and of the architectures they use, which are not able to leverage context in effective ways. To improve on that, we propose text classification architectures that take the hierarchy of conversational utterances into account. In experiments benchmarking ours against previous models on existing and automatically generated data, we show that both data and architectural choices are very important. Our model achieves substantial performance improvements compared to baselines that are non-contextual, or contextual but agnostic of the conversation structure.
Keywords: contextual toxicity detection, data augmentation, hierarchical text classification models, natural language processing
Procedia PDF Downloads 171
774 Design, Control and Implementation of 300Wp Single Phase Photovoltaic Micro Inverter for Village Nano Grid Application
Authors: Ramesh P., Aby Joseph
Abstract:
Micro inverters provide a module-embedded solution for harvesting energy from small-scale solar photovoltaic (PV) panels. In addition to higher modularity and reliability (25 years of life), the micro inverter has inherent advantages such as the avoidance of long DC cables, elimination of module mismatch losses, minimization of partial shading effects, and improved safety and flexibility in installations. Due to the above-stated benefits, renewable energy technology based on the solar PV micro inverter is becoming more widespread in village nano grid applications, ensuring grid independence for rural communities and areas without access to electricity. While the primary objective of this paper is to discuss the problems related to rural electrification, the concept can also be extended to urban installations with grid connectivity. This work presents a comprehensive analysis of the power circuit design, control methodologies and prototyping of a 300Wₚ single phase PV micro inverter. The paper investigates two different topologies for PV micro inverters: on the one hand, a single-stage flyback/forward PV micro inverter configuration, and on the other, a double-stage configuration comprising a DC-DC converter and an H-bridge DC-AC inverter. This work covers power decoupling techniques to reduce the input filter capacitor size needed to buffer double-line (100 Hz) ripple energy and to eliminate the use of electrolytic capacitors. The propagation of the double-line oscillation reflected back to the PV module will affect the Maximum Power Point Tracking (MPPT) performance, and the grid current will be distorted. To mitigate this issue, an independent MPPT control algorithm is developed in this work to reject the propagation of this double-line ripple oscillation to the PV side, improving the MPPT performance, and to the grid side, improving current quality. 
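One common way such ripple rejection can be realized, sketched here purely for illustration, is to average the PV voltage and current over one full 100 Hz ripple period before each hill-climbing decision, so the MPPT does not chase the oscillation. The perturb-and-observe step and all sample values below are placeholders, not the controller implemented in this work:

```python
# Illustrative perturb-and-observe MPPT step with double-line ripple
# rejection by averaging over one 100 Hz period (10 ms). Step sizes and
# sample values are placeholders.
import math

RIPPLE_PERIOD_SAMPLES = 100   # e.g. 10 kHz sampling over one 10 ms period

def ripple_average(samples):
    """Mean over one full 100 Hz period cancels the double-line ripple."""
    return sum(samples) / len(samples)

def po_mppt_step(v_avg, i_avg, state, dv=0.5):
    """One perturb-and-observe iteration on ripple-free averages.

    state holds (previous_power, previous_direction); returns the new
    voltage reference and the updated state.
    """
    power = v_avg * i_avg
    prev_power, direction = state
    if power < prev_power:        # last perturbation reduced power:
        direction = -direction    # reverse the search direction
    v_ref = v_avg + direction * dv
    return v_ref, (power, direction)

# Toy usage: ripple-contaminated voltage samples around 30 V at 8 A
v_samples = [30.0 + 0.5 * math.sin(2 * math.pi * k / RIPPLE_PERIOD_SAMPLES)
             for k in range(RIPPLE_PERIOD_SAMPLES)]
v_avg = ripple_average(v_samples)
v_ref, state = po_mppt_step(v_avg, 8.0, state=(0.0, +1))
print(round(v_avg, 3), round(v_ref, 3))
```

Averaging over exactly one ripple period makes the 100 Hz component cancel by symmetry, which is the same goal the independent MPPT loop described above pursues.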
Here, the power hardware topology accepts wide input voltage variation and consists of suitably rated MOSFET switches, galvanically isolated gate drivers, high-frequency magnetics and film capacitors with a long lifespan. The digital controller hardware platform, built with the external peripheral interface, is developed using the floating-point microcontroller TMS320F2806x from Texas Instruments. The firmware governing the operation of the PV micro inverter is written in C and was developed using the Code Composer Studio Integrated Development Environment (IDE). In this work, the prototype hardware for the single phase photovoltaic micro inverter with the double-stage configuration was developed, and a comparative analysis of the above-mentioned configurations, with experimental results, will be presented.
Keywords: double line oscillation, micro inverter, MPPT, nano grid, power decoupling
Procedia PDF Downloads 136
773 Rapid Atmospheric Pressure Photoionization-Mass Spectrometry (APPI-MS) Method for the Detection of Polychlorinated Dibenzo-P-Dioxins and Dibenzofurans in Real Environmental Samples Collected within the Vicinity of Industrial Incinerators
Authors: M. Amo, A. Alvaro, A. Astudillo, R. Mc Culloch, J. C. del Castillo, M. Gómez, J. M. Martín
Abstract:
Polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) comprise a range of highly toxic compounds that may exist as particulates within the air or accumulate within water supplies, soil, or vegetation. They may be created either ubiquitously or naturally within the environment, as a product of forest fires or volcanic eruptions. It is only since the industrial revolution, however, that it has become necessary to closely monitor their generation as a byproduct of manufacturing/combustion processes, in an effort to mitigate widespread contamination events. The environmental concentrations of these toxins are expected to be extremely low; therefore, highly sensitive and accurate methods are required for their determination. Since ionization of non-polar compounds through electrospray and APCI is difficult and inefficient, we evaluate the performance of a novel low-flow Atmospheric Pressure Photoionization (APPI) source for the trace detection of various dioxins and furans using rapid Mass Spectrometry workflows. Air, soil and biota (vegetable matter) samples were collected monthly during one year from various locations within the vicinity of an industrial incinerator in Spain. Analytes were extracted using Soxhlet extraction in toluene and concentrated by rotary evaporation and nitrogen flow. Various ionization methods, such as electrospray (ES) and atmospheric pressure chemical ionization (APCI), were evaluated; however, only the low-flow APPI source was capable of providing the sensitivity required for detecting all targeted analytes. In total, 10 analytes including 2,3,7,8-tetrachlorodibenzodioxin (TCDD) were detected and characterized using the APPI-MS method. Both PCDDs and PCDFs were detected most efficiently in negative ionization mode. The most abundant ion always corresponded to the loss of a chlorine and the addition of an oxygen, yielding [M-Cl+O]- ions. 
MRM methods were created in order to provide selectivity for each analyte. No chromatographic separation was employed; however, matrix effects were determined to have a negligible impact on analyte signals. Triple quadrupole mass spectrometry was chosen for its high sensitivity and selectivity; the instrument used was a Sciex QTRAP 3200 operating in negative Multiple Reaction Monitoring (MRM) mode. Typical mass detection limits were near the 1-pg level. The APPI-MS2 approach applied to the detection of PCDD/Fs allows fast and reliable atmospheric analysis, considerably reducing operational times and costs with respect to other available technologies. In addition, the limit of detection can easily be improved with a more sensitive mass spectrometer, since the background in the analysis channel is very low. The APPI source developed by SEADM allows efficient and repeatable ionization of both polar and non-polar compounds.
Keywords: atmospheric pressure photoionization-mass spectrometry (APPI-MS), dioxin, furan, incinerator
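As a rough numerical illustration of the [M-Cl+O]- ion reported above (not the authors' actual acquisition parameters), the expected m/z shift can be computed from standard monoisotopic masses:

```python
# Monoisotopic masses (u) of the relevant elements
MASS = {"C": 12.0, "H": 1.007825, "O": 15.994915, "Cl": 34.968853}

def monoisotopic(formula):
    """Sum monoisotopic masses for a composition given as {element: count}."""
    return sum(MASS[el] * n for el, n in formula.items())

# 2,3,7,8-TCDD is C12H4Cl4O2
tcdd = {"C": 12, "H": 4, "Cl": 4, "O": 2}
m = monoisotopic(tcdd)

# [M-Cl+O]-: remove one Cl, add one O (electron mass neglected here)
fragment = m - MASS["Cl"] + MASS["O"]
```

The net shift of roughly -19 u (losing Cl, gaining O) is what distinguishes these product ions from simple deprotonation.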
Procedia PDF Downloads 209
772 Information and Communication Technology Skills of Finnish Students in Particular by Gender
Authors: Antero J. S. Kivinen, Suvi-Sadetta Kaarakainen
Abstract:
Digitalization touches every aspect of contemporary society, changing the way we live our everyday lives. Contemporary society is sometimes described as a knowledge society, in which people face an unprecedented amount of information daily. The tools to manage this information flow are ICT skills: both the technical skills and the reflective skills needed to manage incoming information. Schools are therefore under constant pressure to revise their teaching. In the latest Programme for International Student Assessment (PISA), girls outperformed boys in all Organization for Economic Co-operation and Development (OECD) member countries, and the gender gap between girls and boys is widest in Finland. This paper presents results of the Comprehensive Schools in the Digital Age project of RUSE, University of Turku. The project is part of the Finnish Government's Analysis, Assessment and Research Activities. First, this paper examines gender differences in the ICT skills of Finnish upper comprehensive school students. Second, it explores how these differences change when students proceed to upper secondary and vocational education. ICT skills are measured using a performance-based ICT skill test. Data are collected in three phases: January-March 2017 (upper comprehensive schools, n=5455), September-December 2017 (upper secondary and vocational schools, n~3500) and January-March 2018 (upper comprehensive schools). Upper comprehensive school students are aged 15-16, and upper secondary and vocational school students 16-18. The test is divided into six categories: basic operations, productivity software, social networking and communication, content creation and publishing, applications, and requirements for the ICT study programs. Students also completed a survey about their ICT usage and the study materials they use at school and at home. Cronbach's alpha was used to estimate the reliability of the ICT skill test. 
Statistical differences between genders were examined using a two-tailed independent samples t-test. Results of the first data collection from upper comprehensive schools show no statistically significant difference between genders in total ICT skill test scores (boys 10.24 and girls 10.64, the maximum being 36). Although there was no gender difference in total test scores, there are differences across the six categories mentioned above: girls score better on school-related and social networking test subjects, while boys perform better on more technically oriented subjects. Test scores on basic operations are quite low for both groups. This can perhaps partly be explained by the fact that the test was taken on computers, while the majority of students' ICT usage involves smartphones and tablets. Against this background it is important to analyze further the reasons for these differences. In the context of the ongoing digitalization of everyday life, and especially of working life, the purpose of these analyses is to find answers on how to guarantee adequate ICT skills for all students.
Keywords: basic education, digitalization, gender differences, ICT-skills, upper comprehensive education, upper secondary education, vocational education
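The two statistics named above, Cronbach's alpha for test reliability and the independent-samples t statistic for group comparison, can be sketched as follows. This is a generic illustration with made-up scores, not the project's actual data or analysis code:

```python
import math

def cronbach_alpha(scores):
    """Cronbach's alpha for a persons-by-items score matrix (list of rows)."""
    k = len(scores[0])  # number of test items
    def svar(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = [svar([row[i] for row in scores]) for i in range(k)]
    total_var = svar([sum(row) for row in scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

def t_statistic(a, b):
    """Independent-samples t statistic with pooled variance."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    sa2 = sum((x - ma) ** 2 for x in a) / (na - 1)
    sb2 = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * sa2 + (nb - 1) * sb2) / (na + nb - 2)
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))

# Hypothetical per-item scores for three test takers (three items each)
alpha = cronbach_alpha([[1, 2, 1], [2, 3, 2], [3, 4, 3]])
# Hypothetical total scores for two groups (e.g. boys vs girls)
t = t_statistic([10, 11, 10, 12], [10, 12, 11, 11])
```

The p-value for the two-tailed test would then be read off the t distribution with na + nb - 2 degrees of freedom.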
Procedia PDF Downloads 135
771 A Dynamic Curriculum as a Platform for Continuous Competence Development
Authors: Niina Jallinoja, Anu Moisio
Abstract:
Focus on adult learning is vital to overcome economic challenges as well as to respond to the demand for new competencies and sustained productivity in the digitalized world economy. Employees of all ages must be able to pursue continuous professional development to remain competitive in the labor market. According to EU policies, countries should offer more flexible opportunities for adult learners who study online and in so-called 'second chance' qualification programmes. Traditionally, adult education in Finland has comprised not only liberal adult education but also government-funded study towards Bachelor's, Master's, and Ph.D. degrees in Finnish universities and Universities of Applied Sciences (UAS). From the beginning of 2021, public funding is allocated not only to degrees but also to courses through which adult learners in Finland can achieve new competencies. Consequently, degree students (often younger in age) and adult learners will be studying in the same evening, online and blended courses. The question is thus: how do combined studies meet the different needs of degree students and adult learners? Haaga-Helia University of Applied Sciences (UAS), located in the metropolitan area of Finland, is taking up the challenge of continuous learning for adult learners. Haaga-Helia has been reforming its bachelor-level education and respective shorter courses since 2019 in the biggest project in its history. By the end of 2023, Haaga-Helia will have a flexible, modular curriculum for the bachelor's degrees in hospitality management, business administration, business information technology, journalism and sports management. Building on shared key competencies, degree students will be able to build individual study paths more flexibly, thanks to the new modular structure of the curriculum. They will be able to choose courses across all degrees and thus build their own unique competence combinations. 
All modules can also be offered as separate courses or learning paths to non-degree students, both publicly funded and as commercial services for employers. Consequently, there will be shared course implementations for degree students and adult learners with various competence requirements. The newly designed courses are piloted in parallel with the design of the curriculum at Haaga-Helia during 2020 and 2021. Semi-structured online surveys are administered to the participants of the key competence courses. The focus of the research is to understand how students in the bachelor programme and adult learners from the Open UAS perceive the learning experience in such a diverse learning group. A comparison is also made between the learning methods of on-site teaching, online implementation, blended learning and virtual self-learning courses, to understand how the pedagogy meets the learning objectives of these two different groups. The new flexible curricula and study modules are designed to fill the most important competence gaps in the Finnish labor market. The curriculum will be dynamic, constantly evolving according to future competence needs in the labor market. This type of approach requires constant dialogue between Haaga-Helia and workplaces during and after the design of the shared curriculum.
Keywords: competence development, continuous learning, curriculum, higher education
Procedia PDF Downloads 127
770 The Effect of Rheological Properties and Spun/Meltblown Fiber Characteristics on “Hotmelt Bleed through” Behavior in High Speed Textile Backsheet Lamination Process
Authors: Kinyas Aydin, Fatih Erguney, Tolga Ceper, Serap Ozay, Ipar N. Uzun, Sebnem Kemaloglu Dogan, Deniz Tunc
Abstract:
In order to meet high growth rates in the baby diaper industry worldwide, high-speed textile backsheet (TBS) lamination lines have recently been introduced to the market for nonwoven/film lamination applications. In this process, two substrates are bonded to each other via hotmelt adhesive (HMA). The nonwoven (NW) lamination system basically consists of four components: polypropylene (PP) nonwoven, polyethylene (PE) film, HMA, and the applicator system. Each component has a substantial effect on the efficiency of the continuous line and on final product properties; in this paper, however, we address only the main challenges and possible solutions. The NW is often produced by the spunbond method (SSS or SMS configuration) and has a basis weight of 10-12 gsm (g/m²). NW rolls can have a width and length of up to 2,060 mm and 30,000 linear meters, respectively. The PE film is the second component in TBS lamination, usually a 12-14 gsm blown or cast breathable film. HMA is a thermoplastic glue (mostly rubber based) that can be applied over a wide range of viscosities. The main HMA application technology in TBS lamination is the slot die, in which HMA in melt form is spread at high temperature across the whole width of the NW. The NW is then passed over chiller rolls with a certain open time depending on the line speed. HMAs are applied at levels that provide proper delamination strength, in both cross and machine directions, to the entire structure. Current TBS lamination line speed and width can be as high as 800 m/min and 2,100 mm, respectively, and lines feature automated web tension control for winders and unwinders. In order to run a continuous, trouble-free mass production campaign on fast industrial TBS lines, the rheological properties of HMAs and the micro-properties of NWs must be controlled, as they can adversely affect line efficiency and continuity. 
NW fiber orientation and fineness, as well as the spunbond/meltblown composition of the fabric at the micro level, are significant factors affecting the degree of "HMA bleed through." As a result of this problem, frequent line stops are needed to clean the glue accumulating on the chiller rolls, which significantly reduces line efficiency. HMA rheology is also important: to eliminate the bleed-through problem, one should have a good understanding of rheology-driven potential complications. The applied viscosity and temperature should therefore be optimized in accordance with the line speed, line width, NW characteristics and the required open time for a given HMA formulation. In this study, we show practical aspects of potential preventative actions to minimize the HMA bleed-through problem, which may stem from both HMA rheological properties and NW spunbond/meltblown fiber characteristics.
Keywords: breathable, hotmelt, nonwoven, textile backsheet lamination, spun/melt blown
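As a rough illustration of the viscosity/temperature optimization discussed above, HMA melt viscosity is often approximated over the application window with an Arrhenius-type relation. All parameter values below (reference viscosity, reference temperature, activation energy) are invented for the sketch and are not measurements from this study:

```python
import math

R = 8.314  # J/(mol*K), universal gas constant

def viscosity(T_c, eta_ref=5.0, T_ref_c=160.0, Ea=40e3):
    """Arrhenius-type melt viscosity estimate (Pa*s).
    eta_ref: viscosity at reference temperature T_ref_c (deg C);
    Ea: flow activation energy (J/mol). All values are illustrative."""
    T, T_ref = T_c + 273.15, T_ref_c + 273.15
    return eta_ref * math.exp(Ea / R * (1.0 / T - 1.0 / T_ref))

# Raising the application temperature lowers viscosity, which changes
# penetration into the NW and hence the bleed-through risk (at the cost
# of a longer open time).
eta_160 = viscosity(160.0)
eta_175 = viscosity(175.0)
```

In practice such a fitted curve would be used to pick an application temperature matching the line speed and required open time.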
Procedia PDF Downloads 363
769 The Burmese Exodus of 1942: Towards Evolving Policy Protocols for a Refugee Archive
Authors: Vinod Balakrishnan, Chrisalice Ela Joseph
Abstract:
The Burmese Exodus of 1942, which left more than four lakh (400,000) people as refugees and thousands dead, is one of the worst forced migrations in recorded history. Adding to the woes of the refugees is the lack of credible documentation of their lived experiences, trauma, and stories, and their erasure from recorded history. Media reports, national records, and mainstream narratives that registered the exodus provide sanitized versions which reduce the refugees to a nameless, faceless mass of travelers and obliterate their lived experiences, trauma, and suffering. This attitudinal problem compels the need to stem the insensitivity that accompanies institutional memory by making a case for a more humanistically evolved policy that puts in place protocols for the way the humanities would voice concern for the refugee. A definite step in this direction, and a far more relevant project in our times, is the building of a comprehensive refugee archive that can be a repository of refugee experiences and perspectives. The paper draws on Hannah Arendt's position on the Jewish refugee crisis, Agamben's work on statelessness and citizenship, Foucault's notions of governmentality and biopolitics, Edward Said's concepts of exile, Fanon's work on the dispossessed, and Derrida's work on 'the foreigner and hospitality' in order to conceptualize the refugee condition, which forms the theoretical framework of the paper. It also refers to existing scholarship in the field of refugee studies, such as Roger Zetter's work on the 'refugee label', Philip Marfleet's work on 'refugees and history', and Lisa Malkki's research on the anthropological discourse of the refugee and refugee studies. The paper is also informed by the work done by international organizations to address the refugee crisis. 
The emphasis is on building a strong argument for the establishment of a refugee archive, which finds only a passing and none too convincing reference in refugee studies, in order to enable a multi-dimensional understanding of the refugee crisis. The old questions cannot be dismissed as outdated, as the continuing travails of refugees in different parts of the world remind us that they remain largely unanswered. The questions are: What is the nature of a refugee archive? How is it different from existing historical and political archives? What are the implications of the refugee archive? What is its contribution to refugee studies? The paper draws on Diana Taylor's concepts of the archive and the repertoire to theorize the refugee archive as a repository that has the documentary function of the 'archive' and the 'agency' function of the repertoire. It then reads Ayya's Accounts, a memoir by Anand Pandian, in the light of Hannah Arendt's concepts of the 'refugee as vanguard' and 'storytelling as political action', to illustrate how the memoir contributes to a refugee archive that gives the refugee a place and agency in history. The paper argues for a refugee archive that has implications for the formulation of inclusive refugee policies.
Keywords: Ayya's Accounts, Burmese Exodus, policy protocol, refugee archive
Procedia PDF Downloads 141
768 Evaluation of Modern Natural Language Processing Techniques via Measuring a Company's Public Perception
Authors: Burak Oksuzoglu, Savas Yildirim, Ferhat Kutlu
Abstract:
Opinion mining (OM) is one of the natural language processing (NLP) problems of determining the polarity of opinions, mostly represented on a positive-neutral-negative axis. The data for OM are usually collected from various social media platforms. In an era where social media has considerable influence over companies' futures, it is worth understanding social media and acting accordingly. OM comes to the fore here as the scale of discussion about companies increases and it becomes unfeasible to gauge opinion at the individual level; companies therefore opt to automate this process by applying machine learning (ML) approaches to their data. For the last two decades, OM, or sentiment analysis (SA), has mainly been performed by applying ML classification algorithms such as support vector machines (SVM) and Naïve Bayes to bag-of-n-gram representations of textual data. With the advent of deep learning and its apparent success in NLP, traditional methods have become obsolete. The transfer learning paradigm, commonly used in computer vision (CV) problems, has lately begun to shape NLP approaches and language models (LMs). This gave a sudden rise to the usage of pretrained language models (PTMs), which contain language representations obtained by training on large datasets with self-supervised learning objectives. PTMs are further fine-tuned on a specialized downstream task dataset to produce efficient models for various NLP tasks such as OM, NER (Named-Entity Recognition), Question Answering (QA), and so forth. In this study, traditional and modern NLP approaches have been evaluated for OM using a sizable corpus belonging to a large private company, containing about 76,000 comments in Turkish: SVM with a bag of n-grams, and two chosen pre-trained models, the multilingual universal sentence encoder (MUSE) and bidirectional encoder representations from transformers (BERT). 
The MUSE model is a multilingual model that supports 16 languages, including Turkish, and is based on convolutional neural networks. BERT, a monolingual model in our case, is based on transformer neural networks; it uses masked language modeling and next sentence prediction tasks that allow bidirectional training of the transformers. During training, pre-processing operations such as morphological parsing, stemming, and spelling correction were not used, since experiments showed that their contribution to model performance was insignificant even though Turkish is a highly agglutinative and inflective language. The results show that deep learning methods with pre-trained models and fine-tuning achieve about an 11% improvement over SVM for OM. The BERT model achieved around 94% prediction accuracy, the MUSE model around 88%, and SVM around 83%. The multilingual MUSE model thus outperforms SVM but still performs worse than the monolingual BERT model.
Keywords: BERT, MUSE, opinion mining, pretrained language model, SVM, Turkish
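The bag-of-n-grams representation fed to the SVM baseline can be sketched as follows. This is a generic illustration of the representation, not the study's actual feature pipeline (which would typically also apply TF-IDF weighting and Turkish-specific tokenization):

```python
from collections import Counter

def ngram_bag(text, n_low=1, n_high=2):
    """Bag of n-grams over whitespace tokens (1- and 2-grams by default)."""
    tokens = text.lower().split()
    bag = Counter()
    for n in range(n_low, n_high + 1):
        for i in range(len(tokens) - n + 1):
            bag[" ".join(tokens[i:i + n])] += 1
    return bag

bag = ngram_bag("great service great price")
# Each comment becomes a sparse vector of such counts; an SVM is then
# trained on these vectors with positive/neutral/negative labels.
```

Unlike BERT's contextual embeddings, this representation treats each n-gram as an independent dimension, which is why heavy Turkish morphology tends to fragment its vocabulary.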
Procedia PDF Downloads 148
767 An Exploratory Study of Changing Organisational Practices of Third-Sector Organisations in Mandated Corporate Social Responsibility in India
Authors: Avadh Bihari
Abstract:
Corporate social responsibility (CSR) has become a global parameter to define corporates' ethical and responsible behaviour. In India it was a voluntary practice until 2013, driven by various guidelines, and has been a mandate since 2014 under the Companies Act, 2013. This has compelled corporates to redesign their CSR strategies by bringing structures, planning, accountability, and transparency into their processes, with a mandate to 'comply or explain'. Based on the author's M.Phil. dissertation, this paper presents the changes in organisational practices and institutional mechanisms of third-sector organisations (TSOs) through the theoretical frameworks of institutionalism and co-optation. The case is distinctive because India is the only country with a CSR law that mandates not only reporting but also spending. The space of CSR in India is changing rapidly and affecting multiple institutions, in the context of the changing roles of the state, market, and TSOs. Several factors, such as stringent regulation of foreign funding, mandatory CSR pushing corporates to look for NGOs, and the dependency of Indian NGOs on CSR funds, have come to the fore almost simultaneously, making this an important area of study. Further, the paper aims to address the gap in the literature on the effects of mandated CSR on the functioning of TSOs through the empirical and theoretical findings of this study. The author adopted an interpretivist position in this study to explore changes in organisational practices from the participants' experiences. Data were collected through in-depth interviews with five corporate officials, eleven officials from six TSOs, and two academicians, located in Mumbai and Delhi, India. The findings show that the legislation has institutionalised CSR, and that TSOs are co-opted in the process of implementing mandated CSR. 
Seventy percent of corporates in India implement their CSR projects through TSOs; this has affected the organisational practices of TSOs to a large extent. They are compelled to recruit an expert workforce, create new departments for monitoring and evaluation and for communications, and adopt corporate project-implementation management practices. These are attempts to institutionalise the TSOs so that they can produce the calculated results demanded by corporates. In this process, TSOs are co-opted in the struggle to secure funds and lose their autonomy. The normative, coercive, and mimetic isomorphisms of institutionalism come into play as corporates are mandated to take up CSR, thereby influencing the organisational practices of TSOs. These results suggest that corporates and TSOs require an understanding of each other's work culture in order to develop mutual respect and work towards the goal of sustainable community development. Further, TSOs need to retain their autonomy and their understanding of ground realities, without which they become an extension of the corporate funder. A successful CSR project requires engagement from the corporate beyond funding: involvement, not interference. CSR-led community development can be structured by management practices to an extent, but these cannot overshadow the knowledge and experience of TSOs.
Keywords: corporate social responsibility, institutionalism, organisational practices, third-sector organisations
Procedia PDF Downloads 116
766 The Dynamics of a Droplet Spreading on a Steel Surface
Authors: Evgeniya Orlova, Dmitriy Feoktistov, Geniy Kuznetsov
Abstract:
Spreading of a droplet over a solid substrate is a key phenomenon in the following engineering applications: thin film coating, oil extraction, inkjet printing, and spray cooling of heated surfaces. Droplet cooling systems are known to be more effective than film or rivulet cooling systems because droplets present a greater evaporation surface area than a film of the same mass and wetted area, a consequence of the curvature of the interface. The location of droplets on the cooling surface influences the heat transfer conditions: a close spacing between droplets provides intensive heat removal but risks their coalescence into a liquid film, while a long spacing leads to overheating of local areas of the cooling surface and the occurrence of thermal stresses. The location of droplets can be controlled by changing the roughness, structure and chemical composition of the surface; thus, control of spreading can be implemented. The most important characteristic of droplet spreading on solid surfaces is the dynamic contact angle, which is a function of the contact line speed or capillary number. However, there is currently no universal equation describing the relationship between these parameters. This paper presents the results of experimental studies of water droplets spreading on metal substrates with different surface roughness. The effect of droplet growth rate and surface roughness on spreading characteristics was studied at low capillary numbers. The shadow method was implemented using high-speed video cameras recording up to 10,000 frames per second, and the droplet profile was analyzed by Axisymmetric Drop Shape Analysis techniques. 
According to the change of the dynamic contact angle and the contact line speed, three sequential spreading stages were observed: a rapid increase in the dynamic contact angle; a monotonous decrease in the contact angle and contact line speed; and the formation of the equilibrium contact angle at a constant contact line. At a low droplet growth rate, the dynamic contact angle of a droplet spreading on the surfaces with the maximum roughness is found to increase throughout the spreading time. This is because the friction force on such surfaces is significantly greater than the inertia force, and the contact line is pinned on the microasperities of the relief. At a high droplet growth rate, the contact angle decreases during the second stage even on the surfaces with the maximum roughness: in this case the liquid does not fill the microcavities, and the droplet moves over an "air cushion", i.e. the interface is a liquid/gas/solid system. At such growth rates pulsation of the liquid flow was also detected, and the droplet oscillates during spreading. The results thus allow us to conclude that spreading can be controlled by varying the surface roughness and the droplet growth rate. The findings may also be used for analyzing heat transfer in rivulet and drop cooling systems of high-energy equipment.
Keywords: contact line speed, droplet growth rate, dynamic contact angle, shadow system, spreading
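The low-capillary-number regime mentioned above can be checked directly from the definition Ca = μV/σ. The property values below are typical textbook values for water at room temperature and the contact line speed is chosen only for illustration:

```python
def capillary_number(mu, v, sigma):
    """Ca = mu * v / sigma: ratio of viscous to surface tension forces
    at the moving contact line (mu in Pa*s, v in m/s, sigma in N/m)."""
    return mu * v / sigma

# Water at ~20 deg C with a contact line speed of 10 mm/s
Ca = capillary_number(mu=1.0e-3, v=0.01, sigma=0.072)
# Ca << 1 indicates spreading dominated by capillarity rather than
# viscous forces, the regime studied in this work.
```

At such small Ca the dynamic contact angle is governed chiefly by surface effects (roughness, pinning), consistent with the observations above.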
Procedia PDF Downloads 334
765 The Effect of Post Spinal Hypotension on Cerebral Oxygenation Using Near-Infrared Spectroscopy and Neonatal Outcomes in Full Term Parturient Undergoing Lower Segment Caesarean Section: A Prospective Observational Study
Authors: Shailendra Kumar, Lokesh Kashyap, Puneet Khanna, Nishant Patel, Rakesh Kumar, Arshad Ayub, Kelika Prakash, Yudhyavir Singh, Krithikabrindha V.
Abstract:
Introduction: Spinal anesthesia is considered the standard anesthesia technique for caesarean delivery. The incidence of spinal hypotension during caesarean delivery is 70-80%. Spinal hypotension may cause cerebral hypoperfusion in the mother, but cerebral autoregulatory mechanisms normally prevent cerebral hypoxia: cerebral blood flow remains constant over a Cerebral Perfusion Pressure (CPP) range of 50-150 mmHg. Near-infrared spectroscopy (NIRS) is a non-invasive technology that detects Cerebral Desaturation Events (CDEs) immediately, in contrast to other conventional intraoperative monitoring techniques. Objective: The primary aim of the study is to correlate the change in cerebral oxygen saturation measured by NIRS with the fall in mean blood pressure after spinal anaesthesia, and to determine the effects of spinal hypotension on neonatal APGAR score, neonatal acid-base variations, and the presence of Postoperative Delirium (POD). Methodology: NIRS sensors were attached to the forehead of all patients, and baseline readings of cerebral oxygenation over the right and left frontal regions, together with mean blood pressure, were noted. The subarachnoid block was given with hyperbaric 0.5% bupivacaine plus fentanyl, the dose being determined by the individual anaesthesiologist. Co-loading with IV crystalloid solutions was given. Blood pressure and cerebral saturation were recorded every minute for 30 minutes. Hypotension was defined as a fall in MAP of more than 20% from baseline values. Patients developing hypotension were treated with an IV bolus of phenylephrine/ephedrine. Umbilical cord blood samples were taken for blood gas analysis, and the neonatal APGAR score was noted by a neonatologist. Study design: A prospective observational study conducted in a population of thirty ASA 2 and 3 parturients scheduled for lower segment caesarean section (LSCS). 
Results: The mean fall in regional cerebral saturation was 28.48 ± 14.7%, against a mean fall in blood pressure of 38.92 ± 8.44 mmHg. The correlation coefficient between the fall in saturation and the fall in mean blood pressure after the subarachnoid block was 0.057 (p = 0.7). The fall in regional cerebral saturation occurred 2 ± 1 min before the fall in mean blood pressure. Twenty-nine of the thirty patients required vasopressors during hypotension; the first vasopressor dose was required at 6.02 ± 2 min after the block. The mean APGAR scores were 7.86 and 9.74 at 1 and 5 min after birth, respectively, with a mean umbilical arterial pH of 7.3 ± 0.1. According to the DRS-98 (Delirium Rating Scale), the mean delirium rating scores on postoperative days 1 and 2 were 0.1 and 0.7, respectively. Discussion: There was a fall in regional cerebral oxygen saturation, which began before the significant fall in mean blood pressure, but the correlation was not statistically significant. The maximal fall in blood pressure requiring vasopressors occurs within 10 min of the subarachnoid block. Neonatal APGAR scores and acid-base variations were in the normal range despite maternal hypotension, and there was no incidence of postoperative delirium in patients with post-spinal hypotension.
Keywords: cerebral oxygenation, LSCS, NIRS, spinal hypotension
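The correlation reported above (r = 0.057, p = 0.7) is a Pearson coefficient between the per-patient falls in cerebral saturation and in MAP. A minimal sketch with made-up paired values, not the study's data:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical falls in rSO2 (%) and in MAP (mmHg) for five patients
r = pearson_r([20.0, 35.0, 28.0, 15.0, 40.0],
              [32.0, 44.0, 36.0, 28.0, 47.0])
```

An r near zero, as reported in the study, indicates no linear relationship between the two falls despite both occurring after the block.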
Procedia PDF Downloads 69
764 Supply Chain Improvement of the Halal Goat Industry in the Autonomous Region in Muslim Mindanao
Authors: Josephine R. Migalbin
Abstract:
Halal is an Arabic word meaning "lawful" or "permitted". When it comes to food and consumables, halal is the dietary standard of Muslims. The Autonomous Region in Muslim Mindanao (ARMM) has a comparative advantage in the halal industry because it is the only Muslim region in the Philippines and the natural starting point for the establishment of a halal industry in the country. The region has identified goat production not only for domestic consumption but also for the export market; goat production is one of its strengths due to cultural compatibility, and there is high demand for goats during Ramadhan and Eid ul-Adha. The study aimed to provide an overview of the ARMM halal goat industry, to map out the specific supply chain of halal goat, and to analyze the performance of the halal goat supply chain in terms of efficiency, flexibility, and overall responsiveness. It also aimed to identify areas for improvement in the supply chain, such as behavioural, institutional, and process aspects, in order to provide recommendations for more efficient and effective production and marketing of halal goats, subsequently improving the plight of the actors in the supply chain. Generally, goat raising is characterized by backyard production (92.02%). Four interrelated factors significantly affect goat production: breeding prolificacy, prevalence of diseases, feed abundance and pre-weaning mortality rate. The institutional buyers are mostly traders, restaurants/eateries, supermarkets, and meat shops, among others. The municipalities of Midsayap and Pikit (in a neighbouring region) and Parang, among other ARMM municipalities, are the major sources of goats. In addition to these major supply centers, Siquijor, an island province in the Visayas, is becoming a key source of goats. Goats are usually gathered by traders/middlemen and brought to the public markets. 
Meat vendors purchase goats directly from raisers; the animals are slaughtered and sold fresh in wet markets. Demand was observed to be increasing at 2% per year, and supply is not enough to meet it. The farm gate price is USD 2.04 to 2.11 per kg liveweight. Industry information is shared by three key participants: raisers, traders and buyers. All respondents reported that information flows through personal networks built upon past experience and that there is no full disclosure of information among the key participants in the chain. Information flow in the industry is fragmented, such that no total industry picture exists. In the last five years, numerous local and foreign agencies have undertaken several initiatives for the development of the halal goat industry in ARMM. The major issues include productivity, which is the greatest challenge, difficulties in accessing technical support channels, and lack of market linkage and consolidation. To address the various issues and concerns of the industry players, there is a need to intensify appropriate technology transfer through extension activities, improve marketing channels by grouping producers, strengthen veterinary services, and provide capital windows to improve facilities and reduce logistics and transaction costs across the entire supply chain.
Keywords: autonomous region in Muslim Mindanao, halal, halal goat industry, supply chain improvement
763 The Display of Age-Period/Age-Cohort Mortality Trends Using 1-Year Intervals Reveals Period and Cohort Effects Coincident with Major Influenza A Events
Authors: Maria Ines Azambuja
Abstract:
Graphic displays of Age-Period-Cohort (APC) mortality trends generally use data aggregated within 5- or 10-year intervals. Technology now allows much larger amounts of data to be processed, and displaying occurrences by 1-year intervals is a logical first step toward higher-quality landscapes of temporal variation. Method: 1) comparison of UK mortality trends plotted by 10-, 5-, and 1-year intervals; 2) comparison of UK and US mortality trends (period x age and cohort x age) displayed by 1-year intervals. Source: mortality data (period, 1x1, males, 1933-2012) were downloaded from the Human Mortality Database (HMD) into Excel files, where period x age and cohort x age graphics were produced. Age-specific trends were transformed from calendar years to birth-cohort years (cohort = period - age), instead of using the cohort 1x1 data available at the HMD, to facilitate the comparison of age-specific trends when looking across calendar years and birth cohorts. Yearly live births (males, 1933 to 2012, UK) were downloaded from the Human Fertility Database (HFD). Influenza references are from the literature. Results: 1) The use of 1-year intervals unveiled previously unsuspected period, cohort, and interacting period x cohort effects upon all-cause mortality. 2) The UK and US figures showed variations associated with particular calendar years (1936, 1940, 1951, 1957-68, 1972) and, most surprisingly, with particular birth cohorts (1889-90 in the US, and 1900, 1918-19, 1940-41, and 1946-47 in both countries). The figures also showed ups and downs in age-specific trends initiated at particular birth cohorts (1900, 1918-19, and 1947-48) or particular calendar years (1968, 1972, and 1977-78 in the US), at times restricted to just a range of ages (cohort x period interacting effects). Importantly, most of the identified "scars" (period and cohort) correlate with the recorded occurrences of Influenza A epidemics since the late 19th century.
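The cohort transformation used above (cohort = period - age) can be sketched as follows. This is a minimal illustration assuming records shaped like the HMD "period 1x1" files, one (year, age, rate) tuple per row; the rates are hypothetical, not real HMD values.

```python
# Re-index period x age mortality records by birth cohort (cohort = period - age),
# as done in the abstract above. Sample rates are illustrative, not real HMD data.

records = [
    # (calendar year, age, all-cause mortality rate per 1,000 -- hypothetical)
    (1918, 0, 110.0),
    (1918, 18, 9.5),
    (1936, 18, 4.2),
    (1936, 36, 6.1),
]

def to_cohort(records):
    """Map each (period, age, rate) record to (birth cohort, age, rate)."""
    return [(year - age, age, rate) for (year, age, rate) in records]

cohort_records = to_cohort(records)
for cohort, age, rate in cohort_records:
    print(cohort, age, rate)
# The 1918 age-0 and 1936 age-18 rows both map to the 1918 birth cohort,
# so a cohort x age plot lines them up on the same curve.
```

With 1-year intervals, this re-indexing lets the same age-specific series be read either across calendar years (period effects) or along the diagonals of birth cohorts (cohort effects).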
Conclusions: The use of 1-year intervals to describe APC mortality trends both increases the amount of information available, thus enhancing the opportunities for pattern recognition, and improves our ability to interpret those patterns by describing trends across smaller intervals of time (period or birth cohort). The US and UK mortality landscapes share many, but not all, of the 'scars' and distortions suggested here to be associated with influenza epidemics. Different-sized effects of wars are evident, both in mortality and in fertility. It would also be realistic to suppose that the preponderant influenza A viruses circulating in the UK and the US at the beginning of the 20th century were different, and that the difference had long-term intergenerational consequences. Compared with the live-births trend (UK data), birth-cohort scars clearly depend on birth-cohort sizes relative to neighboring ones, which, if causally associated with influenza, would result from influenza-related fetal outcomes/selection. Fetal selection could introduce continuing modifications to population patterns of immune-inflammatory phenotypes that might give rise to 'epidemic constitutions' favoring the occurrence of particular diseases. Comparative analysis of mortality landscapes may help to set straight the record of past circulation of influenza viruses and to document associations between influenza recycling and fertility changes.
Keywords: age-period-cohort trends, epidemic constitution, fertility, influenza, mortality