Search results for: herbal products
298 Mobulid Ray Fishery Characteristics and Trends in East Java to Inform Management Decisions
Authors: Muhammad G. Salim, Betty J.L. Laglbauer, Sila K. Sari, Irianes C. Gozali, Fahmi, Didik Rudianto, Selvia Oktaviyani, Isabel Ender
Abstract:
Muncar, East Java, is one of the largest artisanal fisheries in Indonesia. Sharks and rays are caught both as target species and as bycatch, for local meat consumption, with some derived products exported. Of the seven mobulid ray species occurring in Indonesia, five have been recorded as retained bycatch at Muncar fishing port: the spinetail devil ray (Mobula mobular), the bentfin devil ray (Mobula thurstoni), the sicklefin devil ray (Mobula tarapacana), the oceanic manta ray (Mobula birostris) and the reef manta ray (Mobula alfredi). Both manta ray species are listed as Vulnerable by the International Union for the Conservation of Nature and are protected in Indonesia despite still being captured as bycatch, while all three devil ray species mentioned here are listed as Endangered and do not currently benefit from any protection in Indonesian waters. Mobulid landings in East Java originate primarily from small-scale drift gillnets, but they also occur occasionally on longlines and in purse-seines operating off the coast of East Java and in fishing grounds located as far away as the Makassar and Sumba Straits. Landing trends from 2015-2019 (non-continuous surveys) revealed that the highest abundance of mobulid rays at Muncar fishing port occurs during the upwelling season from June-October. During El Niño or above-average temperature years, this may extend until November (as in 2015 and 2019). The strong seasonal upwelling along the East Java coast is linked to higher abundance of the zooplankton (inferred from sea-surface chlorophyll-a concentrations) on which mobulids forage, as well as of the teleost fishes constituting the primary target of gillnet fisheries in the Bali Strait. Mobulid ray landings in Muncar were dominated by Mobula mobular, followed by M. thurstoni, M. tarapacana, M. birostris and M. alfredi; however, the catch varied across years and seasons. A majority of immature individuals were recorded in M. mobular and M. thurstoni, and slight decreases in landings, despite no known changes in fishing effort, were observed across the upwelling seasons of 2015-2018 for M. mobular. While all mobulids are listed on Appendix II of the Convention on International Trade in Endangered Species, which regulates international trade in the gill plates sought after in the Chinese medicine trade, local and national-level management measures are required to sustain mobulid populations. The findings presented here provide important baseline data, from which potential management approaches can be identified.
Keywords: devil ray, mobulid, manta ray, Indonesia
297 Upgrading of Bio-Oil by Bio-Pd Catalyst
Authors: Sam Derakhshan Deilami, Iain N. Kings, Lynne E. Macaskie, Brajendra K. Sharma, Anthony V. Bridgwater, Joseph Wood
Abstract:
This paper reports the application of a bacteria-supported palladium catalyst to the hydrodeoxygenation (HDO) of pyrolysis bio-oil, towards producing an upgraded transport fuel. Biofuels are key to the timely replacement of fossil fuels in order to mitigate the emissions of greenhouse gases and the depletion of non-renewable resources. The process is an essential step in the upgrading of bio-oils derived from industrial by-products such as agricultural and forestry wastes, since the crude oil from pyrolysis contains a large amount of oxygen that must be removed in order to create a fuel resembling fossil-derived hydrocarbons. Manufacturing the bacteria-supported catalyst is a means of utilizing recycled metals and second-life bacteria, and the metal can also be easily recovered from the spent catalyst after use. Comparisons are made between bio-Pd and a conventional activated-carbon-supported Pd/C catalyst. Bio-oil was produced by fast pyrolysis of beechwood at 500 °C with a residence time below 2 seconds and was provided by Aston University. 5 wt% bio-Pd/C was prepared under reducing conditions by exposing cells of E. coli MC4100 to a solution of sodium tetrachloropalladate (Na2PdCl4), followed by rinsing, drying and grinding to form a powder. Pd/C was procured from Sigma-Aldrich. The HDO experiments were carried out in a 100 mL Parr batch autoclave using ~20 g bio-crude oil and 0.6 g bio-Pd/C catalyst. Experimental variables investigated for optimization included temperature (160-350 °C) and reaction time (up to 5 h) at a hydrogen pressure of 100 bar. Most of the experiments resulted in an aqueous phase (~40%) and an organic phase (~50-60%), as well as a gas phase (<5%) and coke (<2%). Study of the effect of temperature and time on the process showed that the degree of deoxygenation increased (from ~20% up to 60%) at higher temperatures, in the region of 350 °C, and longer residence times, up to 5 h. However, the minimum viscosity (~0.035 Pa·s) occurred at 250 °C and 3 h residence time, indicating that some polymerization of the oil product occurs at the higher temperatures. Bio-Pd showed a degree of deoxygenation (~20%) similar to Pd/C at the lower temperature of 160 °C, but it did not rise as steeply with temperature. More coke was formed over bio-Pd/C than Pd/C at temperatures above 250 °C, suggesting that bio-Pd/C may be more susceptible to coke formation than Pd/C. Reactions occurring during bio-oil upgrading include catalytic cracking, decarbonylation, decarboxylation, hydrocracking, hydrodeoxygenation and hydrogenation. In conclusion, it was shown that bio-Pd/C displays an acceptable rate of HDO, which increases with residence time and temperature. However, some undesirable reactions also occur, leading to a deleterious increase in viscosity at higher temperatures. Comparisons are also drawn with earlier work on the HDO of Chlorella-derived bio-oil manufactured from micro-algae via hydrothermal liquefaction. Future work will analyze the kinetics of the reaction and investigate the effect of bimetallic catalysts.
Keywords: bio-oil, catalyst, palladium, upgrading
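The degree of deoxygenation quoted above is not defined in the abstract; for orientation, it is commonly calculated from the elemental (oxygen) analysis of the feed and upgraded oils, for example as

\[ \mathrm{DOD}\,(\%) = \left(1 - \frac{\text{wt\% O in upgraded oil}}{\text{wt\% O in feed bio-oil}}\right) \times 100 \]

Whether the authors used this exact definition is an assumption here.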
296 Optimization of Ultrasound-Assisted Extraction of Oil from Spent Coffee Grounds Using a Central Composite Rotatable Design
Authors: Malek Miladi, Miguel Vegara, Maria Perez-Infantes, Khaled Mohamed Ramadan, Antonio Ruiz-Canales, Damaris Nunez-Gomez
Abstract:
Coffee is the second most consumed commodity worldwide, yet it also generates colossal amounts of waste. Proper management of coffee waste has been proposed by converting it into products with higher added value, to achieve sustainability of the economic and ecological footprint and to protect the environment. Based on this, studies looking at the recovery of coffee waste have become more relevant in recent decades. Spent coffee grounds (SCGs) resulting from brewing coffee represent the major waste produced by the coffee industry. The fact that SCGs have no economic value, are abundant in nature and industry, do not compete with agriculture and, especially, have a high oil content (between 7-15% of their total dry matter weight, depending on the coffee variety, Arabica or Robusta) encourages their use as a sustainable feedstock for bio-oil production. Bio-oil extraction is a crucial step towards biodiesel production by the transesterification process. However, conventional methods used for oil extraction are not recommended due to their high consumption of energy and time and their reliance on toxic volatile organic solvents. Thus, finding a sustainable, economical, and efficient extraction technique is crucial to scale up the process and to ensure a more environment-friendly production. Under this perspective, the aim of this work was a statistical study to identify an efficient strategy for oil extraction by n-hexane using indirect sonication. The coffee waste used in this work was a mix of Arabica and Robusta. The effects of temperature, sonication time, and solvent-to-solid ratio on the oil yield were statistically investigated by a 2³ Central Composite Rotatable Design (CCRD). The results were analyzed using STATISTICA 7 StatSoft software. The CCRD showed the significance of all the variables tested (P < 0.05) on the process output. The validation of the model by analysis of variance (ANOVA) showed good adjustment for the results obtained for a 95% confidence interval, and the graph of predicted vs. experimental values confirmed the satisfactory correlation between the model and the results. Besides, the identification of the optimum experimental conditions was based on the study of the response surface graphs (2-D and 3-D) and the critical statistical values. Based on the CCRD results, 29 ºC, 56.6 min, and a solvent-to-solid ratio of 16 were the best experimental conditions defined statistically for coffee waste oil extraction using n-hexane as solvent. Under these conditions, the oil yield was >9% in all cases. The results confirmed the efficiency of using an ultrasound bath for extracting oil as a more economical, green, and efficient way compared to the Soxhlet method.
Keywords: coffee waste, optimization, oil yield, statistical planning
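As an illustration of the kind of response-surface fit described above (the original analysis was done in STATISTICA 7), a second-order model for oil yield as a function of the three factors could be fitted in Python as sketched below. The design is written in coded units, and all response values are hypothetical placeholders, not the study's measurements.

```python
# Sketch: fitting a second-order response-surface model to 2^3 CCRD data
# in coded units. Response values are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

alpha = 1.68  # rotatable axial distance for three factors
design = (
    [(a, b, c) for a in (-1, 1) for b in (-1, 1) for c in (-1, 1)]   # factorial points
    + [(-alpha, 0, 0), (alpha, 0, 0), (0, -alpha, 0), (0, alpha, 0),
       (0, 0, -alpha), (0, 0, alpha)]                                # axial points
    + [(0, 0, 0)] * 3                                                # center points
)
df = pd.DataFrame(design, columns=["temp", "time", "ratio"])
df["oil_yield"] = [7.0, 7.6, 8.1, 8.5, 7.4, 8.0, 8.8, 9.2,
                   7.2, 8.3, 7.5, 8.9, 7.8, 8.6,
                   9.5, 9.6, 9.4]  # wt%, placeholder values

# Full quadratic model: main effects, two-way interactions and squared terms.
model = smf.ols(
    "oil_yield ~ (temp + time + ratio)**2"
    " + I(temp**2) + I(time**2) + I(ratio**2)",
    data=df,
).fit()
print(model.summary())  # term significance (p < 0.05) and model adequacy
```

The fitted surface can then be inspected (2-D and 3-D plots) to locate the statistical optimum, analogous to the 29 ºC, 56.6 min, ratio 16 optimum reported above.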
295 Sustainable Production of Pharmaceutical Compounds Using Plant Cell Culture
Authors: David A. Ullisch, Yantree D. Sankar-Thomas, Stefan Wilke, Thomas Selge, Matthias Pump, Thomas Leibold, Kai Schütte, Gilbert Gorr
Abstract:
Plants have been considered as a source of natural substances for ages. Secondary metabolites from plants are utilized especially in medical applications but are of growing interest as cosmetic ingredients and in the field of nutraceuticals. However, the supply of compounds from natural harvest can be limited by numerous factors, e.g., endangered species, low product content, climate impacts and cost-intensive extraction. Especially in the pharmaceutical industry, the ability to provide sufficient amounts of product at high quality is an additional requirement which in some cases is difficult to fulfill by plant harvest. Whereas in many cases the complexity of secondary metabolites precludes chemical synthesis on a reasonable commercial basis, plant cells contain the biosynthetic pathway (a natural chemical factory) for a given compound. A promising approach for the sustainable production of natural products can be plant cell fermentation (PCF®). A thoroughly executed development process comprises the identification of a high-producing cell line, optimization of growth and production conditions, the development of a robust and reliable production process, and its scale-up. In order to ensure persistent, long-lasting production, the development of cryopreservation protocols and the generation of working cell banks is another important requirement to be considered. So far, the most prominent example of a PCF® process is the production of the anticancer compound paclitaxel. To demonstrate the power of plant suspension cultures, we present three case studies here: 1) For more than 17 years, Phyton has produced paclitaxel at industrial scale, i.e., up to 75,000 L in scale. With 60 g/kg dw, this fully controlled process, which is operated according to GMP, results in outstandingly high yields. 2) Thapsigargin is another anticancer compound, which is currently isolated from seeds of Thapsia garganica. Thapsigargin is a powerful cytotoxin (a SERCA inhibitor) and the precursor for the derivative ADT, the key ingredient of the investigational prodrug Mipsagargin (G-202), which is in several clinical trials. Phyton successfully generated plant cell lines capable of expressing this compound. Here we present data on the screening for high-producing cell lines. 3) The third case study covers ingenol-3-mebutate. This compound is found at very low concentrations in the milky sap of intact plants of the Euphorbiaceae family. Ingenol-3-mebutate is used in Picato®, which is approved against actinic keratosis. The generation of cell lines expressing significant amounts of ingenol-3-mebutate is another example underlining the strength of plant cell culture. The authors gratefully acknowledge Inspyr Therapeutics for funding.
Keywords: ingenol-3-mebutate, plant cell culture, sustainability, thapsigargin
294 Sorption Properties of Hemp Cellulosic Byproducts for Petroleum Spills and Water
Authors: M. Soleimani, D. Cree, C. Chafe, L. Bates
Abstract:
The accidental release of petroleum products into the environment can have harmful consequences for our ecosystem. Different techniques such as mechanical separation, membrane filtration, incineration, treatment processes using enzymes and dispersants, bioremediation, and sorption processes using sorbents have been applied to oil spill remediation. Most of the techniques investigated are too costly or do not have high enough efficiency. This study was conducted to determine the sorption performance of hemp byproducts (cellulosic materials) in terms of sorption capacity and kinetics for hydrophobic and hydrophilic fluids. In this study, heavy oil, light oil, diesel fuel, and water/water vapor were used as sorbate fluids. Hemp stalks in different forms, including loose material (hammer-milled (HM) and shredded (Sh), with low bulk densities) and densified forms (pellets (P) and crumbled pellets (CP), with high bulk densities), were used as sorbents. The sorption/retention tests were conducted according to the ASTM 726 standard. For quick-response applications of the sorbents, the sorption tests were conducted for 15 min, and to determine the ideal sorption capacity of the materials, the tests were carried out for 24 h. During the test, the sorbent material was exposed to the fluid by immersion, followed by filtration through a stainless-steel wire screen. Water vapor adsorption was carried out in a controlled environment chamber with the capability of controlling relative humidity (RH) and temperature. To determine the kinetics of sorption for each fluid and sorbent, the retention capacity was also determined at intervals for up to 24 h. To analyze the kinetics of sorption, pseudo-first-order, pseudo-second-order and intraparticle diffusion models were employed, with the objective of minimal deviation of the experimental results from the models. The results indicated that the HM and Sh materials had the highest sorption capacity for the hydrophobic fluids, approximately 6 times that of the P and CP materials. For example, average retention values of heavy oil on HM and Sh were 560% and 470% of the mass of the sorbents, respectively, whereas the retention of heavy oil on P and CP was up to 85% of the mass of the sorbents. This lower sorption capacity of P and CP can be due to the smaller exposed surface area of these materials and the compacted voids or capillary tubes in their structures. For water uptake applications, HM and Sh resulted in at least 40% higher sorption capacity compared to P and CP. On average, the performance of sorbate uptake from high to low was as follows: water, heavy oil, light oil, diesel fuel. The kinetic analysis indicated that the pseudo-second-order model can describe the sorption of the oils and diesel better than the other models. However, the kinetics of water absorption was better described by the pseudo-first-order model. Acetylation of the HM material could improve its oil and diesel sorption to some extent. Water vapor adsorption of hemp fiber was a function of temperature and RH, and among the models studied, the modified Oswin model was the best at describing this phenomenon.
Keywords: environment, fiber, petroleum, sorption
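To illustrate the kinetic fitting mentioned above, the pseudo-second-order model can be fitted in its nonlinear form q(t) = qe²·k₂·t/(1 + qe·k₂·t) with a standard least-squares routine; the sketch below uses hypothetical data points, not the study's measurements.

```python
# Sketch: fitting the pseudo-second-order kinetic model to retention-vs-time data.
# The data points are hypothetical placeholders, not measurements from the study.
import numpy as np
from scipy.optimize import curve_fit

def pseudo_second_order(t, qe, k2):
    """Sorbed amount at time t for equilibrium capacity qe and rate constant k2."""
    return (qe**2 * k2 * t) / (1.0 + qe * k2 * t)

t_h = np.array([0.25, 0.5, 1, 2, 4, 8, 24])          # contact time, h
q_t = np.array([2.1, 3.0, 3.9, 4.6, 5.1, 5.4, 5.6])  # g fluid per g sorbent (placeholder)

(qe_fit, k2_fit), _ = curve_fit(pseudo_second_order, t_h, q_t, p0=[5.0, 0.5])
residuals = q_t - pseudo_second_order(t_h, qe_fit, k2_fit)
r_squared = 1 - np.sum(residuals**2) / np.sum((q_t - q_t.mean())**2)
print(f"qe = {qe_fit:.2f} g/g, k2 = {k2_fit:.3f} g/(g*h), R^2 = {r_squared:.3f}")
```

The pseudo-first-order and intraparticle diffusion models named in the abstract can be fitted in the same way by swapping the model function.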
293 Synthesis of Belite Cements at Low Temperature from Silica Fume and Natural Commercial Zeolite
Authors: Tatiana L. Avalos-Rendon, Elias A. Pasten Chelala, Carlos J. Mendoza EScobedo, Ignacio A. Figueroa, Victor H. Lara, Luis M. Palacios-Romero
Abstract:
The cement industry is facing rising energy costs, requirements for the reduction of CO₂ emissions, and an insufficient supply of good-quality raw materials. In view of these environmental issues, the cement industry must change its consumption patterns and reduce CO₂ emissions to the atmosphere. This can be achieved by generating environmental consciousness, which encourages the use of industrial by-products and/or recycling for the production of cement, as well as alternative, environment-friendly methods of synthesis which reduce CO₂. Calcination is the conventional method for obtaining Portland cement clinker. This method consists of grinding and mixing raw materials (limestone, clay, etc.) in an adequate dosage. The resulting mix has a clinkerization temperature of 1450 °C, at which the formation of the main component, alite (Ca₃SiO₅, C₃S), occurs. Considering that the energy required to produce C₃S is 1810 kJ kg⁻¹, the calcination method for obtaining clinker presents two major disadvantages: long thermal treatment and elevated synthesis temperatures, both of which cause high emissions of carbon dioxide (CO₂) to the atmosphere. Belite Portland clinker is characterized by a low content of calcium oxide (CaO), causing the presence of alite to diminish and favoring the formation of belite (β-Ca₂SiO₄, C₂S), so the production of this clinker requires a reduced energy consumption (1350 kJ kg⁻¹), releasing less CO₂ to the atmosphere. Conventionally, β-Ca₂SiO₄ is synthesized by the calcination of calcium carbonate (CaCO₃) and silicon dioxide (SiO₂) through a solid-state reaction at temperatures greater than 1300 °C. The resulting belite shows low hydraulic reactivity. Therefore, this study concerns a new, simple, modified combustion method for the synthesis of two belite cements at low temperature (1000 °C). Silica fume, a by-product of the metallurgical industry, and commercial natural zeolite were utilized as raw materials. These are considered low-cost materials and were utilized with no additional purification process. The properties of the belite cements were characterized by XRD, SEM, EDS and BET techniques. The hydration capacity of the belite cements was calculated, while the mechanical strength was determined on ordinary Portland cement (PC) specimens with a 10% partial replacement by the belite cements obtained. Results showed that the belite cements presented relatively high surface areas and, at early ages, mechanical strengths similar to those of alite cement and comparable to the strengths of belite cements obtained by other synthesis methods. The cements obtained in this work present good hydraulic reactivity.
Keywords: belite, silica fume, zeolite, hydraulic reactivity
292 Removal of Problematic Organic Compounds from Water and Wastewater Using the Arvia™ Process
Authors: Akmez Nabeerasool, Michaelis Massaros, Nigel Brown, David Sanderson, David Parocki, Charlotte Thompson, Mike Lodge, Mikael Khan
Abstract:
The provision of clean and safe drinking water is of paramount importance and is a basic human need. Water scarcity, coupled with the tightening of regulations and the inability of current treatment technologies to deal with emerging contaminants and pharmaceuticals and personal care products, means that alternative treatment technologies that are viable and cost-effective are required in order to meet demand and regulations for clean water supplies. Logistically, the application of water treatment in rural areas presents unique challenges due to the decentralisation of abstraction points arising from low population density and the resultant lack of infrastructure, as well as the need to treat water at the site of use. This makes it costly to centralise treatment facilities and hence provide potable water directly to the consumer. Furthermore, across the UK there are segments of the population that rely on a private water supply, which means that the owner or user(s) of these supplies, which can serve from one household to hundreds, are responsible for their maintenance. The treatment of these private water supplies falls on the private owners, and it is imperative that a chemical-free technological solution that can operate unattended and does not produce any waste is employed. Arvia's patented advanced oxidation technology combines the advantages of adsorption and electrochemical regeneration within a single unit: the Organics Destruction Cell (ODC). The ODC uniquely uses a combination of adsorption and electrochemical regeneration to destroy organics. Key to this innovative process is an alternative approach to adsorption. The conventional approach is to use high-capacity adsorbents (e.g., activated carbons with high porosities and surface areas) that are excellent adsorbents but require complex and costly regeneration. Arvia's technology uses a patent-protected adsorbent, Nyex™, which is a non-porous, highly conductive, graphite-based adsorbent material that enables it to act both as the adsorbent and as a 3D electrode. Adsorbed organics are oxidised and the surface of the Nyex™ is regenerated in situ for further adsorption without interruption or replacement. Treated water flows from the bottom of the cell, where it can either be reused or safely discharged. Arvia™ Technology Ltd. has trialled the application of its tertiary water treatment technology in treating reservoir water abstracted near Glasgow, Scotland, with promising results. Several other pilot plants have also been successfully deployed at various locations in the UK, showing the suitability and effectiveness of the technology in removing recalcitrant organics (including pharmaceuticals, steroids and hormones), COD and colour.
Keywords: Arvia™ process, adsorption, water treatment, electrochemical oxidation
291 Entrepreneurial Dynamism and Socio-Cultural Context
Authors: Shailaja Thakur
Abstract:
Managerial literature abounds with discussions on business strategies, success stories as well as cases of failure, which provide an indication of the parameters that should be considered in gauging the dynamism of an entrepreneur. Neoclassical economics has reduced entrepreneurship to a mere factor of production, driven solely by the profit motive, thus stripping the entrepreneur of all creativity and restricting his decision-making to mechanical calculations. His 'dynamism' is gauged simply by the amount of profits he earns, marginalizing any discussion on the means that he employs to attain this objective. With theoretical backing, we have developed an Index of Entrepreneurial Dynamism (IED), giving weights to the different moves that the entrepreneur makes during his business journey. Strategies such as changes in product lines, markets and technology are gauged as very important (weight of 4), while adaptations in terms of technology and raw materials used, and upgrades in skill set, are given a slightly lower weight of 3. Use of formal market analysis and diversification into related products are considered moderately important (weight of 2), and being a first-generation entrepreneur, employing managers and having plans to diversify are taken to be only slightly important business strategies (weight of 1). The maximum that an entrepreneur can score on this index is 53. A semi-structured questionnaire is employed to solicit responses from the entrepreneurs on the various strategies that they have employed during the course of their business. Binary as well as graded responses are obtained, weighted and summed up to give the IED, as sketched below. This index was tested on about 150 tribal entrepreneurs in Mizoram, a state of India, and was found to be highly effective in gauging their dynamism. The index has universal applicability but is devoid of the socio-cultural context, which is very central to the success and performance of entrepreneurs. We hypothesize that a society that respects risk-taking, takes failures in its stride, glorifies entrepreneurial role models, and promotes merit and achievement is one that has a socio-cultural environment conducive to entrepreneurship. To obtain an idea of social acceptability, we put questions related to the social acceptability of business to another set of respondents from different walks of life: bureaucracy, academia, and other professional fields. A similar weighting technique is employed, and an index is generated. This index is used for discounting the IED of the respondent entrepreneurs from that region/society. The methodology is being tested on samples of entrepreneurs from two very different socio-cultural milieus, a tribal society and a 'mainstream' society, with the hypothesis that the entrepreneurs in the tribal milieu might be showing a higher level of dynamism than their counterparts in other regions. An entrepreneur who scores high on the IED and belongs to a society and culture that hold entrepreneurship in high esteem might not in reality be as dynamic as a person who shows similar dynamism in a relatively discouraging or even outright hostile environment.
Keywords: index of entrepreneurial dynamism, India, social acceptability, tribal entrepreneurs
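A minimal sketch of how such a weighted index and its socio-cultural discounting could be computed is given below. The item lists, the number of items per weight class (the abstract only states that the maximum score is 53), and the multiplicative discounting rule are assumptions made for illustration, not the authors' exact formulas.

```python
# Sketch: a weighted Index of Entrepreneurial Dynamism (IED) with an optional
# socio-cultural discount. Item lists and the discounting rule are illustrative
# assumptions; the abstract does not enumerate all items (max score = 53).
WEIGHTS = {
    "change_product_lines": 4, "change_markets": 4, "change_technology": 4,
    "adapt_technology": 3, "adapt_raw_materials": 3, "upgrade_skills": 3,
    "formal_market_analysis": 2, "related_diversification": 2,
    "first_generation": 1, "employs_managers": 1, "plans_to_diversify": 1,
}

def ied_score(responses: dict[str, float]) -> float:
    """Sum of binary/graded responses (0..1 per item) times their weights."""
    return sum(WEIGHTS[item] * responses.get(item, 0.0) for item in WEIGHTS)

def discounted_ied(responses: dict[str, float], social_acceptability: float) -> float:
    """One possible discounting rule: scale the IED down as the milieu's
    social-acceptability index (0..1) increases."""
    return ied_score(responses) * (1.0 - social_acceptability)

example = {"change_product_lines": 1, "adapt_technology": 1, "employs_managers": 1}
print(ied_score(example))             # 4 + 3 + 1 = 8
print(discounted_ied(example, 0.25))  # 6.0 in a relatively supportive milieu
```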
290 Recovery of Food Waste: Production of Dog Food
Authors: K. Nazan Turhan, Tuğçe Ersan
Abstract:
The population of the world is approximately 8 billion and continues to increase rapidly, leading to an increase in consumption. This situation causes serious problems, and food waste is one of them. The Food and Agriculture Organization of the United Nations (FAO) defines food waste as the discarding or alternative utilization of food that is safe and nutritious for human consumption along the entire food supply chain, from primary production to the end household consumer level. In addition, according to FAO estimates, one-third of all food produced for human consumption is lost or wasted worldwide every year. Wasting food endangers natural resources and causes hunger. For instance, excessive amounts of food waste cause greenhouse gas emissions, contributing to global warming. Therefore, waste management has been gaining significance in the last few decades at both local and global levels due to the expected scarcity of resources for the increasing population of the world. There are several ways to recover food waste. According to the United States Environmental Protection Agency's Food Recovery Hierarchy, the food waste recovery options are, from most preferred to least preferred: source reduction, feeding hungry people, feeding animals, industrial uses, composting, and landfill/incineration. Bioethanol, biodiesel, biogas, agricultural fertilizer and animal feed can be obtained from food waste generated by different food industries. In this project, feeding animals was selected as the food waste recovery method, and the food waste of a single plant was used to provide ingredient uniformity. Grasshoppers were used as a protein source. In other words, the project was performed to develop a dog food product by recovering the plant's food waste through the following steps. The collected food waste and purchased grasshoppers were sterilized, dried and pulverized. Then, they were all mixed with 60 g of agar-agar solution (4% w/v). Three different aromas were added separately to the samples to enhance flavour quality. Since the required amounts differ between dogs, fulfilling all nutritional needs is one of the challenges. In other words, there is a wide range of nutritional needs in terms of carbohydrates, protein, fat, sodium, calcium, and so on. Furthermore, the requirements differ depending on age, gender, weight, height, and breed. Therefore, the product that was developed contains average amounts of each substance so as not to cause any deficiency or surplus. On the other hand, it contains more protein than similar products on the market. The product was evaluated in terms of contamination and nutritional content. For contamination risk, E. coli and Salmonella detection tests were performed, and the results were negative. For the nutritional value test, protein content analysis was carried out. The protein contents of the different samples varied between 26.07% and 33.68%. In addition, water activity analysis was performed, and the water activity (aw) values of the different samples ranged between 0.2456 and 0.4145.
Keywords: food waste, dog food, animal nutrition, food waste recovery
289 Regional Dynamics of Innovation and Entrepreneurship in the Optics and Photonics Industry
Authors: Mustafa İlhan Akbaş, Özlem Garibay, Ivan Garibay
Abstract:
The economic entities in innovation ecosystems form various industry clusters, in which they compete and cooperate to survive and grow. Within a successful and stable industry cluster, the entities acquire different roles that complement each other in the system. Universities and research centers are accepted as having a critical role in these systems for the creation and development of innovations. However, the real effect of research institutions on regional economic growth is difficult to assess. In this paper, we present our approach for identifying the impact of research activities on regional entrepreneurship for a specific high-tech industry: optics and photonics. Optics and photonics has been defined as an enabling industry, which combines high-tech photonics technology with the developing optics industry. The recent literature suggests that the growth of optics and photonics firms depends on three important factors: the embedded regional specializations in the labor market, the research and development infrastructure, and a dynamic small-firm network capable of absorbing new technologies, products and processes. Therefore, the role of each factor and the dynamics among them must be understood to identify the requirements of entrepreneurship activities in the optics and photonics industry. There are three main contributions of our approach. Recent studies show that innovation in the optics and photonics industry is mostly located around metropolitan areas. There are also studies mentioning the importance of research center locations and universities in the regional development of the optics and photonics industry. These studies are mostly limited to the number of patents received within a short period of time or to limited survey results. Therefore, the first contribution of our approach is a comprehensive analysis of the state and recent history of photonics and optics research in the US. For this purpose, both the research centers specializing in optics and photonics and the related research groups in various departments of institutions (e.g., Electrical Engineering, Materials Science) are identified, and a geographical study of their locations is presented. The second contribution of the paper is the analysis of regional entrepreneurship activities in optics and photonics in recent years. We use the membership data of the International Society for Optics and Photonics (SPIE) and the regional photonics clusters to identify the optics and photonics companies in the US. Then, the profiles and activities of these companies are gathered by extracting and integrating the related data from the National Establishment Time Series (NETS) database, the ES-202 database and the data sets from the regional photonics clusters. The number of start-ups, their employee numbers and sales are some examples of the extracted data for the industry. Our third contribution is the utilization of the collected data to investigate the impact of research institutions on regional optics and photonics industry growth and entrepreneurship. In this analysis, the regional and periodical conditions of the overall market are taken into consideration while discovering and quantifying the statistical correlations.
Keywords: entrepreneurship, industrial clusters, optics, photonics, emerging industries, research centers
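As a sketch of the correlation step described above, once research presence and start-up activity have been tabulated per region, the relationship can be quantified as below; the region names and counts are hypothetical placeholders, not figures from the NETS, ES-202 or SPIE data.

```python
# Sketch: correlating regional research presence with optics/photonics
# start-up activity. All values are hypothetical placeholders.
import pandas as pd
from scipy.stats import pearsonr, spearmanr

regions = pd.DataFrame({
    "metro_area":       ["A", "B", "C", "D", "E", "F"],
    "research_centers": [12, 7, 3, 9, 1, 5],     # centers plus university groups
    "startups_5yr":     [48, 30, 9, 41, 4, 18],  # new optics/photonics firms
})

r, p_r = pearsonr(regions["research_centers"], regions["startups_5yr"])
rho, p_rho = spearmanr(regions["research_centers"], regions["startups_5yr"])
print(f"Pearson r = {r:.2f} (p = {p_r:.3f}); Spearman rho = {rho:.2f} (p = {p_rho:.3f})")
```

In practice, such correlations would also be conditioned on the regional and period-specific market conditions mentioned in the abstract, e.g. via panel regressions rather than simple bivariate statistics.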
288 Marketing in the Fashion Industry and Its Critical Success Factors: The Case of Fashion Dealers in Ghana
Authors: Kumalbeo Paul Kamani
Abstract:
Marketing plays a very important role in the success of any firm since it represents the means through which a firm can reach its customers and promote its products and services. In fact, marketing aids the firm in identifying customers whom the business can competitively serve, and in tailoring product offerings, prices, distribution, promotional efforts, and services towards those customers. Unfortunately, in many firms, marketing has been reduced to mere advertising. For effective marketing, firms must go beyond this often limited function of advertising. In the fashion industry in particular, marketing faces challenges due to the industry's peculiar characteristics. Previous research, for instance, affirms the idiosyncrasies and peculiarities that differentiate the fashion industry from other industrial areas. It has been documented that the fashion industry is characterized by seasonal intensity, short product life cycles, the difficulty of competitive differentiation, and the long time it takes companies to reach financial stability. These factors are noted to pose obstacles to fashion entrepreneurs' endeavours and can be the reasons that explain their low survival rates. In recent times, the fashion industry has been described as an accessible market with low entry barriers, both in terms of needed capital and skills, which has accounted for the burgeoning number of startups. Yet, as already stated, marketing is particularly challenging in the industry. In particular, areas such as marketing, branding, growth, project planning, and financial and relationship management might represent challenges for the fashion entrepreneur but have not been properly addressed by previous research. It is therefore important to assess the marketing strategies of fashion firms and the factors influencing their success. This study sought to examine the marketing strategies of fashion dealers in Ghana and their critical success factors. The study employed the quantitative survey research approach. A total of 120 fashion dealers were sampled. Questionnaires were used as the instrument of data collection. The data collected were analysed using quantitative techniques, including descriptive statistics and the Relative Importance Index. The study revealed that the marketing strategies used by fashion dealers are text messages using mobile phones, referrals, social media marketing, and direct marketing. Results also show that the factors influencing fashion marketing effectiveness are strategic management, the marketing mix (product, price, promotion, etc.), branding and business development. Policy implications are finally outlined. The study recommends, among other things, that top management executives craft and adopt marketing strategies that are compatible with fashion trends and the needs of the customers. This will improve customer satisfaction and hence boost market penetration. The study further recommends that the fashion industry in Ghana should seek to ensure that fashion apparel accommodates the diversity and the cultural settings of different customers to meet their unique needs.
Keywords: marketing, fashion, industry, success factors
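The Relative Importance Index used above is not defined in the abstract; it is conventionally computed per factor as

\[ \mathrm{RII} = \frac{\sum_{i=1}^{N} W_i}{A \times N}, \qquad 0 < \mathrm{RII} \le 1 \]

where W_i is the weight (rating) given to the factor by respondent i, A is the highest point on the rating scale, and N is the number of respondents; whether the authors used this exact formulation is an assumption here.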
287 Inhibition of Food Borne Pathogens by Bacteriocinogenic Enterococcus Strains
Authors: Neha Farid
Abstract:
Due to the abuse of antimicrobial medications in animal feed, the occurrence of multi-drug resistant (MDR) pathogens in foods is currently a growing public health concern on a global scale. MDR pathogens have the potential to penetrate the food chain, posing a serious risk to both consumers and animals. Food pathogens are biological agents that tend to cause disease in the host upon ingestion. The major reservoirs of foodborne pathogens include food-producing fauna such as cows, pigs, goats, sheep, deer, etc. The intestines of these animals harbour high densities of several different types of food pathogens. Bacterial food pathogens are the main cause of foodborne disease in humans; almost 66% of the foodborne illness cases reported each year are caused by bacterial food pathogens. When ingested, these pathogens survive and reproduce or form different kinds of toxins inside host cells, causing severe infections. The genus Listeria consists of gram-positive, rod-shaped, non-spore-forming bacteria. The disease caused by Listeria monocytogenes is listeriosis or gastroenteritis, which induces fever, vomiting, and severe diarrhea in the affected person. Campylobacter jejuni is a gram-negative, curved-rod-shaped bacterium causing foodborne illness. The major sources of Campylobacter jejuni are livestock and poultry; chicken, in particular, is highly colonized with Campylobacter jejuni. Serious public health concerns include the widespread growth of bacteria that are resistant to antibiotics and the slowdown in the discovery of new classes of medicines. The objective of this study is to assess the antibacterial activity of certain broad-range antibiotics and of bacteriocins from specific Enterococcus faecium strains, with the aim of blocking microbial contamination pathways and safeguarding food by lowering food deterioration, contamination, and foodborne illness. The food pathogens were isolated from various dairy products and meat samples. The isolates were tested for the presence of Listeria and Campylobacter by Gram staining and biochemical testing. They were further sub-cultured on selective media enriched with growth supplements for Listeria and Campylobacter. All six strains of Listeria and Campylobacter were tested against ten antibiotics. The Campylobacter strains showed resistance against all the antibiotics, whereas Listeria was found to be resistant only to nalidixic acid and erythromycin. Further, the strains were tested against the two bacteriocins isolated from Enterococcus faecium. It was found that the bacteriocins showed better antimicrobial activity against the food pathogens and can be used as potential antimicrobials for food preservation. Thus, the study concluded that natural antimicrobials could be used as alternatives to synthetic antimicrobials to overcome the problem of food spoilage and severe foodborne diseases.
Keywords: food pathogens, listeria, campylobacter, antibiotics, bacteriocins
286 Analysis on the Converged Method of Korean Scientific and Mathematical Fields and Liberal Arts Programme: Focusing on the Intervention Patterns in Liberal Arts
Authors: Jinhui Bak, Bumjin Kim
Abstract:
The purpose of this study is to analyze how the scientific and mathematical fields (STEM) and the liberal arts (A) work together in STEAM programs. In the STEAM programs to be designed and developed in the future, the humanities should act not just as a 'tool' for science, technology and mathematics, but as 'core' content with an equivalent status. STEAM was first introduced to the Republic of Korea in 2011, when the Ministry of Education emphasized fostering creative convergence talent. Many programs have since been developed under the name STEAM, but, with the majority of programs focusing on technology education, the arts and humanities are considered secondary. As a result, arts is most likely to be treated as an option that can be excluded by the teachers who run a STEAM program. If what we ultimately pursue through STEAM education is the fostering of STEAM literacy, we should no longer turn arts into a mere tool for STEM. Based on this awareness, this study analyzed over 160 STEAM programs for middle and high schools, produced and distributed by the Ministry of Education and the Korea Science and Technology Foundation from 2012 to 2017. The analysis framework referenced two criteria presented in related prior studies: normative convergence and technological convergence. In addition, we divided arts into fine arts and liberal arts, focused on the Korean language course within the liberal arts, and analyzed which curriculum standards were selected and through what kind of process the Korean language subject participated in teaching and learning. In this study, to ensure the reliability of the analysis results, the individual analysis results of the two researchers were cross-checked and accepted only when they were consistent. We also conducted a reliability check on the analysis results of three middle and high school teachers involved in the STEAM education program. For 10 programs selected randomly from the analyzed programs, a Cronbach's α of .853 indicated a reliable level of agreement. The results of this study are summarized as follows. First, the convergence ratio of the liberal arts was lowest in moral education, at 14.58%. Second, normative convergence, at 28.19%, was lower than technological convergence. Third, the Korean language achievement criteria selected for the programs were limited to functional areas such as listening, speaking, reading and writing. This means that the Korean language course is converged only as a necessary tool to communicate opinions or promote scientific products. In this study, we intend to compare these results with STEAM programs in the United States and elsewhere to explore which elements or key concepts are required in the achievement criteria for the Korean language curriculum. This is meaningful in that the humanities field (A), including Korean, provides basic data for achieving an 'equivalent status' alongside science (S), technology and engineering (TE) and mathematics (M).
Keywords: Korean STEAM Programme, liberal arts, STEAM curriculum, STEAM literacy, STEM
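For reference, the inter-coder reliability figure quoted above (Cronbach's α = .853) is conventionally computed as

\[ \alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_i^{2}}{\sigma_t^{2}}\right) \]

where k is the number of items (here, the coded program ratings), σᵢ² is the variance of item i, and σₜ² is the variance of the total score; the exact computation used by the authors is not stated in the abstract and is assumed here.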
285 Fashion Utopias: The Role of Fashion Exhibitions and Fashion Archives to Defining (and Stimulating) Possible Future Fashion Landscapes
Authors: Vittorio Linfante
Abstract:
Utopia is a term that, since its first appearance in 1516 in Thomas More's (Tommaso Moro's) work, has taken on different meanings and forms in various fields: social studies, politics, art, creativity, and design. Utopias, although short-lived and apparently impossible, have been able to give shape to the future, laying the foundations for our present and for the future of the next generations. The twentieth century was a historical period crossed by many changes, and it saw the most significant number of utopias: not only social, political, and scientific, but also artistic and architectural, in design, in communication, and, last but not least, in fashion. Over the years, fashion has been able to interpret various utopian impulses, giving form to the most futuristic visions. Examples range from the Manifesto del Vestito by Giacomo Balla, through the functional experiments that led to the Tuta by Thayaht and the Varst by Aleksandr Rodčenko and Varvara Stepanova, to the Space Age visions of Rudi Gernreich, Paco Rabanne and Pierre Cardin, and Archizoom's political actions and their fashion project Vestirsi è facile. These experiments have continued to the present day through the (sometimes) excessive visions of Hussein Chalayan, Alexander McQueen, and Gareth Pugh, or those more anchored to the market (but no less innovative and visionary) by Prada, Chanel, and Raf Simons. If, as Bauman states, it is true that we have entered a phase of Retrotopia, characterized by the inability to think about new forms of the future, it is necessary, more than ever, to redefine the role of history, of its narration and its mise en scène, within the contemporary creative process. This is a process that increasingly requires an in-depth knowledge of the past for the definition of a renewed discourse about design processes, a discourse in which words like archive, exhibition, curating, revival, vintage, and costume take on new meanings. The paper aims to investigate, through case studies, research, and professional projects, the renewed role of curating and preserving fashion artefacts. It is a role that, in an era of Retrotopia, museums, exhibitions, and archives can (and must) assume in order to contribute to the definition of new design paradigms, capable of overcoming the traditional categories of revival or costume in favour of a more contemporary 'mash-up' approach, in which past and present, craftsmanship and new technologies, revival and experimentation merge seamlessly. In this perspective, dresses (as well as fashion accessories) should be considered not only as finished products but as artefacts capable of talking about the past and, at the same time, of producing new, untold stories. Archives, exhibitions (academic and otherwise), and museums thus become powerful sources of inspiration for fashion: places and projects capable of generating innovation and becoming active protagonists of contemporary fashion design processes.
Keywords: heritage, history, costume and fashion interface, performance, language, design research
284 Developing a Framework for Designing Digital Assessments for Middle-school Aged Deaf or Hard of Hearing Students in the United States
Authors: Alexis Polanco Jr, Tsai Lu Liu
Abstract:
Research on digital assessment for deaf and hard of hearing (DHH) students is negligible. Part of this stems from the fact that DHH assessment design exists at the intersection of the emergent disciplines of usability, accessibility, and child-computer interaction (CCI). While these disciplines have some prevailing guidelines (e.g., in user experience design (UXD) there is Jakob Nielsen's 10 Usability Heuristics (Nielsen-10); for accessibility, there are the Web Content Accessibility Guidelines (WCAG) and the Principles of Universal Design (PUD)), this research was unable to uncover a unified set of guidelines. Given that digital assessments have lasting implications for the funding and shaping of U.S. school districts, it is vital that cross-disciplinary guidelines emerge. As a result, this research seeks to provide a framework by which these disciplines can share knowledge. The framework entails a process of asking subject-matter experts (SMEs) and design and development professionals to self-describe their fields of expertise and how their work might serve DHH students, and to expose any incongruence between their ideal process and what is permissible at their workplace. This research used two rounds of mixed methods. The first round consisted of structured interviews with SMEs in usability, accessibility, CCI, and DHH education. These practitioners were not designers by trade but were revealed to use designerly work processes. In addition to being asked about their field of expertise, work process, etc., these SMEs were asked to comment on whether they believed Nielsen-10 and/or PUD were sufficient for designing products for middle-school DHH students. This first round of interviews revealed that Nielsen-10 and PUD were, at best, a starting point for creating middle-school DHH design guidelines or, at worst, insufficient. The second round of interviews followed a semi-structured interview methodology. The SMEs who were interviewed in the first round were asked open-ended follow-up questions about their semantic understanding of guidelines, going from the most general sense down to the level of design guidelines for DHH middle school students. Designers and developers who had not been interviewed previously were asked the same questions that the SMEs had been asked across both rounds of interviews. In terms of the research goals, it was confirmed that the design of digital assessments for DHH students is inherently cross-disciplinary. Unexpectedly, 1) guidelines did not emerge from the interviews conducted in this study, and 2) the principles of Nielsen-10 and PUD were deemed to be less relevant than expected. Given the prevalence of Nielsen-10 in UXD curricula across academia and certificate programs, this poses a risk to the efficacy of DHH assessments designed by UX designers. Furthermore, the following findings emerged: A) deep collaboration between the disciplines of usability, accessibility, and CCI is low to non-existent; B) there are no universally agreed-upon guidelines for designing digital assessments for DHH middle school students; C) these disciplines are structured academically and professionally in such a way that practitioners may not know to reach out to other disciplines. For example, accessibility teams at large organizations do not have designers and accessibility specialists on the same team.
Keywords: deaf, hard of hearing, design, guidelines, education, assessment
283 Method for Requirements Analysis and Decision Making for Restructuring Projects in Factories
Authors: Rene Hellmuth
Abstract:
The requirements for factory planning and for the buildings concerned have changed in recent years. Factory planning has the task of designing products, plants, processes, organization, areas, and the building of a factory. Regular restructuring is gaining importance as a means of maintaining the competitiveness of a factory. Restrictions regarding new areas, shorter life cycles of products and production technology, as well as a VUCA (volatility, uncertainty, complexity and ambiguity) world, lead to more frequently occurring rebuilding measures within a factory. Restructuring of factories is the most common planning case today; it is more common than new construction, revitalization and dismantling of factories. The increasing importance of restructuring processes shows that the ability to change was and is a promising concept for how companies react to permanently changing conditions. The factory building is the basis for most changes within a factory. If an adaptation of a construction project (factory) is necessary, the inventory documents must be checked, and often time-consuming planning of the adaptation must take place to define the relevant components to be adapted so that they can finally be evaluated. The different requirements of the planning participants from the disciplines of factory planning (production planner, logistics planner, automation planner) and industrial construction planning (architect, civil engineer) come together during reconstruction and must be structured. This raises the research question: Which requirements do the disciplines involved in reconstruction planning place on a digital factory model? A subordinate research question is: How can model-based decision support be provided for a more efficient design of the conversion within a factory? Because of the high adaptation rate of factories and their buildings described above, a methodology for restructuring factories based on the requirements engineering method from software development is conceived and designed for practical application in factory restructuring projects. The explorative research procedure according to Kubicek is applied; explorative research is suitable if the practical usability of the research results has priority. Furthermore, it will be shown how best to use a digital factory model in practice. The focus will be on mobile applications to meet the needs of factory planners on site. An augmented reality (AR) application will be designed and created to provide decision support for planning variants. The aim is to contribute to a shortening of the planning process and to provide model-based decision support for more efficient change management. This requires the application of a methodology that reduces the deficits of the existing approaches. The time and cost expenditure are represented in the AR tablet solution based on a building information model (BIM). Overall, the requirements that those involved in the planning process place on a digital factory model in the case of restructuring within a factory are thus first determined in a structured manner. The results are then applied and transferred to a construction-site solution based on augmented reality.
Keywords: augmented reality, digital factory model, factory planning, restructuring
282 The Effects of Heavy Metal and Aromatic Hydrocarbon Pollution on Bees
Authors: Katarzyna Zięba, Hajnalka Szentgyörgyi, Paweł Miśkowiec, Agnieszka Moos-Matysik
Abstract:
Bees are effective pollinators of plants used by humans. However, there is concern about the fate of different species due to their recent decline. Pollution of the environment is described in the literature as one of the causes of this phenomenon. Due to human activities, heavy metals and aromatic hydrocarbons can occur in bee organisms in high concentrations. The presented study aims to provide information on how pollution affects bee quality, also taking into account the biological differences between various groups of bees. Understanding the consequences of environmental pollution for bees can help to create and promote bee-friendly habitats and actions. The analyses were carried out using two contamination gradients with 5 sites on each. The first, mainly heavy-metal-polluted gradient stretches approximately 30 km northwards from the Bukowno zinc smelter near Olkusz in the Lesser Poland Voivodeship; it is a well-described pollution gradient contaminated mainly by zinc, lead, and cadmium. The second gradient cuts through the agglomeration of Kraków and ends below the southern borders of the Ojców National Park. On each gradient, two bee species were installed: red mason bees (Osmia bicornis) and honey bees (Apis mellifera). The red mason bee is a polylectic, solitary bee species widely distributed in Poland. Honey bees are a highly social bee species, with clearly defined castes and roles in the colony. Before installing the bees in the field, samples of imagos of red mason bees and samples of pollen and imagos from each honey bee colony were analysed for zinc, lead and cadmium, and for polycyclic and monocyclic hydrocarbon levels. After collecting the bees from the field, bee samples and pollen samples for each site were prepared for heavy metal, monocyclic hydrocarbon, and polycyclic hydrocarbon analysis. Analyses of aromatic hydrocarbons were performed by gas chromatography coupled with a headspace sampler (HP 7694E) and a mass spectrometer (MS) as detector. Monocyclic compounds were injected into the column with the headspace sampler, while polycyclic ones were injected with a manual injector (after solid-liquid extraction with hexane). The heavy metal content (zinc, lead and cadmium) was assessed with flame atomic absorption spectroscopy (FAAS, Perkin Elmer AAnalyst 300 spectrometer) according to the methods for honey and bee products described in the literature. Pollution levels found in bee bodies, imago body masses in both species, and the sex ratio in the case of red mason bees were correlated with the pollution levels found in pollen for each site and colony or trap nest. An attempt to pinpoint the most important form of contamination with regard to bee health was also undertaken based on the results obtained.
Keywords: heavy metals, aromatic hydrocarbons, bees, pollution
281 Green Extraction Technologies of Flavonoids Containing Pharmaceuticals
Authors: Lamzira Ebralidze, Aleksandre Tsertsvadze, Dali Berashvili, Aliosha Bakuridze
Abstract:
Nowadays, there is an increasing demand for biologically active substances from vegetable, animal, and mineral resources. The pharmaceutical, cosmetic, and nutrition industries have a strong interest in the use of such natural compounds. The biggest drawback of conventional extraction methods is the need to use a large volume of organic solvents as extractants. The removal of the organic solvent is a multi-stage process, its complete removal cannot be achieved, and solvent residues still appear in the final product as impurities. The large amount of solvent-containing waste harms not only human health but also the environment. Accordingly, researchers are focused on improving extraction methods with the aim of minimizing the use of organic solvents and energy, using alternative solvents and renewable raw materials. In this context, the principles of green extraction were formulated. Green extraction is a necessity for today's environment and a concept that fully corresponds to the challenges of the 21st century. The extraction of biologically active compounds based on green extraction principles is vital from the point of view of preserving and maintaining biodiversity. Novel green extraction technologies are known as 'cold methods' because the extraction temperature is relatively low and does not have a negative impact on the stability of plant compounds. These novel technologies provide great opportunities to reduce or replace the use of toxic organic solvents, increase the efficiency of the process, enhance extraction yield, and improve the quality of the final product. The objective of the research is the development of green technologies for flavonoid-containing preparations. Methodology: In the first stage of the research, flavonoid-containing preparations (Tincture Herba Leonuri, flamine, rutin) were prepared using conventional extraction methods: maceration, bismaceration, percolation, and repercolation. At the same time, the same preparations were prepared using green technologies: microwave-assisted and UV extraction methods. Product quality characteristics were evaluated by pharmacopoeial methods. In the next stage of the research, the technological and economic characteristics and the cost efficiency of products prepared by conventional and novel technologies were determined. For the extraction of flavonoids, water is used as the extractant. Surface-active substances are used as co-solvents in order to reduce surface tension, which significantly increases the solubility of polyphenols in water. Different concentrations of water-glycerol mixtures, cyclodextrin, and ionic solvents were used for the extraction process. In vitro antioxidant activity will be studied by a spectrophotometric method, using DPPH (2,2-diphenyl-1-picrylhydrazyl) as the antioxidant assay. A further advantage of green extraction methods is the possibility of obtaining a higher yield at low temperature while limiting the extraction of undesirable compounds, which is especially important for the extraction of thermosensitive compounds and for maintaining their stability.
Keywords: extraction, green technologies, natural resources, flavonoids
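For orientation, the DPPH assay mentioned above is usually reported as a percentage of radical-scavenging activity (inhibition), commonly calculated from the absorbance readings as

\[ \mathrm{Inhibition}\,(\%) = \frac{A_{\mathrm{control}} - A_{\mathrm{sample}}}{A_{\mathrm{control}}} \times 100 \]

where A_control is the absorbance of the DPPH solution without extract and A_sample the absorbance with extract, typically read at around 515-517 nm; the exact protocol used by the authors is not specified in the abstract.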
Procedia PDF Downloads 129
280 Analyzing Global User Sentiments on Laptop Features: A Comparative Study of Preferences Across Economic Contexts
Authors: Mohammadreza Bakhtiari, Mehrdad Maghsoudi, Hamidreza Bakhtiari
Abstract:
Laptops have become essential to modern lifestyles, supporting work, education, and entertainment. Social media platforms have emerged as key spaces where users share real-time feedback on laptop performance, providing a valuable source of data for understanding consumer preferences. This study leverages aspect-based sentiment analysis (ABSA) on 1.5 million tweets to examine how users from developed and developing countries perceive and prioritize 16 key laptop features. The analysis reveals that consumers in developing countries express higher satisfaction overall, emphasizing affordability, durability, and reliability. Conversely, users in developed countries demonstrate more critical attitudes, especially toward performance-related aspects such as cooling systems, battery life, and chargers. The study employs a mixed-methods approach, combining ABSA using the PyABSA framework with expert insights gathered through a Delphi panel of ten industry professionals. Data preprocessing included cleaning, filtering, and aspect extraction from tweets. Universal issues such as battery efficiency and fan performance were identified, reflecting shared challenges across markets. However, priorities diverge between regions: while users in developed countries demand high-performance models with advanced features, those in developing countries seek products that offer strong value for money and long-term durability. The findings suggest that laptop manufacturers should adopt a market-specific strategy by developing differentiated product lines. For developed markets, the focus should be on cutting-edge technologies, enhanced cooling solutions, and comprehensive warranty services. In developing markets, emphasis should be placed on affordability, versatile port options, and robust designs. Additionally, the study highlights the importance of universal charging solutions and continuous sentiment monitoring to adapt to evolving consumer needs. This research offers practical insights for manufacturers seeking to optimize product development and marketing strategies for global markets, ensuring enhanced user satisfaction and long-term competitiveness. Future studies could explore multi-source data integration and conduct longitudinal analyses to capture changing trends over time. Keywords: consumer behavior, durability, laptop industry, sentiment analysis, social media analytics
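To make the aspect-level comparison concrete, here is a minimal Python sketch of how per-aspect satisfaction could be aggregated by market group; the records, aspect labels, and grouping are illustrative only (the study itself extracted aspects and sentiment from 1.5 million tweets with the PyABSA framework):

# Minimal sketch: aggregating aspect-level sentiment by market group.
# The records and labels are invented, not the study's data.
import pandas as pd

tweets = pd.DataFrame([
    {"group": "developed",  "aspect": "battery", "sentiment": "negative"},
    {"group": "developed",  "aspect": "cooling", "sentiment": "negative"},
    {"group": "developed",  "aspect": "battery", "sentiment": "positive"},
    {"group": "developing", "aspect": "price",   "sentiment": "positive"},
    {"group": "developing", "aspect": "battery", "sentiment": "positive"},
])

# Share of positive mentions per aspect and market group
summary = (
    tweets.assign(positive=tweets["sentiment"].eq("positive"))
          .groupby(["group", "aspect"])["positive"]
          .mean()
          .unstack(fill_value=0.0)
)
print(summary)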
Procedia PDF Downloads 15
279 Switchable Lipids: From a Molecular Switch to a pH-Sensitive System for the Drug and Gene Delivery
Authors: Jeanne Leblond, Warren Viricel, Amira Mbarek
Abstract:
Although several products have reached the market, gene therapeutics are still in their early stages and require optimization. Their limited efficiency can be improved by the use of carefully engineered vectors able to carry the genetic material through each of the biological barriers it needs to cross. In particular, getting inside the cell is a major challenge, because these hydrophilic nucleic acids have to cross the lipid-rich plasma and/or endosomal membrane before being degraded in lysosomes. It takes less than one hour for newly endocytosed liposomes to reach highly acidic lysosomes, meaning that degradation of the carried gene occurs rapidly, thus limiting transfection efficiency. We propose to use a new pH-sensitive lipid able to change its conformation upon protonation at endosomal pH values, leading to disruption of the lipid bilayer and thus to fast release of the nucleic acids into the cytosol. This new pH-sensitive mechanism is expected to promote endosomal escape of the gene and thereby its transfection efficiency. The main challenge of this work was to design a preparation with fast-responding bilayer destabilization at endosomal pH 5 while remaining stable at blood pH and during storage. A series of pH-sensitive lipids able to perform a conformational switch upon acidification were designed and synthesized. Liposomes containing these switchable lipids, as well as co-lipids, were prepared and characterized. The liposomes were stable at 4°C and pH 7.4 for several months. Incubation with siRNA led to full entrapment of the nucleic acids as soon as the positive/negative charge ratio was greater than 2. The best liposomal formulation demonstrated a silencing efficiency up to 10% on HeLa cells, very similar to that of a commercial agent, but with lower toxicity than the commercial agent. Using flow cytometry and microscopy assays, we demonstrated that a drop in pH was required for transfection, since bafilomycin blocked transfection efficiency. Additional evidence was provided by the synthesis of a negative control lipid, which was unable to switch its conformation and consequently exhibited no transfection ability. Mechanistic studies revealed that uptake was mediated through endocytosis, by clathrin and caveolae pathways, as reported for previous lipid nanoparticle systems. This potent system was used for the treatment of hypercholesterolemia: the switchable lipids were able to knock down PCSK9 expression in human hepatocytes (Huh-7), and their efficiency is currently being evaluated in an in vivo PCSK9 KO mouse model. In summary, we designed and optimized a new cationic pH-sensitive lipid for gene delivery. Its transfection efficiency is similar to that of the best available commercial agent, without the usually associated toxicity. These promising results have led to its use for the treatment of hypercholesterolemia in a mouse model. Anticancer applications and chronic pulmonary disease are also currently being investigated. Keywords: liposomes, siRNA, pH-sensitive, molecular switch
Procedia PDF Downloads 204
278 Accidental U.S. Taxpayers Residing Abroad: Choosing between U.S. Citizenship or Keeping Their Local Investment Accounts
Authors: Marco Sewald
Abstract:
Due to the current enforcement of extraterritorial U.S. legislation, up to 9 million U.S. (dual) citizens residing abroad are subject to U.S. double and surcharge taxation and are at risk of losing access to otherwise basic financial services and investment opportunities abroad. The United States is the only OECD country that taxes non-resident citizens, lawful permanent residents and other non-resident aliens on their worldwide income, based on local U.S. tax laws. To enforce these policies, the U.S. has included ‘saving clauses’ in all tax treaties and introduced several compliance provisions, including the Foreign Account Tax Compliance Act (FATCA), Qualified Intermediary agreements (QI) and Intergovernmental Agreements (IGA), which require Foreign Financial Institutions (FFIs) to implement these provisions in foreign jurisdictions. This policy creates systematic cases of double and surcharge taxation. The increased enforcement of compliance rules is creating additional reporting burdens for U.S. persons abroad and for FFIs accepting such U.S. persons as customers. FFIs in Europe react with a growing denial of specific financial services to this population. The number of U.S. citizens renouncing their citizenship has increased dramatically in recent years. A case study was chosen as the research method, being an empirical inquiry that investigates a contemporary phenomenon within its real-life context, in which the boundaries between phenomenon and context are not clearly evident and multiple sources of evidence are used. This evaluative approach tests whether the combination of policies works in practice, whether it accords with desirable moral, political, and economic aims, or whether it may serve other causes. The research critically evaluates the financial and non-financial consequences and develops and discusses strategies to avoid the undesired consequences of extraterritorial U.S. legislation. Three possible strategies result from the use cases: (1) duck and cover, (2) pay U.S. double/surcharge taxes and tax preparation fees and accept imposed product limitations, and (3) renounce U.S. citizenship and pay possible exit taxes, tax preparation fees and the requested $2,350 renunciation fee. While the first strategy is unlawful and therefore unsuitable, the second strategy is only suitable if the U.S. citizen residing abroad is planning to move to the U.S. in the future. The last strategy is the only reasonable and lawful way provided by the U.S. to limit exposure to U.S. double and surcharge taxation and to the limitations on financial products. The results are believed to add a perspective to the current academic discourse on U.S. citizenship-based taxation, currently dominated by U.S. scholars, while at the same time providing practical strategies for the affected population. Keywords: citizenship based taxation, FATCA, FBAR, qualified intermediaries agreements, renounce U.S. citizenship
Procedia PDF Downloads 201
277 Foreseen the Future: Human Factors Integration in European Horizon Projects
Authors: José Manuel Palma, Paula Pereira, Margarida Tomás
Abstract:
The development of new technologies such as artificial intelligence, smart sensing, robotics, cobotics or intelligent machinery must integrate human factors to address the need to optimize systems and processes, thereby contributing to the creation of a safe and accident-free work environment. Human Factors Integration (HFI) consistently poses a challenge for organizations when applied to daily operations. The AGILEHAND and FORTIS projects are grounded in the development of cutting-edge technology for Industry 4.0 and 5.0. AGILEHAND aims to create advanced technologies to autonomously sort, handle, and package soft and deformable products, whereas FORTIS focuses on developing a comprehensive Human-Robot Interaction (HRI) solution. The two projects employ different approaches to explore HFI. AGILEHAND is mainly empirical, involving a comparison between current and future working conditions, coupled with an understanding of best practices and the enhancement of safety aspects, primarily through management. FORTIS applies HFI throughout the project, developing a human-centric approach that includes understanding human behavior, perceiving activities, and facilitating contextual human-robot information exchange. Its intervention is holistic, merging technology with the physical and social contexts, based on a total safety culture model. In AGILEHAND, we will identify emergent safety risks and challenges, their causes, and how to overcome them by resorting to interviews, questionnaires, literature review and case studies. Findings and results will be presented in the handbook “Strategies for Workers’ Skills Development, Health and Safety, Communication and Engagement”. The FORTIS project will implement continuous monitoring and guidance of activities, with a critical focus on early detection and elimination (or mitigation) of risks associated with the new technology, as well as guidance to comply with European Union safety and privacy regulations, ensuring HFI and thereby contributing to an optimized, safe work environment. To achieve this, we will embed safety by design and apply questionnaires, perform site visits, provide risk assessments, and closely track progress while suggesting and recommending best practices. The outcomes of these measures will be compiled in the project deliverable titled “Human Safety and Privacy Measures”. These projects received funding from the European Union’s Horizon 2020/Horizon Europe research and innovation program under grant agreements No 101092043 (AGILEHAND) and No 101135707 (FORTIS). Keywords: human factors integration, automation, digitalization, human robot interaction, industry 4.0 and 5.0
Procedia PDF Downloads 64
276 Fuzzy Availability Analysis of a Battery Production System
Authors: Merve Uzuner Sahin, Kumru D. Atalay, Berna Dengiz
Abstract:
In today’s competitive market, there are many alternative products that can be used in a similar manner and for a similar purpose. Therefore, the utility of a product is an important issue for the preferability of the brand. This utility can be measured in terms of functionality, durability, and reliability, all of which are affected by system capabilities. Reliability is an important system design criterion for manufacturers seeking high availability. Availability is the probability that a system (or a component) is operating properly and performing its function at a specific point in time or over a specified period of time. System availability provides valuable input for estimating the production rate needed for the company to realize its production plan. When considering only the corrective maintenance downtime of the system, mean time between failures (MTBF) and mean time to repair (MTTR) are used to obtain system availability. The MTBF and MTTR values are also important measures for reliability engineers and practitioners for improving system performance by adopting suitable maintenance strategies. Failure and repair time probability distributions of each component in the system must be known for conventional availability analysis. However, companies generally do not have statistics or quality control departments to store such a large amount of data, and real events or situations are defined deterministically instead of using stochastic data for the complete description of real systems. Fuzzy set theory is an alternative that can be used to analyze the uncertainty and vagueness in real systems. The aim of this study is to present a novel approach to computing system availability using representations of MTBF and MTTR as fuzzy numbers. Based on experience with the system, three different spreads of MTBF and MTTR (15%, 20% and 25%) were chosen to obtain the lower and upper limits of the fuzzy numbers. To the best of our knowledge, the proposed method is the first application that uses fuzzy MTBF and fuzzy MTTR for fuzzy system availability estimation. The method is easy to apply in any repairable production system by practitioners working in industry, and it allows reliability engineers, managers and practitioners to analyze system performance in a more consistent and logical manner based on fuzzy availability. This paper presents a real case study of a repairable multi-stage production line in a lead-acid battery production factory in Turkey. The study focuses on the wet-charging battery process, which has a higher production level than the other battery types. In this system, components can exist in only two states, working or failed, and it is assumed that when a component fails, it becomes as good as new after repair. Instead of classical methods, using fuzzy set theory and obtaining intervals for these measures is very useful for system managers and practitioners in analyzing system qualifications and finding better results for their working conditions; much more detailed information about system characteristics is thus obtained. Keywords: availability analysis, battery production system, fuzzy sets, triangular fuzzy numbers (TFNs)
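A minimal sketch of the underlying arithmetic, assuming triangular fuzzy MTBF and MTTR built from a nominal value and a chosen spread (the 120 h / 4 h figures and the 20% spread below are illustrative, not the plant's data):

# Minimal sketch: availability bounds from triangular fuzzy MTBF and MTTR.
def triangular(center, spread):
    """Triangular fuzzy number (low, mode, high) from a center and a relative spread."""
    return (center * (1 - spread), center, center * (1 + spread))

def fuzzy_availability(mtbf_tfn, mttr_tfn):
    bl, bm, bu = mtbf_tfn
    rl, rm, ru = mttr_tfn
    # A = MTBF / (MTBF + MTTR): the lower bound pairs low MTBF with high MTTR,
    # the upper bound pairs high MTBF with low MTTR.
    return (bl / (bl + ru), bm / (bm + rm), bu / (bu + rl))

mtbf = triangular(120.0, 0.20)   # hours, illustrative
mttr = triangular(4.0, 0.20)     # hours, illustrative
low, mode, high = fuzzy_availability(mtbf, mttr)
print(f"Fuzzy availability ~ ({low:.4f}, {mode:.4f}, {high:.4f})")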
Procedia PDF Downloads 224
275 Improvement of the Traditional Techniques of Artistic Casting through the Development of Open Source 3D Printing Technologies Based on Digital Ultraviolet Light Processing
Authors: Drago Diaz Aleman, Jose Luis Saorin Perez, Cecile Meier, Itahisa Perez Conesa, Jorge De La Torre Cantero
Abstract:
Traditional manufacturing techniques used in artistic contexts compete with highly productive and efficient industrial procedures. Craft techniques and their associated business models tend to disappear under the pressure of mass-produced products that compete in all niche markets, including those traditionally reserved for the work of art. The surplus value derived from the prestige of the author, the exclusivity of the product or the mastery of the artist does not seem sufficient to preserve this productive model. In recent years, the adoption of open-source digital manufacturing technologies in small art workshops can favor their permanence by providing great advantages such as easy accessibility, low cost, and free modification, adapting to the specific needs of each workshop. It is possible to use pieces modeled by computer and made with FDM (Fused Deposition Modeling) 3D printers that use PLA (polylactic acid) in artistic casting procedures. Models printed in PLA are limited to minimum sizes of approximately 3 cm, and the optimal layer-height resolution is 0.1 mm. Due to these limitations, FDM is not the most suitable technology for artistic casting of smaller pieces. One alternative that overcomes the size limitation is SLS (selective laser sintering) printers; another is DMLS (Direct Metal Laser Sintering), in which a laser hardens metal powder layer by layer. However, due to its high cost, DMLS is a technology that is difficult to introduce in small artistic foundries. Low-cost DLP (Digital Light Processing) printers can offer high resolution for a reasonable cost (around 0.02 mm on the Z axis and 0.04 mm on the X and Y axes) and can print models with castable resins that allow subsequent direct artistic casting in precious metals or their adaptation to processes such as electroforming. In this work, the design of a DLP 3D printer is detailed, using backlit LCD screens with ultraviolet light. Its development is fully open source and is proposed as a kit made up of Arduino-based electronic components and mechanical components that are easy to find on the market. The CAD files of its components can be manufactured with low-cost FDM 3D printers. The result costs less than 500 Euros, offers high resolution, and has an open design with free access that allows not only its manufacture but also its improvement. In future work, we intend to carry out comparative analyses to accurately estimate the print quality, as well as the real cost of the artistic works made with it. Keywords: traditional artistic techniques, DLP 3D printer, artistic casting, electroforming
Procedia PDF Downloads 142
274 Regularizing Software for Aerosol Particles
Authors: Christine Böckmann, Julia Rosemann
Abstract:
We present an inversion algorithm that is used in the European Aerosol Lidar Network for the inversion of data collected with multi-wavelength Raman lidar. These instruments measure backscatter coefficients at 355, 532, and 1064 nm, and extinction coefficients at 355 and 532 nm. The algorithm is based on manually controlled inversion of optical data, which allows for detailed sensitivity studies and thus provides comparably high quality of the derived data products. The algorithm allows us to derive particle effective radius and volume and surface-area concentrations with comparably high confidence. The retrieval of the real and imaginary parts of the complex refractive index is still a challenge in view of the accuracy required for these parameters in climate change studies, in which light absorption needs to be known with high accuracy. The single-scattering albedo (SSA) can be computed from the retrieved microphysical parameters and allows us to categorize aerosols into highly and weakly absorbing aerosols. From a mathematical point of view, the algorithm is based on the concept of truncated singular value decomposition as the regularization method. This method was adapted to the retrieval of the particle size distribution function (PSD) and is called a hybrid regularization technique since it uses a triple of regularization parameters. The inversion of an ill-posed problem, such as the retrieval of the PSD, is always a challenging task because very small measurement errors are most often hugely amplified during the solution process unless an appropriate regularization method is used. Even with a regularization method the task remains difficult, since appropriate regularization parameters have to be determined. Therefore, in the next stage of our work, we decided to use two regularization techniques in parallel for comparison purposes. The second method is an iterative regularization method based on Padé iteration, in which the number of iteration steps serves as the regularization parameter. We successfully developed semi-automated software for spherical particles which is able to run even on a parallel-processor machine. From a mathematical point of view, it is also very important (as a selection criterion for an appropriate regularization method) to investigate the degree of ill-posedness of the problem, which we found to be moderate. We computed the optical data from mono-modal logarithmic PSDs and investigated particles of spherical shape in our simulations. We considered particle radii as large as 6 nm, which not only covers the size range of particles in the fine-mode fraction of naturally occurring PSDs but also covers part of the coarse-mode fraction. We considered errors of 15% in the simulation studies. For the SSA, 100% of all cases achieve relative errors below 12%; in more detail, 87% of all cases for 355 nm and 88% of all cases for 532 nm are well below 6%. With respect to the absolute error, for non- and weakly absorbing particles with real parts of 1.5 and 1.6, the accuracy limit of ±0.03 is achieved in all modes. In sum, 70% of all cases stay below ±0.03, which is sufficient for climate change studies. Keywords: aerosol particles, inverse problem, microphysical particle properties, regularization
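As a generic illustration of truncated SVD regularization for an ill-posed linear system (a toy problem in Python/NumPy, not the network's retrieval code; the matrix, noise level, and truncation level are arbitrary):

# Minimal sketch of truncated SVD (TSVD) regularization for A x = b.
import numpy as np

rng = np.random.default_rng(0)
A = np.vander(np.linspace(0, 1, 20), 10, increasing=True)  # ill-conditioned test matrix
x_true = rng.normal(size=10)
b = A @ x_true + 1e-3 * rng.normal(size=20)                # noisy "measurements"

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 6                                                      # truncation = regularization parameter
x_tsvd = Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])             # keep only the k largest singular values

print("relative error:", np.linalg.norm(x_tsvd - x_true) / np.linalg.norm(x_true))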
Procedia PDF Downloads 343
273 Reliability Analysis of Geometric Performance of Onboard Satellite Sensors: A Study on Location Accuracy
Authors: Ch. Sridevi, A. Chalapathi Rao, P. Srinivasulu
Abstract:
The location accuracy of data products is a critical parameter in assessing the geometric performance of satellite sensors. This study focuses on reliability analysis of onboard sensors to evaluate their performance in terms of location accuracy over time. The analysis utilizes field failure data and employs the Weibull distribution to determine reliability and, in turn, to understand improvements or degradations over time. The analysis begins by scrutinizing the location accuracy error, which is the root mean square (RMS) error of the differences between ground control point coordinates observed on the product and on the map, and by identifying the failure data with reference to time. A significant challenge in this study is to thoroughly analyze the possibility of an infant mortality phase in the data. To address this, the Weibull distribution is utilized to determine whether the data exhibit an infant stage or have transitioned into the operational phase; the shape parameter beta plays a crucial role in identifying this stage. Additionally, determining the exact start of the operational phase and the end of the infant stage poses another challenge, as it is crucial to eliminate residual infant mortality or wear-out from the model, since either can significantly increase the total failure rate. To address this, the well-established statistical Laplace test is applied to infer the behavior of the sensors and to accurately ascertain the duration of the different phases in the lifetime and the time required for stabilization. This approach also helps in understanding whether the bathtub curve model, which accounts for the different phases in the lifetime of a product, is appropriate for the data and whether the thresholds for the infant period and wear-out phase are accurately estimated, by validating the data in individual phases with Weibull distribution curve-fitting analysis. Once the operational phase is determined, reliability is assessed using Weibull analysis. This analysis not only provides insights into the reliability of individual sensors with regard to location accuracy over the required period of time, but also establishes a model that can be applied to automate similar analyses for various sensors and parameters using field failure data. Furthermore, the identification of the best-performing sensor through this analysis serves as a benchmark for future missions and designs, ensuring continuous improvement in sensor performance and reliability. Overall, this study provides a methodology to accurately determine the duration of the different phases in the life data of individual sensors. It enables an assessment of the time required for stabilization and provides insights into the reliability during the operational phase and the onset of the wear-out phase. By employing this methodology, designers can make informed decisions regarding sensor performance with regard to location accuracy, contributing to enhanced accuracy in satellite-based applications. Keywords: bathtub curve, geometric performance, Laplace test, location accuracy, reliability analysis, Weibull analysis
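A minimal Python sketch of the two statistical ingredients named above, applied to synthetic failure times rather than the sensor data (the observation period and times below are invented):

# Minimal sketch: Laplace trend test on failure times, then a two-parameter Weibull fit.
import numpy as np
from scipy import stats

failure_times = np.array([0.8, 1.9, 3.1, 4.4, 5.0, 6.2, 7.5, 8.1])  # e.g. years, synthetic
T = 9.0                                                             # observation period
n = len(failure_times)

# Laplace test statistic: approximately N(0,1) under a homogeneous Poisson process.
# U < 0 suggests a decreasing failure intensity (reliability growth / end of infant stage),
# U > 0 an increasing intensity (deterioration / wear-out).
U = (failure_times.mean() - T / 2.0) / (T * np.sqrt(1.0 / (12.0 * n)))
print(f"Laplace U = {U:.2f}")

# Two-parameter Weibull fit (location fixed at 0); shape beta < 1 indicates
# infant mortality, beta ~ 1 random failures, beta > 1 wear-out.
beta, loc, eta = stats.weibull_min.fit(failure_times, floc=0)
print(f"shape beta = {beta:.2f}, scale eta = {eta:.2f}")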
Procedia PDF Downloads 65
272 The Involvement of the Homing Receptors CCR7 and CD62L in the Pathogenesis of Graft-Versus-Host Disease
Authors: Federico Herrera, Valle Gomez García de Soria, Itxaso Portero Sainz, Carlos Fernández Arandojo, Mercedes Royg, Ana Marcos Jimenez, Anna Kreutzman, Cecilia MuñozCalleja
Abstract:
Introduction: Graft-versus-host disease (GVHD) remains the major complication associated with allogeneic stem cell transplantation (SCT). Its pathogenesis involves the migration of donor naïve T-cells into recipient secondary lymphoid organs. Two molecules are important in this process: CD62L and CCR7, which are characteristically expressed on naïve/central memory T-cells. With this background, we aimed to study the influence of CCR7 and CD62L on donor lymphocytes in the development and severity of GVHD. Material and methods: This single-center study included 98 donor-recipient pairs. Samples were collected prospectively from the apheresis product and phenotyped by flow cytometry. CCR7 and CD62L expression on CD4+ and CD8+ T-cells was compared between patients who developed acute (n=40) or chronic GVHD (n=33) and those who did not (n=38). Results: The patients who developed acute GVHD were transplanted with a higher percentage of CCR7+CD4+ T-cells (p = 0.05) compared to the no-GVHD group. These results were confirmed when the patients were divided into degrees according to disease severity: the more severe the disease, the higher the percentage of CCR7+CD4+ T-cells. Conversely, chronic GVHD patients received a higher percentage of CCR7+CD8+ T-cells (p=0.02) in comparison to those who did not develop the complication. These data were also confirmed when patients were subdivided into degrees of disease severity. A multivariable analysis confirmed that the percentage of CCR7+CD4+ T-cells is a predictive factor for acute GVHD, whereas the percentage of CCR7+CD8+ T-cells is a predictive factor for chronic GVHD. In vitro functional assays (migration and activation assays) supported the idea that CCR7+ T-cells are involved in the development of GVHD. As low levels of CD62L expression were detected in all apheresis products, we tested the hypothesis that CD62L is shed during the apheresis procedure. Comparing CD62L surface levels on T-cells from the same donor immediately before collection and in the final apheresis product, we found that the process down-regulated CD62L on both CD4+ and CD8+ T-cells (p=0.008). Interestingly, when CD62L levels were analysed on days 30 or 60 after engraftment, they recovered to baseline (p=0.008). However, when CD62L expression was analysed in recipient samples after engraftment to investigate its relation to the development of GVHD, no differences were observed between patients with GVHD and those who did not develop the disease. Discussion: Our prospective study indicates that donor CCR7+ T-cells, which include naïve and central memory T-cells, contain the alloreactive cells with a high ability to mediate GVHD (in terms of both migration and activation). We therefore suggest that the proportion and functional properties of CCR7+CD4+ and CCR7+CD8+ T-cells in the apheresis product could act as predictive biomarkers of acute and chronic GVHD, respectively. Importantly, our study shows that CD62L is lost during apheresis and is therefore not a reliable biomarker for the development of GVHD. Keywords: CCR7, CD62L, GVHD, SCT
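Purely as an illustration of the kind of multivariable predictive analysis described above — the percentages and outcomes below are invented, not the 98-pair cohort data — a minimal sketch:

# Minimal sketch: relating %CCR7+CD4+ T-cells in the graft to acute GVHD status.
# The values are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

pct_ccr7_cd4 = np.array([35, 42, 50, 55, 61, 66, 70, 75, 80, 85], dtype=float)
acute_gvhd   = np.array([0,  0,  0,  1,  0,  1,  1,  1,  1,  1])

model = LogisticRegression().fit(pct_ccr7_cd4.reshape(-1, 1), acute_gvhd)
odds_ratio_per_percent = np.exp(model.coef_[0][0])
print(f"odds ratio per +1% CCR7+CD4+: {odds_ratio_per_percent:.2f}")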
Procedia PDF Downloads 287
271 Composition and Catalytic Behaviour of Biogenic Iron Containing Materials Obtained by Leptothrix Bacteria Cultivation in Different Growth Media
Authors: M. Shopska, D. Paneva, G. Kadinov, Z. Cherkezova-Zheleva, I. Mitov
Abstract:
Iron-containing materials are used as catalysts in different processes. Chemical methods for their synthesis use toxic and expensive chemicals, sophisticated devices, and energy-consuming processes that raise their cost; in addition, dangerous waste products are formed. At present, such syntheses are outdated, and waste-free technologies are indispensable. Bioinspired technologies are consistent with ecological requirements. Different microorganisms participate in the biomineralization of iron, and some phytochemicals are involved as well. The methods for biogenic production of iron-containing materials are clean, simple, non-toxic, carried out at ambient temperature and pressure, and cheaper. The biogenic iron materials comprise different iron compounds. Due to their origin, these substances are nanosized, amorphous or poorly crystalline, and porous, and have a number of useful properties such as SPM, high magnetism, low toxicity, biocompatibility, absorption of microwaves, a high surface-area-to-volume ratio, and active surface sites with unusual coordination that distinguish them from bulk materials. Biogenic iron materials are applied in heterogeneous catalysis in different roles: precursor, active component, support, immobilizer. The application of biogenic iron oxide materials gives rise to increased catalytic activity in comparison with materials of abiotic origin. In our study, we investigated the catalytic behavior of biomasses obtained by cultivation of Leptothrix bacteria in three nutrition media: Adler, Fedorov, and Lieske. The biomass composition was studied by Moessbauer spectroscopy and transmission IRS. Catalytic experiments on CO oxidation were carried out using in situ DRIFTS. Our results showed that: i) the biomasses used contain α-FeOOH, γ-FeOOH, and γ-Fe2O3 in different ratios; ii) the biomass formed in Adler medium contains γ-FeOOH as the main phase; the CO conversion was about 50%, as evaluated from the decrease of the integrated band intensity in the gas mixture spectra during the reaction, and the main phase in the spent sample is γ-Fe2O3; iii) the biomass formed in Lieske medium contains α-FeOOH; the CO conversion was about 20%, and the main phase in the spent sample is α-Fe2O3; iv) the biomass formed in Fedorov medium contains γ-Fe2O3 as the main phase, and the CO conversion in the test reaction was about 19%. The results showed that the catalytic activity up to 200°C resulted predominantly from α-FeOOH and γ-FeOOH, while the activity at temperatures higher than 200°C was due to the formation of γ-Fe2O3. The oxyhydroxides, which are the principal compounds in the biomass, have low catalytic activity in the reaction used; maghemite has relatively good catalytic activity; hematite has activity commensurate with that of the oxyhydroxides. Moreover, it can be affirmed that catalytic activity is inherent in maghemite obtained by transformation of the biogenic lepidocrocite, i.e., maghemite with a biogenic precursor. Keywords: nanosized biogenic iron compounds, catalytic behavior in reaction of CO oxidation, in situ DRIFTS, Moessbauer spectroscopy
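For reference, estimating conversion from the decrease of an integrated band intensity typically follows the simple relation below (a standard estimate consistent with the description above, not a formula quoted from the study):

X_{\mathrm{CO}} = \frac{A^{0}_{\mathrm{CO}} - A_{\mathrm{CO}}}{A^{0}_{\mathrm{CO}}} \times 100\%

where A^0_CO is the integrated CO band intensity in the feed (or before reaction) and A_CO the intensity measured during the reaction.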
Procedia PDF Downloads 369
270 Anisakidosis in Turkey: Serological Survey and Risk for Humans
Authors: E. Akdur Öztürk, F. İrvasa Bilgiç, A. Ludovisi , O. Gülbahar, D. Dirim Erdoğan, M. Korkmaz, M. Á. Gómez Morales
Abstract:
Anisakidosis is a zoonotic human fish-borne parasitic disease caused by accidental ingestion of anisakid third-stage larvae (L3) of members of the Anisakidae family present in infected marine fish or cephalopods. Infection with anisakid larvae can lead to gastric, intestinal, extra-gastrointestinal and gastroallergic forms of the disease. Anisakid parasites have been reported in almost all seas, particularly in the Mediterranean Sea. The level of risk of exposure to these zoonotic parasites is remarkably high, as they are present in economically and ecologically important fish of Europe. Anisakid L3 larvae have also been detected in several fish species from the Aegean Sea. Turkey is a peninsular country surrounded by the Black, Aegean and Mediterranean Seas. In this country, fishing and the consumption of fishery products are very common, and in recent years there has also been an increase in the consumption of raw fish due to growing interest in the cuisine of Far East countries. In different regions of Turkey, A. simplex (in Merluccius merluccius, Scomber japonicus, Trachurus mediterraneus, Sardina pilchardus, Engraulis encrasicolus, etc.), Anisakis spp., Contracaecum spp., Pseudoterranova spp. and C. aduncum have been identified as well. Although both the presence of anisakid parasites in fish and fishery products in Turkey and the occurrence of Turkish people with allergic manifestations after fish consumption are accepted, there are no reports of human anisakiasis in this country. Given the high prevalence of anisakid parasites in the country, the absence of reports is likely due not to the absence of clinical cases but to the unavailability of diagnostic tools and low awareness of the presence of this infection. The aim of the study was to set up an IgE Western blot (WB) based test to detect anisakid sensitization among Turkish people with a history of allergic manifestations related to fish consumption. To this end, crude worm antigens (CWA) and an allergen-enriched fraction (50–66%) were prepared from L3 of A. simplex (s.l.) collected from Lepidopus caudatus fished in the Mediterranean Sea. These proteins were electrophoretically separated and transferred onto nitrocellulose membranes. By WB, specific proteins recognized by positive control serum samples from sensitized patients were visualized on the nitrocellulose membranes by a colorimetric reaction. The CWA and the 50–66% fraction showed specific bands, mainly due to Ani s 1 (20-22 kD) and Ani s 4 (9-10 kD). So far, a total of 7 serum samples from people with allergic manifestations and a positive skin prick test (SPT) after fish consumption have been tested, and all of them were negative by WB, indicating a lack of sensitization to anisakids. This preliminary study allowed us to set up a specific test and evidenced the lack of correlation between the two tests, SPT and WB. However, the sample size should be increased to estimate the anisakidosis burden in Turkish people. Keywords: anisakidosis, fish parasite, serodiagnosis, Turkey
Procedia PDF Downloads 141
269 Foodborne Outbreak Calendar: Application of Time Series Analysis
Authors: Ryan B. Simpson, Margaret A. Waskow, Aishwarya Venkat, Elena N. Naumova
Abstract:
The Centers for Disease Control and Prevention (CDC) estimate that 31 known foodborne pathogens cause 9.4 million cases of foodborne illness annually in the US. Over 90% of these illnesses are associated with exposure to Campylobacter, Cryptosporidium, Cyclospora, Listeria, Salmonella, Shigella, Shiga-toxin-producing E. coli (STEC), Vibrio, and Yersinia. Contaminated products contain pathogens that typically cause an intestinal illness manifested by diarrhea, stomach cramping, nausea, weight loss, and fatigue, and may result in death in fragile populations. Since 1998, the National Outbreak Reporting System (NORS) has allowed for routine collection of suspected and laboratory-confirmed cases of food poisoning. While retrospective analyses have revealed common pathogen-specific seasonal patterns, little is known concerning the stability of those patterns over time and whether they can be used for preventative forecasting. The objective of this study is to construct a calendar of foodborne outbreaks of nine infections based on the peak timing of outbreak incidence in the US from 1996 to 2017. Reported cases were abstracted from FoodNet for Salmonella (135,115), Campylobacter (121,099), Shigella (48,520), Cryptosporidium (21,701), STEC (18,022), Yersinia (3,602), Vibrio (3,000), Listeria (2,543), and Cyclospora (758). Monthly counts were compiled for each agent, seasonal peak timing and peak intensity were estimated, and the stability of seasonal peaks and the synchronization of infections were examined. Negative Binomial harmonic regression models with the delta method were applied to derive confidence intervals for the peak timing for each year and for the overall study period. Preliminary results indicate that five infections continue to lead as major causes of outbreaks, exhibiting steady upward trends with annual increases in cases ranging from 2.71% (95%CI: [2.38, 3.05]) for Campylobacter, 4.78% (95%CI: [4.14, 5.41]) for Salmonella, 7.09% (95%CI: [6.38, 7.82]) for E. coli, and 7.71% (95%CI: [6.94, 8.49]) for Cryptosporidium, to 8.67% (95%CI: [7.55, 9.80]) for Vibrio. Strong synchronization of summer outbreaks caused by Campylobacter, Vibrio, E. coli and Salmonella was observed, peaking at 7.57 ± 0.33, 7.84 ± 0.47, 7.85 ± 0.37, and 7.82 ± 0.14 calendar months, respectively, with serial cross-correlations ranging from 0.81 to 0.88 (p < 0.001). Over 21 years, Listeria and Cryptosporidium peaks (8.43 ± 0.77 and 8.52 ± 0.45 months, respectively) tended to arrive 1-2 weeks earlier, while Vibrio peaks (7.8 ± 0.47) were delayed by 2-3 weeks. These findings will be incorporated into forecast models to predict common paths of spread, long-term trends, and the synchronization of outbreaks across etiological agents. Predictive modeling of foodborne outbreaks should consider long-term changes in seasonal timing, spatiotemporal trends, and sources of contamination. Keywords: foodborne outbreak, national outbreak reporting system, predictive modeling, seasonality
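As a generic illustration of harmonic regression and peak-timing extraction from monthly counts (the counts below are simulated; the study fitted pathogen-specific Negative Binomial models and derived confidence intervals via the delta method), a minimal Python sketch:

# Minimal sketch: seasonal (harmonic) regression on simulated monthly counts,
# then peak-timing extraction from the sine/cosine coefficients.
import numpy as np
import statsmodels.api as sm

months = np.arange(1, 121)                       # 10 years of monthly data
rng = np.random.default_rng(1)
true_peak = 7.8                                  # simulate a late-July peak
mu = np.exp(3 + 0.8 * np.cos(2 * np.pi * (months - true_peak) / 12))
counts = rng.poisson(mu)

X = sm.add_constant(np.column_stack([
    np.sin(2 * np.pi * months / 12),
    np.cos(2 * np.pi * months / 12),
]))
# Negative Binomial GLM with a fixed dispersion (a simplification of the study's models).
fit = sm.GLM(counts, X, family=sm.families.NegativeBinomial(alpha=0.5)).fit()

b_sin, b_cos = fit.params[1], fit.params[2]
peak_month = ((12 / (2 * np.pi)) * np.arctan2(b_sin, b_cos)) % 12
print(f"estimated peak timing: {peak_month:.2f} (calendar months)")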
Procedia PDF Downloads 128