85 Electret: A Solution of Partial Discharge in High Voltage Applications
Authors: Farhina Haque, Chanyeop Park
Abstract:
The high efficiency, high field, and high power density provided by wide bandgap (WBG) semiconductors and advanced power electronic converter (PEC) topologies have enabled the dynamic control of power in medium- to high-voltage systems. Although WBG semiconductors outperform conventional silicon-based devices in terms of voltage rating, switching speed, and efficiency, the increased voltage handling, high dv/dt, and compact device packaging increase local electric fields, which are the main causes of partial discharge (PD) in advanced medium- and high-voltage applications. PD, which occurs actively in voids, triple points, and airgaps, is an inevitable dielectric challenge that causes insulation and device aging. The aging process accelerates over time and eventually leads to the complete failure of the application. Hence, it is critical to mitigate PD. Sharp edges, airgaps, triple points, and bubbles are common defects that exist in any medium- to high-voltage device. The defects are created during the manufacturing processes of the devices and are prone to high-electric-field-induced PD due to the low permittivity and low breakdown strength of the gaseous medium filling the defects. A contemporary approach to mitigating PD by neutralizing electric fields in high-power-density applications is introduced in this study. To neutralize the locally enhanced electric fields that occur around triple points, airgaps, sharp edges, and bubbles, electrets are developed and incorporated into high-voltage applications. Electrets are electric-field-emitting dielectric materials that carry embedded electrical charges on the surface and in the bulk. In this study, electrets are fabricated by electrically charging polyvinylidene difluoride (PVDF) films using the widely used triode corona discharge method.
To investigate the PD mitigation performance of the fabricated electret films, a series of PD experiments are conducted on both charged and uncharged PVDF films under square voltage stimuli that represent a PWM waveform. In addition to single-layer electrets, multiple layers of electrets are also tested to mitigate PD caused by higher system voltages. The electret-based approach shows great promise in mitigating PD by neutralizing the local electric field. The results of the PD measurements suggest that an ultimate solution to this decades-long dielectric challenge would be possible with further developments in the fabrication process of electrets.
Keywords: electrets, high power density, partial discharge, triode corona discharge
Procedia PDF Downloads 203
84 Enhancement of Cross-Linguistic Effect with the Increase in the Multilingual Proficiency during Early Childhood: A Case Study of English Language Acquisition by a Pre-School Child
Authors: Anupama Purohit
Abstract:
The paper is a study of the inevitable cross-linguistic effect found in early multilingual learners. The cross-linguistic behaviours, such as code-mixing, code-switching, foreign accent, literal translation, redundancy and syntactic manipulation, effected by other languages on the English language output of a non-native pre-school child are discussed here. A case study method is adopted in this paper to support the claim of the title. The language behaviour of a simultaneously tetralingual pre-school child (from 1;3 to 4;0) is analysed here. The sample output data of the child were gathered from diary entries maintained by her family, regular observations and video recordings made since her birth. She receives the input of her mother tongue, Sambalpuri, from her grandparents only; Hindi, the local language, from her play-school and the neighbourhood; English only from her mother and occasional visits of other family friends; and Odia only during the reading of Odia story books. The child was exposed to code-mixing of all the languages throughout her childhood. But code-mixing, literal translation, redundancy and duplication were absent in her initial stage of multilingual acquisition. As the child was more proficient in English than in her other first languages and had never heard code-mixing in English, it was expected from her input pattern of English (one parent, English language) that she would maintain purity in her use of English while talking to an English-language interlocutor. But with a gradual increase in proficiency in each of her languages, her handling of the multiple codes becomes deft cross-linguistically. It can be deduced from the case study that after attaining a certain milestone proficiency in each language, the child's linguistic faculty can operate at a metalinguistic level.
The functional use of each morpheme, their arrangement in words and sentences, the suprasegmental features, lexical-semantic mapping, culture-specific use of a language and the pragmatic skills converge to give a typical childlike multilingual output in a manner intelligible to multilingual people (with the same set of languages in combination). The result is appealing because, for the same ideas which the child used to express (perhaps with grammatically wrong expressions) in one language, she gradually starts showing cross-linguistic effects in her expressions. So the paper pleads for the separatist view from the very beginning of the holophrastic phase (as the child expresses herself in addressee-specific language); but the development of a metalinguistic ability that helps the child communicate in a sophisticated way according to the linguistic status of the addressee is unique to the multilingual child. This metalinguistic ability is independent of the mode of input of a multilingual child.
Keywords: code-mixing, cross-linguistic effect, early multilingualism, literal translation
Procedia PDF Downloads 299
83 Digital Transformation of Lean Production: Systematic Approach for the Determination of Digitally Pervasive Value Chains
Authors: Peter Burggräf, Matthias Dannapfel, Hanno Voet, Patrick-Benjamin Bök, Jérôme Uelpenich, Julian Hoppe
Abstract:
The increasing digitalization of value chains can help companies handle rising complexity in their processes and thereby reduce the steadily increasing planning and control effort in order to raise performance limits. Due to technological advances, companies face the challenge of smart value chains for the purpose of improving productivity, handling the increasing time and cost pressure and meeting the need for individualized production. Therefore, companies need to ensure quick and flexible decisions to create self-optimizing processes and, consequently, to make their production more efficient. Lean production, as the most commonly used paradigm for complexity reduction, reaches its limits when it comes to variant-flexible production and constantly changing market and environmental conditions. To lift the performance limits built into current value chains, new methods and tools must be applied. Digitalization provides the potential to derive these new methods and tools. However, companies lack the experience to harmonize different digital technologies, and there is no practicable framework that guides the transformation of current value chains into digitally pervasive value chains. Current research shows that a connection between lean production and digitalization exists. This link is based on factors such as people, technology and organization. In this paper, the introduced method for the determination of digitally pervasive value chains takes the factors people, technology and organization into account and extends existing approaches by a new dimension. It is the first systematic approach for the digital transformation of lean production and consists of four steps: The first step, 'target definition', describes the target situation and defines the depth of the analysis with regard to the inspection area and the level of detail.
The second step, 'analysis of the value chain', verifies the lean-ability of processes and places a special focus on the integration capacity of digital technologies in order to raise the limits of lean production. Furthermore, the 'digital evaluation process' ensures the usefulness of digital adaptations regarding their practicability and their integrability into the existing production system. Finally, the method defines actions to be performed based on the evaluation process and in accordance with the target situation. The validation and optimization of the proposed method in a German company from the electronics industry shows that the digital transformation of current value chains based on lean production raises their inbuilt performance limits.
Keywords: digitalization, digital transformation, Industrie 4.0, lean production, value chain
Procedia PDF Downloads 313
82 Densities and Volumetric Properties of {Difurylmethane + [(C5 – C8) N-Alkane or an Amide]} Binary Systems at 293.15, 298.15 and 303.15 K: Modelling Excess Molar Volumes by Prigogine-Flory-Patterson Theory
Authors: Belcher Fulele, W. A. A. Ddamba
Abstract:
The study of solvent systems contributes to the understanding of the intermolecular interactions that occur in binary mixtures. These interactions involve, among others, strong dipole-dipole interactions and weak van der Waals interactions, which are of significant application in pharmaceuticals, solvent extraction, reactor design, and solvent handling and storage processes. Binary mixtures of solvents can thus be used as a model to interpret the thermodynamic behaviour of a real solution mixture. Densities of pure DFM, n-alkanes (n-pentane, n-hexane, n-heptane and n-octane) and amides (N-methylformamide, N-ethylformamide, N,N-dimethylformamide and N,N-dimethylacetamide), as well as their [DFM + ((C5-C8) n-alkane or amide)] binary mixtures over the entire composition range, have been reported at temperatures of 293.15, 298.15 and 303.15 K and atmospheric pressure. These data have been used to derive the thermodynamic properties: the excess molar volume of solution, apparent molar volumes, excess partial molar volumes, limiting excess partial molar volumes, and limiting partial molar volumes of each component of a binary mixture. The results are discussed in terms of possible intermolecular interactions and structural effects that occur in the binary mixtures. The variation of excess molar volume with DFM composition for the [DFM + (C5-C7) n-alkane] binary mixtures exhibits sigmoidal behaviour, while for the [DFM + n-octane] binary system a positive deviation of the excess molar volume function was observed over the entire composition range. For each [DFM + (C5-C8) n-alkane] binary mixture, the excess molar volume decreased with increasing temperature. The excess molar volume for each [DFM + (NMF or NEF or DMF or DMA)] binary system was negative over the entire DFM composition range at each of the three temperatures investigated. The negative deviations in excess molar volume follow the order: DMA > DMF > NEF > NMF.
An increase in temperature has a greater effect on component self-association than on complex formation between the component molecules in the [DFM + (NMF or NEF or DMF or DMA)] binary mixtures, which shifts the equilibrium towards complex formation and gives a drop in excess molar volume with increasing temperature. The Prigogine-Flory-Patterson model has been applied at 298.15 K and reveals that the free volume term is the most important contribution to the experimental excess molar volume data for the [DFM + (n-pentane or n-octane)] binary systems. For the [DFM + (NMF or DMF or DMA)] binary mixtures, the interactional and characteristic pressure terms are the most important contributions in describing the sign of the experimental excess molar volume. These mixture systems contribute to the understanding of the interactions of polar solvents with amides (as protein models) and with non-polar solvents (alkanes) in biological systems.
Keywords: alkanes, amides, excess thermodynamic parameters, Prigogine-Flory-Patterson model
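The excess molar volume discussed above follows the standard definition from measured densities; a minimal sketch, where the molar masses and densities in the example call are illustrative placeholders, not the study's measured data:

```python
def excess_molar_volume(x1, M1, M2, rho_mix, rho1, rho2):
    """Excess molar volume V^E in cm^3/mol.

    x1: mole fraction of component 1; M1, M2: molar masses (g/mol);
    rho_mix, rho1, rho2: densities (g/cm^3) of the mixture and pure components.
    """
    x2 = 1.0 - x1
    v_mix = (x1 * M1 + x2 * M2) / rho_mix        # real molar volume of the mixture
    v_ideal = x1 * M1 / rho1 + x2 * M2 / rho2    # ideal (additive) molar volume
    return v_mix - v_ideal

# Illustrative call with placeholder values (not the study's data):
vE = excess_molar_volume(x1=0.5, M1=148.2, M2=114.2,
                         rho_mix=0.92, rho1=1.13, rho2=0.70)
```

A positive V^E indicates net expansion on mixing (weaker unlike interactions or packing disruption), a negative V^E net contraction, consistent with the sign discussion in the abstract.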
Procedia PDF Downloads 355
81 Commissioning, Test and Characterization of Low-Tar Biomass Gasifier for Rural Applications and Small-Scale Plant
Authors: M. Mashiur Rahman, Ulrik Birk Henriksen, Jesper Ahrenfeldt, Maria Puig Arnavat
Abstract:
Using biomass gasification to make producer gas is one of the promising sustainable energy options available for small-scale plants and rural applications for power and electricity. The tar content of the producer gas is the main problem if the gas is used directly as a fuel. A low-tar biomass (LTB) gasifier of approximately 30 kW capacity has been developed to solve this. A moving-bed gasifier with internal recirculation of pyrolysis gas is the basic principle of the LTB gasifier. The gasifier is built around the concept of mixing the pyrolysis gases with gasifying air and burning the mixture in a separate combustion chamber. Five tests were carried out with wood pellets and wood chips separately, with moisture contents of 9-34%. The LTB gasifier offers excellent opportunities for achieving extremely low tar in the producer gas. The gasifier's producer gas had an extremely low average tar content of 21.2 mg/Nm³ and an average lower heating value (LHV) of 4.69 MJ/Nm³. Tar contents in the different tests were in the range of 10.6-29.8 mg/Nm³. This low tar content makes the producer gas suitable for direct use in an internal combustion engine. Using mass and energy balances, the average gasifier capacity and cold gas efficiency (CGE) were 23.1 kW and 82.7% for wood chips, and 33.1 kW and 60.5% for wood pellets, respectively. The average heat loss in terms of higher heating value (HHV) was 3.2% of thermal input for wood chips and 1% for wood pellets, whereas the heat loss in terms of enthalpy was 1% of thermal input. Thus, the LTB gasifier performs better than typical gasifiers in terms of heat loss. Equivalence ratios (ER) in the range of 0.29 to 0.41 gave better performance in terms of heating value and CGE. The specific gas production yields in this ER range were 2.1-3.2 Nm³/kg. The heating value and CGE change proportionally with the producer gas yield.
The average gas composition (H₂ 19%, CO 19%, CO₂ 10%, CH₄ 0.7% and N₂ 51%) obtained for wood chips is richer than a typical producer gas composition. The temperature profile of the LTB gasifier also showed relatively low temperatures compared to a typical moving-bed gasifier: the average partial oxidation zone temperature was 970°C for wood chips, and the use of the separate combustor in the partial oxidation zone substantially lowers the bed temperature to 750°C. During the test, the engine was started and operated completely on the producer gas. The engine ran well on the produced gas, and no deposits were observed in the engine afterwards. Part of the producer gas flow was used for engine operation, and the corresponding electrical power was 1.5 kW continuously, with a maximum power of 2.5 kW also observed, against a maximum generator capacity of 3 kW. A thermodynamic equilibrium model is in good agreement with the experimental results and correctly predicts the equilibrium bed temperature, gas composition, LHV of the producer gas and ER when a heat loss of 4% of the energy input is considered.
Keywords: biomass gasification, low-tar biomass gasifier, tar elimination, engine, deposits, condensate
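The cold gas efficiency figures above follow the usual energy-balance definition; a minimal sketch, where the wood-chip fuel LHV of 16.4 MJ/kg is an assumed placeholder, not a value reported in the study:

```python
def cold_gas_efficiency(gas_yield, lhv_gas, lhv_fuel):
    """CGE (%) from gas yield (Nm^3 per kg fuel), producer-gas LHV (MJ/Nm^3)
    and fuel LHV (MJ/kg): the fraction of the fuel's chemical energy
    recovered as chemical energy in the cold producer gas."""
    return 100.0 * gas_yield * lhv_gas / lhv_fuel

# Example using the reported averages of 2.9 Nm^3/kg and 4.69 MJ/Nm^3,
# with an assumed (not reported) wood-chip LHV of 16.4 MJ/kg:
cge = cold_gas_efficiency(2.9, 4.69, 16.4)  # roughly 83%, near the reported 82.7%
```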
Procedia PDF Downloads 114
80 Classification of Foliar Nitrogen in Common Bean (Phaseolus Vulgaris L.) Using Deep Learning Models and Images
Authors: Marcos Silva Tavares, Jamile Raquel Regazzo, Edson José de Souza Sardinha, Murilo Mesquita Baesso
Abstract:
Common beans are a widely cultivated and consumed legume globally, serving as a staple food for humans, especially in developing countries, due to their nutritional characteristics. Nitrogen (N) is the most limiting nutrient for productivity, and foliar analysis is crucial to ensure balanced nitrogen fertilization. Excessive N applications can cause, either in isolation or cumulatively, soil and water contamination and plant toxicity, and can increase the plants' susceptibility to diseases and pests. However, the quantification of N using conventional methods is time-consuming and costly, demanding new technologies to optimize the adequate supply of N to plants. Thus, it becomes necessary to establish constant monitoring of the foliar content of this macronutrient, mainly at the V4 stage, aiming at the precision management of nitrogen fertilization. In this work, the objective was to evaluate the performance of a deep learning model, ResNet-50, in the classification of foliar nitrogen in common beans using RGB images. The BRS Estilo cultivar was sown in a greenhouse in a completely randomized design with four nitrogen doses (T1 = 0 kg N ha⁻¹, T2 = 25 kg N ha⁻¹, T3 = 75 kg N ha⁻¹, and T4 = 100 kg N ha⁻¹) and 12 replications. Pots of 5 L capacity were used with a substrate composed of 43% soil (Neossolo Quartzarênico), 28.5% crushed sugarcane bagasse, and 28.5% cured bovine manure. Plants were supplied with 5 mm of water per day. The application of urea (45% N) and the acquisition of images occurred 14 and 32 days after sowing, respectively. A code developed in Matlab© R2022b was used to cut the original images into smaller blocks, producing an image bank of four folders representing the four classes, labeled T1, T2, T3, and T4, each containing 500 images of 224x224 pixels obtained from plants cultivated under the different N doses. The Matlab© R2022b software was used for the implementation and performance analysis of the model.
Performance was evaluated with a set of metrics, including accuracy (AC), F1-score (F1), specificity (SP), area under the curve (AUC), and precision (P). ResNet-50 showed high performance in the classification of foliar N levels in common beans, with an accuracy of 85.6%. The F1 for classes T1, T2, T3, and T4 was 76, 72, 74, and 77%, respectively. This study revealed that the use of RGB images combined with deep learning can be a promising alternative to slow laboratory analyses, capable of optimizing the estimation of foliar N. This can allow rapid intervention by the producer to achieve higher productivity and less fertilizer waste. Future work is encouraged to develop mobile devices capable of handling images using deep learning for the classification of the nutritional status of plants in situ.
Keywords: convolutional neural network, residual network 50, nutritional status, artificial intelligence
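The accuracy and per-class F1 metrics named above are computed from a multi-class confusion matrix in the standard way; a minimal sketch, where the 4x4 matrix is a made-up illustration, not the study's results:

```python
def accuracy(cm):
    """Overall accuracy: trace of the confusion matrix over the total count."""
    total = sum(sum(row) for row in cm)
    correct = sum(cm[i][i] for i in range(len(cm)))
    return correct / total

def f1_per_class(cm, k):
    """F1 for class k: harmonic mean of precision and recall."""
    tp = cm[k][k]
    fp = sum(cm[i][k] for i in range(len(cm))) - tp  # predicted k, true other class
    fn = sum(cm[k]) - tp                             # true k, predicted other class
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Illustrative confusion matrix for classes T1..T4 (rows = true, cols = predicted):
cm = [[40, 5, 3, 2],
      [6, 35, 7, 2],
      [2, 8, 37, 3],
      [1, 2, 4, 43]]
acc = accuracy(cm)           # overall accuracy
f1_t1 = f1_per_class(cm, 0)  # F1 for class T1
```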
Procedia PDF Downloads 19
79 Harrison’s Stolen: Addressing Aboriginal and Indigenous Islanders Human Rights
Authors: M. Shukry
Abstract:
According to the United Nations' Universal Declaration of Human Rights of 1948, every human being is entitled to rights in life that should be respected by others and protected by the state and community. Such rights are inherent regardless of colour, ethnicity, gender, religion or otherwise, and it is expected that all humans alike have the right to live without discrimination of any sort. However, that has not been the case with Aborigines in Australia. Over a long period of time, the governments of the States and Territories and the Australian Commonwealth denied the Aboriginal and Indigenous inhabitants of the Torres Strait Islands such rights. Past Australian governments set policies and laws that enabled them to forcefully remove Indigenous children from their parents, creating lost generations living the trauma of the loss of cultural identity, alienation and even their own selfhood. Intending to reduce the native population and Aboriginal culture while, on the other hand, assimilating them into mainstream society, the governments gave themselves the right to remove children from their families with no hope of return. That practice has led to tragic consequences due to the trauma that has affected those children, an experience that is depicted by Jane Harrison in her play Stolen. The drama is the outcome of a six-year project on lost children and was first performed in 1997 in Melbourne. Only five actors appear on the stage, playing all the different characters, whether the main protagonists or the remaining cast, with absent characters present as voices. The play outlines the lives of five children who were taken from their parents at an early age, with a disastrous negative impact that differs from one to the other. Unknown to each other, what connects them is having been placed in a children’s home.
The purpose of this paper is to analyse the play’s text in light of the 1948 Universal Declaration of Human Rights, using it as a lens that reflects the atrocities practised against the Aborigines. It highlights how such practices constituted an outrageous violation of those natives’ rights as human beings. Harrison’s dramatic technique for conveying the children’s experiences is a non-linear structure, fluctuating between past and present, which are linked together within each of the five characters, reflecting their suffering and pain to create an emotional link between them and the audience. Her dramatic handling of the issue by fusing tragedy with humour as well as symbolism is a successful technique for revealing the traumatic memories of those children and their present lives. The play has made a difference in beginning to address the right of all children to be with their families, which gives real meaning to having a home and an identity as a people.
Keywords: aboriginal, audience, Australia, children, culture, drama, home, human rights, identity, Indigenous, Jane Harrison, memory, scenic effects, setting, stage, stage directions, Stolen, trauma
Procedia PDF Downloads 300
78 Factors Mitigating against the Use of Alternative to Antibiotics (Phytobiotics) In Poultry Production among Farming Households in Nigeria
Authors: Akinola Helen Olufunke, Soetan Olatunbosun Jonathan, Adeleye Oludamola
Abstract:
Introduction: Antibiotic resistance has grown significantly, which is a major cause for concern. There have not been many significant developments in antibiotics over the past few decades, and practically all of those currently in use are losing effectiveness against pathogenic germs. Researchers are therefore focusing more on the biologically active compounds found in plants, particularly phytobiotics in poultry production. Consumption of chicken products is among the greatest in the country, but numerous nations, including Nigeria, use excessive amounts of antibiotics in poultry farming, endangering the safety of such goods through antimicrobial residues. Drug resistance has become a widespread issue as a result of the risky use of antibiotics in the chicken production industry. As replacements for antibiotics, biotic or natural products such as phytobiotics (also known as botanicals or phytogenics) have drawn a lot of interest. Phytobiotics or their components are a relatively recent category of natural herbs that have gained acceptance and favour among chicken farmers. The addition of several phytobiotic additives to poultry feed has demonstrated the capacity to improve the productivity of both broilers and layers. Design: An experimental research design and cross-sectional study were carried out with 300 purposively selected farming households across the six geopolitical zones of Nigeria. Data Analysis: A semi-structured questionnaire was administered to each farmer; quantitative data were analyzed using the Statistical Package for the Social Sciences (SPSS), while the Chi-square test was used to analyze the factors mitigating against the use of phytobiotics. Result: The results show that the benefits associated with the use of phytobiotics include growth promotion in chickens and enhancement of the productive performance of broilers and layers, which could be attributed to their antioxidant activity.
The results further revealed that factors mitigating against the use of phytobiotics were lack of knowledge of their use, overdose or underdose usage, and the seasonal availability of phytobiotics. Others are the educational level of the farmers, intrinsic motivation, income, poultry farming experience, the price of phytobiotic-based additive feeds, and the frequency of extension agents' visits. Conclusion: The difficulties associated with using phytobiotics in chicken farms limit farmers' willingness to use them to boost productivity. The study found that most farmers lacked knowledge, which prevented them from applying this concept, turning their poultry into a viable enterprise, and being creative. They also believed that packaged phytobiotic-based additive feed was expensive, and lastly, some phytobiotics are only seasonally available. Recommendation: Further research on phytobiotic use in Nigeria should be carried out in order to establish its efficiency, safety, and awareness.
Keywords: mitigating, antibiotics, phytobiotics, poultry farming
Procedia PDF Downloads 171
77 Safety Assessment of Traditional Ready-to-Eat Meat Products Vended at Retail Outlets in Kebbi and Sokoto States, Nigeria
Authors: M. I. Ribah, M. Jibir, Y. A. Bashar, S. S. Manga
Abstract:
Food safety is a significant and growing public health problem in the world, and in Nigeria as a developing country, since food-borne diseases are important contributors to the huge burden of sickness and death of humans. In Nigeria, traditional ready-to-eat meat products (RTE-MPs) like balangu, tsire and guru, and dried meat products like kilishi, dambun nama and banda, are reported to be highly appreciated because of their eating qualities. The consumption of these products is considered safe due to the treatments usually involved in their production. However, during processing and handling, the products can be contaminated by pathogens that cause food poisoning. Therefore, a hazard identification for pathogenic bacteria on some traditional RTE-MPs was conducted in Kebbi and Sokoto States, Nigeria. A total of 116 RTE-MP samples (balangu 38, kilishi 39 and tsire 39) were obtained from retail outlets and analyzed using standard cultural microbiological procedures in general and selective enrichment media to isolate the target pathogens. A six-fold serial dilution was prepared, and colonies were counted using the pour-plating method, with dilutions assigned to pre-labeled Petri dishes for each sample. A volume of 10-12 ml of molten nutrient agar cooled to 42-45°C was poured into each Petri dish, and 1 ml from each of the 10², 10⁴ and 10⁶ dilutions of every sample was poured onto the corresponding pre-labeled plate, after which colonies were counted. The isolated pathogens were identified and confirmed after a series of biochemical tests. Frequencies and percentages were used to describe the presence of pathogens. The General Linear Model was used to analyze data on pathogen presence according to RTE-MPs, and means were separated using the Tukey test at the 0.05 confidence level. Of the 116 RTE-MP samples collected, 35 (30.17%) were found to be contaminated with some of the tested pathogens.
Prevalence results showed that Escherichia coli, Salmonella and Staphylococcus aureus were present in the samples. The mean total bacterial count was 23.82×10⁶ cfu/g. The frequencies of the individual pathogens isolated were Staphylococcus aureus 18 (15.51%), Escherichia coli 12 (10.34%) and Salmonella 5 (4.31%). Also, among the RTE-MPs tested, the total bacterial counts were found to differ significantly (P < 0.05), with 1.81, 2.41 and 2.9×10⁴ cfu/g for tsire, kilishi, and balangu, respectively. The study concluded that the presence of pathogenic bacteria in balangu could pose grave health risks to consumers and hence recommended good manufacturing practices in the production of balangu to improve the product’s safety.
Keywords: ready-to-eat meat products, retail outlets, public health, safety assessment
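The plate counts above scale to cfu/g in the standard way (colonies times the dilution factor, divided by the plated volume); a minimal sketch, where the plate count of 34 colonies is a made-up example, not a figure from the study:

```python
def cfu_per_gram(colonies, dilution_factor, volume_plated_ml):
    """Colony-forming units per gram, assuming the initial suspension
    contains 1 g of sample per ml."""
    return colonies * dilution_factor / volume_plated_ml

# Example: 34 colonies (a made-up count) on the 10^6 dilution plate, 1 ml plated:
count = cfu_per_gram(colonies=34, dilution_factor=10**6, volume_plated_ml=1.0)
# count == 3.4e7 cfu/g
```

Plates are normally chosen in the countable range (roughly 25-250 colonies), which is why several dilutions are plated per sample.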
Procedia PDF Downloads 133
76 Characterization of Phenolic Compounds from Carménère Wines during Aging with Oak Wood (Staves, Chips and Barrels)
Authors: E. Obreque-Slier, J. Laqui-Estaña, A. Peña-Neira, M. Medel-Marabolí
Abstract:
Wine is an important source of polyphenols. Red wines show important concentrations of non-flavonoid (gallic acid, ellagic acid, caffeic acid and coumaric acid) and flavonoid compounds [(+)-catechin, (-)-epicatechin, (+)-gallocatechin and (-)-epigallocatechin]. However, significant variability in the quantitative and qualitative distribution of chemical constituents in wine is to be expected, depending on an array of important factors, such as varietal differences of Vitis vinifera and cultural practices. It has been observed that Carménère grapes present a differential composition and evolution of phenolic compounds when compared to other varieties, specifically Cabernet Sauvignon grapes. Likewise, among the cultural practices, aging in contact with oak wood is a highly relevant factor. The extraction of different polyphenolic compounds from oak wood into wine during its aging produces both qualitative and quantitative changes. Recently, many new techniques have been introduced in winemaking. One of these involves putting new pieces of wood (oak chips or inner staves) into inert containers. It offers some distinct and previously unavailable flavour advantages, as well as new options in wine handling. To the best of our knowledge, there is no information about the behaviour of Carménère wines (the emblematic Chilean cultivar) in contact with oak wood. In addition, the effect of aging time and wood product (barrels, chips or staves) on the phenolic composition of Carménère wines has not been studied. This study aims to characterize the condensed and hydrolyzable tannins of Carménère wines during aging with staves, chips and barrels of French oak wood. The experimental design was completely randomized with two independent factors: aging time (0-12 months) and format of wood (barrel, chips and staves).
The wines were characterized by spectrophotometric analysis (total tannins and fractionation of proanthocyanidins into monomers, oligomers and polymers) and HPLC-DAD analysis (ellagitannins). The wines in contact with the different oak wood products showed similar total tannin contents during the study, while the control wine (without oak wood) presented a lower content of these compounds. In addition, the polymeric proanthocyanidin fraction was the most abundant and the monomeric fraction the least abundant in all treatments on both sampling dates. However, significant differences in each fraction were observed among wines in contact with barrels, chips, and staves on two sampling dates. Finally, the wine from barrels presented the highest content of ellagitannins from the fourth to the last sampling date. In conclusion, the use of alternative formats of oak wood affects the chemical composition of wines during aging, and these enological products are an interesting alternative for contributing tannins to wine.
Keywords: enological inputs, oak wood aging, polyphenols, red wine
Procedia PDF Downloads 158
75 A Q-Methodology Approach for the Evaluation of Land Administration Mergers
Authors: Tsitsi Nyukurayi Muparari, Walter Timo De Vries, Jaap Zevenbergen
Abstract:
The nature of land administration accommodates diversity in terms of both spatial data handling activities and the expertise involved, which supposedly aims to satisfy the unpredictable demands for land data and the diverse demands of customers arising from the land. However, it is known that strategic restructuring decisions are in most cases resisted in favour of complex structures that strive to accommodate professional diversity and diverse roles in the field of land administration. Yet despite this widely accepted knowledge, there is scant theoretical knowledge concerning psychological methodologies that can extract the deeper perceptions of diverse spatial experts in order to explain the invisible control arm behind the polarised reception of ideas of change. This paper evaluates Q-methodology in the context of a cadastre and land registry merger (under one agency), using the Swedish cadastral system as a case study. Precisely, the aim of this paper is to evaluate the effectiveness of Q-methodology for modelling the diverse psychological perceptions of spatial professionals involved in the widely contested decision of merging the cadastre and land registry components of land administration. The empirical approach prescribed by Q-methodology starts with concourse development, followed by the design of the statements and the Q-sort instrument, selection of the participants, the Q-sorting exercise, factor extraction with PQMethod, and finally narrative development by the logic of abduction. The paper uses 36 statements developed from a dominant competing-values theory that stands out for its reliability and validity, purposively selects 19 participants for the Q-sorting exercise, proceeds with factor extraction using the varimax and judgemental rotation provided by PQMethod, and effects the narrative construction using the logic of abduction.
The findings from the diverse perceptions of cadastral professionals in the merger decision of the land registry and cadastre components in Sweden’s mapping agency (Lantmäteriet) show that focus is inclined towards perfecting the relationship between the legal expertise and the technical spatial expertise. There is much emphasis on tradition, loyalty and communication attributes, which concern the organisation’s internal environment, rather than on innovation and market attributes, which reveal customer behaviour and needs arising from changing humankind-land needs. It can be concluded that Q methodology offers effective tools for a psychological approach to the evaluation and gradation of strategic change decisions through extracting the local perceptions of spatial experts.
Keywords: cadastre, factor extraction, land administration merger, land registry, Q methodology, rotation
Procedia PDF Downloads 194
74 “Uninformed” Religious Orientation Can Lead to Violence in Any Given Community: The Case of African Independence Churches in South Africa
Authors: Ngwako Daniel Sebola
Abstract:
Introductory Statement: Religions are necessary as they offer and teach something to their adherents. People in one religion may not have a complete understanding of the Supreme Being (Deity) of a religion other than their own. South Africa, like other countries in the world, hosts various religions, including Christianity. Almost 80% of the South African population adheres to the Christian faith, though in different denominations and sects. Each church fulfils spiritual needs that perhaps others cannot fill. The African Independent Churches form one of the denominational families in the country. These churches arose as a protest against Western forms and expressions of Christianity; their major concern was to develop an indigenous expression of Christianity. The relevance of the African Independent Churches includes addressing the needs of the people holistically. Controlling diseases was an important aspect of change in different historical periods. Through healing services, leaders of African churches are able to attract many followers. The healing power associated with the founders of many African Initiated Churches leads people to follow and respect them as true leaders within many African communities. Despite their strong points, the African Independent Churches, like many others, face a variety of challenges, especially conflicts. Ironically, destructive conflicts have resulted in violence. Such violence demonstrates a lack of informed religious orientation among those concerned. This paper investigates and analyses the causes of conflict and violence in the African Independent Churches. The researcher used the Shembe and International Pentecostal Holiness Churches in South Africa as a point of departure. As a solution to curb violence, the researcher suggests useful strategies for handling conflicts. Methodology: Comparative and qualitative approaches were used as methods of collecting data in this research.
The intention is to analyse the similarities and differences of violence among members of the Shembe and International Pentecostal Holiness Churches. Equally important, the researcher aims to obtain data through interviews, questionnaires and focus groups, among others, and to interview fifteen individuals from both churches. Finding: Leadership squabbles and power struggles appear to be the main contributing factors to violence in many Independent Churches. Ironically, violence has resulted in the loss of life and destruction of property, as in the case of the Shembe and International Pentecostal Holiness Churches. Violence is an indication that congregations and some leaders have not been properly equipped to deal with conflict. Concluding Statement: Conflict is a common part of human existence in any given community. The concern is that when such conflict becomes contagious, it leads to violence. There is a need for conscious and objective understanding in order to devise appropriate measures to handle conflict. Conflict management calls for emotional maturity, self-control, empathy, patience, tolerance and an informed religious orientation.
Keywords: African, church, religion, violence
Procedia PDF Downloads 116
73 Outputs from the Implementation of 'PHILOS' Programme: Emergency Health Response to Refugee Crisis, Greece, 2017
Authors: K. Mellou, G. Anastopoulos, T. Zakinthinos, C. Botsi, A. Terzidis
Abstract:
‘PHILOS – Emergency health response to refugee crisis’ is a programme of the Greek Ministry of Health, implemented by the Hellenic Center for Disease Control and Prevention (HCDCP). The programme is funded by the Asylum, Migration and Integration Fund (AMIF) of the EU’s DG Migration and Home Affairs. With EU Member States facing accelerating migration flows in recent years, Greece inevitably occupies a prominent position on the migratory map due to its geographical location. The main objectives of the programme are: a) reinforcement of the capacity of the public health system and enhancement of epidemiological surveillance to cover the refugee/migrant population, b) provision of on-site primary health care and psychological support services, and c) strengthening of the national health care system task-force. The basic methods for achieving the aforementioned goals are: a) implementation of a syndromic surveillance system at camps and enhancement of public health response with the use of mobile medical units (Sub-action A), b) enhancement of health care services inside the camps by increasing human resources and implementing standard operating procedures (Sub-action B), and c) reinforcement of the national health care system (primary healthcare units, hospitals, and emergency care spots) of affected regions with personnel (Sub-action C). As a result, 58 health professionals were recruited under sub-action 2 and 10 mobile unit teams (one or two per health region) were formed. The main actions taken so far by the mobile units are the evaluation of syndromic surveillance, of living conditions at camps, and of medical services. Also, vaccination coverage of the child population was assessed, and more than 600 catch-up vaccinations were performed by the end of June 2017. Mobile units supported the transportation of refugees/migrants from camps to medical services, reducing the load on the National Center for Emergency Care (more than 350 transportations performed).
The total number of health professionals (MDs, nurses, etc.) placed at camps was 104. Common practices were implemented in the recording and collection of psychological and medical history forms at the camps. Protocols regarding maternity care, gender-based violence and the handling of violent incidents were produced and distributed to personnel working at the camps. Finally, 290 health care professionals were placed at primary healthcare units, public hospitals and the National Center for Emergency Care in affected regions. The programme has also supported training activities inside the camps and has resulted in better coordination of the services offered on site.
Keywords: migrants, refugees, public health, syndromic surveillance, national health care system, primary care, emergency health response
Procedia PDF Downloads 206
72 Inhibition of Mild Steel Corrosion in Hydrochloric Acid Medium Using an Aromatic Hydrazide Derivative
Authors: Preethi Kumari P., Shetty Prakasha, Rao Suma A.
Abstract:
Mild steel is widely employed as a construction material for pipework in oil and gas production, such as downhole tubulars, flow lines and transmission pipelines, and in chemical and allied industries for handling acids, alkalis and salt solutions, due to its excellent mechanical properties and low cost. Acid solutions are widely used for the removal of undesirable scale and rust in many industrial processes. Among the commercially available acids, hydrochloric acid is widely used for pickling, cleaning, de-scaling and the acidization of oil wells. Mild steel exhibits poor corrosion resistance in the presence of hydrochloric acid. Its high reactivity in hydrochloric acid is due to the soluble nature of the ferrous chloride formed; the cementite phase (Fe3C) normally present in the steel is also readily soluble in hydrochloric acid. Pitting attack is also reported to be a major form of corrosion of mild steel in the presence of high concentrations of acid, causing complete destruction of the metal. Hydrogen from the acid reacts with the metal surface, makes it brittle and causes cracks, which leads to pitting-type corrosion. The use of chemical inhibitors to minimize the rate of corrosion has been considered the first line of defense against corrosion. In spite of the long history of corrosion inhibition, a highly efficient and durable inhibitor that can completely protect mild steel in aggressive environments is yet to be realized. It is clear from the literature that there is ample scope for the development of new organic inhibitors that can be conveniently synthesized from relatively cheap raw materials and provide good inhibition efficiency with the least risk of environmental pollution.
The aim of the present work is to evaluate the electrochemical parameters of the corrosion inhibition behaviour of an aromatic hydrazide derivative, 4-hydroxy-N'-[(E)-1H-indol-2-ylmethylidene]benzohydrazide (HIBH), on mild steel in 2 M hydrochloric acid using Tafel polarization and electrochemical impedance spectroscopy (EIS) techniques at 30-60 °C. The results showed that inhibition efficiency increased with increasing inhibitor concentration and decreased marginally with increasing temperature. HIBH showed a maximum inhibition efficiency of 95% at a concentration of 8×10⁻⁴ M at 30 °C. Polarization curves showed that HIBH acts as a mixed-type inhibitor. The adsorption of HIBH on the mild steel surface obeys the Langmuir adsorption isotherm. The adsorption process at the mild steel/hydrochloric acid solution interface followed mixed adsorption, with predominantly physisorption at lower temperatures and chemisorption at higher temperatures. Thermodynamic parameters for the adsorption process and kinetic parameters for the metal dissolution reaction were determined.
Keywords: electrochemical parameters, EIS, mild steel, Tafel polarization
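The efficiency and isotherm calculations behind such an EIS study can be sketched as follows. This is an illustrative sketch using hypothetical charge-transfer resistances (not the article's measured data): inhibition efficiency from EIS, surface coverage θ = IE/100, and a linear fit of C/θ against C, whose slope near unity supports the Langmuir isotherm.

```python
# Illustrative sketch (hypothetical values, not the paper's data):
# inhibition efficiency from charge-transfer resistances, and a
# Langmuir check: C/theta vs C should be linear with slope ~ 1.

def inhibition_efficiency(rct_blank, rct_inhibited):
    """IE (%) from EIS charge-transfer resistances."""
    return (1.0 - rct_blank / rct_inhibited) * 100.0

def langmuir_fit(concs, thetas):
    """Least-squares line through (C, C/theta); intercept = 1/K_ads."""
    xs = concs
    ys = [c / t for c, t in zip(concs, thetas)]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return slope, intercept

# Hypothetical blank Rct and Rct at rising inhibitor concentration
rct_blank = 25.0                           # ohm cm^2, uninhibited
concs = [1e-4, 2e-4, 4e-4, 8e-4]           # mol/L
rcts = [120.0, 190.0, 310.0, 500.0]        # ohm cm^2, inhibited

ies = [inhibition_efficiency(rct_blank, r) for r in rcts]
thetas = [ie / 100.0 for ie in ies]        # surface coverage
slope, intercept = langmuir_fit(concs, thetas)
```

With these invented resistances the highest concentration reproduces a 95% efficiency and the fitted slope is close to 1, the signature of Langmuir adsorption.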
Procedia PDF Downloads 336
71 The Data Quality Model for the IoT-Based Real-Time Water Quality Monitoring Sensors
Authors: Rabbia Idrees, Ananda Maiti, Saurabh Garg, Muhammad Bilal Amin
Abstract:
IoT devices are the basic building blocks of an IoT network; they generate an enormous volume of real-time, high-speed data to help organizations and companies take intelligent decisions. Integrating this enormous data from multiple sources and transferring it to the appropriate client is fundamental to IoT development. Handling this huge number of devices along with the huge volume of data is very challenging. IoT devices are battery-powered and resource-constrained, and to provide energy-efficient communication these devices go to sleep and wake up periodically and aperiodically, depending on the traffic load, to reduce energy consumption. Sometimes these devices get disconnected due to battery depletion. If a node is not available in the network, the IoT network provides incomplete, missing, and inaccurate data. Moreover, many IoT applications, like vehicle tracking and patient tracking, require the IoT devices to be mobile. Due to this mobility, if the distance of a device from the sink node becomes greater than allowed, the connection is lost, and other devices join the network to replace the broken-down and departed devices. This makes IoT devices dynamic in nature, which brings uncertainty and unreliability into the IoT network and hence produces poor-quality data; due to this dynamic nature, we do not know the actual cause of abnormal data. If data are of poor quality, decisions are likely to be unsound. It is therefore highly important to process data and estimate data quality before putting it to use in IoT applications. In the past, many researchers tried to estimate data quality and provided several machine learning (ML), stochastic and statistical methods to analyse stored data in the data processing layer, without focusing on the challenges and issues arising from the dynamic nature of IoT devices and how it impacts data quality.
This research presents a comprehensive review of the impact of the dynamic nature of IoT devices on data quality, and a data quality model that can deal with this challenge and produce good-quality data. The model targets sensors monitoring water quality; DBSCAN clustering and weather sensors are used to build it. An extensive study has been done on the relationship between the data of weather sensors and of sensors monitoring the water quality of lakes and beaches, and a detailed theoretical analysis is presented of the correlation between the independent data streams of the two sets of sensors. With the help of this analysis and DBSCAN, a data quality model is prepared. The model encompasses five dimensions of data quality: it detects and removes outliers, assesses completeness, identifies patterns of missing values, and checks the accuracy of the data with the help of cluster position. Finally, statistical analysis is performed on the clusters formed by DBSCAN, and consistency is evaluated through the coefficient of variation (CoV).
Keywords: clustering, data quality, DBSCAN, Internet of Things (IoT)
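The two key ingredients named above, DBSCAN-based outlier flagging and CoV-based consistency, can be sketched in miniature. The snippet below is a minimal, self-contained illustration (the readings, `eps` and `min_pts` values are invented, and this is not the authors' model): a tiny one-dimensional DBSCAN labels an implausible sensor reading as noise, and the coefficient of variation of the surviving cluster measures its consistency.

```python
# Minimal sketch (illustrative parameters, not the paper's model):
# flag outliers in a 1-D sensor stream with a tiny DBSCAN, then
# check cluster consistency via the coefficient of variation (CoV).
import math

def dbscan_1d(values, eps, min_pts):
    """Label each point with a cluster id, or -1 for noise/outlier."""
    labels = [None] * len(values)
    cluster = -1
    for i, v in enumerate(values):
        if labels[i] is not None:
            continue
        neighbours = [j for j, w in enumerate(values) if abs(w - v) <= eps]
        if len(neighbours) < min_pts:
            labels[i] = -1            # noise: candidate bad reading
            continue
        cluster += 1
        labels[i] = cluster
        seeds = list(neighbours)
        while seeds:                  # expand the cluster
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster   # noise re-labelled as border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nbrs = [k for k, w in enumerate(values)
                    if abs(w - values[j]) <= eps]
            if len(nbrs) >= min_pts:  # core point: keep expanding
                seeds.extend(nbrs)
    return labels

def cov(values):
    """Coefficient of variation: population std / mean."""
    m = sum(values) / len(values)
    var = sum((v - m) ** 2 for v in values) / len(values)
    return math.sqrt(var) / m

# Hypothetical turbidity readings with one sensor glitch (99.0)
readings = [5.1, 5.3, 5.0, 5.2, 99.0, 5.4, 5.1]
labels = dbscan_1d(readings, eps=0.5, min_pts=3)
clean = [v for v, lab in zip(readings, labels) if lab != -1]
```

The glitch reading has no neighbours within `eps`, so DBSCAN marks it as noise; the remaining cluster has a small CoV, indicating a consistent stream.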
Procedia PDF Downloads 139
70 Influence of Counter-Face Roughness on the Friction of Bionic Microstructures
Authors: Haytam Kasem
Abstract:
The problem of quick and easily reversible attachment has become of great importance in different fields of technology. For this reason, a new field of adhesion science has emerged during the last decade, essentially inspired by animals and insects which, during their natural evolution, have developed remarkable biological attachment systems allowing them to adhere to and run on walls and ceilings of uneven surfaces. Potential applications of engineering bio-inspired solutions include climbing robots, handling systems for wafers in nanofabrication facilities, and mobile sensor platforms, to name a few. However, despite the efforts to apply bio-inspired patterned adhesive surfaces to the biomedical field, they are still at an early stage compared with their conventional uses in the other industries mentioned above. In fact, some critical issues still need to be addressed for the wide usage of bio-inspired patterned surfaces as advanced biomedical platforms. For example, the surface durability and long-term stability of surfaces with high adhesive capacity should be improved, as should the friction and adhesion capacities of these bio-inspired microstructures when contacting rough surfaces. One of the well-known prototypes for bio-inspired attachment systems is the biomimetic wall-shaped hierarchical microstructure for gecko-like attachment. Although the physical background of these attachment systems is widely understood, the influence of counter-face roughness and its relationship with the friction force generated when sliding against a wall-shaped hierarchical microstructure have yet to be fully analyzed and understood. To elucidate the effect of counter-face roughness on the friction of the biomimetic wall-shaped hierarchical microstructure, we have replicated the isotropic topography of 12 different surfaces using replicas made of the same epoxy material.
The different counter-faces were fully characterized under a 3D optical profilometer to measure roughness parameters. The friction forces generated by the spatula-shaped microstructure in contact with the tested counter-faces were measured on a home-made tribometer and compared with the friction forces generated by the spatulae in contact with a smooth reference. It was found that classical roughness parameters, such as the average roughness Ra and others, could not explain the topography-related variation in friction force. This led us to develop an integrated roughness parameter obtained by combining several parameters: the mean asperity radius of curvature (R), the asperity density (η), the standard deviation of asperity heights (σ) and the mean asperity angle (SDQ). This new integrated parameter is capable of explaining the variation in the friction measurements. Based on the experimental results, we developed and validated an analytical model to predict the variation of the friction force as a function of the roughness parameters of the counter-face and the applied normal load.
Keywords: friction, bio-mimetic micro-structure, counter-face roughness, analytical model
Procedia PDF Downloads 239
69 A Retrospective Cohort Study on an Outbreak of Gastroenteritis Linked to a Buffet Lunch Served during a Conference in Accra
Authors: Benjamin Osei Tutu, Sharon Annison
Abstract:
On 21st November 2016, an outbreak of foodborne illness occurred after a buffet lunch served during a stakeholders’ consultation meeting held in Accra. An investigation was conducted to characterise the affected people; determine the etiologic food, the source of contamination and the etiologic agent; and implement appropriate public health measures to prevent future occurrences. A retrospective cohort study was conducted via telephone interviews, using a structured questionnaire developed from the buffet menu. A case was defined as any person suffering from symptoms of foodborne illness, e.g. diarrhoea and/or abdominal cramps, after eating food served during the stakeholder consultation meeting in Accra on 21st November 2016. The exposure status of all members of the cohort was assessed by taking the food history of each respondent during the telephone interview. The data obtained were analysed using Epi Info 7. An environmental risk assessment was conducted to ascertain the source of the food contamination. Risks of foodborne infection from the foods eaten were determined using attack rates and odds ratios. Data were obtained from 54 people who consumed food served during the stakeholders’ meeting. Of this population, 44 people reported symptoms of food poisoning, an overall attack rate of 81.45%. The peak incubation period was seven hours, with minimum and maximum incubation periods of four and 17 hours, respectively. The commonly reported symptoms were diarrhoea (97.73%, 43/44), vomiting (84.09%, 37/44) and abdominal cramps (75.00%, 33/44). From the incubation period, duration of illness and the symptoms, toxin-mediated food poisoning was suspected. The environmental risk assessment of the implicated catering facility indicated a lack of time/temperature control, inadequate knowledge of food safety among workers, and sanitation issues. A limited number of food samples was received for microbiological analysis.
Multivariate analysis indicated that illness was significantly associated with consumption of the snacks served (OR 14.78, p < 0.001). No stool or blood samples, nor samples of the etiologic food, were available for organism isolation; however, the suspected etiologic agent was Staphylococcus aureus or Clostridium perfringens. The outbreak was probably due to the consumption of an unwholesome snack (tuna sandwich or chicken). The contamination and/or growth of the etiologic agent in the snack may be due to breakdowns in cleanliness, time/temperature control and good food handling practices. Training of food handlers in basic food hygiene and safety is recommended.
Keywords: Accra, buffet, conference, C. perfringens, cohort study, food poisoning, gastroenteritis, office workers, Staphylococcus aureus
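The two measures this cohort study relies on can be sketched briefly. The overall attack rate (44 ill of 54 interviewed) comes from the abstract; the 2x2 exposure table used for the odds ratio is hypothetical, invented here purely to show the arithmetic (the article reports only the resulting OR for the snacks).

```python
# Sketch of the cohort-study arithmetic. The 44/54 figure is from the
# abstract; the 2x2 snack-exposure counts below are hypothetical.

def attack_rate(ill, total):
    """Attack rate as a percentage of the exposed cohort."""
    return ill / total * 100.0

def odds_ratio(ill_exposed, well_exposed, ill_unexposed, well_unexposed):
    """Cross-product ratio of a 2x2 exposure/illness table."""
    return (ill_exposed * well_unexposed) / (well_exposed * ill_unexposed)

# Overall attack rate: 44 ill among the 54 people interviewed
overall = attack_rate(44, 54)            # ~81.5 %

# Hypothetical snack table (a=ill/ate, b=well/ate, c=ill/didn't, d=well/didn't)
or_snack = odds_ratio(40, 6, 4, 4)       # (40*4)/(6*4)
```

An OR well above 1 (with a small p-value) is what implicates a food item, as the snacks were implicated in this outbreak.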
Procedia PDF Downloads 230
68 Assessment of Nuclear Medicine Radiation Protection Practices Among Radiographers and Nurses at a Small Nuclear Medicine Department in a Tertiary Hospital
Authors: Nyathi Mpumelelo; Moeng Thabiso Maria
Abstract:
BACKGROUND AND OBJECTIVES: Radiopharmaceuticals are used for the diagnosis, treatment, staging and follow-up of various diseases. However, there is concern that the ionizing radiation (gamma rays, α and ß particles) emitted by radiopharmaceuticals may expose radiographers and nurses with limited knowledge of the principles of radiation protection and safety, raising the risk of cancer induction. This study aimed at investigating radiation safety awareness levels among radiographers and nurses at a small tertiary hospital in South Africa. METHODS: An analytical cross-sectional study was conducted. A validated two-part questionnaire was administered to consenting radiographers and nurses working in a Nuclear Medicine Department. Part 1 gathered demographic information (age, gender, work experience, attendance at or passing of ionizing radiation protection courses). Part 2 covered questions related to knowledge and awareness of radiation protection principles. RESULTS: Six radiographers and five nurses participated (27% male and 73% female). The mean age was 45 years (range 20-60 years). The study revealed that neither professional development courses nor radiation protection courses are offered at the Nuclear Medicine Department under study. However, 6/6 (100%) of radiographers exhibited a high level of awareness of radiation safety principles for handling and working with radiopharmaceuticals, which correlated with their years of experience. As for the nurses, 4/5 (80%) showed limited knowledge and awareness of radiation protection principles irrespective of the number of years in the profession. CONCLUSION: Despite their major role in caring for patients undergoing diagnostic and therapeutic procedures, the nurses showed limited knowledge of ionizing radiation and its associated side effects. This was not surprising, since they had never received any formal basic radiation safety course. These findings are not unique to this Centre.
A study conducted in a Kuwaiti radiology department also established that the vast majority of nurses did not understand the risks of working with ionizing radiation. Similarly, nurses in an Australian hospital exhibited knowledge limitations, although nursing managers did provide the necessary radiation safety training when requested. In Guatemala and Saudi Arabia, where there was a shortage of professional radiographers, nurses underwent radiography training, a course that equipped them with basic radiation safety principles. The radiographers in the Centre under study, unlike others in various parts of the world, demonstrated substantial knowledge and awareness of radiation protection; radiation safety courses attended when the opportunity arose played a critical role in their awareness. The knowledge and awareness levels of these radiographers were comparable to those of their counterparts in Sudan, but well above those of their counterparts in Jordan, Nigeria, Nepal and Iran, who were found to have limited awareness and inadequate knowledge of radiation dose. Formal radiation safety and awareness courses and workshops can play a crucial role in raising the awareness of nurses and radiographers of radiation safety, for their personal benefit and that of their patients.
Keywords: radiation safety, radiation awareness, training, nuclear medicine
Procedia PDF Downloads 79
67 Scalable Performance Testing: Facilitating the Assessment of Application Performance under Substantial Loads and Mitigating the Risk of System Failures
Authors: Solanki Ravirajsinh
Abstract:
In the software testing life cycle, failing to conduct thorough performance testing can result in significant losses for an organization due to application crashes and improper behaviour under high user loads in production. Simulating large volumes of requests, such as 5 million within 5-10 minutes, is challenging without a scalable performance testing framework. Leveraging cloud services to implement such a framework makes it feasible to handle 5-10 million requests in just 5-10 minutes, helping organizations ensure their applications perform reliably under peak conditions. Implementing a scalable performance testing framework using cloud services and tools like JMeter, EC2 instances (virtual machines), CloudWatch logs (for monitoring errors and logs), EFS (file storage) and security groups offers several key benefits: it optimizes resource utilization, enables effective benchmarking, increases reliability, and saves cost by resolving performance issues before the application is released. In performance testing, a master-slave framework facilitates distributed testing across multiple EC2 instances to emulate many concurrent users and efficiently handle high loads. The master node orchestrates the test execution by coordinating with multiple slave nodes to distribute the workload. Slave nodes execute the test scripts provided by the master node, with each node handling a portion of the overall user load and generating requests against the target application or service. By leveraging JMeter's master-slave framework in conjunction with cloud services like EC2 instances, EFS, CloudWatch logs, security groups, and command-line tools, organizations can achieve superior scalability and flexibility in their performance testing efforts. In this master-slave framework, JMeter must be installed on both the master and each slave EC2 instance.
The master EC2 instance functions as the "brain", while the slave instances operate as the "body parts". The master directs each slave to execute a specified number of requests. Upon completion of the execution, the slave instances transmit their results back to the master, which consolidates them into a comprehensive report detailing metrics such as the number of requests sent, encountered errors, network latency, response times, server capacity, throughput, and bandwidth. Leveraging cloud services, the framework benefits from automatic scaling based on the volume of requests. Notably, integrating cloud services allows organizations to handle more than 5-10 million requests within 5 minutes, depending on the server capacity of the hosted website or application.
Keywords: identifying crashes of applications under heavy load, JMeter with cloud services, scalable performance testing, JMeter master and slave using cloud services
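The consolidation step the master performs can be sketched in a few lines. The snippet below is an illustrative reduction (not JMeter's internal code): it parses a JMeter-style JTL results file in CSV form, whose default columns include `timeStamp`, `elapsed` and `success`, and summarises request count, errors, average response time and throughput, the core metrics named above. The sample rows are invented.

```python
# Hypothetical sketch of the master's consolidation step: parse a
# JMeter-style JTL (CSV) results file and summarise error count,
# average response time, and throughput. Column names follow JMeter's
# default CSV output; the sample rows are invented.
import csv
import io

SAMPLE_JTL = """timeStamp,elapsed,label,responseCode,success
1700000000000,120,Home,200,true
1700000000500,340,Login,200,true
1700000001000,90,Home,500,false
1700000002000,210,Search,200,true
"""

def summarise(jtl_text):
    rows = list(csv.DictReader(io.StringIO(jtl_text)))
    n = len(rows)
    errors = sum(1 for r in rows if r["success"] != "true")
    avg_ms = sum(int(r["elapsed"]) for r in rows) / n
    # wall-clock span between first and last sample, in seconds
    span_s = (int(rows[-1]["timeStamp"]) - int(rows[0]["timeStamp"])) / 1000.0
    throughput = n / span_s if span_s else float(n)
    return {"requests": n, "errors": errors,
            "avg_response_ms": avg_ms, "throughput_rps": throughput}

report = summarise(SAMPLE_JTL)
```

In a real deployment each slave would append its samples to such a file, and the master would merge the per-slave files before summarising.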
Procedia PDF Downloads 27
66 A Generalized Framework for Adaptive Machine Learning Deployments in Algorithmic Trading
Authors: Robert Caulk
Abstract:
A generalized framework for adaptive machine learning deployments in algorithmic trading is introduced, tested, and released as open-source code. The presented software aims to test the hypothesis that recent data contain enough information to form a probabilistically favorable short-term price prediction. Further, the framework contains various adaptive machine learning techniques that are geared toward generating profit during strong trends and minimizing losses during trend changes. Results demonstrate that this adaptive machine learning approach is capable of capturing trends and generating profit. The presentation also discusses the importance of defining the parameter space associated with the dynamic training dataset and using that parameter space to identify and remove outliers from prediction data points. Meanwhile, the generalized architecture enables common users to exploit the powerful machinery while focusing on high-level feature engineering and model testing. The presentation also highlights common strengths and weaknesses associated with the presented technique and presents a broad range of well-tested starting points for feature set construction, target setting, and statistical methods for enforcing risk management and maintaining probabilistically favorable entry and exit points. It also describes the end-to-end data processing tools associated with FreqAI, including automatic data fetching, data aggregation, feature engineering, safe and robust data pre-processing, outlier detection, custom machine learning and statistical tools, data post-processing, adaptive-training backtest emulation, and deployment of adaptive training in live environments. Finally, the generalized user interface is also discussed. Feature engineering is simplified so that users can seed their feature sets with common indicator libraries (e.g. TA-Lib, pandas-ta).
The user also feeds data expansion parameters to fill out a large feature set for the model, which can contain as many as 10,000+ features. The presentation describes the various object-oriented programming techniques employed to make FreqAI agnostic to third-party libraries and external data sources. In other words, the back-end is constructed in such a way that users can leverage a broad range of common regression libraries (CatBoost, LightGBM, scikit-learn, etc.) as well as common neural network libraries (TensorFlow, PyTorch) without worrying about the logistical complexities associated with data handling and API interactions. The presentation finishes by drawing conclusions about the most important parameters associated with a live deployment of the adaptive learning framework and provides the road map for future development in FreqAI.
Keywords: machine learning, market trend detection, open-source, adaptive learning, parameter space exploration
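The parameter-space idea mentioned above, defining the region spanned by the training data and refusing to predict on points outside it, can be sketched as follows. This is a hedged illustration of the general technique, not FreqAI's actual implementation; the feature values and the simple per-feature min/max bounds are assumptions made for the example.

```python
# Hedged sketch of parameter-space outlier removal (not FreqAI's code):
# define per-feature bounds over the training window and drop prediction
# points that fall outside them, so the model is never asked to
# extrapolate far beyond the space it was trained on.

def parameter_space(train_rows):
    """Per-feature (min, max) bounds over the training window."""
    cols = list(zip(*train_rows))
    return [(min(c), max(c)) for c in cols]

def inside(point, bounds, tolerance=0.0):
    """True if every feature of `point` lies within its trained bounds."""
    return all(lo - tolerance <= x <= hi + tolerance
               for x, (lo, hi) in zip(point, bounds))

# Invented 2-feature training window and two candidate prediction points
train = [[0.1, 5.0], [0.3, 6.2], [0.2, 5.8], [0.4, 6.0]]
bounds = parameter_space(train)

candidates = [[0.25, 5.9], [0.9, 12.0]]   # second point is far outside
kept = [p for p in candidates if inside(p, bounds)]
```

Real deployments typically use softer criteria (e.g. distance-based or density-based tests) than a hard min/max box, but the principle of gating predictions by the training parameter space is the same.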
Procedia PDF Downloads 88
65 Devulcanization of Waste Rubber Using Thermomechanical Method Combined with Supercritical CO₂
Authors: L. Asaro, M. Gratton, S. Seghar, N. Poirot, N. Ait Hocine
Abstract:
Rubber waste disposal is an environmental problem, and much research is centred on the management of discarded tires in particular. Despite the different ways of handling used tires, the most common is to deposit them in a landfill, creating stocks of tires. These stocks can pose a fire hazard and provide a habitat for rodents, mosquitoes and other pests, causing health and environmental problems. Because of the three-dimensional structure of rubbers and their specific composition, which includes several additives, their recycling is a current technological challenge. The technique that can break down the crosslink bonds in the rubber is called devulcanization. Strictly, devulcanization can be defined as a process in which poly-, di-, and mono-sulfidic bonds formed during vulcanization are totally or partially broken. In recent years, supercritical carbon dioxide (scCO₂) has been proposed as a green devulcanization atmosphere, because it is chemically inactive, nontoxic, nonflammable and inexpensive. Its critical point can be easily reached (31.1 °C and 7.38 MPa), and residual scCO₂ in the devulcanized rubber can be easily and rapidly removed by releasing pressure. In this study, thermomechanical devulcanization of ground tire rubber (GTR) was performed in a twin-screw extruder under diverse operating conditions, with supercritical CO₂ added in different quantities to promote the devulcanization. Temperature, screw speed and quantity of CO₂ were the parameters varied during the process. The devulcanized rubber was characterized by its devulcanization percent and by its crosslink density, determined by swelling in toluene. Infrared spectroscopy (FTIR) and gel permeation chromatography (GPC) were also performed, and the results were related to the Mooney viscosity. The results showed that the crosslink density decreases as the extruder temperature and speed increase, and, as expected, the soluble fraction increases with both parameters.
The Mooney viscosity of the devulcanized rubber decreases as the extruder temperature increases; the values reached were in good correlation (R = 0.96) with the soluble fraction. To analyse whether the devulcanization was caused by main-chain or crosslink scission, Horikx's theory was used. The results showed that all tests fall on the curve corresponding to sulfur bond scission, which indicates that devulcanization occurred successfully without degradation of the rubber. In the spectra obtained by FTIR, it was observed that none of the characteristic peaks of the GTR were modified by the different devulcanization conditions. This was expected because, due to the low sulfur content (~1.4 phr) and the multiphasic composition of the GTR, it is very difficult to evaluate devulcanization by this technique. The lowest crosslink density was reached with 1 cm³/min of CO₂, and the power consumed in that process was also near the minimum. These results encourage us to carry out further analyses to better understand the effect of the different conditions on the devulcanization process. The analysis is currently being extended to monophasic rubbers such as ethylene propylene diene monomer rubber (EPDM) and natural rubber (NR).
Keywords: devulcanization, recycling, rubber, waste
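For reference, the standard relations behind the two analyses named above can be written out; these are the textbook forms, quoted here as context rather than taken from the article, and symbol conventions vary between authors. The crosslink density from equilibrium swelling in toluene is commonly obtained from the Flory-Rehner equation, and Horikx's treatment compares the relative decrease in crosslink density with the soluble fraction to distinguish main-chain from crosslink scission:

```latex
% Flory--Rehner: crosslink density \nu from the rubber volume fraction
% V_r at swelling equilibrium (V_s: solvent molar volume,
% \chi: polymer--solvent interaction parameter)
\nu \;=\; -\,\frac{\ln(1 - V_r) \;+\; V_r \;+\; \chi V_r^{2}}
               {V_s\left(V_r^{1/3} \;-\; V_r/2\right)}

% Horikx: for pure main-chain scission, the soluble fractions before
% (s_i) and after (s_f) treatment relate to the crosslink densities by
1 \;-\; \frac{\nu_f}{\nu_i} \;=\;
    1 \;-\; \frac{\bigl(1 - s_f^{1/2}\bigr)^{2}}{\bigl(1 - s_i^{1/2}\bigr)^{2}}
```

Data that deviate from the main-chain-scission relation towards the crosslink-scission branch, i.e. a larger decrease in crosslink density for a given soluble fraction, indicate selective breaking of the sulfur bridges, which is the behaviour the study reports.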
Procedia PDF Downloads 385
64 Re-Development and Lost Industrial History: Darling Harbour of Sydney
Authors: Ece Kaya
Abstract:
Urban waterfront re-development has been a well-established phenomenon internationally since the 1960s. In cities throughout the world, old industrial waterfront land is being redeveloped into luxury housing, offices, tourist attractions, cultural amenities and shopping centres. These developments are intended to attract high-income residents, tourists and investors to the city. As urban waterfronts are iconic places for cities and catalysts for further development, they are often referred to as flagship projects. In Sydney, the re-development of the industrial waterfront began in the 1980s with the Darling Harbour project. The Darling Harbour waterfront was the main arrival and landing place for commercial and industrial shipping until the 1970s. Its urban development has continued since the establishment of the city: it was developed as a major industrial and goods-handling precinct in 1812, and this use continued until the mid-1970s. After becoming a redundant industrial waterfront, the area was ripe for re-development in 1984. Darling Harbour is now one of the world's most popular waterfront leisure and entertainment destinations, and its transformation has been considered a success story; this paper contests that assessment. Data collection was carried out using an extensive archival document analysis. The data were obtained from the Australian Institute of Architects, City of Sydney Council Archive, Parramatta Heritage Office, Historic Houses Trust, National Trust, University of Sydney libraries, State Archive, State Library and Sydney Harbour Foreshore Authority Archives. Public documents, primarily newspaper articles and design plans, were analysed to identify possible differences in motives and to determine the process of implementation of the waterfront redevelopments. It was also important to obtain historical photographs and descriptions to understand how the waterfront had been altered. 
Site maps from different time periods were examined to understand what changes happened in the urban landscape and how the developments affected the areas. Newspaper articles and editorials were examined to discover what aspects of the projects reflected the area's history and heritage. The thematic analysis of the archival data helped establish that Darling Harbour is a historically important place, as it represented a focal point for Sydney's industrial growth and the cradle of industrial development in European Australia. It was found that the development area was designated to be transformed into a place for tourism, education, recreation, entertainment, cultural and commercial activities, and as a result little evidence remains of its industrial past. This paper aims to discuss the industrial significance of Darling Harbour and to explain the changes to its industrial landscape. What is absent now is the layer of its history that creates the layers of meaning of the place, so its historic industrial identity is effectively lost. Keywords: historical significance, industrial heritage, industrial waterfront, re-development
Procedia PDF Downloads 301
63 Time Travel Testing: A Mechanism for Improving Renewal Experience
Authors: Aritra Majumdar
Abstract:
While organizations strive to expand their new customer base, retaining existing relationships is a key aspect of improving overall profitability and of showcasing how successful an organization is in holding on to its customers. It is an experimentally proven fact that the lion's share of profit always comes from existing customers. Hence, seamless management of renewal journeys across different channels goes a long way in improving trust in the brand. From a quality assurance standpoint, time travel testing provides an approach for both business and technology teams to enhance the customer experience when customers look to extend their partnership with the organization for a defined period of time. This whitepaper will focus on the key pillars of time travel testing: time travel planning, time travel data preparation, and enterprise automation. It will also call out some of the best practices and common accelerator implementation ideas that are generic across verticals like healthcare, insurance, etc. In this abstract, a high-level snapshot of these pillars is provided. Time Travel Planning: The first step in setting up a time travel testing roadmap is appropriate planning. Planning includes identifying the impacted systems that need to be time traveled backward or forward depending on the business requirement, aligning time travel with other releases, deciding the frequency of time travel testing, preparing to handle renewal issues in production after time travel testing is done, and, most importantly, planning for test automation during time travel testing. Time Travel Data Preparation: One of the most complex areas in time travel testing is test data coverage. Aligning test data to cover the required customer segments and narrowing it down to multiple offer sequences based on defined parameters are key for successful time travel testing. 
Another aspect is the availability of sufficient data for similar combinations to support activities like defect retesting, regression testing, post-production testing (if required), etc. This section will describe the steps necessary for suitable data coverage and sufficient data availability from a time travel testing perspective. Enterprise Automation: Time travel testing is never restricted to a single application. The workflow needs to be validated in the downstream applications to ensure consistency across the board. Along with that, the correctness of offers across different digital channels needs to be checked in order to ensure a smooth customer experience. This section will discuss the focus areas of enterprise automation and how automation testing can be leveraged to improve overall quality without compromising the project schedule. Along with the above-mentioned items, the whitepaper will elaborate on the best practices that need to be followed during time travel testing and some ideas pertaining to accelerator implementation. To sum up, this paper is based on the author's first-hand experience with time travel testing. While actual customer names and program-related details are not disclosed, the paper highlights the key learnings, which will help other teams implement time travel testing successfully. Keywords: time travel planning, time travel data preparation, enterprise automation, best practices, accelerator implementation ideas
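The planning pillar described above amounts to shifting the clock that the system under test reads, rather than waiting for real time to pass. The sketch below is a minimal, hypothetical illustration: the `Policy` class and the 45-day renewal window are invented for the example and are not from the whitepaper.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Policy:
    start: date
    term_days: int = 365

    def renewal_window_open(self, today: date) -> bool:
        # Illustrative business rule: renewal offers open 45 days before expiry.
        expiry = self.start + timedelta(days=self.term_days)
        return expiry - timedelta(days=45) <= today <= expiry

def time_travelled(today: date, offset_days: int) -> date:
    """Shift the clock seen by the system under test: the core move of
    time travel testing (forward or backward, per the business need)."""
    return today + timedelta(days=offset_days)

# Travel forward from mid-year into the renewal window and assert the
# offer opens, without waiting for real calendar time to pass.
policy = Policy(start=date(2024, 1, 1))
assert not policy.renewal_window_open(date(2024, 6, 1))
assert policy.renewal_window_open(time_travelled(date(2024, 6, 1), 200))
```

In a real program the shifted date would be injected into every impacted system (pricing, offers, downstream channels) so the whole renewal workflow is exercised consistently.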
Procedia PDF Downloads 159
62 Comparative Investigation of Two Non-Contact Prototype Designs Based on a Squeeze-Film Levitation Approach
Authors: A. Almurshedi, M. Atherton, C. Mares, T. Stolarski, M. Miyatake
Abstract:
Transportation and handling of delicate and lightweight objects is currently a significant issue in some industries. Two common contactless movement prototype designs, an ultrasonic transducer design and a vibrating plate design, are compared. Both designs are based on the method of squeeze-film levitation, and this study aims to identify the limitations and challenges of each. The designs are evaluated in terms of their levitation capabilities and characteristics; to this end, theoretical and experimental explorations are made. It is demonstrated that the ultrasonic transducer prototype design is better in terms of levitation capability; however, it presents some operating and mechanical design difficulties. For making accurate industrial products in micro-fabrication and nanotechnology contexts, such as semiconductor silicon wafers, micro-components and integrated circuits, non-contact, oil-free, ultra-precision, low-wear transport along the production line is a crucial enabler. One of the designs (design A), called the ultrasonic chuck, has an ultrasonic transducer (Langevin, FBI 28452 HS) as its main part. The other (design B) is a vibrating plate design, consisting of a plain rectangular aluminium plate, 200 x 100 x 2 mm, firmly fastened at both ends. Four round piezoelectric actuators, 28 mm in diameter and 0.5 mm thick, are glued to the underside of the plate, and the plate is clamped at both ends in the horizontal plane through a steel supporting structure. The dynamics of levitation with designs A and B have been investigated on the basis of squeeze-film levitation (SFL). The input apparatus used with both designs consists of a sine-wave signal generator connected to an amplifier (type ENP-1-1U, Echo Electronics), which magnifies the sine-wave voltage produced by the signal generator. 
The measured maximum levitation distances for three semiconductor wafers of weights 52, 70 and 88 g with design A are 240, 205 and 187 um, respectively, whereas the results show that the average separation distance for a disk of 5 g weight with design B reaches 70 um. Using the squeeze-film levitation method, it is thus possible to hold an object in a non-contact manner. The analysis of the investigation outcomes signifies that design A provides better non-contact levitation than design B; however, design A is more complicated than design B in terms of manufacturing. In order to identify an adequate non-contact SFL design, a comparison between these two common designs has been adopted for the current investigation. Specifically, the study involves comparisons in terms of the following issues: floating component geometries and material type constraints; the resulting pressure distributions; dangerous interactions with the surrounding space; working environment constraints; and the complexity and compactness of the mechanical design. Considering all these matters is essential for proficiently distinguishing the better SFL design. Keywords: ANSYS, floating, piezoelectric, squeeze-film
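The three design-A measurements quoted above can be used to check how the levitation gap scales with load. The short script below fits a power law h = a·W^b to those three points; this is a rough, illustrative regression added here, not part of the authors' analysis.

```python
import math

# Reported design-A measurements: wafer weight (g) -> levitation gap (um)
weights = [52.0, 70.0, 88.0]
gaps = [240.0, 205.0, 187.0]

# Least-squares fit of a power law h = a * W**b, done in log space.
xs = [math.log(w) for w in weights]
ys = [math.log(h) for h in gaps]
xbar = sum(xs) / len(xs)
ybar = sum(ys) / len(ys)
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
    sum((x - xbar) ** 2 for x in xs)
a = math.exp(ybar - b * xbar)

# The fitted exponent is negative: heavier wafers float on thinner films.
print(f"h ~ {a:.0f} * W^{b:.2f}")
```

Three points cannot validate a squeeze-film model, but the negative exponent is at least consistent with the expected trade-off between load and film thickness.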
Procedia PDF Downloads 149
61 Ecological Relationships Between Material, Colonizing Organisms, and Resulting Performances
Authors: Chris Thurlbourne
Abstract:
Due to the continual demand for material to build with, and the limited environmental credentials of 'normal' building materials, there is a need to look at new and reconditioned material types - both biogenic and non-biogenic - and a field of research that accompanies this. This research focuses on biogenic and non-biogenic material engineering and the impact of our environment on new and reconditioned material types. In the building industry and all the industries involved in constructing our built environment, building materials can be broadly categorized into two types, those with biogenic and those with non-biogenic material properties; both play significant roles in shaping our built environment. Regardless of their properties, all material types originate from our earth, and many are modified through processing to provide resistance to 'forces of nature', be it rain, wind, sun, gravity, or whatever the local environmental conditions throw at us. Modifications are undertaken to offer benefits in endurance, resistance, malleability in handling (building with), and ergonomic value in all types of building material. We assume control of all building materials through rigorous quality-control specifications and regulations to ensure materials perform under specific constraints. Yet materials confront an external environment that is not controlled, with live forces undetermined, to which materials naturally act and react through weathering, patination and discoloring, and through natural chemical reactions such as rusting. The purpose of the paper is to present recent research that explores the after-life of specific new and reconditioned biogenic and non-biogenic material types, and how an understanding of materials' natural processes of transformation when exposed to the external climate can inform initial design decisions. 
With qualities received in a transient and contingent manner, the ecological relationships between a material, its colonizing organisms and the resulting performances invite new design explorations for the benefit of both the needs of human society and the needs of our natural environment. The research follows designing for the benefit of both, engaging in biogenic and non-biogenic material engineering whilst embracing the continual demand for colonization - human and environmental - and the aptitude of a material to be colonized by one or several groups of living organisms without necessarily undergoing any severe deterioration, instead embracing weathering, patination and discoloring while establishing new habitat. The research follows iterative prototyping processes in which knowledge has been accumulated via explorations of specific material performances, from laboratory tests to construction mock-ups, focusing on the architectural qualities embedded in the control of production techniques and on facilitating longer-term patinas of material surfaces that extend the aesthetic beyond common judgments. The experiments are therefore focused on how inherent material qualities drive a design brief toward specific investigations that explore aesthetics induced through production, patinas and the colonization obtained over time through exposure and interaction with external climate conditions. Keywords: biogenic and non-biogenic, natural processes of transformation, colonization, patina
Procedia PDF Downloads 87
60 Mapping and Measuring the Vulnerability Level of the Belawan District Community in Encountering the Rob Flood Disaster
Authors: Dessy Pinem, Rahmadian Sembiring, Adanil Bushra
Abstract:
Medan Belawan is one of the 21 subdistricts of Medan. It is directly adjacent to the Malacca Strait in the north, and because of this direct border, the subdistrict has faced a persistent problem for many years: rob (tidal) flooding. In 2015, rob floods inundated the Sicanang, Belawan I, Belawan Bahagia and Bagan Deli urban villages. The extent of inundation in the rob flood that occurred in September 2015 reached 540.938 ha. A rob flood is a phenomenon in which sea water overflows onto the mainland; it can also be described as ponding on coastal land that occurs at high tide. This phenomenon inundates parts of the coastal plain or places lying below the high-tide sea level. Rob flooding is a daily disaster faced by the residents of Medan Belawan: it can happen every month and last for a week. The floods soak not only residents' houses but also the main road to Belawan Port, reaching depths of 50 cm. To deal with the problems caused by the floods and to prepare coastal communities for the character of coastal areas, it is necessary to know the vulnerability of the people who are repeatedly the victims of rob flooding. Are the people of the Medan Belawan subdistrict, especially in the flood-affected villages, able to cope with the consequences of the floods? To answer this question, it is necessary to assess the vulnerability of the Belawan district community in the face of the flood disaster. This research is descriptive, qualitative and quantitative. Data were collected by observation, interviews and questionnaires in four urban villages often affected by rob flooding. The vulnerabilities measured are physical, economic, social, environmental, organizational and motivational. 
For physical vulnerability, the data collected were building distance, floor area ratio, drainage, and building materials. For economic vulnerability, the data collected were income, employment, building ownership, and insurance ownership. For social vulnerability, the data collected were education, number of family members, children, the elderly, gender, disaster training, and waste disposal practices. For organizational vulnerability, the data collected concerned the existence of organizations that advocate for the victims and the policies and laws governing the handling of tidal flooding. Motivational vulnerability was assessed from the presence of an information or question-and-answer center about rob flooding and the existence of an evacuation plan or route to avoid the disaster or reduce the number of victims. The results of this study indicate that most people in the Medan Belawan subdistrict have a high level of vulnerability in the physical, economic, social, environmental, organizational and motivational fields. They have no access to economic empowerment, no insurance, and no organizations that support and defend them; they lack the motivation to solve problems themselves, relying only on the government; and their buildings are easily damaged by rob floods. Keywords: disaster, rob flood, Medan Belawan, vulnerability
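Indicator data of the kind listed above are typically aggregated into a composite index per village. The sketch below shows one common approach, a weighted mean over the six dimensions with a simple thresholding into levels; the scores, weights and thresholds are hypothetical, not the study's actual scheme.

```python
from typing import Dict, Optional

# The six vulnerability dimensions measured in the study.
DIMENSIONS = ["physical", "economic", "social",
              "environmental", "organizational", "motivational"]

def vulnerability_index(scores: Dict[str, float],
                        weights: Optional[Dict[str, float]] = None) -> float:
    """Weighted mean of dimension scores (each in 0..1); equal weights
    by default. Higher means more vulnerable."""
    weights = weights or {d: 1.0 for d in DIMENSIONS}
    total = sum(weights[d] for d in DIMENSIONS)
    return sum(weights[d] * scores[d] for d in DIMENSIONS) / total

def level(index: float) -> str:
    # Illustrative cut points for low / medium / high vulnerability.
    return "high" if index >= 0.66 else "medium" if index >= 0.33 else "low"

# Hypothetical scores for one village (0 = low, 1 = high vulnerability).
bagan_deli = {"physical": 0.8, "economic": 0.9, "social": 0.7,
              "environmental": 0.8, "organizational": 0.9, "motivational": 0.8}
print(level(vulnerability_index(bagan_deli)))
```

Mapping these per-village indices is then a matter of joining them to the subdistrict's administrative boundaries.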
Procedia PDF Downloads 126
59 Identifying Confirmed Resemblances in Problem-Solving Engineering, Both in the Past and Present
Authors: Colin Schmidt, Adrien Lecossier, Pascal Crubleau, Philippe Blanchard, Simon Richir
Abstract:
Introduction: The widespread availability of artificial intelligence, exemplified by Generative Pre-trained Transformers (GPT) relying on large language models (LLMs), has caused a seismic shift in the realm of knowledge. Everyone now has the capacity to swiftly learn how these models can serve them well or not. Today, conversational AI like ChatGPT is grounded in neural transformer models, a significant advance in natural language processing facilitated by the emergence of renowned LLMs constructed using the neural transformer architecture. Inventiveness of an LLM: OpenAI's GPT-3 stands as a premier LLM, capable of handling a broad spectrum of natural language processing tasks without requiring fine-tuning, reliably producing text that reads as if authored by humans. However, even with an understanding of how LLMs respond to the questions asked, there may be an inventive model lurking behind OpenAI's seemingly endless responses that is yet to be uncovered; some unforeseen reasoning may emerge from the interconnection of these neural networks. Just as a Soviet researcher in the 1940s questioned the existence of common factors in inventions, enabling an understanding of how and according to what principles humans create them, it is equally legitimate today to explore whether the solutions provided by LLMs to complex problems also share common denominators. Theory of Inventive Problem Solving (TRIZ): We revisit some fundamentals of TRIZ and how Genrich Altshuller was inspired by the idea that inventions and innovations are essential means of solving societal problems. It is crucial to note that traditional problem-solving methods often fall short in discovering innovative solutions: the design team is frequently hampered by psychological barriers stemming from confinement within a highly specialized knowledge domain that is difficult to question. We presume that ChatGPT utilizes the 40 TRIZ inventive principles. 
Hence, the objective of this research is to decipher the inventive model of LLMs, particularly that of ChatGPT, through a comparative study. This will enhance the efficiency of sustainable innovation processes and shed light on how the construction of a solution to a complex problem is devised. Description of the Experimental Protocol: To confirm or reject our main hypothesis, i.e., to determine whether ChatGPT uses TRIZ, we follow a stringent protocol, which we detail, drawing on insights from a panel of two TRIZ experts. Conclusion and Future Directions: In this endeavor, we sought to comprehend how an LLM like GPT addresses complex challenges. Our goal was to analyze the inventive model of the responses provided by an LLM, specifically ChatGPT, by comparing it to an existing standard model: the TRIZ 40 principles. Problem-solving remains the main focus of our endeavours. Keywords: artificial intelligence, TRIZ, ChatGPT, inventiveness, problem-solving
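One crude way to operationalize the comparison the protocol describes is to screen an LLM answer for lexical traces of the TRIZ principles before handing it to the expert panel. The toy screen below uses an invented keyword subset of the 40 principles and is only a sketch of the idea, not the experts' actual protocol.

```python
# A naive keyword screen over an illustrative subset of the 40 TRIZ
# principles (principle number -> name and cue words; cues are invented).
TRIZ_KEYWORDS = {
    1: ("segmentation", ["divide", "segment", "modular"]),
    13: ("the other way round", ["invert", "reverse", "upside down"]),
    15: ("dynamics", ["adapt", "adjustable", "movable"]),
    35: ("parameter changes", ["temperature", "concentration", "density"]),
}

def triz_principles_in(answer: str) -> list:
    """Return the names of principles whose cue words appear in the answer."""
    text = answer.lower()
    return [name for _, (name, cues) in sorted(TRIZ_KEYWORDS.items())
            if any(cue in text for cue in cues)]

llm_answer = ("Divide the system into modular stages and make each stage "
              "adjustable so it can adapt to the load.")
print(triz_principles_in(llm_answer))
```

A keyword screen of this kind can only flag candidate resemblances; judging whether a response genuinely instantiates a principle is exactly what the two TRIZ experts are for.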
Procedia PDF Downloads 73
58 Natural Fibers Design Attributes
Authors: Brayan S. Pabón, R. Ricardo Moreno, Edith Gonzalez
Abstract:
Among the wide set of Colombian natural fibers is the banana stem leaf, known as Calceta de Plátano, a material present in several regions of the country that is extracted from the pseudostem of the banana plant (Musa paradisiaca) as part of regular crop maintenance. Colombia produced 2.8 million tons in 2007 and 2008, corresponding to 8.2% of international production, a figure that is growing. This material was selected for study because it is not being used by farmers, being perceived as waste from the banana harvest and a pest-propagation agent inside the plantings. In addition, the Calceta has no industrial applications in Colombia, since there is not enough concrete knowledge about the properties of the material and the possible applications it could have. Given this situation, industrial design is used as a link between the properties of the material and the need to transform it into industrial products for the market. The project therefore identifies potential design attributes that the banana stem leaf can offer for product development. The methodology was divided into two main parts. Methodology for material recognition: data collection, drawing on craftsmen's experience and the literature; knowledge in practice, with controlled experiments and validation tests; and creation of design attributes and a material profile according to the knowledge developed. Design methodology: selection of application fields, exploring the use of the attributes and their relation to product functions; evaluation of the possible fields and selection of the optimal application; and the design process, with sketching, ideation, and product development. Different protocols were elaborated to qualitatively determine some material properties of the Calceta and whether they could be designated as design attributes. 
Once the validation protocols were defined, performed, and analyzed, 25 design attributes were identified and classified into four attribute categories (environmental, functional, aesthetic and technical), forming the material profile. Then, 15 application fields were defined based on the relation between product functions and the use of the Calceta's attributes. These fields were evaluated to measure how extensively the functional attributes would be used, and after this evaluation a final field was selected. Keywords: banana stem leaf, Calceta de Plátano, design attributes, natural fibers, product design
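The field evaluation described above, scoring how much each application field exercises the functional attributes and picking the best one, can be sketched as a small scoring matrix. The fields, attribute names and scores below are hypothetical placeholders, not the study's actual 15 fields or 25 attributes.

```python
# Hypothetical scoring: how strongly each candidate application field
# would use the Calceta's functional attributes (0-5 per attribute).
fields = {
    "packaging":  {"flexibility": 4, "water_resistance": 2, "tensile": 3},
    "furniture":  {"flexibility": 2, "water_resistance": 3, "tensile": 5},
    "lampshades": {"flexibility": 5, "water_resistance": 1, "tensile": 2},
}

def field_score(attribute_scores: dict) -> float:
    """Mean attribute-usage score for one field."""
    return sum(attribute_scores.values()) / len(attribute_scores)

# The optimal application is the field with the highest mean usage.
best = max(fields, key=lambda f: field_score(fields[f]))
print(best, field_score(fields[best]))
```

With the study's real data this would be a 15 x 25 matrix rather than a 3 x 3 toy, but the selection logic is the same.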
Procedia PDF Downloads 259
57 Environmental Impact of Pallets in the Supply Chain: Including Logistics and Material Durability in a Life Cycle Assessment Approach
Authors: Joana Almeida, Kendall Reid, Jonas Bengtsson
Abstract:
Pallets are devices used for moving and storing freight and are nearly omnipresent in supply chains. The market is dominated by timber pallets, with plastic being a common alternative. Either option involves the use of important resources (oil, land, timber), the emission of greenhouse gases, and additional waste generation in most supply chains. This study uses a dynamic approach to the life cycle assessment (LCA) of pallets. It demonstrates that what ultimately defines the environmental burden of pallets in the supply chain is the length of their lifespan, which depends on the durability of the material and on how the pallets are utilized. This study proposes a cradle-to-grave LCA of pallets in supply chains, covering raw material provision, manufacture, transport and end of life, supported by an algorithm that estimates pallet durability as a function of material resilience and logistics. The scope is representative of timber and plastic pallets in the Australian and South-East Asian markets. The materials included in the analysis are: tropical mixed hardwood, unsustainably harvested in SE Asia; certified softwood, sustainably harvested; conventional plastic, a mix of virgin and scrap plastic; and recycled plastic pallets, made of 100% mixed plastic scrap, which are being pioneered by Re > Pal. The logistics model posits that more complex supply chains and rougher handling subject pallets to higher stress loads. More stress shortens the lifespan of pallets as a function of their composition. Timber pallets can be repaired, extending their lifespan, while plastic pallets cannot. At the factory gate, softwood pallets have the lowest carbon footprint, with Re > Pal following closely due to its burden-free feedstock. Tropical mixed hardwood and plastic pallets have the highest footprints: harvesting tropical mixed hardwood in SE Asia often leads to deforestation, and thus to emissions from land use change. 
The higher footprint of conventional plastic pallets is due to the production of virgin plastic. Our findings show that manufacture alone does not determine the sustainability of pallets. Even though certified softwood pallets have the lowest factory-gate carbon footprint and their lifespan can be extended by repair, the need for re-supply of materials and for disposal of waste timber offsets this advantage; it also means that softwood pallets generate the most waste of all the pallets studied. In a supply chain context, Re > Pal pallets have the lowest footprint due to lower replacement and disposal needs. In addition, Re > Pal pallets are nearly 'waste neutral', because the waste generated throughout their life cycle is almost entirely offset by the scrap taken up for production. The absolute results of this study can be refined by advancing the logistics model, improving data quality, and expanding the range of materials and utilization practices. Still, this LCA demonstrates that considering logistics, raw materials and material durability is central to sustainable decision-making on pallet purchasing, management and disposal. Keywords: carbon footprint, life cycle assessment, recycled plastic, waste
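The durability algorithm described in the abstract, lifespan falling with supply-chain stress, repair extending timber pallets' life, and footprint then divided over the trips survived, can be sketched as follows. All numbers (footprints, resilience values, the repair bonus) are invented for illustration; only the model structure follows the abstract.

```python
from dataclasses import dataclass

@dataclass
class Pallet:
    name: str
    footprint_kgco2e: float  # cradle-to-gate footprint per pallet (invented)
    resilience: float        # trips survived per unit of handling stress
    repairable: bool         # timber can be repaired, plastic cannot

def trips_survived(p: Pallet, stress: float, repair_bonus: float = 1.5) -> float:
    """Lifespan in trips falls with supply-chain stress; repairable
    (timber) pallets get a lifespan extension from repair."""
    base = p.resilience / stress
    return base * repair_bonus if p.repairable else base

def footprint_per_trip(p: Pallet, stress: float) -> float:
    """Allocate the cradle-to-gate footprint over the trips survived."""
    return p.footprint_kgco2e / trips_survived(p, stress)

softwood = Pallet("certified softwood", 10.0, 40.0, True)
recycled = Pallet("recycled plastic", 18.0, 120.0, False)
for stress in (1.0, 2.0):  # simple vs rough supply chain
    print(stress,
          round(footprint_per_trip(softwood, stress), 3),
          round(footprint_per_trip(recycled, stress), 3))
```

With these illustrative numbers the softwood pallet wins at the factory gate but the more durable recycled pallet wins per trip, mirroring the abstract's headline result that lifespan in the supply chain, not manufacture alone, drives the comparison.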
Procedia PDF Downloads 221
56 Forced Migrants in Israel and Their Impact on the Urban Structure of Southern Neighborhoods of Tel Aviv
Authors: Arnon Medzini, Lilach Lev Ari
Abstract:
Migration, the driving force behind increased urbanization, has made cities much more diverse places to live in. Nearly one-fifth of all migrants live in the world's 20 largest cities, and in many of these global cities migrants constitute over a third of the population. Many contemporary migrants are in fact 'forced migrants', pushed from their countries of origin by political or ethnic violence, persecution or natural disasters. During the past decade, massive numbers of labor migrants and asylum seekers have migrated from African countries to Israel via Egypt. Their motives for leaving their countries of origin include the ongoing bloody wars on the African continent as well as corruption, severe poverty and hunger, and economic and political disintegration. Most of the African migrants came to Israel from Eritrea and Sudan, seeing Israel as the closest natural geographic asylum to Africa; they soon found their way to the metropolitan Tel Aviv area, where they concentrated in poor neighborhoods in the southern part of the city, living in conditions of crowding, poverty, and poor sanitation. Today around 45,000 African migrants reside in these neighborhoods, yet there is no legal option for expelling them due to the dangers they might face upon returning to their native lands. Migration of such magnitude to the weakened neighborhoods of south Tel Aviv can lead to the deterioration of physical, social and human infrastructures. The character of the neighborhoods is changing, and the local population is the main victim. These local residents must bear the brunt of the failure of both the authorities and the government to handle the illegal inhabitants. The extremely crowded living conditions place a heavy burden on the dilapidated infrastructures of the weakened areas where the refugees live and increase the distress of the veteran residents of the neighborhoods. 
Some of the problems are economic, some stem from damage to the services the residents are entitled to, and others from a drastic decline in their standard of living. Even the public parks no longer serve the purpose for which they were originally established - the well-being of the public and the neighborhood residents; they have become the main gathering place for the infiltrators and a center of crime and violence. Based on secondary data analysis (for example, from Israel's Population, Immigration and Border Authority and the hotline for refugees and migrants), the objective of this presentation is to discuss the effects of forced migration to Tel Aviv on the following tensions: between the local population and the immigrants; between the local population and the state authorities; and between human rights groups and nationalist local organizations. We also describe the changes that have taken place in the urban infrastructure of the city of Tel Aviv and discuss the efficacy of the various Israeli strategic trajectories for handling the human problems arising in the marginal urban regions where the forced migrant population is concentrated. Keywords: African asylum seekers, forced migrants, marginal urban regions, urban infrastructure
Procedia PDF Downloads 252