Search results for: food patterns
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6182


932 Redeeming the Self-Settling Scores with the Nazis by the Means of Poetics

Authors: Liliane Steiner

Abstract:

Beyond the testimonial act, which sheds light on the feminine experience of the Holocaust, the survivors' writing voices first and foremost the abjection of the feminine self brutally inflicted by the Nazis, and in the same movement redeems the self by means of poetics, bringing it to an existential state of being a subject. This study aims to stress the poetics of this writing in order to promote Holocaust literature from the margins to the mainstream and to contribute to the commemoration of the Holocaust for the next generations. Methodology: The study of the survivors' redeeming of the self is based on Julia Kristeva's theory of the abject (the self throws out everything that threatens its existence) and Liliane Steiner's theory of the post-abjection of hell (the belated act of vomiting the abject experiences settles scores with the author of the abject in order to redeem the self). The research focuses on Ruth Sender's trilogy The Cage, To Life and The Holocaust Lady as a case study. Findings: The binary mode that characterizes this writing reflects the experience of Jewish women, who were subject(s), were treated violently as object(s), debased, defeminized and eventually turned into abject by the Nazis. In a tour de force, this writing re-enacts the postponed resistance that vomited the abject imposed on the feminine self through the very act of narration, which denounces the real abject, the perpetrators. The post-abjection of the self is acted out in constructs of abject, relating the abject experience of the Holocaust as well as the rehabilitation of the surviving self (subject). The transcription of the abject surfaces in deconstructing the abject through self-characterization and in the elusive rendering of bad memories, having recourse to literary figures. The narrative 'I' selects, obstructs, mends and tells the past events from an active standpoint, as would a subject in control of its (narrative) fate.
In a compensatory movement, the narrating I tells itself by reconstructing the subject and proving time and again that 'I is other'. Moreover, in the belated endeavor to revenge, testify and narrate the abject, the narrative I defies itself and represents itself as a dialectical I, splitting and multiplying itself in a deconstructing way. The dialectical I is never (one) I. It voices not only the unvoiced but also, and mainly, the other silenced 'I's. Drawing its nature and construct from traumatic memories, the dialectical I transgresses boundaries to narrate her story and, in the same breath, the story of Jewish women doomed to silence. In this narrative feat, the dialectical I stresses its essential dialectical existence with the past, never to be (one) again. Conclusion: The pattern of 'I is other' generates patterns of subject(s) that defy, transgress and repudiate the abject and its repercussions on the feminine I. The feminine I writes itself as a survivor that defies the abject (Nazis) and takes revenge. The paradigm of metamorphosis that accompanies the journey of the Holocaust memoirist engenders life and survival, as well as a narration that defies stagnation and death.

Keywords: abject, feminine writing, holocaust, post-abjection

Procedia PDF Downloads 96
931 The Evaluation of the Effect of a Weed-Killer Sulfonylurea on Durum Wheat (Triticum durum Desf.)

Authors: Meksem Amara Leila, Ferfar Meriem, Meksem Nabila, Djebar Mohammed Reda

Abstract:

Wheat is the most widely consumed cereal in the world. In Algeria, domestic production covers only 20 to 25% of the country's needs, the rest being imported. To improve the efficiency and productivity of durum wheat, farmers turn to pesticides: herbicides, fungicides and insecticides. However, this use often entails more or less significant losses of product, contaminating the environment and the entire food chain. Herbicides are substances developed to control or destroy plants considered unwanted. Whether natural or synthesized by humans, herbicides absorbed and metabolized by plants cause the death of those plants. In this work, our goal was to evaluate the effect of a sulfonylurea herbicide, Cossack OD, at various concentrations (0, 2, 4 and 9 µg) on a variety of Triticum durum: Cirta. We evaluated plant growth by measuring leaf and root length compared with the control, as well as the proline content, and analyzed the activity of one of the antioxidative enzymes, catalase, after 14 days of treatment. Sulfonylureas are foliar and root herbicides that inhibit acetolactate synthase (ALS), a plant enzyme essential to development; this inhibition arrests growth and then causes death. The results obtained show a decrease in the average length of leaves and roots, which can be explained by the fact that ALS inhibitors are most active in the young, growing regions of the plant, inhibiting cell division and thus limiting foliar and root growth. We also recorded a highly significant increase in proline levels and a stimulation of catalase activity.
In response to increasing herbicide concentrations, the marked increase in antioxidative mechanisms in the wheat cultivar Cirta suggests that the high sensitivity of Cirta to this sulfonylurea herbicide is related to the enhanced production of reactive oxygen species and the oxidative damage they cause.

Keywords: sulfonylurea, Triticum durum, oxidative stress, toxicity

Procedia PDF Downloads 413
930 Associated Factors of Hypertension, Hypercholesterolemia and Double Burden Hypertension-Hypercholesterolemia in Patients With Congestive Heart Failure: Hospital Based Study

Authors: Pierre Mintom, William Djeukeu Asongni, Michelle Moni, William Dakam, Christine Fernande Nyangono Biyegue

Abstract:

Background: In order to prevent congestive heart failure, control of hypertension and hypercholesterolemia is necessary because these risk factors frequently occur in combination. Objective: The aim of the study was to determine the prevalence and risk factors of hypertension, hypercholesterolemia and the double burden of hypertension-hypercholesterolemia in patients with congestive heart failure. Methodology: A database of 98 patients suffering from congestive heart failure was used. The patients were recruited from August 15, 2017, to March 5, 2018, in the Cardiology department of the Deido District Hospital of Douala. The database provides information on sociodemographic parameters, biochemical examinations, characteristics of the heart failure and food consumption. ESC/ESH and NCEP-ATP III definitions were used to define hypercholesterolemia (total cholesterol ≥200 mg/dl) and hypertension (SBP ≥140 mmHg and/or DBP ≥90 mmHg). The double burden of hypertension-hypercholesterolemia was defined as total cholesterol ≥200 mg/dl, SBP ≥140 mmHg and DBP ≥90 mmHg. Results: The prevalence of hypertension, hypercholesterolemia and the double burden was 61.2%, 66.3% and 45.9%, respectively. No sociodemographic factor was associated with hypertension or the double burden; only male gender was significantly associated (p<0.05) with hypercholesterolemia. HypoHDLemia significantly increased the odds of hypercholesterolemia and of the double burden, by 19.664 times (p=0.001) and 14.968 times (p=0.021), respectively. Regarding dietary habits, the consumption of rice, of peanuts and derivatives, and of cottonseed oil was each significantly (p<0.05) associated with the occurrence of hypertension. The consumption of tomatoes, green bananas, corn and derivatives, peanuts and derivatives, and cottonseed oil was significantly (p<0.05) associated with the occurrence of hypercholesterolemia.
The consumption of palm oil and cottonseed oil was associated with the occurrence of the double burden of hypertension-hypercholesterolemia, whereas consumption of eggs appeared protective against hypercholesterolemia, and consumption of peanuts and tomatoes against the double burden. Conclusion: Hypercholesterolemia associated with hypertension appears to be a complicating factor of congestive heart failure. The key risk factors are mainly dietary, suggesting the importance of nutritional education for patients; new management protocols emphasizing diet should be considered.
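The 19.664- and 14.968-fold figures above read as odds ratios from a 2×2 exposure/outcome analysis. As a hedged illustration (not the authors' code, and with invented counts rather than the study's 98 patients), a minimal sketch of how such an odds ratio is computed:

```python
def odds_ratio(a, b, c, d):
    """Odds ratio from a 2x2 table:
    a = exposed with outcome,    b = exposed without outcome,
    c = unexposed with outcome,  d = unexposed without outcome."""
    return (a * d) / (b * c)

# Hypothetical counts for illustration only (not from the study).
or_hypo_hdl = odds_ratio(45, 5, 30, 18)
print(round(or_hypo_hdl, 2))  # 5.4 for these invented counts
```

In practice such estimates usually come from a logistic regression, where each exponentiated coefficient is an odds ratio adjusted for the other covariates.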

Keywords: risk factors, hypertension, hypercholesterolemia, congestive heart failure

Procedia PDF Downloads 56
929 Removal of Nickel Ions from Industrial Effluents by Batch and Column Experiments: A Comparison of Activated Carbon with Pinus Roxburgii Saw Dust

Authors: Sardar Khan, Zar Ali Khan

Abstract:

Rapid industrial development and urbanization contribute greatly to wastewater discharge. Wastewater entering natural aquatic ecosystems from industrial activities is considered one of the main sources of water pollution. Discharge of effluents loaded with heavy metals into the surrounding environment has become a key issue regarding human health risk, the environment, and food chain contamination. Nickel causes fatigue, cancer, headache, heart problems, skin diseases (nickel itch), and respiratory disorders. Nickel compounds such as nickel sulfide and nickel oxides in the industrial environment, if inhaled, are associated with an increased risk of lung cancer. Therefore, the removal of nickel from effluents before discharge is necessary, and removal by low-cost biosorbents is an efficient method. This study aimed to investigate the efficiency of commercial activated carbon and raw Pinus roxburgii saw dust for the removal of nickel from industrial effluents. Batch and column adsorption experiments were conducted. The study indicates that the removal of nickel is greatly dependent on pH, contact time, nickel concentration, and adsorbent dose. Maximum removal occurred at pH 9, a contact time of 600 min, and an adsorbent dose of 1 g/100 mL. In the batch experiments, the highest removal was 99.62% and 92.39% (pH based), 99.76% and 99.9% (dose based), 99.80% and 100% (agitation time based), and 92% and 72.40% (Ni concentration based) for P. roxburgii saw dust and activated carbon, respectively. Similarly, Ni removal in the column studies was 99.77% and 99.99% (bed height based), 99.80% and 99.99% (concentration based), and 99.98% and 99.81% (flow rate based) for P. roxburgii saw dust and activated carbon, respectively.
The results were fitted to the Freundlich isotherm model, which gave r² values of 0.9424 (activated carbon) and 0.979 (P. roxburgii saw dust), and to the Langmuir isotherm model, which gave r² values of 0.9285 (activated carbon) and 0.9999 (P. roxburgii saw dust). The experimental results fitted both models, but were in closer agreement with the Langmuir isotherm model.
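The r² values quoted above come from linearized isotherm fits. A hedged sketch of the Freundlich step, log qe = log Kf + (1/n) log Ce, fitted by linear least squares; the equilibrium data below are synthetic (generated from a known isotherm), not the study's measurements:

```python
import numpy as np

# Synthetic equilibrium data from a known Freundlich isotherm
# (Kf = 3.0, n = 2.0), purely to illustrate the fitting step.
Ce = np.array([1.0, 2.0, 5.0, 10.0, 20.0])   # equilibrium concentration, mg/L
qe = 3.0 * Ce ** (1.0 / 2.0)                 # adsorbed amount, mg/g

# Linearized fit: slope = 1/n, intercept = log10(Kf)
slope, intercept = np.polyfit(np.log10(Ce), np.log10(qe), 1)
n_fit = 1.0 / slope
Kf_fit = 10.0 ** intercept
print(round(n_fit, 3), round(Kf_fit, 3))     # recovers n = 2.0, Kf = 3.0
```

The Langmuir comparison works the same way on its own linearized form (Ce/qe vs. Ce), with the r² of each regression deciding which model agrees better.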

Keywords: nickel removal, batch and column experiments, activated carbon, saw dust, plant uptake

Procedia PDF Downloads 121
928 The Background of Ornamental Design Practice: Theory and Practice Based Research on Ornamental Traditions

Authors: Jenna Pyorala

Abstract:

This research looks at the principles and purposes ornamental design has served in the field of textile design. Ornamental designs are characterized by richness of details, abundance of elements, vegetative motifs and organic forms that flow harmoniously in complex compositions. Research on ornamental design is significant, because ornaments have been overlooked and considered as less meaningful and aesthetically pleasing than minimalistic, modern designs. This is despite the fact that in many parts of the world ornaments have been an important part of the cultural identification and expression for centuries. Ornament has been claimed to be superficial and merely used as a decorative way to hide the faults of designs. Such generalization is an incorrect interpretation of the real purposes of ornament. Many ornamental patterns tell stories, present mythological scenes or convey symbolistic meanings. Historically, ornamental decorations have been representing ideas and characteristics such as abundance, wealth, power and personal magnificence. The production of fine ornaments required refined skill, eye for intricate detail and perseverance while compiling complex elements into harmonious compositions. For this reason, ornaments have played an important role in the advancement of craftsmanship. Even though it has been claimed that people in the western design world have lost the relationship to ornament, the relation to it has merely changed from the practice of a craftsman to conceptualisation of a designer. With the help of new technological tools the production of ornaments has become faster and more efficient, demanding less manual labour. Designers who commit to this style of organic forms and vegetative motifs embrace and respect nature by representing its organically growing forms and by following its principles. 
The complexity of the designs is used as a way to evoke a sense of extraordinary beauty and to stimulate the intellect by freeing the mind from predetermined interpretations. Through the study of these purposes it can be demonstrated that complex and richer design styles are as valuable a part of the world of design as more modern design approaches. The study highlights the meaning of ornaments by presenting visual examples and literature research findings. The practice-based part of the project is the visual analysis of historical and cultural ornamental traditions such as Indian Chikan embroidery, Persian carpets, Art Nouveau and Rococo, according to a rubric created for the purpose. The next step is the creation of ornamental designs based on the key elements of the different styles. Theoretical and practical parts are woven together in this study, which respects the long traditions of ornaments and highlights the importance of these design approaches to the field, in contrast to the more commonly preferred styles.

Keywords: cultural design traditions, ornamental design, organic forms from nature, textile design

Procedia PDF Downloads 217
927 Control of the Sustainability of Fresh Cheese in Order to Extend the Shelf-Life of the Product

Authors: Radovan Čobanović, Milica Rankov Šicar

Abstract:

Fresh cheese is a perishable food which cannot be kept for a long period of time. A sustainability study was conducted in order to extend the shelf life of the product, which was 15 days. According to the sustainability plan, 35 samples were stored for 30 days at 2°C−6°C and analyzed every 7th day, from the day of reception until the 30th day; the declared shelf life of the cheese thus expired during the study, between the 15th and 30th day of analyses. Cheese samples were subjected to sensory analysis (appearance, odor, taste, color, aroma) and bacteriological analyses (Listeria monocytogenes, Salmonella spp., Bacillus cereus, Staphylococcus aureus and total plate count) according to Serbian state regulation. All analyses were performed according to ISO methodology: sensory analysis ISO 6658, Listeria monocytogenes ISO 11290-1, Salmonella spp. ISO 6579, Bacillus cereus ISO 7932, Staphylococcus aureus ISO 6888-1, and total plate count ISO 4833. The analyses showed that after fifteen days of storage at the temperature defined by the manufacturer, i.e., within the product's shelf life, the cheese did not show any noticeable changes in sensory characteristics: smell and taste were unaffected, there was no separation of whey, and no strange smell or taste was present. As for the microbiological analyses, no pathogen was detected and the total plate count was at a level of 10³ cfu/g. After expiry of the shelf life, in the period between the 15th and 30th day of storage, the analyses showed separation of whey on the surface, a dried part of cheese along the edge of the container, and very weakly expressed sour-milky smell and taste. The microbiological analyses were still negative for pathogenic microorganisms, but the total plate count had risen to a level of 10⁶ cfu/g.
Based on the obtained results, it can be concluded that this product cannot have a longer shelf life than the one already defined, because sensory changes occur that would certainly influence customers' decisions to purchase this product.

Keywords: sustainability, fresh cheese, shelf-life, product

Procedia PDF Downloads 369
926 Wicking Bed Cultivation System as a Strategic Proposal for the Cultivation of Milpa and Mexican Medicinal Plants in Urban Spaces

Authors: David Lynch Steinicke, Citlali Aguilera Lira, Andrea León García

Abstract:

The proposal posed in this work comes from a research-action approach. In Mexico, a dialogue of knowledges may function as a link between traditional, local, pragmatic knowledge and technological, scientific knowledge. The advantage of generating this nexus lies in its positive impact on the environment, society and the economy. This work attempts to combine, on the one hand, traditional Mexican knowledge such as the use of medicinal herbs and the milpa agroecosystem, and on the other hand, a newly created agricultural ecotechnology whose main function is to take advantage of urban space and to save water: the wicking bed. In a globalized world, it is relevant to have a proposal whose most important aspect is to revalorize the culture through the acquisition of traditional knowledge, while adapting it to the new social and urbanized structures without threatening the environment. The methodology combines the research-action approach with a practical dimension, in which an experimental model of three wicking beds was implemented. In this model, medicinal herbs and milpa components were cultivated, and water efficiency and social acceptance were compared with a traditional ground crop; the whole practice was carried out in an urban social context. The ecotechnology has had great social acceptance, as its irrigation involves minimal effort and it is economically feasible for low-income people. The wicking bed system proposed in this project can be implemented in schools, urban and peri-urban environments, home gardens and public areas. The proposal managed to carry out an innovative and sustainable agricultural technology based on traditional Mexican knowledge, allowing the milpa agroecosystem to be regained in urban environments and strengthening food security in favour of the nutritional and protein benefits of the Mexican fare.

Keywords: milpa, traditional medicine, urban agriculture, wicking bed

Procedia PDF Downloads 374
925 Effect of Different Methods to Control the Parasitic Weed Phelipanche ramosa (L.) Pomel in Tomato Crop

Authors: Disciglio G., Lops F., Carlucci A., Gatta G., Tarantino A., Frabboni L., Tarantino E.

Abstract:

Phelipanche ramosa is considered the most damaging obligate flowering parasitic weed on a wide range of cultivated species. The semiarid regions of the world are the main centers of this parasitic weed, where heavy infestations are due to its ability to produce high numbers of seeds (up to 200,000) that remain viable for extended periods (more than 19 years). In this paper, 13 parasitic weed control treatments, comprising physical, chemical, biological and agronomic methods, including the use of resistant plants, were carried out. In 2014, a trial was performed on processing tomato (cv Docet) grown in pots filled with soil taken from a plot heavily infested by Phelipanche ramosa, at the Department of Agriculture, Food and Environment, University of Foggia (southern Italy). Tomato seedlings were transplanted on August 8, 2014 into a clay soil (USDA) fertilized with 100 kg ha⁻¹ of N, 60 kg ha⁻¹ of P₂O₅ and 20 kg ha⁻¹ of S; afterwards, a top dressing of 70 kg ha⁻¹ of N was applied. A randomized block design with 3 replicates was adopted. During the growing cycle of the tomato, at 70, 75, 81 and 88 days after transplantation, the number of parasitic shoots emerged in each pot was recorded, and leaf chlorophyll (SPAD meter) values of the tomato plants were measured. All data were subjected to analysis of variance (ANOVA) using the JMP software (SAS Institute Inc., Cary, NC, USA), and Tukey's test was used for the comparison of means. The results show lower SPAD color index values in parasitized tomato plants compared to healthy ones. In addition, no treatment provided complete control of Phelipanche ramosa; however, the virulence of the attacks was mitigated by some treatments: the Radicon product, compost activated with Fusarium, mineral nitrogen fertilizer, sulfur, enzone and the resistant tomato genotype.
It is assumed that these effects can be improved by combining some of these treatments with each other, especially for a gradual and continuing reduction of the parasite's "seed bank" in the soil.

Keywords: control methods, Phelipanche ramosa, tomato crop

Procedia PDF Downloads 607
924 New Coating Materials Based on Mixtures of Shellac and Pectin for Pharmaceutical Products

Authors: M. Kumpugdee-Vollrath, M. Tabatabaeifar, M. Helmis

Abstract:

Shellac is a natural polyester resin secreted by insects. Pectins are natural, non-toxic, water-soluble polysaccharides extracted from the peels of citrus fruits or the leftovers of apples. Both polymers are approved for use in the pharmaceutical industry and as food additives. SSB Aquagold® is an aqueous solution of shellac and can be used in a coating process as an enteric or controlled-release polymer. In this study, tablets containing 10 mg of methylene blue as a model drug were prepared with a rotary press. These tablets were coated with mixtures of shellac and one of several pectin types (CU 201, CU 501, CU 701 and CU 020), mostly in a 2:1 ratio, or with pure shellac, in a small-scale fluidized bed apparatus. A stable, simple and reproducible three-stage coating process was successfully developed. The drug contents of the coated tablets were determined using a UV-VIS spectrophotometer. The characterization of the surface and the film thickness was performed with scanning electron microscopy (SEM) and light microscopy. Release studies were performed in a dissolution apparatus with a basket. Most of the formulations were enteric coated. The dissolution profiles showed delayed or sustained release with a lag time of at least 4 h. Dissolution profiles of tablets coated with pure shellac had very long lag times, ranging from 13 to 17.5 h, and quite high slopes. The duration of the lag time and the slope of the dissolution profiles could be adjusted by adding the proper type of pectin to the shellac formulation and by varying the coating amount. For a coating formulation to serve as a colon delivery system, the prepared film should resist gastric fluid for at least 2 h and intestinal fluid for 4-6 h; the required delay was achieved with most of the shellac-pectin polymer mixtures.
The release profiles were fitted with a modified Korsmeyer-Peppas equation and with the Hixson-Crowell model; a correlation coefficient (R²) > 0.99 was obtained with the Korsmeyer-Peppas equation.
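As a hedged sketch of the Korsmeyer-Peppas fit mentioned above, the power law Mt/M∞ = k·tⁿ can be linearized as log(Mt/M∞) = log k + n·log t and fitted by least squares; the release fractions below are synthetic, not the study's dissolution data:

```python
import numpy as np

# Synthetic fractional-release data generated from a known power law
# (k = 0.1, n = 0.45); Korsmeyer-Peppas is conventionally applied to
# the early portion of the curve (Mt/Minf < 0.6).
t = np.array([1.0, 2.0, 4.0, 8.0])        # time, h
frac = 0.1 * t ** 0.45                    # Mt/Minf

n_fit, log_k = np.polyfit(np.log(t), np.log(frac), 1)
k_fit = np.exp(log_k)
print(round(n_fit, 3), round(k_fit, 3))   # recovers n = 0.45, k = 0.1
```

The fitted exponent n is what classifies the release mechanism (Fickian diffusion vs. anomalous transport); the R² of this regression is the statistic the abstract reports.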

Keywords: shellac, pectin, coating, fluidized bed, release, colon delivery system, kinetic, SEM, methylene blue

Procedia PDF Downloads 400
923 Working Memory and Phonological Short-Term Memory in the Acquisition of Academic Formulaic Language

Authors: Zhicheng Han

Abstract:

This study examines the correlation between knowledge of formulaic language, working memory (WM), and phonological short-term memory (PSTM) in Chinese L2 learners of English. It investigates whether WM and PSTM correlate differently with the acquisition of formulaic language, which may be relevant to the debate around the conceptualization of formulas. Connectionist approaches have led scholars to argue that formulas are form-meaning connections stored whole, making PSTM significant in the acquisition process as it pertains to the storage and retrieval of chunk information. Generativist scholars, on the other hand, have argued for the active participation of interlanguage grammar in the acquisition and use of formulaic language, where formulas are represented in the mind but retain an internal structure built around a lexical core. This would make WM, especially its processing component, an important cognitive factor, since it plays a role in processing and holding information for further analysis and manipulation. The current study asked L1 Chinese learners of English enrolled in graduate programs in China to complete a preference ranking task in which they ranked formulas, grammatical non-formulaic expressions, and ungrammatical phrases with and without the lexical core, in order of how likely they would be to encounter these phrases in the test sentences within academic contexts. Participants' syntactic proficiency was controlled with a cloze test and a grammar test. Regression analysis found a significant relationship between the processing component of WM and the preference for formulaic expressions in the ranking task, while no significant correlation was found for PSTM or syntactic proficiency. Correlational analysis found that WM, PSTM, and the two proficiency test scores significantly covary.
However, WM and PSTM have different predictor values for participants’ preference for formulaic language. Both storage and processing components of WM are significantly correlated with the preference for formulaic expressions while PSTM is not. These findings are in favor of the role of interlanguage grammar and syntactic knowledge in the acquisition of formulaic expressions. The differing effects of WM and PSTM suggest that selective attention to and processing of the input beyond simple retention play a key role in successfully acquiring formulaic language. Similar correlational patterns were found for preferring the ungrammatical phrase with the lexical core of the formula over the ones without the lexical core, attesting to learners’ awareness of the lexical core around which formulas are constructed. These findings support the view that formulaic phrases retain internal syntactic structures that are recognized and processed by the learners.

Keywords: formulaic language, working memory, phonological short-term memory, academic language

Procedia PDF Downloads 49
922 Mapping and Mitigation Strategy for Flash Flood Hazards: A Case Study of Bishoftu City

Authors: Berhanu Keno Terfa

Abstract:

Flash floods are among the most dangerous natural disasters that pose a significant threat to human existence. They occur frequently and can cause extensive damage to homes, infrastructure, and ecosystems while also claiming lives. Although flash floods can happen anywhere in the world, their impact is particularly severe in developing countries due to limited financial resources, inadequate drainage systems, substandard housing, a lack of early warning systems, and insufficient preparedness. To address these challenges, a comprehensive study was undertaken to analyze and map flood inundation using Geographic Information System (GIS) techniques, considering various factors that contribute to flash flood resilience, and to develop effective mitigation strategies. Key factors considered in the analysis include slope, drainage density, elevation, Curve Number, rainfall patterns, land-use/cover classes, and soil data. These variables were computed on ArcGIS software platforms; data from the Sentinel-2 satellite image (10-meter resolution) were utilized for land-use/cover classification, while slope, elevation, and drainage density were generated from the 12.5-meter resolution ALOS PALSAR DEM, and other relevant data were obtained from the Ethiopian Meteorological Institute. By integrating and regularizing the collected data through GIS and employing the analytic hierarchy process (AHP) technique, the study successfully delineated flash flood hazard (FFH) zones and generated a land suitability map for urban agriculture. The FFH model identified four levels of risk in Bishoftu City: very high (2106.4 ha), high (10464.4 ha), moderate (1444.44 ha), and low (0.52 ha), accounting for 15.02%, 74.7%, 10.1%, and 0.004% of the total area, respectively. The results underscore the vulnerability of many residential areas in Bishoftu City, particularly the previously developed central areas.
Accurate spatial representation of flood-prone areas and potential agricultural zones is crucial for designing effective flood mitigation and agricultural production plans. The findings of this study emphasize the importance of flood risk mapping in raising public awareness, demonstrating vulnerability, strengthening financial resilience, protecting the environment, and informing policy decisions. Given the susceptibility of Bishoftu City to flash floods, it is recommended that the municipality prioritize urban agriculture adaptation, proper settlement planning, and drainage network design.
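The AHP step described above derives one weight per hazard factor from expert pairwise comparisons before the weighted overlay. A hedged sketch of that weighting, using the common geometric-mean approximation; the 3×3 matrix and factor choice are illustrative only (the study combined seven factors with its own judgments):

```python
import numpy as np

# Hypothetical Saaty-scale pairwise comparison matrix, e.g. comparing
# slope vs. drainage density vs. elevation (values are invented).
A = np.array([[1.0,   3.0,   5.0],
              [1/3.0, 1.0,   3.0],
              [1/5.0, 1/3.0, 1.0]])

row_gm = A.prod(axis=1) ** (1.0 / A.shape[0])  # row geometric means
weights = row_gm / row_gm.sum()                # normalized priority weights
print(np.round(weights, 3))                    # weights sum to 1
```

Each raster layer is then reclassified to a common hazard scale and summed cell by cell with these weights to produce the FFH zone map; a consistency ratio check on A would normally precede this step.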

Keywords: remote sensing, flash flood hazards, Bishoftu, GIS

Procedia PDF Downloads 19
921 Computer Simulation Approach in the 3D Printing Operations of Surimi Paste

Authors: Timilehin Martins Oyinloye, Won Byong Yoon

Abstract:

Simulation technology is being adopted in many industries, with research focusing on new ways in which the technology becomes embedded within production, services, and society in general. 3D printing (3DP) technology is developing fast in the food industry; however, the limited processability of high-performance materials restricts the robustness of the process in some cases. Significantly, the printability of materials is the foundation of extrusion-based 3DP, with residual stress being a major challenge in the printing of complex geometry. In many situations, a trial-and-error method is used to determine the optimum printing condition, which wastes time and resources. In this report, surimi pastes at 3 moisture levels were investigated to find the optimum 3DP material and printing conditions by probing their rheology, flow characteristics in the nozzle, and post-deposition behavior using a finite element method (FEM) model. Rheological tests revealed that surimi paste with 82% moisture is suitable for 3DP. According to the FEM model, decreasing the nozzle diameter from 1.2 mm to 0.6 mm increased the die swell from 9.8% to 14.1%. The die swell ratio increased due to an increase in the pressure gradient (1.15×10⁷ Pa to 7.80×10⁷ Pa) at the nozzle exit. The nozzle diameter influenced the fluid properties, i.e., the shear rate, velocity, and pressure in the flow field, as well as the residual stress and the deformation of the printed sample, according to the FEM simulation. The post-printing stability of the model was investigated using an additive layer manufacturing (ALM) model. The ALM simulation revealed that the residual stress and total deformation of the sample were dependent on the nozzle diameter: a small nozzle diameter (0.6 mm) resulted in a greater total deformation (0.023), particularly at the top part of the model, which eventually caused the sample to collapse.
As the nozzle diameter increased, the accuracy of the model improved up to the optimum nozzle size (1.0 mm). Validation with 3D-printed surimi products confirmed that the nozzle diameter is a key parameter affecting the geometric accuracy of 3DP of surimi paste.
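As a hedged aside (illustrative numbers, not the paper's FEM output), the die-swell percentages quoted above correspond to the relative increase of extrudate diameter over nozzle diameter:

```python
def die_swell_pct(d_extrudate_mm, d_nozzle_mm):
    """Die swell as a percentage increase of extrudate diameter
    over nozzle diameter."""
    return 100.0 * (d_extrudate_mm - d_nozzle_mm) / d_nozzle_mm

# Illustrative: a 0.6 mm nozzle whose extrudate swells to about 0.6846 mm
# gives roughly the 14.1% die swell reported for the small nozzle.
print(round(die_swell_pct(0.6846, 0.6), 1))  # 14.1
```

The FEM model predicts the extrudate diameter from the flow field; this ratio is simply how that prediction is expressed as the reported percentage.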

Keywords: 3D printing, deformation analysis, die swell, numerical simulation, surimi paste

Procedia PDF Downloads 57
920 Degradation of Commercial Polychlorinated Biphenyl Mixture by Naturally Occurring Facultative Microorganisms via Anaerobic Dechlorination and Aerobic Oxidation

Authors: P. M. G. Pathiraja, P. Egodawatta, A. Goonetilleke, V. S. J. Te'o

Abstract:

The production and use of polychlorinated biphenyls (PCBs), a group of synthetic halogenated hydrocarbons, have been restricted worldwide due to their toxicity, and PCBs are categorized as one of the twelve priority persistent organic pollutants (POPs) by the Stockholm Convention. The low reactivity and high chemical stability of PCBs have made them highly persistent in the environment, and bio-concentration and bio-magnification along the food chain contribute to multiple health impacts in humans and animals. Remediating environments contaminated with PCBs has been a challenging task for decades. The use of microorganisms for remediation of PCB-contaminated soils and sediments has been widely investigated due to the potential to break down these complex contaminants with minimal environmental impact. To achieve effective bioremediation of PCB-contaminated environments, microbes were sourced from environmental samples and tested for their ability to hydrolyze PCBs under different conditions. The PCB degradation efficiencies of four naturally occurring facultative bacterial cultures, isolated through selective enrichment, were compared under aerobic and anaerobic conditions in minimal salt medium using 50 mg/L Aroclor 1260, a commonly used commercial PCB mixture, as the sole source of carbon. The results of a six-week study demonstrated that all the tested facultative Achromobacter, Ochrobactrum, Lysinibacillus and Pseudomonas strains are capable of degrading PCBs under both anaerobic and aerobic conditions while helping solubilize the hydrophobic PCBs in the aqueous minimal medium. Overall, the results suggest that some facultative bacteria are effective in degrading PCBs under anaerobic conditions through reductive dechlorination and under aerobic conditions through oxidation.
Therefore, the use of suitable facultative microorganisms under combined anaerobic-aerobic conditions, together with combinations of strains capable of both solubilizing and breaking down PCBs, has high potential for achieving higher PCB removal rates.

Keywords: bioremediation, combined anaerobic-aerobic degradation, facultative microorganisms, polychlorinated biphenyls

Procedia PDF Downloads 231
919 Convective Boiling of CO₂/R744 in Macro and Micro-Channels

Authors: Adonis Menezes, J. C. Passos

Abstract:

The current panorama of heat transfer technology and the scarcity of information about the convective boiling of CO₂ and hydrocarbons in small-diameter channels motivated the development of this work. Among non-halogenated refrigerants, CO₂/R744 has distinct thermodynamic properties compared to other fluids. R744 operates at significantly higher pressures and temperatures than other refrigerants, and this represents a challenge for the design of new evaporators, as the original systems must normally be resized to meet the specific characteristics of R744, which creates the need for new design and optimization criteria. To carry out the convective boiling tests of CO₂, an experimental apparatus capable of storing m = 10 kg of saturated CO₂ at T = -30 °C in an accumulator tank was used; this fluid was then pumped using a positive displacement pump with three pistons, and the outlet pressure was controlled and could reach up to P = 110 bar. This high-pressure saturated fluid passed through a Coriolis-type flow meter, and the mass velocities varied from G = 20 kg/m²·s up to G = 1000 kg/m²·s. After that, the fluid was sent to the first test section, of circular cross-section with diameter D = 4.57 mm, where the inlet and outlet temperatures and pressures were controlled and heating was provided by the Joule effect using a direct-current source with a maximum heat flux of q = 100 kW/m². The second test section used a multi-channel geometry (seven parallel channels), each with a square cross-section of D = 2 mm; this second test section also had temperature and pressure control at the inlet and outlet, and heating was likewise provided by a direct-current source, with a maximum heat flux of q = 20 kW/m².
The two-phase fluid was directed to a parallel-plate heat exchanger to return it to the liquid state, so that it could flow back to the accumulator tank, continuing the cycle. The multi-channel test section had a viewing section, and a high-speed CMOS camera was used for image acquisition, making it possible to observe the flow patterns. The experiments presented in this report were conducted rigorously, enabling the development of a database on the convective boiling of R744 in macro- and micro-channels. The analysis prioritized the processes from the onset of convective boiling until dryout of the wall in a subcritical regime. R744 resurfaces as an excellent alternative to chlorofluorocarbon refrigerants due to its negligible ODP (Ozone Depletion Potential) and GWP (Global Warming Potential), among other advantages. The results of the experimental tests were very promising for the use of CO₂ in micro-channels in convective boiling and served as a basis for determining the flow pattern map and a correlation for the heat transfer coefficient in the convective boiling of CO₂.

Keywords: convective boiling, CO₂/R744, macro-channels, micro-channels

Procedia PDF Downloads 134
918 The Biomechanical Assessment of Balance and Gait for Stroke Patients and the Implications in the Diagnosis and Rehabilitation

Authors: A. Alzahrani, G. Arnold, W. Wang

Abstract:

Background: Stroke commonly occurs in middle-aged and elderly populations, and the diagnosis of early stroke is still difficult. Patients who have suffered a stroke have different balance and gait patterns from healthy people. Advanced motion analysis techniques have been routinely used in the clinical assessment of cerebral palsy. However, so far, little research has been done on the direct diagnosis of early stroke using motion analysis. Objectives: The aim of this study was to investigate whether patients with stroke have different balance and gait from healthy people and which biomechanical parameters could be used to predict and diagnose potential patients at risk of stroke. Methods: Thirteen patients with stroke were recruited as subjects, whose gait and balance were analysed. Twenty normal subjects of matched age participated in this study as a control group. All subjects’ gait and balance data were collected using Vicon Nexus® to obtain the gait parameters and the kinetic and kinematic parameters of the hip, knee, and ankle joints in three planes of both limbs. Participants stood on force platforms to perform a single-leg balance test. Then, they were asked to walk along a 10 m walkway at their comfortable speed. Participants performed 6 trials of single-leg balance for each side and 10 trials of walking. From the recorded trials, three good ones were analysed using the Vicon Plug-in-Gait model to obtain gait parameters, e.g., walking speed, cadence, stride length, and joint parameters, e.g., joint angles, forces, moments, etc. Result: The temporal-spatial variables of stroke subjects were compared with those of the healthy subjects, and a significant difference (p < 0.05) was found between the groups. Step length, speed, and cadence were lower in stroke subjects compared to the healthy group.
The stroke patient group showed significantly decreased gait speed (mean and SD: 0.85 ± 0.33 m/s), cadence (96.71 ± 16.14 steps/min), and step length (0.509 ± 0.17 m) compared to the healthy group, whose gait speed was 1.2 ± 0.11 m/s, cadence 112 ± 8.33 steps/min, and step length 0.648 ± 0.43 m. Moreover, it was observed that patients with stroke have significant differences in the ankle, hip, and knee joints’ kinematics in the sagittal and coronal planes. Also, there was a significant difference between groups in the single-leg balance test: single-leg stance time in the stroke patients was shorter (5.97 ± 6.36 s) compared to the healthy group (14.36 ± 10.20 s). Conclusion: Our results showed that there are significant differences between stroke patients and healthy subjects in various aspects of the gait analysis and balance test. As a consequence of these findings, some biomechanical parameters, such as joint kinematics, gait parameters, and the single-leg stance balance test, could be used in clinical practice to predict and diagnose potential patients at high risk of further stroke.
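The significance of the gait-speed difference can be checked from the reported summary statistics alone. A sketch using Welch’s t-statistic (the choice of Welch’s test is an assumption; the abstract reports only means, SDs, and the group sizes of 13 and 20, not the test actually used):

```python
import math

def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch's t-statistic for two independent groups given mean, SD, and sample size."""
    se = math.sqrt(s1**2 / n1 + s2**2 / n2)  # standard error of the mean difference
    return (m1 - m2) / se

# Gait speed: stroke 0.85 +/- 0.33 m/s (n=13) vs healthy 1.2 +/- 0.11 m/s (n=20)
t = welch_t(0.85, 0.33, 13, 1.2, 0.11, 20)
print(round(t, 2))  # magnitude well above ~2, consistent with p < 0.05
```

The same calculation applied to cadence and step length reproduces the pattern of significant group differences reported above.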

Keywords: gait analysis, kinetics, kinematics, single-leg stance, stroke

Procedia PDF Downloads 132
917 The Valuable Triad of Adipokine Indices to Differentiate Pediatric Obesity from Metabolic Syndrome: Chemerin, Progranulin, Vaspin

Authors: Mustafa M. Donma, Orkide Donma

Abstract:

Obesity is associated with cardiovascular disease risk factors and metabolic syndrome (MetS). In this study, associations between adipokines and adipokine as well as obesity indices were evaluated. Plasma adipokine levels may exhibit variations according to body adipose tissue mass. Besides, upon consideration of obesity as an inflammatory disease, adipokines may play some roles in this process. The ratios of proinflammatory adipokines to adiponectin may act as highly sensitive indicators of body adipokine status. The aim of the study is to present some adipokine indices thought to be helpful for the evaluation of childhood obesity and to determine the best discriminators in the diagnosis of MetS. 80 prepubertal children (aged 6-9.5 years) included in the study were divided into three groups: 30 children with normal weight (NW), 25 morbid obese (MO) children and 25 MO children with MetS. Physical examinations were performed. Written informed consent forms were obtained from the parents. The study protocol was approved by the Ethics Committee of Namik Kemal University Medical Faculty. Anthropometric measurements, such as weight, height, waist circumference (C), hip C, head C, and neck C, were recorded. Values for body mass index (BMI) and the diagnostic obesity notation model assessment Index-II (D2 index), as well as waist-to-hip and head-to-neck ratios, were calculated. Adiponectin, resistin, leptin, chemerin, vaspin, and progranulin assays were performed by ELISA. Adipokine-to-adiponectin ratios were obtained. SPSS Version 20 was used for the evaluation of data. p values ≤ 0.05 were accepted as statistically significant. Values of BMI, the D2 index, and the waist-to-hip and head-to-neck ratios did not differ between the MO and MetS groups (p ≥ 0.05). Except for progranulin (p ≤ 0.01), similar patterns were observed for the plasma levels of each adipokine. There was no difference in vaspin or resistin levels between the NW and MO groups.
Significantly increased leptin-to-adiponectin, chemerin-to-adiponectin and vaspin-to-adiponectin values were noted in the MO group in comparison with those of NW. The most valuable adipokine index was progranulin-to-adiponectin (p ≤ 0.01). This index was strongly correlated with the vaspin-to-adiponectin ratio in all groups (p ≤ 0.05). There was no correlation between vaspin-to-adiponectin and chemerin-to-adiponectin in the NW group. However, a correlation existed in the MO group (r = 0.486; p ≤ 0.05). A much stronger correlation (r = 0.609; p ≤ 0.01) was observed in the MetS group between these two adipokine indices. No correlations were detected between vaspin and progranulin or between vaspin and chemerin levels. Correlation analyses showed a unique profile confined to MetS children. Adiponectin was found to be correlated with the waist-to-hip (r = -0.435; p ≤ 0.05) as well as the head-to-neck (r = 0.541; p ≤ 0.05) ratio only in MetS children. In this study, it was investigated whether adipokine indices have priority over adipokine levels. In conclusion, vaspin-to-adiponectin, progranulin-to-adiponectin, and chemerin-to-adiponectin, along with waist-to-hip and head-to-neck ratios, were the optimal combinations. Adiponectin and the waist-to-hip, head-to-neck, vaspin-to-adiponectin, and chemerin-to-adiponectin ratios had appropriate discriminatory capability for MetS children.

Keywords: adipokine indices, metabolic syndrome, obesity indices, pediatric obesity

Procedia PDF Downloads 199
916 Synthesis, Characterization and Photocatalytic Activity of Electrospun Zinc and/or Titanium Oxide Nanofibers for Methylene Blue Degradation

Authors: Zainab Dahrouch, Beatrix Petrovičová, Claudia Triolo, Fabiola Pantò, Angela Malara, Salvatore Patanè, Maria Allegrini, Saveria Santangelo

Abstract:

Synthetic dyes dispersed in water cause environmental damage and have harmful effects on human health. Methylene blue (MB) is broadly used as a dye in the textile, pharmaceutical, printing, cosmetics, leather, and food industries. The complete removal of MB is difficult due to the presence of aromatic rings in its structure. The present study is focused on electrospun nanofibers (NFs) with engineered architecture and surface to be used as catalysts for the photodegradation of MB. Ti and/or Zn oxide NFs are produced by electrospinning precursor solutions with different Ti:Zn molar ratios (from 0:1 to 1:0). Subsequent calcination and cooling steps are operated at fast rates to generate porous NFs with capture centers that reduce the recombination rate of the photogenerated charges. The comparative evaluation of the NFs as photocatalysts for the removal of MB from an aqueous solution with a dye concentration of 15 µM under UV irradiation shows that the binary (wurtzite ZnO and anatase TiO₂) oxides exhibit higher catalytic activity than the ternary (ZnTiO₃ and Zn₂TiO₄) oxides. The higher band gap and lower crystallinity of the ternary oxides are responsible for their lower photocatalytic activity. It has been found that the optimal load for wurtzite ZnO is 0.66 mg mL⁻¹, giving a degradation rate constant of 7.94×10⁻² min⁻¹. The optimal load for anatase TiO₂ is lower (0.33 mg mL⁻¹) and the corresponding rate constant (1.12×10⁻¹ min⁻¹) is higher. This finding (higher activity with lower load) is of crucial importance for the scaling up of the process to an industrial scale. Indeed, the anatase NFs outperform even the commonly used P25-TiO₂ benchmark. Besides, they can be reused twice without any regeneration treatment, with 5.2% and 18.7% activity decrease after the second and third use, respectively.
Thanks to the scalability of the electrospinning technique, this laboratory-scale study provides a perspective towards the sustainable large-scale manufacture of photocatalysts for the treatment of industrial effluents.
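If the photodegradation follows pseudo-first-order kinetics, the usual reading of a rate constant expressed in min⁻¹ (the abstract does not state the kinetic model explicitly, so this is an assumption), the reported constants translate directly into treatment times:

```python
import math

def time_to_removal(k_per_min: float, fraction_removed: float) -> float:
    """Time (min) to remove a given fraction of dye under first-order decay C(t) = C0*exp(-k*t)."""
    return -math.log(1.0 - fraction_removed) / k_per_min

# Reported rate constants: ZnO 7.94e-2 min^-1, anatase TiO2 1.12e-1 min^-1
for name, k in [("ZnO", 7.94e-2), ("TiO2", 1.12e-1)]:
    print(name, round(time_to_removal(k, 0.90), 1), "min to 90% MB removal")
```

Under this assumption, the anatase fibers reach 90% removal roughly eight minutes sooner than the ZnO fibers, illustrating why the higher rate constant at lower catalyst load matters for scale-up.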

Keywords: anatase, capture centers, methylene blue dye, nanofibers, photodegradation, zinc oxide

Procedia PDF Downloads 147
915 Comparative Analysis between Thailand and the United States of a Wholesale Exemption for Vertical Restraint Regarding Intellectual Property Licensing

Authors: Sanpetchuda Krutkrua, Suphawatchara Malanond

Abstract:

Competition law is not new in Thailand. Thailand passed its first competition law during the Second World War in order to stop business operators from monopolizing food and basic living supplies. The competition law in Thailand has been amended several times during the past eighty years in order to make it suitable for current economic and social conditions. In 2017, Thailand enacted the current Trade Competition Act B.E. 2560, which contains several changes to the regime intended to enhance the prevention of collusive practices and monopolization through both vertical and horizontal restraints. Section 56 of the Act provides exemptions for vertical relationships, i.e., arrangements in the form of a complementary relationship between business operators, franchising agreements between franchisor and franchisee, and licensing agreements between licensor and licensee. The key is that such agreements must not be excessive, create monopolization or an attempt to monopolize, or cause any impact on consumers regarding the price, quality, or quantity of goods. The goal of the paper is to explore the extent of the exemption under Section 56 and its sequential regulations regarding vertical trade restraints in the case of intellectual property licensing. The research will be conducted in the form of a comparative analysis of exemptions for collusive practices under United States antitrust law and the Thai Trade Competition Act B.E. 2560. United States antitrust law, fairly similar to the Thai Trade Competition Act B.E. 2560, views intellectual property licensing as having pro-competitive benefits to the market as long as the licensing agreement does not harm competition among business operators that could have or would have been competitors. United States antitrust law identifies the relationship between the parties of the agreement, whether such agreement is horizontal or vertical or both.
Even though the nature of licensing agreements is primarily vertical, the relationship between licensor and licensees can also be horizontal if they could have been potential competitors in the market. United States antitrust law frowns upon, if not prohibits, horizontal restraints regarding intellectual property licensing, but does not impose the same restrictions on vertical trade restraints regarding intellectual property licensing.

Keywords: antitrust, competition law, vertical restraint, intellectual property, intellectual property licensing, comparative law

Procedia PDF Downloads 158
914 Thinking Historiographically in the 21st Century: The Case of Spanish Musicology, a History of Music without History

Authors: Carmen Noheda

Abstract:

This text reflects on ways of thinking about the study of the history of music by examining the production of historiography in Spain at the turn of the century. Based on concepts developed by the historical theorist Jörn Rüsen, the article focuses on the following aspects: the theoretical artifacts that structure the interpretation of the limits of writing the history of music, the narrative patterns used to give meaning to the discourse of history, and the orientation context that functions as a source of criteria of significance for both interpretation and representation. This analysis intends to show that historical music theory is not only a means to abstractly explore the complex questions connected to the production of historical knowledge, but also a tool for obtaining concrete images of the intellectual practice of professional musicologists. Writing about the historiography of contemporary Spanish music requires both a knowledge of the history that is being written and investigated and a familiarity with current theoretical trends and methodologies that allow the different tendencies that have arisen in recent decades to be recognized and defined. To carry out these premises, this project takes as its point of departure the 'immediate historiography' of Spanish music at the beginning of the 21st century. The hesitation that Spanish musicology has shown in opening itself to new anthropological and sociological approaches, along with its rigidity in the face of the multiple shifts in dynamic forms of thinking about history, has produced a standstill whose consequences can be seen in the delayed reception of the historiographical revolutions that emerged in the last century. Methodologically, this essay is underpinned by Rüsen’s notion of the disciplinary matrix, which is an important contribution to the understanding of historiography.
Combined with his parallel conception of differing paradigms of historiography, it is useful for analyzing present-day forms of thinking about the history of music. Following these theories, the article will first address the characteristics and identification of current historiographical currents in Spanish musicology and then carry out an analysis based on Rüsen’s theories. Finally, it will offer some considerations for the future of musical historiography, whose atrophy has not only fostered the maintenance of an ingrained positivist tradition, but has also implied, in the case of Spain, an absence of methodological schools and insufficient participation in international theoretical debates. An update of fundamental concepts has become necessary in order to understand that thinking historically about music demands that we remember that subjects are always linked by reciprocal interdependencies that structure and define what it is possible to create. In this sense, the fundamental aim of this research departs from the recognition that the history of music is embedded in the conditions that make it conceivable, communicable and comprehensible within a society.

Keywords: historiography, Jörn Rüsen, Spanish musicology, theory of history of music

Procedia PDF Downloads 181
913 Household Solid Waste Generation per Capita and Management Behaviour in Mthatha City, South Africa

Authors: Vuyayo Tsheleza, Simbarashe Ndhleve, Christopher Mpundu Musampa

Abstract:

Mismanagement of waste is an increasingly common problem in most developing countries, especially in fast-growing cities. Household solid waste in Mthatha has been reported to be one of the problems facing the city and is overwhelming local authorities, as it is beyond the capacity of the existing waste management system. This study estimates per capita waste generation and the quantity of different waste types generated by inhabitants of formal and informal settlements in Mthatha, as well as waste management practices in the aforementioned socio-economic strata. A total of 206 households were systematically selected for the study using stratified random sampling, categorized into formal and informal settlements. Data on household waste generation rate, composition, awareness, and household waste management behaviour and practices were gathered through mixed methods. Sampled households from both formal and informal settlements, with a total of 684 people, generated 1949 kg per week. This translates to 2.84 kg per capita per week. On average, the rate of solid waste generation per capita was 0.40 kg per day for a person living in an informal settlement and 0.56 kg per day for a person living in a formal settlement. Recorded in descending order, food waste accounted for the largest proportion of generated waste at approximately 23.7%, followed by disposable nappies at 15%, paper and cardboard at 13.34%, glass at 13.03%, metals at 11.99%, plastics at 11.58%, residue at 5.17%, and textiles at 3.93%, with leather and rubber at 2.28% as the least generated waste type. Different waste management practices were reported in formal and informal settlements, with formal settlements proving to be more concerned about environmental management than their informal counterparts.
Understanding attitudes and perceptions of waste management, waste types, and per capita solid waste generation rates can help evolve appropriate waste management strategies based on the principles of reduce, re-use, recycle, and environmentally sound disposal, and can also assist in projecting future waste generation rates. These results can be used as input when designing growing cities’ waste management plans.
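The per capita figures follow directly from the reported totals; the small deviation from the stated 2.84 kg per week is rounding. A sketch of the calculation:

```python
total_kg_per_week = 1949
people = 684

per_capita_week = total_kg_per_week / people
per_capita_day = per_capita_week / 7

print(round(per_capita_week, 2))  # ~2.85 kg per capita per week
print(round(per_capita_day, 2))   # ~0.41 kg per capita per day

# Largest waste fraction by the reported composition (percentages from the abstract)
composition = {"food": 23.7, "nappies": 15.0, "paper/cardboard": 13.34,
               "glass": 13.03, "metals": 11.99, "plastics": 11.58,
               "residue": 5.17, "textiles": 3.93, "leather/rubber": 2.28}
print(max(composition, key=composition.get))  # food
```

The daily figure of roughly 0.41 kg sits between the reported informal-settlement (0.40 kg) and formal-settlement (0.56 kg) rates, as expected for a pooled average weighted towards the larger informal sample.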

Keywords: awareness, characterisation, per capita, quantification

Procedia PDF Downloads 283
912 Social Protection Reforms in Indonesia: Towards a Life Cycle Based Social Protection System

Authors: Dyah Larasati, Karishma Alize Huda, Sri Kusumastuti Rahayu, Martin Daniel Siyaranamual

Abstract:

Indonesia continues to reform its social protection system to provide the needed protection for its citizens. Indonesia’s social protection system consists of social assistance programs (non-contributory/tax-financed), specifically targeted at the poor and at-risk, and social security/insurance programs (contributory system). The social assistance programs have mostly been implemented since 1998, the national health insurance since 2014, and the employment social insurance since 2015. One major reform has been improving the targeting performance of the major social assistance portfolios, including (1) food assistance for poor families (Rastra and BPNT/non-cash food assistance); (2) education assistance for poor children; (3) the conditional cash transfer for poor families (PKH); and (4) subsidized membership of the National Health Insurance (JKN-PBI) for poor and at-risk individuals. For social insurance (through the BPJS Employment program), several initiatives have been implemented to expand the program’s contributing membership, although it mostly benefits formal sector workers. However, major gaps still exist, especially for the emerging middle-income groups, who typically work in the informal sector. They have yet to receive the protection needed to sustain their social and economic growth. Since 2017, TNP2K (the National Team for the Acceleration of Poverty Reduction) under the Vice President’s office has led the social protection discourse, as the government understands the need to address vulnerabilities across the life cycle and prioritize support to the most at-risk population, particularly the elderly, young children, and people with disabilities.
Discussion and advocacy recommending more investment are continuing, so that the government can establish a comprehensive social protection system in the near future (2020-2024) that protects children through an inclusive child benefit program, builds a system that benefits more working-age adults (including individuals with disabilities), and provides three-tier protection for the elderly as they reach 65 years.

Keywords: poverty reduction, social assistance, social insurance, social protection

Procedia PDF Downloads 169
911 Perception of Predictive Confounders for the Prevalence of Hypertension among Iraqi Population: A Pilot Study

Authors: Zahraa Albasry, Hadeel D. Najim, Anmar Al-Taie

Abstract:

Background: Hypertension is considered one of the most important causes of cardiovascular complications and one of the leading causes of worldwide mortality. Identifying the potential risk factors associated with this medical health problem plays an important role in minimizing its incidence and related complications. The objective of this study is to assess and understand the perception of specific predictive confounding factors on the prevalence of hypertension (HT) among a sample of the Iraqi population in Baghdad, Iraq. Materials and Methods: A randomized cross-sectional study was carried out on 100 adult subjects during their visit to the outpatient clinic in a certain sector of Baghdad Province, Iraq. Demographic, clinical and health records alongside specific screening and laboratory tests of the participants were collected and analyzed to detect the potential influence of confounding factors on the prevalence of HT. Results: 63% of the study participants suffered from HT, most of them female patients (P < 0.005). Patients aged between 41-50 years old suffered from HT significantly more often than other age groups (63.5%, P < 0.001). 88.9% of the participants were obese (P < 0.001) and 47.6% had diabetes with HT. Positive family history and a sedentary lifestyle were significantly more common among all hypertensive groups (P < 0.05). High salt and fatty food intake was significantly more common among patients suffering from isolated systolic hypertension (ISHT) (P < 0.05). A significant positive correlation between packed cell volume (PCV) and systolic blood pressure (SBP) (r = 0.353, P = 0.048) was found among normotensive participants.
Among hypertensive patients, a significant positive correlation was found between triglycerides (TG) and both SBP (r = 0.484, P = 0.031) and diastolic blood pressure (DBP) (r = 0.463, P = 0.040), while low-density lipoprotein cholesterol (LDL-c) showed a significant positive correlation with DBP (r = 0.443, P = 0.021). Conclusion: The prevalence of HT among the Iraqi population is of major concern. Further consideration is required to detect the impact of potential risk factors and to minimize blood pressure (BP) elevation and reduce the risk of other cardiovascular complications later in life.

Keywords: correlation, hypertension, Iraq, risk factors

Procedia PDF Downloads 124
910 The Confluence between Autism Spectrum Disorder and the Schizoid Personality

Authors: Murray David Schane

Abstract:

Through years of clinical encounters with patients with autism spectrum disorders and those with a schizoid personality, the many defining diagnostic features shared between these conditions have been explored, current neurobiological differences have been reviewed, and critical, distinct treatment strategies for each have been devised. The paper compares and contrasts the apparent similarities between autism spectrum disorders and the schizoid personality as found in these DSM descriptive categories: restricted range of social-emotional reciprocity; poor non-verbal communicative behavior in social interactions; difficulty developing and maintaining relationships; detachment from social relationships; lack of desire for or enjoyment of close relationships; and preference for solitary activities. In this paper autism, fundamentally a communicative disorder, is revealed to present clinically as a pervasive aversive response to efforts to engage with or be engaged by others. Autists with the Asperger presentation typically have language but have difficulty understanding humor, irony, sarcasm, metaphoric speech, and even narratives about social relationships. They also tend to seek sameness, possibly to avoid problems of social interpretation. Repetitive behaviors engage many autists as a screen against ambient noise, social activity, and challenging interactions. Also in this paper, the schizoid personality is revealed as a pattern of social avoidance, self-sufficiency, and apparent indifference to others as a complex psychological defense against a deep, long-abiding fear of appropriation and perverse manipulation. Neither genetic nor MRI studies have yet located the explanatory data that identify the cause or the neurobiology of autism. Similarly, studies of the schizoid have yet to group that condition with those found in schizophrenia.
Through presentations of clinical examples, the treatment of autists of the Asperger type is shown to address the autist’s extreme social aversion, which also precludes the experience of empathy. Autists are revealed as forming social attachments but without the capacity to interact with mutual concern. Empathy is shown to be teachable; as social avoidance relents, autists can come to recognize and acknowledge the meaning and signs of empathic needs. Treatment of schizoids is shown to revolve around joining empathically with the schizoid’s apprehensions about interpersonal, interactive proximity. Models of both autism and schizoid personality traits have yet to be replicated in animals, thereby eliminating the role of translational research in providing the kinds of clues to behavioral patterns that can be related to genetic, epigenetic and neurobiological measures. But as these clinical examples attest, treatment strategies have significant impact.

Keywords: autism spectrum, schizoid personality traits, neurobiological implications, critical diagnostic distinctions

Procedia PDF Downloads 103
909 Multi-Elemental Analysis Using Inductively Coupled Plasma Mass Spectrometry for the Geographical Origin Discrimination of Greek Giant Beans “Gigantes Elefantes”

Authors: Eleni C. Mazarakioti, Anastasios Zotos, Anna-Akrivi Thomatou, Efthimios Kokkotos, Achilleas Kontogeorgos, Athanasios Ladavos, Angelos Patakas

Abstract:

“Gigantes Elefantes” is a particularly dynamic crop of giant beans cultivated in western Macedonia (Greece). This variety of large beans, grown in this area and specifically in the regions of Prespes and Kastoria, is a protected designation of origin (PDO) product with high nutritional quality. Mislabeling of geographical origin and blending with unidentified samples are common fraudulent practices in the Greek food market, with financial and possible health consequences. In recent decades, multi-elemental composition analysis has been used to identify the geographical origin of foods and agricultural products. In an attempt to discriminate the authenticity of Greek beans, multi-elemental analysis (Ag, Al, As, B, Ba, Be, Ca, Cd, Co, Cr, Cs, Cu, Fe, Ga, Ge, K, Li, Mg, Mn, Mo, Na, Nb, Ni, P, Pb, Rb, Re, Se, Sr, Ta, Ti, Tl, U, V, W, Zn, Zr) was performed by inductively coupled plasma mass spectrometry (ICP-MS) on 320 samples of beans originating from Greece (Prespes and Kastoria), China, and Poland. All samples were collected during the autumn of 2021. The obtained data were analysed by principal component analysis (PCA), an unsupervised statistical method that reduces the dimensionality of large datasets. Statistical analysis revealed a clear separation of beans cultivated in Greece from those grown in China and Poland. An adequate discrimination of geographical origin between bean samples originating from the two Greek regions, Prespes and Kastoria, was also evident. Our results suggest that multi-elemental analysis combined with an appropriate multivariate statistical method could be a useful tool for the geographical authentication of beans.
Acknowledgment: This research has been financed by the Public Investment Programme/General Secretariat for Research and Innovation, under the call “YPOERGO 3”, code 2018SE01300000, project title: ‘Elaboration and implementation of methodology for authenticity and geographical origin assessment of agricultural products’.
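As a minimal sketch of the PCA step described above: PCA on a mean-centered concentration matrix can be computed directly via the singular value decomposition. The element concentrations below are invented placeholders, not measurements from the study; only the procedure (centering, SVD, projection, explained variance) reflects the method named in the abstract.

```python
import numpy as np

# Hypothetical element-concentration matrix: rows = bean samples,
# columns = element concentrations (e.g., K, Mg, Rb, Sr).
# These values are illustrative, not data from the study.
X = np.array([
    [12.1, 1.8, 0.42, 0.031],   # Greek sample (Prespes)
    [12.4, 1.7, 0.45, 0.029],   # Greek sample (Kastoria)
    [ 9.8, 2.6, 0.21, 0.058],   # Chinese sample
    [10.1, 2.5, 0.19, 0.061],   # Polish sample
])

# PCA via SVD on the mean-centered data: unsupervised, so the origin
# labels are never used when computing the projection.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt.T               # sample coordinates in PC space
explained = S**2 / np.sum(S**2)  # fraction of variance per component
```

Plotting the first two columns of `scores` would give the kind of two-dimensional map on which separation by geographical origin becomes visible.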

Keywords: geographical origin, authenticity, multi-elemental analysis, beans, ICP-MS, PCA

Procedia PDF Downloads 63
908 Building Climate Resilience in the Health Sector in Developing Countries: Experience from Tanzania

Authors: Hussein Lujuo Mohamed

Abstract:

Introduction: Public health has always been influenced by climate and weather. Changes in climate and climate variability, particularly changes in weather extremes, affect the environment that provides people with clean air, food, water, shelter, and security. Tanzania is not an exception to the threats of climate change. The health sector is among the most affected, owing to the emergence and proliferation of infectious diseases, thereby affecting the health of the population and impeding achievement of the Sustainable Development Goals. Methodology: A desk review of documented issues pertaining to climate change and health in Tanzania was conducted using the Google search engine, with keywords including climate change, link, health, and climate initiatives. Where information was not available online, documents from the Ministry of Health, the Vice President's Office (Environment), local government authorities, the Ministry of Water, WHO, and research and training institutions were reviewed, including policy brief papers, fieldwork activity reports, training manuals, and guidelines. Results: Six main climate resilience activities were identified in Tanzania: development and implementation of climate-resilient water safety plan guidelines for both rural and urban water authorities; capacity building of rural and urban water authorities on implementation of climate-resilient water safety plans; capacity strengthening of local environmental health practitioners on mainstreaming climate change and health into comprehensive council health plans; vulnerability and adaptation assessment for the health sector; mainstreaming climate change in the National Health Policy; and development of a risk communication strategy on climate.
In addition, information, education, and communication materials on climate change were developed to sensitize communities and create awareness of climate change issues and their effects on public health. Conclusion: Proper implementation of these interventions will help the country become resilient to many impacts of climate change in the health sector and serve as a good example for other least developed countries.

Keywords: climate, change, Tanzania, health

Procedia PDF Downloads 105
907 Fabrication of Highly Conductive Graphene/ITO Transparent Bi-Film through Chemical Vapor Deposition (CVD) and Organic Additives-Free Sol-Gel Techniques

Authors: Bastian Waduge Naveen Harindu Hemasiri, Jae-Kwan Kim, Ji-Myon Lee

Abstract:

Indium tin oxide (ITO) remains the industry-standard transparent conducting oxide thanks to its superior performance. Recently, graphene has emerged as a strong candidate, with unique properties, to replace ITO. However, graphene/ITO hybrid composites are a newly emerging field in electronics. In this study, a graphene/ITO composite bi-film was synthesized by a two-step process. 10 wt.% tin-doped ITO thin films were produced by an environmentally friendly aqueous sol-gel spin-coating technique from the economical salts In(NO3)3·H2O and SnCl4, without organic additives. Oxygen-plasma-treated glass substrates with enhanced wettability and surface free energy (97.6986 mJ/m²) were used to form void-free, continuous ITO films. The spin-coated samples were annealed at 600 °C for 1 hour under low vacuum to obtain crystallized ITO films. The crystal structure and crystalline phases of the ITO thin films were analyzed by X-ray diffraction (XRD), and the Scherrer equation was used to determine the crystallite size. Detailed information about the chemical and elemental composition of the ITO film was obtained by X-ray photoelectron spectroscopy (XPS) and by energy-dispersive X-ray spectroscopy (EDX) coupled with FE-SEM, respectively. Graphene was synthesized by the chemical vapor deposition (CVD) method on Cu foil at 1000 °C for 1 min. The quality of the synthesized graphene was characterized by Raman spectroscopy (532 nm excitation laser), with data collected at room temperature under ambient atmosphere. Surface and cross-sectional observations were performed by FE-SEM. The optical transmission and sheet resistance were measured by UV-Vis spectroscopy and a four-point probe head at room temperature, respectively. Electrical properties were also characterized through V-I measurements.
XRD patterns reveal that the films contain only the In2O3 phase and exhibit the polycrystalline nature of the cubic structure, with the main peak from the (222) plane. The peak positions of In3d5/2 (444.28 eV) and Sn3d5/2 (486.7 eV) in the XPS results indicate that indium and tin are present only in oxide form. The UV-visible transmittance is 91.35% at 550 nm, with a specific resistance of 5.88 × 10⁻³ Ω·cm. For the as-synthesized CVD graphene on SiO2/Si, the G and 2D bands in the Raman spectrum appear at 1582.52 cm⁻¹ and 2690.54 cm⁻¹, respectively, and the measured intensity ratios of 2D to G (I2D/IG) and D to G (ID/IG) are 1.531 and 0.108, respectively. When the CVD graphene is transferred onto the ITO-coated glass, however, the G and 2D peaks appear at 1573.57 cm⁻¹ and 2668.14 cm⁻¹, red-shifted by 8.948 cm⁻¹ and 22.396 cm⁻¹, respectively. The graphene/ITO bi-film shows modified electrical properties compared with the sol-gel-derived ITO film: its sheet resistance is 12.03% lower than that of the ITO film alone. Further, the fabricated graphene/ITO bi-film shows 88.66% transmittance at the 550 nm wavelength.
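The crystallite-size estimate mentioned above follows the Scherrer equation, D = Kλ / (β cos θ). A minimal sketch of that calculation is given below; the Cu Kα wavelength is a standard value, but the peak position and FWHM used here are assumptions for illustration, not figures reported in the abstract.

```python
import math

def scherrer_crystallite_size(wavelength_nm, fwhm_deg, two_theta_deg, K=0.9):
    """Scherrer estimate D = K * lambda / (beta * cos(theta)).

    beta is the peak FWHM converted to radians, theta is half the
    diffraction angle 2-theta, and K ~ 0.9 is the usual shape factor.
    """
    beta = math.radians(fwhm_deg)
    theta = math.radians(two_theta_deg / 2.0)
    return K * wavelength_nm / (beta * math.cos(theta))

# Illustrative inputs: Cu K-alpha wavelength 0.15406 nm; the FWHM (0.4 deg)
# and (222) peak position (~30.6 deg 2-theta) are assumed values.
D = scherrer_crystallite_size(0.15406, fwhm_deg=0.4, two_theta_deg=30.6)
```

With these assumed inputs the estimate lands around 20 nm; a broader peak (larger FWHM) yields a smaller crystallite size, as the formula implies.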

Keywords: chemical vapor deposition, graphene, ITO, Raman Spectroscopy, sol-gel

Procedia PDF Downloads 251
906 Determining Disparities in the Distribution of the Energy Efficiency Resource through the History of Michigan Policy

Authors: M. Benjamin Stacey

Abstract:

Energy efficiency has been increasingly recognized as a high-value resource through state policies that require utility companies to implement efficiency programs. While policymakers have recognized the statewide economic, environmental, and health-related value to residents who rely on this grid-supplied resource, varying interests in energy efficiency between socioeconomic groups stand undifferentiated in most state legislation; instead, the benefits are often assumed to be distributed equitably across these groups. Despite this, these policies are frequently cited by advocacy groups, regulatory bodies, and utility companies for their ability to address the negative financial, health, and other social impacts of energy poverty in low-income communities. Yet, while most states, like Michigan, require programs that target low-income consumers, there are often no requirements for equitable investment and energy savings for low-income consumers, nor do the policies stipulate minimum spending levels on low-income programs. To understand the consequences of these omissions, this study examines the distribution of program funds and energy efficiency savings to answer a fundamental energy justice question: are there disparities in the investment and benefits of energy efficiency programs between socioeconomic groups? This study compiles data covering the history of Michigan's energy efficiency policy implementation from 2010 to 2016, analyzing the energy efficiency portfolios of Michigan's two main energy providers. To make accurate comparisons between these two providers' investments and energy savings in low-income and non-low-income programs, the socioeconomic variation in each utility's coverage area was captured and accounted for using GIS and US Census data.
Interestingly, this study found that both providers invested more equitably in natural gas efficiency programs; in electricity efficiency programs, however, they together invested roughly three times less per household in low-income programs, which resulted in ten times less electricity savings per household. This study also compares variation between commission-approved utility plans and actual spending and savings results, with the patterns pointing to differing portfolio management strategies between the companies. The study reveals that, over the history of the implementation of Michigan's energy efficiency policy, the 35% of Michigan's population who qualify as low income have received substantially disproportionate funding and energy savings under the policy. It provides an overview of the results from a social perspective, raises concerns about the impact on energy poverty and equity between consumer groups, and offers an applicable tool for lawmakers, regulatory agencies, utility portfolio managers, and advocacy groups concerned with addressing issues related to energy poverty.
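The per-household comparison underlying the findings above can be sketched as a simple normalization: divide total program spending and total energy savings by the number of households in each group before comparing. All figures below are invented to mirror the reported pattern (roughly 3x less investment and 10x less electricity savings per low-income household); they are not the study's data.

```python
# Illustrative per-household comparison between income groups.
# Totals and household counts are made-up stand-ins, not study figures.
def per_household(total, households):
    return total / households

low_income = {"spend_usd": 6.0e6,  "savings_kwh": 4.0e6,  "households": 1.2e6}
non_low    = {"spend_usd": 36.0e6, "savings_kwh": 80.0e6, "households": 2.4e6}

# Ratio of non-low-income to low-income investment per household.
spend_ratio = (per_household(non_low["spend_usd"], non_low["households"])
               / per_household(low_income["spend_usd"], low_income["households"]))

# Ratio of non-low-income to low-income electricity savings per household.
savings_ratio = (per_household(non_low["savings_kwh"], non_low["households"])
                 / per_household(low_income["savings_kwh"], low_income["households"]))
```

Normalizing by household count matters: comparing raw totals between groups of different sizes would understate or overstate the disparity.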

Keywords: energy efficiency, energy justice, low income, state policy

Procedia PDF Downloads 181
905 Simulation of the FDA Centrifugal Blood Pump Using High Performance Computing

Authors: Mehdi Behbahani, Sebastian Rible, Charles Moulinec, Yvan Fournier, Mike Nicolai, Paolo Crosetto

Abstract:

Computational fluid dynamics (CFD) blood-flow simulations are increasingly used to develop and validate blood-contacting medical devices. This study shows that numerical simulations can provide additional and accurate estimates of relevant hemodynamic indicators (e.g., recirculation zones or wall shear stresses), which may be difficult and expensive to obtain from in-vivo or in-vitro experiments. The most recent FDA (Food and Drug Administration) benchmark consists of a simplified centrifugal blood pump model that contains fluid flow features commonly found in these devices, with a clear focus on highly turbulent phenomena. The FDA centrifugal blood pump study comprises six test cases with different volumetric flow rates ranging from 2.5 to 7.0 liters per minute, different pump speeds, and Reynolds numbers ranging from 210,000 to 293,000. Within the frame of this study, different turbulence models were tested, including RANS models (e.g., k-omega, k-epsilon, and a Reynolds stress model, RSM) as well as LES. The partitioners Hilbert, METIS, ParMETIS, and SCOTCH were used to create an unstructured mesh of 76 million elements and were compared in their efficiency. Computations were performed on the JUQUEEN BG/Q architecture using the highly parallel flow solver Code_Saturne, typically on 32,768 or more processors in parallel. Visualisations were performed with ParaView. All six flow situations could be successfully analysed with the different turbulence models and validated against analytical considerations and comparisons with other databases. The results show that an RSM is an appropriate choice for modeling high-Reynolds-number flow cases; in particular, the Rij-SSG (Speziale, Sarkar, Gatski) variant turned out to be a good approach. Complex flow features could be visualised, and the flow situation inside the pump could be characterized.
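For rotating machinery such as this pump, the Reynolds number is commonly formed from the rotor speed and radius, Re = ρωr²/μ. The sketch below uses that convention together with assumed blood properties and geometry (density, viscosity, rotor radius, and pump speed are all placeholders, not the benchmark's specified values; the benchmark report defines the exact convention used for the 210,000-293,000 range).

```python
import math

def rotational_reynolds(rho, omega_rpm, radius_m, mu):
    """Rotational Reynolds number Re = rho * omega * r^2 / mu.

    rho in kg/m^3, omega_rpm in revolutions per minute, radius in m,
    mu (dynamic viscosity) in Pa.s. This definition is one common
    choice for rotating machinery, assumed here for illustration.
    """
    omega = 2.0 * math.pi * omega_rpm / 60.0  # angular speed in rad/s
    return rho * omega * radius_m**2 / mu

# Assumed blood properties and pump geometry (illustrative only):
# density 1056 kg/m^3, viscosity 3.5 mPa.s, radius 5 cm, 3500 rpm.
Re = rotational_reynolds(rho=1056.0, omega_rpm=3500.0, radius_m=0.05, mu=3.5e-3)
```

Under these assumed values the result falls in the same range as the quoted benchmark Reynolds numbers, and the linear dependence on pump speed is visible directly in the formula.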

Keywords: blood flow, centrifugal blood pump, high performance computing, scalability, turbulence

Procedia PDF Downloads 374
904 Enhancing Sell-In and Sell-Out Forecasting Using Ensemble Machine Learning Method

Authors: Vishal Das, Tianyi Mao, Zhicheng Geng, Carmen Flores, Diego Pelloso, Fang Wang

Abstract:

Accurate sell-in and sell-out forecasting is a ubiquitous problem in the retail industry and an important element of any demand planning activity. As a global food and beverage company, Nestlé has hundreds of products in each geographical location in which it operates. Each product has its own sell-in and sell-out time series, which are forecasted on weekly and monthly scales for demand and financial planning. To address this challenge, Nestlé Chile, in collaboration with the Amazon Machine Learning Solutions Lab, has developed an in-house solution that uses machine learning models for forecasting. Similar products are combined so that there is one model per product category; in this way, the models learn from a larger set of data, and there are fewer models to maintain. The solution is scalable to all product categories and is designed to be flexible enough to include any new product, or remove any existing product, in a product category as required. We show how the machine learning development environment on Amazon Web Services (AWS) can be used to explore a set of forecasting models and create business intelligence dashboards that integrate with the existing demand planning tools at Nestlé. We explored recent deep neural network (DNN) architectures, which show promising results for a variety of time series forecasting problems. Specifically, we used DeepAR, an autoregressive model that can group similar time series together and provide robust predictions. To further enhance the accuracy of the predictions and incorporate domain-specific knowledge, we designed an ensemble approach combining DeepAR with an XGBoost regression model. As part of the ensemble approach, we interlinked the sell-out and sell-in information to ensure that a future sell-out influences the current sell-in predictions. Our approach outperforms the benchmark statistical models by more than 50%.
The machine learning (ML) pipeline implemented in the cloud is currently being extended to other product categories and is being adopted in other geomarkets.
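The ensembling step named above can be sketched independently of the base models: given forecasts from two models, choose a blend weight on a validation window and apply it to future forecasts. The arrays below are invented stand-ins for DeepAR and XGBoost outputs, and the convex grid-searched blend is one simple combiner, not necessarily the exact scheme used in the study.

```python
import numpy as np

# Invented validation-window actuals and base-model forecasts,
# standing in for DeepAR and XGBoost outputs.
val_actual = np.array([100.0, 120.0, 90.0, 110.0])
val_deepar = np.array([ 95.0, 125.0, 85.0, 108.0])
val_xgb    = np.array([105.0, 115.0, 95.0, 112.0])

# Grid-search the convex weight w that minimizes validation MAPE.
weights = np.linspace(0.0, 1.0, 101)
mape = [np.mean(np.abs((w * val_deepar + (1 - w) * val_xgb - val_actual)
                       / val_actual)) for w in weights]
best_w = weights[int(np.argmin(mape))]

# Blend the future forecasts with the chosen weight.
fut_deepar = np.array([102.0, 99.0])
fut_xgb    = np.array([ 98.0, 101.0])
ensemble = best_w * fut_deepar + (1 - best_w) * fut_xgb
```

In this toy example the two models' errors happen to cancel, so the search settles on an equal blend; in practice the weight reflects how much each base model is trusted on recent data.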

Keywords: sell-in and sell-out forecasting, demand planning, DeepAR, retail, ensemble machine learning, time-series

Procedia PDF Downloads 240
903 Exploring Social and Economic Barriers in Adoption and Expansion of Agricultural Technologies in Woliatta Zone, Southern Ethiopia

Authors: Akalework Mengesha

Abstract:

The adoption of improved agricultural technologies has been associated with higher earnings and lower poverty, enhanced nutritional status, lower staple food prices, and increased employment opportunities for landless laborers. The adoption and extension of these technologies are crucial in that they enable countries to achieve the Millennium Development Goals (MDGs) of reducing extreme poverty and hunger. Over the past 30 years, efforts have been directed toward the development and provision of modern crop varieties in sub-Saharan Africa; nevertheless, by and large, adoption and expansion rates for improved technologies have lagged behind those of other regions. This research assesses the social and economic barriers to the adoption and expansion of agricultural technologies among local communities living around a private agricultural farm in the Woliatta Zone, Southern Ethiopia. The study was carried out among rural households in three localities of the Woliatta Zone selected for the study. A cross-sectional mixed-methods design was used to address the study objective. Qualitative methods (in-depth interviews, key-informant interviews, and focus group discussions) involved a total of 42 in-depth informants, 17 key-informant interviews, and 2 focus group discussions of 10 individuals each, recruited through purposive sampling. A survey method was used to collect quantitative data on the impact of attitudinal, demographic, and socioeconomic variables on farmers' adoption of agricultural technologies. The findings revealed that the Amibara commercial farm has not made a resolute and well-organized effort to extend agricultural technology to the surrounding local community; no comprehensive agricultural technology transfer scheme has been put in place by the commercial farm since it commenced operating in the study area.
Besides, there is an ongoing conflict of interest between the farm and the community, which has kept widening over time and risks becoming irreversible.
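The quantitative step described above, relating attitudinal, demographic, and socioeconomic covariates to a binary adoption outcome, is typically modeled with logistic regression. The sketch below fits such a model by gradient ascent on synthetic data; the covariates, coefficients, and sample size are all invented for illustration and do not come from the study's survey.

```python
import numpy as np

# Synthetic stand-in data: adoption (0/1) as a logistic function of
# an intercept and two hypothetical covariates.
rng = np.random.default_rng(0)
n = 200
X = np.column_stack([
    np.ones(n),                    # intercept
    rng.integers(0, 2, n),         # e.g., contact with an extension agent
    rng.normal(45, 12, n) / 10.0,  # e.g., age of household head (scaled)
])
true_beta = np.array([-1.0, 1.5, -0.1])
p = 1.0 / (1.0 + np.exp(-X @ true_beta))
y = (rng.random(n) < p).astype(float)

# Fit by plain gradient ascent on the average log-likelihood.
beta = np.zeros(3)
for _ in range(2000):
    grad = X.T @ (y - 1.0 / (1.0 + np.exp(-X @ beta))) / n
    beta += 0.1 * grad
```

The fitted coefficients estimate how each covariate shifts the log-odds of adoption; in applied work a statistics package would additionally report standard errors and significance tests.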

Keywords: adoption, technology transfer, agriculture, barriers

Procedia PDF Downloads 135