Search results for: laminaribiose production
495 New Bio-Strategies for Ochratoxin A Detoxification Using Lactic Acid Bacteria
Authors: José Maria, Vânia Laranjo, Luís Abrunhosa, António Inês
Abstract:
The occurrence of mycotoxigenic moulds such as Aspergillus, Penicillium and Fusarium in food and feed has an important impact on public health, through the appearance of acute and chronic mycotoxicoses in humans and animals, which is more severe in developing countries due to lack of food security, poverty and malnutrition. This mould contamination also constitutes a major economic problem due to the loss of crop production. A great variety of filamentous fungi is able to produce highly toxic secondary metabolites known as mycotoxins. Most of the mycotoxins are carcinogenic, mutagenic, neurotoxic and immunosuppressive, with ochratoxin A (OTA) being one of the most important. OTA is toxic to animals and humans, mainly due to its nephrotoxic properties. Several approaches have been developed for the decontamination of mycotoxins in foods, such as prevention of contamination, biodegradation of mycotoxin-containing food and feed with microorganisms or enzymes, and inhibition or absorption of the mycotoxin content of consumed food in the digestive tract. A group of Gram-positive bacteria named lactic acid bacteria (LAB) is able to release molecules that can influence mould growth, improving the shelf life of many fermented products and reducing health risks due to exposure to mycotoxins. Some LAB are capable of mycotoxin detoxification. Recently our group was the first to describe the ability of LAB strains to biodegrade OTA, more specifically, Pediococcus parvulus strains isolated from Douro wines. The pathway of this biodegradation was identified previously in other microorganisms. OTA can be degraded through the hydrolysis of the amide bond that links the L-β-phenylalanine molecule to ochratoxin alpha (OTα), a non-toxic compound. It is known that some peptidases from different origins can mediate this hydrolysis reaction, such as carboxypeptidase A (an enzyme from the bovine pancreas), a commercial lipase and several commercial proteases. We therefore wanted to gain a better understanding of this OTA degradation process when LAB are involved and to identify which molecules were present in this process. To achieve our aim we used several bioinformatics tools (BLAST, CLUSTALX2, CLC Sequence Viewer 7, Finch TV). We also designed specific primers and performed gene-specific PCR. The template DNA used came from LAB strain samples of our previous work, and from other LAB strains isolated from elderberry fruit, silage, milk and sausages. Through the use of bioinformatics tools it was possible to identify several proteins belonging to the carboxypeptidase family that participate in the process of OTA degradation, such as serine-type D-Ala-D-Ala carboxypeptidase and membrane carboxypeptidase. In conclusion, this work identified carboxypeptidase proteins as some of the molecules present in the OTA degradation process when LAB are involved.
Keywords: carboxypeptidase, lactic acid bacteria, mycotoxins, ochratoxin A
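As a concrete illustration of the kind of sequence-comparison step described above (screening protein hits for carboxypeptidase-family members with BLAST), the minimal Biopython sketch below submits a protein query to NCBI BLAST and filters hits by annotation; the query sequence and the keyword filter are placeholders, not data from this study.

```python
from Bio.Blast import NCBIWWW, NCBIXML

# Placeholder query: a short protein fragment (hypothetical, not from the study)
query_seq = "MKKLLTAIALATSLVFAGSAFA"

# Submit a protein BLAST search against the NCBI nr database (requires network access)
result_handle = NCBIWWW.qblast("blastp", "nr", query_seq)
record = NCBIXML.read(result_handle)

# Report hits annotated as carboxypeptidases
for alignment in record.alignments:
    if "carboxypeptidase" in alignment.title.lower():
        best_hsp = alignment.hsps[0]
        print(f"{alignment.title[:80]}  E-value: {best_hsp.expect:.2e}")
```

In practice, a locally installed BLAST run against a curated LAB protein database would be faster and more reproducible than the NCBI web service used here for brevity.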
Procedia PDF Downloads 463
494 Effects of Drying and Extraction Techniques on the Profile of Volatile Compounds in Banana Pseudostem
Authors: Pantea Salehizadeh, Martin P. Bucknall, Robert Driscoll, Jayashree Arcot, George Srzednicki
Abstract:
Banana is one of the most important crops produced in large quantities in tropical and sub-tropical countries. Of the total plant material grown, approximately 40% is considered waste and left in the field to decay. This practice allows fungal diseases such as Sigatoka Leaf Spot to develop, limiting plant growth and spreading spores in the air that can cause respiratory problems in the surrounding population. The pseudostem is considered a waste residue of production (60 to 80 tonnes/ha/year), although it is a good source of dietary fiber and volatile organic compounds (VOCs). Strategies to process banana pseudostem into palatable, nutritious and marketable food materials could provide significant social and economic benefits. Extraction of VOCs with desirable odor from dried and fresh pseudostem could improve the smell of products from the confectionery and bakery industries. Incorporation of banana pseudostem flour into bakery products could provide cost savings and improve nutritional value. The aim of this study was to determine the effects of drying methods and different banana species on the profile of volatile aroma compounds in dried banana pseudostem. The banana species analyzed were Musa acuminata and Musa balbisiana. Fresh banana pseudostem samples were processed by either freeze-drying (FD) or heat pump drying (HPD). The extraction of VOCs was performed at ambient temperature using vacuum distillation, and the resulting, mostly aqueous, distillates were analyzed using headspace solid phase microextraction (SPME) gas chromatography–mass spectrometry (GC-MS). Optimal SPME adsorption conditions were 50 °C for 60 min using a Supelco 65 μm PDMS/DVB Stableflex fiber. Compounds were identified by comparison of their electron impact mass spectra with those from the Wiley 9 / NIST 2011 combined mass spectral library. The results showed that the two species have notably different VOC profiles. Both species contained VOCs that have been established in the literature to have pleasant, appetizing aromas. These included l-Menthone, D-Limonene, trans-linalool oxide, 1-Nonanol, cis-6-Nonen-1-ol, 2,6-Nonadien-1-ol, Benzenemethanol, 4-methyl, 1-Butanol, 3-methyl, hexanal, 1-Propanol, 2-methyl- acid, 2-Methyl-2-butanol. Results show banana pseudostem VOCs are better preserved by FD than by HPD. This study is still in progress and should lead to the optimization of processing techniques that would promote the utilization of banana pseudostem in the food industry.
Keywords: heat pump drying, freeze drying, SPME, vacuum distillation, VOC analysis
Procedia PDF Downloads 336
493 Developing Confidence of Visual Literacy through Using MIRO during Online Learning
Authors: Rachel S. E. Lim, Winnie L. C. Tan
Abstract:
Visual literacy is about making meaning through the interaction of images, words, and sounds. Graphic communication students typically develop visual literacy through critique and production of studio-based projects for their portfolios. However, the abrupt switch to online learning during the COVID-19 pandemic has made it necessary to consider new strategies of visualization and planning to scaffold teaching and learning. This study, therefore, investigated how MIRO, a cloud-based visual collaboration platform, could be used to develop the visual literacy confidence of 30 Diploma in Graphic Communication students attending a graphic design course at a Singapore arts institution. Due to COVID-19, the course was taught fully online throughout a 16-week semester. Guided by Kolb’s Experiential Learning Cycle, the two lecturers developed students’ engagement with visual literacy concepts through different activities that facilitated concrete experiences, reflective observation, abstract conceptualization, and active experimentation. Throughout the semester, students created, collaborated, and centralized communication in MIRO with its infinite canvas, smart frameworks, a robust set of widgets (i.e., sticky notes, freeform pen, shapes, arrows, smart drawing, emoticons, etc.), and powerful platform capabilities that enable asynchronous and synchronous feedback and interaction. Students then drew upon these multimodal experiences to brainstorm, research, and develop their motion design project. A survey was used to examine students’ perceptions of engagement (E), confidence (C), and learning strategies (LS). Using multiple regression, it was found that the use of MIRO helped students develop confidence (C) with visual literacy, which predicted the performance score (PS) that was measured against their application of visual literacy to the creation of their motion design project. While students’ learning strategies (LS) with MIRO did not directly predict confidence (C) or performance score (PS), they fostered positive perceptions of engagement (E), which in turn predicted confidence (C). Content analysis of students’ open-ended survey responses about their learning strategies (LS) showed that MIRO provides organization and structure in documenting learning progress, in tandem with establishing standards and expectations as a preparatory ground for generating feedback. With the clarity and sequence of the mentioned conditions set in place, these prerequisites then lead to the next level of personal action for self-reflection, self-directed learning, and time management. The study results show that the affordances of MIRO can develop visual literacy and make up for the potential pitfalls of student isolation, communication, and engagement during online learning. The context of how MIRO could be used by lecturers to orientate students for learning in visual literacy and studio-based projects for future development is discussed.
Keywords: design education, graphic communication, online learning, visual literacy
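The regression chain reported above (learning strategies fostering engagement, engagement predicting confidence, and confidence predicting performance) can be illustrated with a minimal sketch; the file name and column codes below are hypothetical stand-ins for the survey variables, not the study's data or analysis code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey data: one row per student with scores for learning
# strategies (LS), engagement (E), confidence (C) and performance score (PS).
df = pd.read_csv("miro_survey.csv")  # placeholder file name

m1 = smf.ols("E ~ LS", data=df).fit()           # do learning strategies predict engagement?
m2 = smf.ols("C ~ E + LS", data=df).fit()       # does engagement predict confidence?
m3 = smf.ols("PS ~ C + E + LS", data=df).fit()  # does confidence predict the project score?

for name, model in [("E ~ LS", m1), ("C ~ E + LS", m2), ("PS ~ C + E + LS", m3)]:
    print(name, model.params.round(3).to_dict(), "R2 =", round(model.rsquared, 3))
```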
Procedia PDF Downloads 114
492 Thermoplastic-Intensive Battery Trays for Optimum Electric Vehicle Battery Pack Performance
Authors: Dinesh Munjurulimana, Anil Tiwari, Tingwen Li, Carlos Pereira, Sreekanth Pannala, John Waters
Abstract:
With the rapid transition to electric vehicles (EVs) across the globe, car manufacturers are in need of integrated and lightweight solutions for the battery packs of these vehicles. An integral part of a battery pack is the battery tray, which constitutes a significant portion of the pack’s overall weight. Based on the functional requirements, cost targets, and packaging space available, a range of materials, from metals and composites to plastics, is often used to develop these battery trays. This paper considers the design and development of integrated thermoplastic-intensive battery trays, using the available packaging space from a representative EV battery pack. Multiple concepts are presented as proposed alternatives to integrate several connected systems, such as cooling plates and underbody impact protection parts, of a multi-piece incumbent battery pack. The resulting digital prototype was evaluated for several mechanical performance measures such as mechanical shock, drop, crush resistance, modal analysis, and torsional stiffness. The performance of this alternative design is then compared with the incumbent solution. In addition, insights are gleaned into how these novel approaches can be optimized to meet or exceed the performance of incumbent designs. Preliminary manufacturing feasibility of the optimal solution using injection molding and other commonly used manufacturing methods for thermoplastics is briefly explained. Numerical and analytical evaluations are then performed to show a representative Pareto front of cost vs. volume of the production parts. The proposed solution is observed to offer weight savings of up to 40% on a component level and the elimination of up to two systems in the battery pack of a typical battery EV, while offering the potential to meet the required performance measures highlighted above. These conceptual solutions are also observed to potentially offer secondary benefits such as improved thermal and electrical isolation and the ability to achieve complex geometrical features, thus demonstrating the ability to use the complete packaging space available in the vehicle platform considered. The detailed study presented in this paper serves as a valuable reference for researchers across the globe working on the development of EV battery packs, especially those with an interest in the potential of employing alternate solutions as part of a mixed-material system to help capture untapped opportunities to optimize performance and meet critical application requirements.
Keywords: thermoplastics, lightweighting, part integration, electric vehicle battery packs
Procedia PDF Downloads 205
491 A Geochemical Perspective on A-Type Granites of Khanak and Devsar Areas, Haryana, India: Implications for Petrogenesis
Authors: Naresh Kumar, Radhika Sharma, A. K. Singh
Abstract:
Granites from the Khanak and Devsar areas, part of the Malani Igneous Suite (MIS), were investigated for their geochemical characteristics to understand the petrogenesis of the research area. Neoproterozoic rocks of the MIS are well exposed in the Jhunjhunu, Jodhpur, Pali, Barmer, Jalor and Jaisalmer districts of Rajasthan and the Bhiwani district of Haryana, and also occur at the Kirana hills of Pakistan. The MIS predominantly consists of acidic volcanics with acidic plutonics (granites of various types), mafic volcanics, mafic intrusives and a minor amount of pyroclasts. Based on field and petrographical studies, 28 samples were selected and analyzed for major, trace and rare earth elements at the Wadia Institute of Himalayan Geology, Dehradun, by X-Ray Fluorescence Spectrometry (XRF) and ICP-MS (Inductively Coupled Plasma Mass Spectrometry). Granites from the studied areas are categorized as grey, green and pink. Khanak granites consist of quartz, K-feldspar, plagioclase, and biotite as essential minerals and hematite, zircon, annite, monazite and rutile as accessory minerals. In the Devsar granites, plagioclase is replaced by perthite, which occurs dominantly. Geochemically, granites from the Khanak and Devsar areas exhibit typical A-type granite characteristics, with enrichment in SiO2, Na2O+K2O, Fe/Mg, Rb, Zr, Y, Th, U and REE (except Eu) and significant depletion in MgO, CaO, Sr, P, Ti, Ni, Cr, V and Eu, suggesting A-type affinities in Northwestern Peninsular India. The heat production (HP) of the green and grey granites of the Devsar area reaches up to 9.68 and 11.70 μW m⁻³, with total heat generation units (HGU) of 23.04 and 27.86, respectively. Pink granites of the Khanak area display a higher HP (16.53 μW m⁻³) and HGU (39.37) than the granites from the Devsar area. Overall, they have much higher values of HP and HGU than the average value of the continental crust (3.8 HGU), which implies a possible linear relationship between surface heat flow and crustal heat generation in the rocks of the MIS. Chondrite-normalized REE patterns show enriched LREE, moderate to strong negative Eu anomalies and more or less flat heavy REE. In primitive mantle-normalized multi-element variation diagrams, the granites show pronounced depletions in the high-field-strength elements (HFSE) Nb, Zr, Sr, P, and Ti. Geochemical characteristics (major, trace and REE), along with the use of various discrimination schemes, revealed their probable correspondence to magma derived from a crustal origin by different degrees of partial melting.
Keywords: A-type granite, Neoproterozoic, Malani Igneous Suite, Khanak, Devsar
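The HP and HGU figures quoted above follow from standard petrophysical relations; the sketch below shows a Rybach-type radiogenic heat production calculation and the conversion 1 HGU ≈ 0.4184 μW m⁻³ (consistent with the reported pairs, e.g., 9.68 μW m⁻³ ≈ 23 HGU). The density and element concentrations used are placeholders, not measured values from Khanak or Devsar.

```python
def heat_production(rho, c_u, c_th, c_k):
    """Radiogenic heat production (microW/m^3), Rybach-type relation.
    rho in kg/m^3, U and Th in ppm, K in wt.%."""
    return 1e-5 * rho * (9.52 * c_u + 2.56 * c_th + 3.48 * c_k)

def to_hgu(hp_microw_m3):
    """Convert microW/m^3 to heat generation units (1 HGU ~ 0.4184 microW/m^3)."""
    return hp_microw_m3 / 0.4184

# Hypothetical granite composition, for illustration only
hp = heat_production(rho=2650.0, c_u=8.0, c_th=40.0, c_k=4.5)
print(f"HP = {hp:.2f} microW/m^3, HGU = {to_hgu(hp):.2f}")
```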
Procedia PDF Downloads 272
490 Risk Factors for Determining Anti-HBcore to Hepatitis B Virus Among Blood Donors
Authors: Tatyana Savchuk, Yelena Grinvald, Mohamed Ali, Ramune Sepetiene, Dinara Sadvakassova, Saniya Saussakova, Kuralay Zhangazieva, Dulat Imashpayev
Abstract:
Introduction. The problem of viral hepatitis B (HBV) occupies a vital place in the global health system. The existing risk of HBV transmission through blood transfusions is associated with transfusion of blood taken from infected individuals during the “serological window” period or from patients with latent HBV infection, the marker of which is anti-HBcore. In the absence of information about other markers of hepatitis B, the presence of anti-HBcore suggests that a person may be actively infected or has suffered hepatitis B in the past and has immunity. Aim. To study the risk factors influencing positive anti-HBcore results among the donor population. Materials and Methods. The study was conducted in 2021 in the Scientific and Production Center of Transfusiology of the Ministry of Healthcare in Kazakhstan. The samples taken from blood donors were tested for anti-HBcore by CLIA on the Architect i2000SR (ABBOTT). A special questionnaire was developed for the blood donors’ socio-demographic characteristics. Statistical analysis was conducted with the R software (version 4.1.1, USA, 2021). Results. A total of 5709 people aged 18 to 66 years were included in the study; the proportions of men and women were 68.17% and 31.83%, respectively. The average age of the participants was 35.7 years. A weighted multivariable mixed-effects logistic regression analysis showed that age (p<0.001), ethnicity (p<0.05), and marital status (p<0.05) were statistically associated with anti-HBcore positivity. In particular, in an analysis adjusting for gender, nationality, education, marital status, family history of hepatitis, blood transfusion, injections, and surgical interventions, a one-year increase in age (adjOR=1.06, 95%CI:1.05-1.07) was associated with a 6% increase in the odds of an anti-HBcore positive result. Those of Russian ethnicity (adjOR=0.65, 95%CI:0.46-0.93) and representatives of other nationality groups (adjOR=0.56, 95%CI:0.37-0.85) had lower odds of being anti-HBcore positive compared to Kazakhs when controlling for the other covariates. Among singles, the odds of having a positive anti-HBcore were lower by 29% (adjOR=0.71, 95%CI:0.57-0.89) compared to married participants when adjusting for other variables. Conclusions. Kazakhstan is one of the countries with medium endemicity of HBV prevalence (2%-7%). The results of the study demonstrated the possibility of forming a profile of risk factors (age, nationality, marital status). Taking these data into account, it is recommended to pay increased attention to donor questionnaires by adding leading questions and to improve preventive measures against HBV. Funding. This research was supported by a grant from Abbott Laboratories.
Keywords: anti-HBcore, blood donor, donation, hepatitis B virus, occult hepatitis
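A minimal sketch of an adjusted-odds-ratio analysis of this kind is given below. It fits a plain logistic model with hypothetical column names rather than the weighted mixed-effects model actually used (which was run in R), so it only illustrates how adjusted ORs and 95% CIs of the sort reported above are obtained.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical donor dataset: anti_hbcore (0/1) plus the covariates listed in
# the abstract; file and column names are placeholders, not the study's coding.
df = pd.read_csv("donor_survey.csv")

model = smf.logit(
    "anti_hbcore ~ age + C(gender) + C(nationality) + C(education)"
    " + C(marital_status) + C(family_history) + C(transfusion)"
    " + C(injections) + C(surgery)",
    data=df,
).fit()

# Adjusted odds ratios with 95% confidence intervals
odds_ratios = np.exp(model.params)
conf_int = np.exp(model.conf_int())
print(pd.concat([odds_ratios.rename("adjOR"), conf_int], axis=1))
```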
Procedia PDF Downloads 109
489 (Re)Processing of Nd-Fe-B Permanent Magnets Using Electrochemical and Physical Approaches
Authors: Kristina Zuzek, Xuan Xu, Awais Ikram, Richard Sheridan, Allan Walton, Saso Sturm
Abstract:
Recycling of end-of-life REE-based Nd-Fe-B magnets is an important strategy for reducing the environmental dangers associated with rare-earth mining and overcoming the well-documented supply risks related to the REEs. However, challenges in their reprocessing still remain. We report on the possibility of direct electrochemical recycling and reprocessing of Nd-Fe(B)-based magnets. In this investigation, we were first able to electrochemically leach the end-of-life NdFeB magnet and to electrodeposit Nd–Fe using a 1-ethyl-3-methyl imidazolium dicyanamide ([EMIM][DCA]) ionic liquid-based electrolyte. We observed that Nd(III) could not be reduced independently. However, it can be co-deposited on a substrate with the addition of Fe(II). Using advanced TEM techniques of electron energy-loss spectroscopy (EELS), it was shown that Nd(III) is reduced to Nd(0) during the electrodeposition process. This gave new insight into determining the Nd oxidation state, as X-ray photoelectron spectroscopy (XPS) has certain limitations. This is because the binding energies of metallic Nd (Nd0) and neodymium oxide (Nd₂O₃) are very close, i.e., 980.5-981.5 eV and 981.7-982.3 eV, respectively, making it almost impossible to differentiate between the two states. These new insights into the electrodeposition process represent an important step closer to the efficient recycling of rare earths in metallic form at mild temperatures, thus providing an alternative to high-temperature molten-salt electrolysis and a step closer to depositing Nd-Fe-based magnetic materials. Further, we propose a new concept of recycling sintered Nd-Fe-B magnets by directly recovering the 2:14:1 matrix phase. Via an electrochemical etching method, we are able to recover pure individual 2:14:1 grains that can be re-used for new types of magnet production. In the frame of physical reprocessing, we have successfully synthesized new magnets out of hydrogen (HDDR)-recycled stocks with the contemporary technique of pulsed electric current sintering (PECS). The optimal PECS conditions yielded fully dense Nd-Fe-B magnets with a coercivity Hc = 1060 kA/m, which was boosted to 1160 kA/m after the post-PECS thermal treatment. The Br and Hc were improved further, and increased applied pressures of 100-150 MPa resulted in Br = 1.01 T. We showed that with fine tuning of the PECS and post-annealing it is possible to revitalize Nd-Fe-B end-of-life magnets. By applying advanced TEM, i.e. atomic-scale Z-contrast STEM combined with EDXS and EELS, the resulting magnetic properties were critically assessed against various types of structural and compositional discontinuities down to the atomic scale, which we believe control the microstructure evolution during the PECS processing route.
Keywords: electrochemistry, Nd-Fe-B, pulsed electric current sintering, recycling, reprocessing
Procedia PDF Downloads 158
488 Methods of Detoxification of Nuts With Aflatoxin B1 Contamination
Authors: Auteleyeva Laura, Maikanov Balgabai, Smagulova Ayana
Abstract:
In order to find and select detoxification methods, patent and information research was conducted, as a result of which 68 patents for inventions were found, among them 14 from the near abroad (Russia) and, from far abroad, China: 27, USA: 6, South Korea: 1, Germany: 2, Mexico: 4, Yugoslavia: 7, and Austria, Taiwan, Belarus, Denmark, Italy, Japan and Canada with 1 security document each. Aflatoxin B₁ in various nuts was determined by two methods: enzyme immunoassay "RIDASCREEN® FAST Aflatoxin" with determination of optical density on a RIDA®ABSORPTION 96 microplate spectrophotometer with RIDASOFT® Win.NET software (Germany), and high-performance liquid chromatography (HPLC, Waters Corporation, USA) according to GOST 30711-2001. For experimental contamination of the nuts, the A. flavus KWIK-STIK strain was cultivated on Czapek medium (France), with subsequent infection of various nuts (peanuts, peanuts with shells, badam, walnuts with and without shells, pistachios). Based on our research, we selected 2 detoxification methods: method 1, a combined treatment (5% citric acid solution + microwave at 640 W for 3 min + UV for 20 min), and a chemical method using leaves of various plants, Artemisia terra-albae, Thymus vulgaris and Callogonum affilium, collected in the territory of the Akmola region (Artemisia terra-albae, Thymus vulgaris) and Western Kazakhstan (Callogonum affilium). The first stage was the production of ethanol extracts of Artemisia terra-albae, Thymus vulgaris and Callogonum affilium. To obtain them, 100 g of vegetable raw material was taken and dissolved in 70% ethyl alcohol. Extraction was carried out for 2 hours at the boiling point of the solvent with a reflux condenser using a "Sapphire" ultrasonic bath. The obtained extracts were evaporated on an IKA RV 10 rotary evaporator. At the second stage, the three samples obtained were tested for antimicrobial and antifungal activity. Extracts of Thymus vulgaris and Callogonum affilium showed high antimicrobial and antifungal activity. Artemisia terra-albae extract showed high antimicrobial activity and low antifungal activity. When testing method 1, it was found that in the first and third experimental groups there was a decrease in the concentration of aflatoxin B1 in walnut samples by 63 and 65%, respectively, but these values still exceeded the maximum permissible concentrations, while the nuts in the second and third experimental groups had a tart lemon flavor. When testing method 2, a decrease in the concentration of aflatoxin B1 to a safe level, by 91% (to 0.0038 mg/kg), was observed in nuts of the 1st and 2nd experimental groups (Artemisia terra-albae, Thymus vulgaris), while in samples of the 2nd and 3rd experimental groups a decrease in the amount of aflatoxin B1 to a safe level was also observed.
Keywords: nuts, aflatoxin B1, mycotoxins
Procedia PDF Downloads 88
487 Oxidative Stability of Corn Oil Supplemented with Natural Antioxidants from Cypriot Salvia fruticosa Extracts
Authors: Zoi Konsoula
Abstract:
Vegetable oils, which are rich in polyunsaturated fatty acids, are susceptible to oxidative deterioration. The lipid oxidation of oils results in the production of rancid odors and unpleasant flavors as well as the reduction of their nutritional quality and safety. Traditionally, synthetic antioxidants are employed for the retardation or prevention of oxidative deterioration of oils. However, these compounds are suspected to pose health hazards. Consequently, there has recently been a growing interest in the use of natural antioxidants of plant origin for improving the oxidative stability of vegetable oils. The genus Salvia (sage) is well known for its antioxidant activity. In the Cypriot flora, Salvia fruticosa is the most widely distributed indigenous Salvia species. In the present study, extracts were prepared from S. fruticosa aerial parts using various solvents, and their antioxidant activity was evaluated by the 1,1-diphenyl-2-picrylhydrazyl (DPPH) radical scavenging and Ferric Reducing Antioxidant Power (FRAP) methods. Moreover, the antioxidant efficacy of all extracts was assessed using corn oil as the oxidation substrate, which was subjected to accelerated aging (60 °C, 30 days). The progress of lipid oxidation was monitored by the determination of the peroxide value, p-anisidine value, and conjugated dienes and trienes according to the official AOCS methods. Synthetic antioxidants (butylated hydroxytoluene, BHT, and butylated hydroxyanisole, BHA) were employed at their legal limit (200 ppm) as reference. Finally, the total phenolic (TPC) and flavonoid content (TFC) of the prepared extracts were measured by the Folin-Ciocalteu and aluminum-flavonoid complex methods, respectively. The results of the present study revealed that although all sage extracts prepared from S. fruticosa exhibited antioxidant activity, the highest antioxidant capacity was recorded in the methanolic extract, followed by the non-toxic, food-grade ethanol extract. Furthermore, a positive correlation between the antioxidant potency and the TPC of the extracts was observed in all cases. Interestingly, sage extracts prevented lipid oxidation in corn oil at all concentrations tested; however, the magnitude of stabilization was dose-dependent. More specifically, results from the different oxidation parameters were in agreement with each other and indicated that the protection offered by the various extracts depended on their TPC. Among the extracts, the methanolic extract was the most potent in inhibiting oxidative deterioration. Finally, both methanolic and ethanolic sage extracts at a concentration of 1000 ppm exerted a stabilizing effect comparable to that of the reference synthetic antioxidants. Based on the results of the present study, sage extracts could be used for minimizing or preventing lipid oxidation in oils and, thus, prolonging their shelf life. In particular, given that the use of dietary alcohol, such as ethanol, is preferable to methanol in food applications, the ethanolic extract prepared from S. fruticosa could be used as an alternative natural antioxidant.
Keywords: antioxidant activity, corn oil, oxidative deterioration, sage
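For reference, DPPH radical scavenging is commonly expressed as percent inhibition of the absorbance at 517 nm; the short sketch below shows the calculation with hypothetical absorbance readings, not values measured in this study.

```python
def dpph_inhibition(abs_control, abs_sample):
    """Percent DPPH radical scavenging: (A_control - A_sample) / A_control * 100."""
    return (abs_control - abs_sample) / abs_control * 100.0

# Hypothetical absorbance readings at 517 nm (illustrative only)
control = 0.842          # DPPH solution without extract
extract_readings = {"methanolic": 0.215, "ethanolic": 0.297, "aqueous": 0.451}

for name, a in extract_readings.items():
    print(f"{name} extract: {dpph_inhibition(control, a):.1f}% inhibition")
```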
Procedia PDF Downloads 208
486 Narrating Atatürk Cultural Center as a Place of Memory and a Space of Politics
Authors: Birge Yildirim Okta
Abstract:
This paper aims to narrate the story of the Atatürk Cultural Center in Taksim Square, which was demolished in 2018, and to discuss its architectonics as a social place of memory and its existence and demolition as a space of politics. The paper uses narrative discourse analysis to research the Atatürk Cultural Center (AKM) as a place of memory and a space of politics from the establishment of the Turkish Republic (1923) until today. After the establishment of the Turkish Republic, one of the most important implementations in Taksim Square, reflecting the internationalist style, was the construction of the Opera Building in the Prost Plan. The first design of the opera building belonged to Auguste Perret, which could not be implemented due to economic hardship during World War II. The project was later designed by the architects Feridun Kip and Rüknettin Güney in 1946 but could not be completed due to the 1960 military coup. Later the project was shifted to another architect, Hayati Tabanlıoglu, with a change in its function to a cultural center. Eventually, the construction of the building was completed in 1969 in a completely different design. AKM became a symbol of republican modernism not only with its modern architectural style but also with its function as the first opera building of the Republic, reflecting western, modern cultural heritage for professional groups, artists, and the intelligentsia. In 2005, Istanbul's council for the protection of cultural heritage decided to list AKM as a grade 1 cultural heritage site, ending a period of controversy which saw calls for the demolition of the center as it was claimed to have reached the end of its useful lifespan. In 2008 it was announced that the building would be closed for repairs and restoration. Over the following years, the building was demolished piece by piece silently, while the Taksim mosque was built just in front of the Atatürk Cultural Center. Belonging to the early republican period, AKM was a representation of the cultural production of modern society and of an emerging, westward-looking, secular public space in Turkey. Its erasure from the Taksim scene under the rule of the conservative governing Justice and Development Party, and the construction of the Taksim mosque in front of AKM's parcel, is also representational. The question of governing the city through space has always been an important one for governments and those holding political power, since cities are chaotic environments that are seen as a threat, carrying the tensions of the proletariat and of contradictory groups. The story of AKM as a dispositive or regulatory apparatus demonstrates how space itself becomes a political medium used to transform the socio-political condition. The paper narrates the existence and demolition of the Atatürk Cultural Center by discussing the constructed and demolished building as a place of memory and a space of politics.
Keywords: space of politics, place of memory, Atatürk Cultural Center, Taksim square, collective memory
Procedia PDF Downloads 143
485 Impact of Transitioning to Renewable Energy Sources on Key Performance Indicators and Artificial Intelligence Modules of Data Center
Authors: Ahmed Hossam ElMolla, Mohamed Hatem Saleh, Hamza Mostafa, Lara Mamdouh, Yassin Wael
Abstract:
Artificial intelligence (AI) is reshaping industries, and its potential to revolutionize renewable energy and data center operations is immense. By harnessing AI's capabilities, we can optimize energy consumption, predict fluctuations in renewable energy generation, and improve the efficiency of data center infrastructure. This convergence of technologies promises a future where energy is managed more intelligently, sustainably, and cost-effectively. The integration of AI into renewable energy systems unlocks a wealth of opportunities. Machine learning algorithms can analyze vast amounts of data to forecast weather patterns, solar irradiance, and wind speeds, enabling more accurate energy production planning. AI-powered systems can optimize energy storage and grid management, ensuring a stable power supply even during intermittent renewable generation. Moreover, AI can identify maintenance needs for renewable energy infrastructure, preventing costly breakdowns and maximizing system lifespan. Data centers, which consume substantial amounts of energy, are prime candidates for AI-driven optimization. AI can analyze energy consumption patterns, identify inefficiencies, and recommend adjustments to cooling systems, server utilization, and power distribution. Predictive maintenance using AI can prevent equipment failures, reducing energy waste and downtime. Additionally, AI can optimize data placement and retrieval, minimizing energy consumption associated with data transfer. As AI transforms renewable energy and data center operations, modified Key Performance Indicators (KPIs) will emerge. Traditional metrics like energy efficiency and cost-per-megawatt-hour will continue to be relevant, but additional KPIs focused on AI's impact will be essential. These might include AI-driven cost savings, predictive accuracy of energy generation and consumption, and the reduction of carbon emissions attributed to AI-optimized operations. By tracking these KPIs, organizations can measure the success of their AI initiatives and identify areas for improvement. Ultimately, the synergy between AI, renewable energy, and data centers holds the potential to create a more sustainable and resilient future. By embracing these technologies, we can build smarter, greener, and more efficient systems that benefit both the environment and the economy.
Keywords: data center, artificial intelligence, renewable energy, energy efficiency, sustainability, optimization, predictive analytics, energy consumption, energy storage, grid management, data center optimization, key performance indicators, carbon emissions, resiliency
Procedia PDF Downloads 35
484 Bending the Consciousnesses: Uncovering Environmental Issues Through Circuit Bending
Authors: Enrico Dorigatti
Abstract:
The growing pile of hazardous e-waste, produced especially by developed and wealthy countries, gets relentlessly bigger. It is composed of EEDs (Electrical and Electronic Devices) that are often thrown away although still well functioning, mainly due to (programmed) obsolescence. As a consequence, e-waste has taken, over the last years, the shape of a frightful, uncontrollable, and unstoppable phenomenon, mainly fuelled by market policies aiming to maximize sales, and thus profits, at any cost. Against it, governments and organizations have put some effort into developing ambitious frameworks and policies aiming to regulate, in some cases, the whole lifecycle of EEDs, from design to recycling. Incidentally, however, such regulations sometimes make the disposal of the devices economically unprofitable, which often translates into growing illegal e-waste trafficking, an activity usually undertaken by criminal organizations. It seems that nothing, at least in the near future, can stop the phenomenon of e-waste production and accumulation. But while, from a practical standpoint, a solution seems hard to find, much can be done regarding people's education, which translates into informing and promoting good practices such as reusing and repurposing. This research argues that circuit bending, an activity rooted in neo-materialist philosophy and post-digital aesthetics and based on repurposing EEDs into novel music instruments and sound generators, could have great potential in this. In particular, it asserts that circuit bending could expose ecological, environmental, and social criticalities related to current market policies and the economic model, thanks not only to its practical side (e.g., sourcing and repurposing devices) but also to its artistic one (e.g., employing bent instruments for ecologically aware installations and performances). Currently, the relevant literature and debate lack interest and information about the ecological aspects and implications of the practical and artistic sides of circuit bending. This research, therefore, although still at an early stage, aims to fill in this gap by investigating, on the one side, the ecological potential of circuit bending and, on the other side, its capacity for sensitizing people, through artistic practice, about e-waste-related issues. The methodology will articulate in three main steps. Firstly, field research will be undertaken with the purpose of understanding where and how to source, in an ecological and sustainable way, (discarded) EEDs for circuit bending. Secondly, artistic installations and performances will be organized to sensitize the audience about environmental concerns through sound art and music derived from bent instruments. Data, such as audience feedback, will be collected at this stage. The last step will consist of realising workshops to spread an ecologically aware circuit bending practice. Additionally, all the data and findings collected will be made available and disseminated as resources.
Keywords: circuit bending, ecology, sound art, sustainability
Procedia PDF Downloads 171
483 Neural Synchronization - The Brain’s Transfer of Sensory Data
Authors: David Edgar
Abstract:
To understand how the brain’s subconscious and conscious processes function, we must conquer the physics of Unity, which leads to duality’s algorithm. Where the subconscious (bottom-up) and conscious (top-down) processes function together to produce and consume intelligence, we use terms like ‘time is relative,’ but we really do understand the meaning. In the brain, there are different processes and, therefore, different observers. These different processes experience time at different rates. A sensory system such as the eyes cycles its measurement around every 33 milliseconds, the conscious process of the frontal lobe cycles at 300 milliseconds, and the subconscious process of the thalamus cycles at 5 milliseconds. Three different observers experience time differently. To bridge observers, the thalamus, which is the fastest of the processes, maintains a synchronous state and entangles the different components of the brain’s physical process. The entanglements form a synchronous cohesion between the brain components, allowing them to share the same state and execute in the same measurement cycle. The thalamus uses the shared state to control the firing sequence of the brain’s linear subconscious process. Sharing state also allows the brain to cheat on the amount of sensory data that must be exchanged between components. Only unpredictable motion is transferred through the synchronous state because predictable motion already exists in the shared framework. The brain’s synchronous subconscious process is entirely based on energy conservation, where prediction regulates energy usage. So, the eyes dump their sensory data into the thalamus every 33 milliseconds. The thalamus then performs a motion measurement to identify the unpredictable motion in the sensory data. Here is the trick. The thalamus conducts its measurement based on the original observation time of the sensory system (33 ms), not its own process time (5 ms). This creates a data payload of synchronous motion that preserves the original sensory observation. Basically, a frozen moment in time (Flat 4D). The single moment in time can then be processed through the single state maintained by the synchronous process. Other processes, such as consciousness (300 ms), can interface with the synchronous state to generate awareness of that moment. Now, synchronous data traveling through a separate, faster synchronous process creates a theoretical time tunnel where observation time is tunneled through the synchronous process and is reproduced on the other side in the original time-relativity. The synchronous process eliminates time dilation by simply removing itself from the equation so that its own process time does not alter the experience. To the original observer, the measurement appears to be instantaneous, but in the thalamus, a linear subconscious process generating sensory perception and thought production is being executed. It all simply occurs in the time available because other observation times are slower than thalamic measurement time. For life to exist in the physical universe requires a linear measurement process; it just hides by operating at a faster time relativity. What’s interesting is that time dilation is not the problem; it’s the solution. Einstein said there was no universal time.
Keywords: neural synchronization, natural intelligence, 99.95% IoT data transmission savings, artificial subconscious intelligence (ASI)
Procedia PDF Downloads 127
482 Impact of Financial Performance Indicators on Share Price of Listed Pharmaceutical Companies in India
Authors: Amit Das
Abstract:
Background and significance of the study: Generally, investors and market forecasters use financial statements for investigation when they contemplate investing. Mainstream financial accounting and reporting practice recommends a few basic financial performance indicators, namely return on capital employed, return on assets and earnings per share, which are associated considerably with share prices. This is principally true in the case of Indian pharmaceutical companies as well. Share investing involves a financial risk, and investors look for those financial evaluations which have a noteworthy impact on share price. A crucial intention of financial statement analysis and reporting is to offer information which is helpful, predominantly to external users, in making credit as well as investment choices. Sound financial performance attracts investors automatically, and it increases the share price of the respective companies. Keeping this in view, this research work investigates the impact of financial performance indicators on the share prices of pharmaceutical companies in India which are listed on the Bombay Stock Exchange. Methodology: This research work is based on secondary data on the top 101 pharmaceutical companies in India, collected from the moneycontrol database on September 28, 2015. The study purposively selects, based on availability in the database, four financial performance indicators (earnings per share, return on capital employed, return on assets and net profit) as independent variables and one dependent variable, the share price of the 101 pharmaceutical companies. While analysing the data, correlation statistics, the multiple regression technique and appropriate tests of significance have been used. Major findings: Correlation statistics show that the four financial performance indicators of the 101 pharmaceutical companies are associated positively and negatively with their share prices, and it is notable that more than 80 companies' financial performances are related positively. Multiple correlation test results indicate that financial performance indicators are highly related to the share prices of the selected pharmaceutical companies. Furthermore, multiple regression test results illustrate that when financial performance is good, share prices have increased steadily on the Bombay Stock Exchange, and all results are statistically significant. It is also important to note that sensitivity indices changed slightly with the financial performance indicators of the selected pharmaceutical companies in India. Concluding statements: The share prices of pharmaceutical companies depend on sound financial performance. It is very clear that share prices change with the movement of two important financial performance indicators, that is, earnings per share and return on assets. Since the 101 pharmaceutical companies are listed on the Bombay Stock Exchange and the Sensex changes with them, it is obvious that the Government of India has to take important decisions regarding the production and export of pharmaceutical products so that the financial performance of all the pharmaceutical companies improves and their share prices increase.
Keywords: financial performance indicators, share prices, pharmaceutical companies, India
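A minimal sketch of the multiple-regression setup described in the methodology is shown below; the file and column names are hypothetical placeholders for the moneycontrol data, not the authors' actual dataset.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical dataset: one row per company with earnings per share (EPS),
# return on capital employed (ROCE), return on assets (ROA), net profit (NP)
# and share price; column names are placeholders.
df = pd.read_csv("pharma_indicators.csv")

# Multiple regression of share price on the four performance indicators
model = smf.ols("share_price ~ EPS + ROCE + ROA + NP", data=df).fit()
print(model.summary())  # coefficients, t-tests and overall F-test

# Correlation matrix between share price and the indicators
print(df[["share_price", "EPS", "ROCE", "ROA", "NP"]].corr())
```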
Procedia PDF Downloads 306
481 Predictive Pathogen Biology: Genome-Based Prediction of Pathogenic Potential and Countermeasures Targets
Authors: Debjit Ray
Abstract:
Horizontal gene transfer (HGT) and recombination lead to the emergence of bacterial antibiotic resistance and pathogenic traits. HGT events can be identified by comparing a large number of fully sequenced genomes across a species or genus, which makes it possible to define the phylogenetic range of HGT and to find potential sources of new resistance genes. In-depth comparative phylogenomics can also identify subtle genome or plasmid structural changes or mutations associated with phenotypic changes. Comparative phylogenomics requires accurately sequenced, complete and properly annotated genomes of the organism. Assembling closed genomes requires additional mate-pair reads or “long read” sequencing data to accompany short-read paired-end data. To bring down the cost and time required to produce assembled genomes and to annotate genome features that inform drug resistance and pathogenicity, we are analyzing the genome assembly performance of data from the Illumina NextSeq, which has faster throughput than the Illumina HiSeq (~1-2 days versus ~1 week) and shorter reads (150 bp paired-end versus 300 bp paired-end) but higher capacity (150-400M reads per run versus ~5-15M) compared to the Illumina MiSeq. Bioinformatics improvements are also needed to make rapid, routine production of complete genomes a reality. Modern assemblers such as SPAdes 3.6.0 running on a standard Linux blade are capable of converting mixes of reads from different library preps into high-quality assemblies with only a few gaps in a matter of hours. Remaining breaks in scaffolds are generally due to repeats (e.g., rRNA genes) and are addressed by our software for gap closure, which avoids custom PCR or targeted sequencing. Our goal is to improve the understanding of the emergence of pathogenesis using sequencing, comparative genomics, and machine learning analysis of ~1000 pathogen genomes. Machine learning algorithms will be used to digest the diverse features (changes in virulence genes, recombination, horizontal gene transfer, patient diagnostics). Temporal data and evolutionary models can thus determine whether the origin of a particular isolate is likely to have been the environment (could it have evolved from previous isolates?). This can be useful for comparing differences in virulence along or across the tree. More intriguingly, it can test whether there is a direction to virulence strength. This would open new avenues in the prediction of uncharacterized clinical bugs and of multidrug resistance evolution and pathogen emergence.
Keywords: genomics, pathogens, genome assembly, superbugs
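As a sketch of the machine-learning step outlined above, the example below trains a random-forest classifier on a placeholder matrix of genome-derived features and reports cross-validated accuracy and feature importances; the features and labels are synthetic stand-ins, not the ~1000 pathogen genomes discussed.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical feature matrix: one row per genome, with columns such as counts
# of virulence genes, resistance genes, recombination events and HGT signatures.
rng = np.random.default_rng(0)
X = rng.random((1000, 4))                         # placeholder genome-derived features
y = (X[:, 0] + 0.5 * X[:, 2] > 1.0).astype(int)   # placeholder "pathogenic potential" label

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)         # 5-fold cross-validated accuracy
print("Mean CV accuracy:", scores.mean().round(3))

clf.fit(X, y)
print("Feature importances:", clf.feature_importances_.round(3))
```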
Procedia PDF Downloads 198
480 A Qualitative Study Identifying the Complexities of Early Childhood Professionals' Use and Production of Data
Authors: Sara Bonetti
Abstract:
The use of quantitative data to support policies and justify investments has become imperative in many fields, including the field of education. However, the topic of data literacy has only marginally touched the early care and education (ECE) field. In California, within the ECE workforce, there is a group of professionals working in policy and advocacy who use quantitative data regularly and whose educational and professional experiences have been neglected by existing research. This study aimed at analyzing these experiences in accessing, using, and producing quantitative data. The study utilized semi-structured interviews to capture differences in educational and professional backgrounds, policy contexts, and power relations. The participants were three key professionals from county-level organizations and one working at a State Department, included to allow for a broader perspective at the systems level. The study followed Núñez’s multilevel model of intersectionality. The key to Núñez’s model is the intersection of multiple levels of analysis and influence, from the individual to the system level, and the identification of institutional power dynamics that perpetuate the marginalization of certain groups within society. In a similar manner, this study looked at the dynamic interaction of different influences at the individual, organizational, and system levels that might intersect and affect ECE professionals’ experiences with quantitative data. At the individual level, an important element identified was the participants’ educational background, as it was possible to observe a relationship between that and their positionality, both with respect to working with data and with respect to their power within an organization and at the policy table. For example, those with a background in child development were aware of how their formal education had failed to train them in the skills that are necessary to work in policy and advocacy, and especially to work with quantitative data, compared to those with a background in administration and/or business. At the organizational level, the interviews showed a connection between the participants’ position within the organization and their organization’s position with respect to others, and their degree of access to quantitative data. This in turn affected their sense of empowerment and agency in dealing with data, such as shaping what data are collected and made available. These differences were reflected in the interviewees’ perceptions of and expectations for the ECE workforce. For example, one of the interviewees pointed out that many ECE professionals happen to use data out of the necessity of the moment. This lack of intentionality is a cause of, and at the same time translates into, missed training opportunities. Another interviewee pointed out issues related to the professionalism of the ECE workforce by remarking on the inadequacy of ECE students’ training in working with data. In conclusion, Núñez’s model helped in understanding the different elements that affect ECE professionals’ experiences with quantitative data. In particular, what was clear is that these professionals are not being provided with the necessary support and that we are not being intentional in creating data literacy skills for them, despite what is asked of them and their work.
Keywords: data literacy, early childhood professionals, intersectionality, quantitative data
Procedia PDF Downloads 254
479 Weapon-Being: Weaponized Design and Object-Oriented Ontology in Hypermodern Times
Authors: John Dimopoulos
Abstract:
This proposal attempts a refabrication of Heidegger’s classic thing-being and object-being analysis in order to provide better ontological tools for understanding contemporary culture, technology, and society. In his work, Heidegger sought to understand and comment on the problem of technology in an era of rampant innovation and increased perils for society and the planet. Today we seem to be at another crossroads in this course, coming after postmodernity, during which the dreams and dangers of modernity, augmented with the critical speculations of the post-war era, take shape. The new era we are now living in, referred to as hypermodernity by researchers in various fields such as architecture and cultural theory, is defined by the horizontal implementation of digital technologies, cybernetic networks, and mixed reality. Technology today is rapidly approaching a turning point, namely the point of no return for humanity’s supervision over its creations. The techno-scientific civilization of the 21st century creates a series of problems, progressively more difficult and complex to solve and impossible to ignore: climate change, data safety, cyber depression, and digital stress being some of the most prevalent. Humans often have no other option than to address technology-induced problems with even more technology, as in the case of neural networks, machine learning, and AI, thus widening the gap between creating technological artifacts and understanding their broad impact and possible future development. As all technical disciplines, and particularly design, become enmeshed in a matrix of digital hyper-objects, a conceptual toolbox that allows us to handle the new reality becomes more and more necessary. Weaponized design, prevalent in many fields such as social and traditional media, urban planning, industrial design, advertising, and the internet in general, hints towards an increase in conflicts. These conflicts between tech companies, stakeholders, and users, with implications for politics, work, education, and production, as apparent in the cases of the Amazon workers’ strikes, Donald Trump’s 2016 campaign, the Facebook and Microsoft data scandals, and more, are often non-transparent to the wider public’s eye, thus consolidating new elites and technocratic classes and making the public scene less and less democratic. The new category proposed, weapon-being, is outlined with respect to the basic function of reducing complexity by subtracting materials, actants, and parameters, not strictly in favor of a humanistic re-orientation but within a more inclusive ontology of objects and subjects. Utilizing insights of Object-Oriented Ontology (OOO) and its schematization of technological objects, an outline for a radical ontology of technology is approached.
Keywords: design, hypermodernity, object-oriented ontology, weapon-being
Procedia PDF Downloads 153
478 Integration of ICF Walls as Diurnal Solar Thermal Storage with Microchannel Solar Assisted Heat Pump for Space Heating and Domestic Hot Water Production
Authors: Mohammad Emamjome Kashan, Alan S. Fung
Abstract:
In Canada, more than 32% of the total energy demand is related to the building sector. Therefore, there is a great opportunity for greenhouse gas (GHG) reduction by integrating solar collectors to provide the building heating load and domestic hot water (DHW). Despite the cold winter weather, Canada has a good number of sunny and clear days that can be considered for diurnal solar thermal energy storage. Due to the energy mismatch between the building heating load and solar irradiation availability, relatively big storage tanks are usually needed to store solar thermal energy during the daytime and then use it at night. On the other hand, water tanks occupy a large amount of space, and especially in big cities, space is relatively expensive. This project investigates the possibility of using a specific building construction material (ICF, Insulated Concrete Form) as diurnal solar thermal energy storage that is integrated with a heat pump and a microchannel solar thermal (MCST) collector. Little literature has studied the application of pre-existing building walls as active solar thermal energy storage as a feasible and industrialized solution to the solar thermal mismatch. By using ICF walls that are integrated into the building envelope instead of big storage tanks, excess solar energy can be stored in the concrete of the ICF wall, which has EPS insulation layers on both sides to retain the thermal energy. In this study, two solar-based systems are designed and simulated in the Transient Systems Simulation Program (TRNSYS) to compare the thermal-storage benefits of ICF walls against a system without ICF walls. In this study, the heating load and DHW of a Canadian single-family house located in London, Ontario, are provided by solar-based systems. The proposed system integrates the MCST collector, a water-to-water HP, a preheat tank, the main tank, fan coils (to deliver the building heating load), and ICF walls. During the day, excess solar energy is stored in the ICF walls (charging cycle). Thermal energy can be recovered from the ICF walls when the preheat tank temperature drops below that of the ICF wall (discharging process) to increase the COP of the heat pump. The evaporator of the heat pump is coupled with the preheat tank. The warm water provided by the heat pump is stored in the second tank. Fan coil units are in contact with the tank to provide the building heating load. DHW is also provided from the main tank. It was found that the system with ICF walls, with an average solar fraction of 82%-88%, can cover the whole heating demand plus DHW for nine months and has a 10-15% higher average solar fraction than the system without ICF walls. A sensitivity analysis for the different parameters influencing the solar fraction is discussed in detail.
Keywords: net-zero building, renewable energy, solar thermal storage, microchannel solar thermal collector
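The solar fraction used to compare the two systems is simply the share of the heating-plus-DHW load met by solar energy; the sketch below shows the calculation with hypothetical monthly energy totals, not TRNSYS outputs from this study.

```python
def solar_fraction(q_solar_kwh, q_aux_kwh):
    """Fraction of the heating + DHW load met by solar energy.
    SF = Q_solar / (Q_solar + Q_auxiliary)."""
    return q_solar_kwh / (q_solar_kwh + q_aux_kwh)

# Hypothetical monthly totals (kWh), for illustration only
monthly = [(1450.0, 260.0), (1300.0, 230.0), (1100.0, 150.0)]
for month, (q_sol, q_aux) in enumerate(monthly, start=1):
    print(f"Month {month}: SF = {solar_fraction(q_sol, q_aux):.2f}")
```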
Procedia PDF Downloads 121
477 Electron Bernstein Wave Heating in the Toroidally Magnetized System
Authors: Johan Buermans, Kristel Crombé, Niek Desmet, Laura Dittrich, Andrei Goriaev, Yurii Kovtun, Daniel López-Rodriguez, Sören Möller, Per Petersson, Maja Verstraeten
Abstract:
The International Thermonuclear Experimental Reactor (ITER) will rely on three sources of external heating to produce and sustain a plasma: Neutral Beam Injection (NBI), Ion Cyclotron Resonance Heating (ICRH), and Electron Cyclotron Resonance Heating (ECRH). ECRH is a way to heat the electrons in a plasma by resonant absorption of electromagnetic waves. The energy of the electrons is transferred indirectly to the ions by collisions. The electron cyclotron heating system can be directed to deposit heat in particular regions in the plasma (https://www.iter.org/mach/Heating). Electron Cyclotron Resonance Heating (ECRH) at the fundamental resonance in X-mode is limited by a low cut-off density. Electromagnetic waves cannot propagate in the region between this cut-off and the Upper Hybrid Resonance (UHR) and cannot reach the Electron Cyclotron Resonance (ECR) position. Higher harmonic heating is hence preferred in present-day heating scenarios to overcome this problem. Additional power deposition mechanisms can occur above this threshold to increase the plasma density. These include collisional losses in the evanescent region, resonant power coupling at the UHR, tunneling of the X-wave with resonant coupling at the ECR, and conversion to the Electron Bernstein Wave (EBW) with resonant coupling at the ECR. A more profound knowledge of these deposition mechanisms can help determine the optimal plasma production scenarios. Several ECRH experiments are performed on the TOroidally MAgnetized System (TOMAS) to identify the conditions for Electron Bernstein Wave (EBW) heating. Density and temperature profiles are measured with movable triple Langmuir probes in the horizontal and vertical directions. Measurements of the forward and reflected power allow evaluation of the coupling efficiency. Optical emission spectroscopy and camera images also contribute to plasma characterization. The influence of the injected power, magnetic field, gas pressure, and wave polarization on the different deposition mechanisms is studied, and the contribution of the Electron Bernstein Wave is evaluated. The TOMATOR 1D hydrogen-helium plasma simulator numerically describes the evolution of currentless magnetized radio-frequency plasmas in a tokamak based on Braginskii's continuity and heat balance equations. This code was initially benchmarked with experimental data from TCV to determine the transport coefficients. The code is used to model the plasma parameters and the power deposition profiles. The modeling is compared with the data from the experiments.
Keywords: electron Bernstein wave, Langmuir probe, plasma characterization, TOMAS
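The resonances and cut-offs discussed above follow from the standard cold-plasma relations: the electron cyclotron frequency f_ce = eB/(2π mₑ), the electron plasma frequency f_pe, and the upper hybrid resonance f_UH = (f_ce² + f_pe²)^(1/2). The sketch below evaluates them for illustrative field and density values that are not TOMAS parameters.

```python
import math

E = 1.602176634e-19      # elementary charge (C)
M_E = 9.1093837015e-31   # electron mass (kg)
EPS0 = 8.8541878128e-12  # vacuum permittivity (F/m)

def f_ce(b_tesla):
    """Electron cyclotron frequency (Hz)."""
    return E * b_tesla / (2 * math.pi * M_E)

def f_pe(n_e):
    """Electron plasma frequency (Hz) for density n_e in m^-3."""
    return math.sqrt(n_e * E**2 / (EPS0 * M_E)) / (2 * math.pi)

def f_uh(b_tesla, n_e):
    """Upper hybrid resonance frequency (Hz)."""
    return math.sqrt(f_ce(b_tesla)**2 + f_pe(n_e)**2)

# Illustrative low-field, low-density values (placeholders, not TOMAS settings)
B, n = 0.1, 1e17
print(f"f_ce = {f_ce(B)/1e9:.2f} GHz, f_pe = {f_pe(n)/1e9:.2f} GHz, "
      f"f_UH = {f_uh(B, n)/1e9:.2f} GHz")
```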
Procedia PDF Downloads 96476 Automated Computer-Vision Analysis Pipeline of Calcium Imaging Neuronal Network Activity Data
Authors: David Oluigbo, Erik Hemberg, Nathan Shwatal, Wenqi Ding, Yin Yuan, Susanna Mierau
Abstract:
Introduction: Calcium imaging is an established technique in neuroscience research for detecting activity in neural networks. Bursts of action potentials in neurons lead to transient increases in intracellular calcium, visualized with fluorescent indicators. Manual identification of cell bodies and their contours by experts typically takes 10-20 minutes per calcium imaging recording. Our aim, therefore, was to design an automated pipeline to facilitate and optimize calcium imaging data analysis. The pipeline aims to accelerate cell body and contour identification and the production of graphical representations reflecting changes in neuronal calcium-based fluorescence. Methods: We created a Python-based pipeline that uses OpenCV (a computer vision Python package) to accurately (1) detect neuron contours, (2) extract the mean fluorescence within each contour, and (3) identify transient changes in the fluorescence due to neuronal activity. The pipeline consists of three Python scripts, all of which can be accessed easily through a Jupyter notebook. In total, we tested this pipeline on ten separate calcium imaging datasets from murine dissociated cortical cultures. We then compared the automated pipeline outputs with manually labeled data for neuronal cell location and the corresponding fluorescence time series generated by an expert neuroscientist. Results: Our results show that the automated pipeline efficiently pinpoints neuronal cell body locations and contours and provides a graphical representation of neural network metrics that accurately reflects changes in neuronal calcium-based fluorescence. The pipeline detected the shape, area, and location of most neuronal cell body contours by using grayscale image conversion and binary thresholding, which allow computer vision to better distinguish between cells and non-cells. Its results were comparable to manually analyzed results, but with analysis times reduced to 2-5 minutes per recording versus 10-20 minutes for manual analysis. Based on these findings, our next step is to precisely measure the specificity and sensitivity of the automated pipeline's cell body and contour detection in order to extract more robust neural network metrics and dynamics. Conclusion: Our Python-based pipeline performed automated computer-vision analysis of calcium imaging recordings of neuronal cell bodies in neuronal cell cultures. Our next goal is to improve cell body and contour detection to produce more robust, accurate neural network metrics and dynamic graphs. Keywords: calcium imaging, computer vision, neural activity, neural networks
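The paragraph above names the main computer-vision steps (grayscale conversion, binary thresholding, contour detection, and mean-fluorescence extraction). The sketch below illustrates those steps with OpenCV and NumPy. The Otsu thresholding, the minimum-area filter, the percentile baseline for ΔF/F, and the synthetic test stack are assumptions chosen for illustration; they are not the exact parameters or data of the authors' scripts.

```python
import cv2
import numpy as np

def detect_cell_contours(mean_image, min_area=30):
    """Find candidate cell-body contours in a time-averaged calcium image."""
    gray = cv2.normalize(mean_image, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    # Binary (Otsu) thresholding separates bright cell bodies from background
    _, binary = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [c for c in contours if cv2.contourArea(c) >= min_area]

def fluorescence_traces(stack, contours):
    """Mean fluorescence inside each contour for every frame, converted to dF/F."""
    traces = []
    for c in contours:
        mask = np.zeros(stack.shape[1:], dtype=np.uint8)
        cv2.drawContours(mask, [c], -1, 255, thickness=-1)   # filled ROI mask
        trace = np.array([cv2.mean(frame, mask=mask)[0] for frame in stack])
        f0 = np.percentile(trace, 10)                        # assumed baseline estimate
        traces.append((trace - f0) / f0)
    return np.array(traces)

if __name__ == "__main__":
    # Hypothetical recording: 200 frames of 128 x 128 pixels (synthetic noise)
    stack = np.random.poisson(20, (200, 128, 128)).astype(np.float32)
    rois = detect_cell_contours(stack.mean(axis=0))
    print(fluorescence_traces(stack, rois).shape)
```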
Procedia PDF Downloads 83475 Serological Evidence of Brucella spp, Coxiella burnetti, Chlamydophila abortus, and Toxoplasma gondii Infections in Sheep and Goat Herds in the United Arab Emirates
Authors: Nabeeha Hassan Abdel Jalil, Robert Barigye, Hamda Al Alawi, Afra Al Dhaheri, Fatma Graiban Al Muhairi, Maryam Al Khateri, Nouf Al Alalawi, Susan Olet, Khaja Mohteshamuddin, Ahmad Al Aiyan, Mohamed Elfatih Hamad
Abstract:
A serological survey was carried out to determine the seroprevalence of Brucella spp, Coxiella burnetii, Chlamydophila abortus, and Toxoplasma gondii in sheep and goat herds in the UAE. A total of 915 blood samples were tested: samples (n = 222, sheep; n = 215, goats) were collected from livestock farms in the Emirates of Abu Dhabi, Dubai, Sharjah and Ras Al-Khaimah (RAK), and an additional 478 samples (n = 244, sheep; n = 234, goats) were collected from the Al Ain livestock central market. All samples were tested by indirect ELISA for pathogen-specific antibodies, with the Brucella antibodies further corroborated by the Rose Bengal agglutination test. Seropositivity for the four pathogens was variably documented in sheep and goats from the study area. The overall livestock-farm seroprevalence rates for Brucella spp, C. burnetii, C. abortus, and T. gondii were, respectively, 2.7%, 27.9%, 8.1%, and 16.7% for sheep, and 0.0%, 31.6%, 9.3%, and 5.1% for goats. The corresponding seroprevalence rates in samples from the livestock market were 7.4%, 21.7%, 16.4%, and 7.0% for sheep, and 0.9%, 32.5%, 19.2%, and 11.1% for goats. Overall, sheep were 12.59 times more likely than goats to test seropositive for Brucella spp (OR 12.59 [95% CI 2.96-53.6]) but less likely to be positive for C. burnetii antibodies (OR 0.73 [95% CI 0.54-0.97]). Notably, the differences in the seroprevalence rates of C. abortus and T. gondii between sheep and goats were not statistically significant (p > 0.05). The present data indicate that all four study pathogens are present in sheep and goat populations in the UAE, where coxiellosis is apparently the most seroprevalent, followed by chlamydophilosis, toxoplasmosis, and brucellosis. While sheep from the livestock market were more likely than those from farms to be Brucella-seropositive, the overall exposure risk for C. burnetii appears to be greater for goats than for sheep. As animals from the livestock market were also more likely to be seropositive for Chlamydophila spp, it is possible that, under UAE animal production conditions, coxiellosis and chlamydophilosis are more likely to increase the culling rate of domesticated small ruminants than toxoplasmosis and brucellosis. While anecdotal reports have previously insinuated that brucellosis may be a significant animal health risk in the UAE, the present data suggest that C. burnetii, C. abortus and T. gondii are more significant pathogens of sheep and goats in the country. Despite this, the extent to which these pathogens contribute nationally to reproductive failure in sheep and goat herds is not known and needs to be investigated. These agents may also carry a zoonotic risk that needs to be investigated in risk groups such as farm workers and slaughterhouse personnel. An ongoing study is evaluating the seroprevalence of bovine coxiellosis in the Emirate of Abu Dhabi, and the resulting data will further elucidate the broader epidemiological dynamics of the disease in the national herd. Keywords: Brucella spp, Chlamydophila abortus, goat, sheep, Toxoplasma gondii, UAE
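To illustrate the odds-ratio statistic reported above (e.g., OR 12.59 [95% CI 2.96-53.6] for Brucella seropositivity in sheep versus goats), the sketch below computes an odds ratio with a Wald 95% confidence interval from a 2x2 table. The counts used in the example are hypothetical placeholders, not the study data.

```python
import math

def odds_ratio_wald(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table:
    a = group 1 positive, b = group 1 negative,
    c = group 2 positive, d = group 2 negative."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lower = math.exp(math.log(or_) - z * se_log)
    upper = math.exp(math.log(or_) + z * se_log)
    return or_, lower, upper

if __name__ == "__main__":
    # Hypothetical counts: sheep (group 1) vs goats (group 2), Brucella +/-
    or_, lo, hi = odds_ratio_wald(a=30, b=400, c=3, d=450)
    print(f"OR {or_:.2f} [95% CI {lo:.2f}-{hi:.2f}]")
```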
Procedia PDF Downloads 205474 The Physiological Effects of Thyriod Disorders During the Gestatory Period on Fetal Neurological Development: A Descriptive Review
Authors: Vanessa Bennemann, Gabriela Laste, Márcia Inês Goettert
Abstract:
The gestational period is a phase in which the pregnant woman undergoes constant physiological and hormonal changes. These changes are part of the woman's biological cycle, the development of the fetus, childbirth, and lactation, and they reflect the immunological adaptation of the human reproductive process, which is directly related to the well-being and development of the pregnancy. Although most pregnancies occur without complications, about 15% of pregnant women develop potentially fatal complications, implying maternal and fetal risk and therefore requiring specialized care for high-risk pregnant women (HRPW), with obstetric interventions for the survival of the mother and/or fetus. Among the risk factors that characterize HRPW are the woman's age, gestational diabetes mellitus (GDM), autoimmune diseases, infectious diseases such as syphilis and HIV, hypertension (SAH), preeclampsia, eclampsia, HELLP syndrome, uterine contraction abnormalities, premature placental detachment (PPD), and thyroid disorders, among others. Pregnancy also has an impact on the thyroid gland, changing its function and altering thyroid hormone (TH) profiles and production as pregnancy progresses. Consequently, throughout the gestational period, the interpretation of thyroid function tests depends on the stage of the pregnancy. Thyroid disorders are directly related to adverse obstetric outcomes and to impaired child development; adequate release of TH is therefore important for a pregnancy without complications and for optimal fetal growth and development. Objective: To investigate the physiological effects caused by thyroid disorders in the gestational period. Methods: A search for articles indexed in the PubMed, SciELO, and MDPI databases was performed using the Boolean operator "AND" with the descriptors Pregnancy and Thyroid, in several combinations that included Melatonin, Thyroidopathy, Inflammatory processes, Cytokines, Anti-inflammatory, Antioxidant, and High-risk pregnancy. Screening was subsequently performed through the analysis of titles and/or abstracts. Inclusion criteria were clinical studies in general, randomized or not, published in English within the 10 years prior to the search; excluded were experimental studies, case reports, and research still in the development phase. Results: In the preliminary results, a total of 183 studies were found, of which 57 were excluded (e.g., studies of cancer, diabetes, obesity, and skin diseases). Conclusion: To date, it has been identified that thyroid diseases can impair the fetus's brain development. Further research on this matter is suggested in order to identify new substances with a potential therapeutic effect to support gestation complicated by thyroid disease. Keywords: pregnancy, thyroid, melatonin, high-risk pregnancy
Procedia PDF Downloads 145473 Additive Manufacturing with Ceramic Filler
Authors: Irsa Wolfram, Boruch Lorenz
Abstract:
Innovative solutions in additive manufacturing applying material extrusion for functional parts necessitate innovative filaments of consistent quality. Uniform homogeneity and a consistent dispersion of particles embedded in filaments generally require multiple cycles of extrusion or well-prepared primal matter produced by injection molding, kneader machines, or mixing equipment. These technologies demand dedicated equipment that is rarely at the disposal of production laboratories unfamiliar with research in polymer materials, in contrast to laboratories that investigate complex material topics and technology science to leverage the potential of 3D printing. Consequently, scientific studies in such labs are often constrained to the filler compositions and concentrations offered on the market. We therefore introduce a prototypal laboratory methodology, scalable to tailored primal matter, for extruding ceramic composite filaments with fused filament fabrication (FFF) technology. A desktop single-screw extruder serves as the core device for the experiments. Custom-made filaments encapsulate the ceramic fillers in polylactide (PLA), a thermoplastic polyester, which serves as the primal matter and is processed in the melting zone of the extruder, preserving the defined concentration of the fillers. Validated results demonstrate that this approach enables continuously produced, uniform composite filaments with consistent homogeneity that are 3D-printable with controllable dimensions, a prerequisite for any scalable application. Additionally, digital microscopy confirms the steady dispersion of the ceramic particles in the composite filament and permits a 2D reconstruction of the planar distribution of the embedded ceramic particles in the PLA matrix. The innovation of the introduced method lies in the smart simplicity of preparing the composite primal matter: it circumvents the inconvenience of numerous extrusion operations and expensive laboratory equipment, yet delivers consistent filaments of controlled, predictable, and reproducible filler concentration, which is the prerequisite for any industrial application. The introduced prototypal laboratory methodology appears applicable to other polymer matrices and suitable for further functional particle types beyond ceramic fillers. This opens a roadmap for further laboratory development of specialized composite filaments, providing value for industry and society. This low-threshold entry into the sophisticated preparation of composite filaments, enabling businesses to create their own dedicated filaments, will support the mutual efforts to establish 3D printing for new functional devices. Keywords: additive manufacturing, ceramic composites, complex filament, industrial application
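As a worked illustration of preparing primal matter with a defined filler concentration, the sketch below converts a target ceramic volume fraction into the mass of powder to blend with a given mass of PLA. The densities (alumina and PLA) are typical literature values and the 10 vol% target is an assumption for illustration; neither is a value reported by the authors.

```python
def filler_mass_for_volume_fraction(pla_mass_g, phi,
                                    rho_filler=3.95, rho_pla=1.24):
    """Mass of ceramic powder (g) to blend with PLA for a target filler
    volume fraction phi. Densities in g/cm^3 (assumed: alumina ~3.95, PLA ~1.24)."""
    v_pla = pla_mass_g / rho_pla            # cm^3 of polymer matrix
    v_filler = phi / (1.0 - phi) * v_pla    # cm^3 of filler for the target fraction
    return v_filler * rho_filler

if __name__ == "__main__":
    # Example: 10 vol% ceramic in 500 g of PLA pellets
    m = filler_mass_for_volume_fraction(500.0, 0.10)
    print(f"{m:.1f} g of ceramic powder per 500 g of PLA")
```

Because the blend is metered by mass before extrusion, the same calculation can be rerun for any density pair, which is one reason a single-pass, pre-weighed batch can preserve the defined concentration through the melting zone.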
Procedia PDF Downloads 106472 A Lightweight Interlock Block from Foamed Concrete with Construction and Agriculture Waste in Malaysia
Authors: Nor Azian Binti Aziz, Muhammad Afiq Bin Tambichik, Zamri Bin Hashim
Abstract:
The rapid development of the construction industry has contributed to increased construction waste, with concrete waste being among the most abundant. This waste is generated at ready-mix batching plants after the concrete cube testing process is completed and is disposed of in landfills, increasing solid waste management costs. This study aims to evaluate the engineering characteristics of foamed concrete incorporating construction and agricultural waste in order to determine the usability of recycled materials in the construction of non-load-bearing walls. The study involves the collection of construction waste, such as recycled concrete aggregate (RCA) obtained from the remains of tested concrete cubes, which is then examined in the laboratory. Additionally, agricultural waste in the form of rice husk ash (RHA) is mixed into the foamed concrete interlock blocks to enhance their strength. The optimal density of foamed concrete for this study was determined by mixing mortar and foaming agent to achieve the minimum targeted compressive strength required for non-load-bearing walls. The tests were conducted in two phases. In Phase 1, elemental analysis using an X-ray fluorescence (XRF) spectrometer was conducted on the materials used in the production of the interlock blocks, namely sand, recycled concrete aggregate (RCA), and rice husk ash (RHA). Phase 2 involved physical and thermal tests on the foamed concrete mixtures, namely the compressive strength, thermal conductivity, and fire resistance tests. The results showed that foamed concrete can be used to produce lightweight interlock blocks. X-ray fluorescence spectrometry plays a crucial role in the characterization, quality control, and optimization of foamed concrete mixes containing construction and agricultural waste. Depending on the mix composition, the resulting chemical and physical properties, and the nature of the replacement (either as cement or as fine aggregate), each waste contributes differently to the performance of the foamed concrete. Interlocking blocks made from foamed concrete are advantageous because their reduced weight makes them easier to handle and transport than traditional concrete blocks. In addition, foamed concrete typically offers good thermal and acoustic insulation, making it suitable for a variety of building projects. Using foamed concrete to produce lightweight interlock blocks could therefore contribute to more efficient and sustainable construction practices, while RCA derived from concrete cube waste can serve as a substitute for sand in producing the blocks. Keywords: construction waste, recycled aggregates (RCA), sustainable concrete, structure material
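As a simple illustration of targeting a plastic density for a foamed concrete mix, the sketch below estimates the foam volume fraction to add to a base mortar using an ideal two-component mixing rule. All densities (base mortar, pre-formed foam, target mix) are assumed values for illustration and do not reproduce the study's mix design.

```python
def foam_volume_fraction(rho_mortar=2100.0, rho_foam=50.0, rho_target=1200.0):
    """Foam volume fraction x from an ideal mixing rule (densities in kg/m^3):
    rho_target = (1 - x) * rho_mortar + x * rho_foam."""
    return (rho_mortar - rho_target) / (rho_mortar - rho_foam)

if __name__ == "__main__":
    x = foam_volume_fraction()
    print(f"Foam volume fraction: {x:.2f} "
          f"(about {x * 1000:.0f} L of foam per m^3 of finished mix)")
```

In practice the trial density is then verified against the minimum compressive strength target, since strength falls as more foam is entrained.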
Procedia PDF Downloads 54471 The Impact of the Global Financial Crisis on the Performance of Czech Industrial Enterprises
Authors: Maria Reznakova, Michala Strnadova, Lukas Reznak
Abstract:
The global financial crisis that erupted in 2008 is associated mainly with the debt crisis. It quickly spread globally through financial markets, international banks and trade links, and affected many economic sectors. Measured by the year-on-year change in GDP and industrial production, the consequences of the global financial crisis manifested themselves, with some delay, in the Czech economy as well; this can be considered a result of the overwhelming export orientation of Czech industrial enterprises. These events offer an important opportunity to study how financial and macroeconomic instability affects corporate performance. Corporate performance factors have long been given considerable attention, so it is reasonable to ask whether findings published in the past remain valid in times of economic instability and subsequent recession. The decisive factor in effective corporate performance measurement is the existence of an appropriate system of indicators able to assess progress towards corporate goals. Performance measures may be based on non-financial as well as financial information. In this paper, financial indicators are used in combination with other characteristics, such as firm size and ownership structure. Financial performance is evaluated with traditional indicators, namely return on equity and return on assets, supplemented by indebtedness and current liquidity indices. As investments are a very important factor in corporate performance, their trends and importance were also investigated through the ratio of investments to the previous year's sales and the rate of reinvested earnings. In addition to the traditional financial performance indicators, Economic Value Added was also used. Data were obtained from a questionnaire survey administered in industrial enterprises in the Czech Republic, with respondents drawn from the companies' senior management, and from the AMADEUS database (Analyse Major Database from European Sources), which provided company accounting data. The results unequivocally confirmed that corporate performance dropped significantly in the 2010-2012 period, which can be considered a result of the global financial crisis and the subsequent economic recession. This was reflected mainly in decreasing values of the profitability indicators and of Economic Value Added. Although total year-on-year indebtedness declined, intercompany indebtedness increased, which can be considered a result of companies' impeded access to bank loans during the credit crunch. Comparison of these results with the conclusions of previous research on a similar topic showed that the assumption that firms under foreign control achieved higher performance during the period investigated was not confirmed. Keywords: corporate performance, foreign control, intercompany indebtedness, ratio of investment
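The indicators named above (return on equity, return on assets, current liquidity, the investment ratio, and Economic Value Added) follow standard definitions; the sketch below shows one common way to compute them. The exact formulations used by the authors (for example, the NOPAT and capital basis chosen for EVA) are not stated in the abstract, so these are assumed textbook variants, and the sample figures are hypothetical.

```python
def roe(net_income, equity):
    """Return on equity."""
    return net_income / equity

def roa(net_income, total_assets):
    """Return on assets."""
    return net_income / total_assets

def current_ratio(current_assets, current_liabilities):
    """Current liquidity index."""
    return current_assets / current_liabilities

def investment_rate(investments, previous_year_sales):
    """Ratio of investments to previous year's sales."""
    return investments / previous_year_sales

def eva(nopat, wacc, invested_capital):
    """Economic Value Added: operating profit after tax less a capital charge."""
    return nopat - wacc * invested_capital

if __name__ == "__main__":
    # Hypothetical company figures (thousand CZK)
    print(f"ROE: {roe(1200, 8000):.2%}, ROA: {roa(1200, 20000):.2%}")
    print(f"Current ratio: {current_ratio(5000, 3200):.2f}, "
          f"Investment rate: {investment_rate(900, 18000):.2%}")
    print(f"EVA: {eva(nopat=1500, wacc=0.09, invested_capital=15000):.0f}")
```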
Procedia PDF Downloads 334470 An Integrated Approach to Handle Sour Gas Transportation Problems and Pipeline Failures
Authors: Venkata Madhusudana Rao Kapavarapu
Abstract:
The Intermediate Slug Catcher (ISC) facility was built to process nominally 234 MSCFD of export gas from the booster station on a day-to-day basis and to receive liquid slugs of up to 1600 m³ (10,000 BBLS) in volume when the incoming 24” gas pipelines are pigged following upsets or production of non-dew-pointed gas from gathering centers. The maximum slug sizes expected are 812 m³ (5,100 BBLS) in winter and 542 m³ (3,400 BBLS) in summer after operating for a month or more at 100 MMSCFD of wet gas, comprising 60 MMSCFD of treated gas from the booster station combined with 40 MMSCFD of untreated gas from the gathering center. The water content is approximately 60% but may be higher if the line is not pigged for an extended period, owing to the relative volatility of the condensate compared with water. In addition to its primary function as a slug catcher, the ISC facility will receive pigged liquids from the upstream and downstream segments of the 14” condensate pipeline, returned liquids from the AGRP pigged through the 8” pipeline, and blown-down fluids from the 14” condensate pipeline prior to maintenance. These fluids will be received in the condensate flash vessel or the condensate separator, depending on the specific operation, for the separation of water and condensate and the settlement of solids scraped from the pipelines. Condensate meeting the colour and 200 ppm water specifications will be dispatched to the AGRP through the 14” pipeline, while off-spec material will be returned to BS-171 via the existing 10” condensate pipeline. When not in operation, the existing 24” export gas pipeline and the 10” condensate pipeline will be maintained under export gas pressure, ready for service. The gas manifold area contains the interconnecting piping and valves needed to align the slug catcher with either of the 24” export gas pipelines from the booster station and to direct the gas to the downstream segment of either of these pipelines. The manifold enables the slug catcher to be bypassed if it needs maintenance or if through-pigging of the gas pipelines is to be performed. All gas, whether bypassing the slug catcher or returning to the gas pipelines from it, passes through black powder filters to reduce the level of particulates in the stream; these filters are connected to the closed drain vessel to drain the collected liquid. Condensate from the booster station is transported to the AGRP through the 14” condensate pipeline. The existing 10” condensate pipeline will be used as a standby and for utility functions, such as returning condensate from the AGRP to the ISC or booster station, or transporting off-spec fluids from the ISC back to the booster station. The manifold contains block valves that allow the two condensate export lines to be segmented at the ISC, thus facilitating bi-directional flow independently in the upstream and downstream segments and ensuring complete pipeline and facility integrity. Pipeline failures will be attended to with the latest technologies, such as remote techno-plug techniques, and repair activities will be carried out as needed. Pipeline integrity will be evaluated with in-line inspection (ILI) pigging to assess pipeline condition. Keywords: integrity, oil & gas, innovation, new technology
Procedia PDF Downloads 73469 Using Pump as Turbine in Drinking Water Networks to Monitor and Control Water Processes Remotely
Authors: Sara Bahariderakhshan, Morteza Ahmadifar
Abstract:
Leakage is one of the most important problems that water distribution networks face, and its primary cause is excess pressure. There are many approaches to controlling this excess pressure, including the use of pressure-reducing valves (PRVs) or reducing pipe diameters. On the other hand, pumps consume electricity or fossil fuels to supply the pressure needed in distribution networks, yet excess pressure still arises in some branches due to network topology and operating variability, so pressure valves are unavoidable. Although PRVs are unavoidable, they effectively waste the electricity or fuel consumed by the pumps, because they simply dissipate the excess hydraulic pressure. Pumps working in reverse, or pumps as turbines (PAT), are readily available and effective means of reducing equipment costs in small hydropower plants. Urban areas in developing countries are expanding and may face water scarcity in the near future. These cities need larger water networks, which makes it harder to predict, control, and operate the urban water cycle. Higher energy use and therefore more pollution, slower repair services, greater user dissatisfaction, and more leakage are the serious problems of these networks. More effective systems than those used today are therefore needed to monitor and act on these complex networks. In this article, a new approach is proposed and evaluated: using PAT to produce enough energy for remote valves and sensors in the water network. These sensors can be used to determine discharge, pressure, water quality, and other important network characteristics. With the help of remote valves, pipeline discharge can be controlled; thus, instead of wasting excess hydraulic pressure, which may even be destructive in some cases, the goal of this article is to harvest that extra pressure from the pipeline and produce clean electricity for the remote instruments. Furthermore, as the network area grows, unwanted high pressure appears at some critical points; although not destructive, lowering this pressure extends the lifetime of the pipeline network without causing user dissatisfaction. The strategy proposed in this article leads to the wide use of PAT for pressure control and for producing the energy needed by remote valves and sensors, much as in supervisory control and data acquisition (SCADA) systems, making it easy to monitor and receive data from the urban water cycle and to adjust pipeline discharge and pressure remotely. This is a clean energy production scheme with no significant environmental impacts that can be used in urban drinking water networks without any inconvenience to consumers, leading to a stable and dynamic network with lower leakage and pollution. Keywords: new energies, pump as turbine, drinking water, distribution network, remote control equipment
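To quantify the idea of harvesting excess head instead of dissipating it in a PRV, the sketch below estimates the electrical power recoverable by a PAT from a given flow and excess head (P = rho * g * Q * H * eta) and compares it with a power budget for remote sensors and an actuated valve. The flow, head, overall efficiency, and instrument budget are illustrative assumptions, not values from the article.

```python
RHO = 1000.0   # kg/m^3, density of water
G = 9.81       # m/s^2, gravitational acceleration

def pat_power_w(flow_m3_s, excess_head_m, efficiency=0.55):
    """Electrical power (W) recoverable by a pump operating as a turbine."""
    return RHO * G * flow_m3_s * excess_head_m * efficiency

if __name__ == "__main__":
    # Hypothetical branch: 20 L/s of flow with 15 m of excess head
    p = pat_power_w(flow_m3_s=0.020, excess_head_m=15.0)
    instruments_w = 25.0   # assumed budget for remote valve, sensors, and telemetry
    print(f"Recoverable power: {p:.0f} W (instrument budget: {instruments_w:.0f} W)")
```

Even with a conservative PAT efficiency, a modest branch flow yields far more power than typical telemetry hardware needs, which is the core of the proposed strategy.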
Procedia PDF Downloads 464468 Using Pump as Turbine in Urban Water Networks to Control, Monitor, and Simulate Water Processes Remotely
Authors: Morteza Ahmadifar, Sarah Bahari Derakhshan
Abstract:
Leakage is one of the most important problems that water distribution networks face, and its primary cause is excess pressure. There are many approaches to controlling this excess pressure, including the use of pressure-reducing valves (PRVs) or reducing pipe diameters. On the other hand, pumps consume electricity or fossil fuels to supply the pressure needed in distribution networks, yet excess pressure still arises in some branches due to network topology and operating variability, so pressure valves are unavoidable. Although PRVs are unavoidable, they effectively waste the electricity or fuel consumed by the pumps, because they simply dissipate the excess hydraulic pressure. Pumps working in reverse, or pumps as turbines (PAT), are readily available and effective means of reducing equipment costs in small hydropower plants. Urban areas in developing countries are expanding and may face water scarcity in the near future. These cities need larger water networks, which makes it harder to predict, control, and operate the urban water cycle. Higher energy use and therefore more pollution, slower repair services, greater user dissatisfaction, and more leakage are the serious problems of these networks. More effective systems than those used today are therefore needed to monitor and act on these complex networks. In this article, a new approach is proposed and evaluated: using PAT to produce enough energy for remote valves and sensors in the water network. These sensors can be used to determine discharge, pressure, water quality, and other important network characteristics. With the help of remote valves, pipeline discharge can be controlled; thus, instead of wasting excess hydraulic pressure, which may even be destructive in some cases, the goal of this article is to harvest that extra pressure from the pipeline and produce clean electricity for the remote instruments. Furthermore, as the network area grows, unwanted high pressure appears at some critical points; although not destructive, lowering this pressure extends the lifetime of the pipeline network without causing user dissatisfaction. The strategy proposed in this article leads to the wide use of PAT for pressure control and for producing the energy needed by remote valves and sensors, much as in supervisory control and data acquisition (SCADA) systems, making it easy to monitor and receive data from the urban water cycle and to adjust pipeline discharge and pressure remotely. This is a clean energy production scheme with no significant environmental impacts that can be used in urban drinking water networks without any inconvenience to consumers, leading to a stable and dynamic network with lower leakage and pollution. Keywords: clean energies, pump as turbine, remote control, urban water distribution network
Procedia PDF Downloads 396467 Mapping and Mitigation Strategy for Flash Flood Hazards: A Case Study of Bishoftu City
Authors: Berhanu Keno Terfa
Abstract:
Flash floods are among the most dangerous natural disasters and pose a significant threat to human life. They occur frequently and can cause extensive damage to homes, infrastructure, and ecosystems while also claiming lives. Although flash floods can happen anywhere in the world, their impact is particularly severe in developing countries owing to limited financial resources, inadequate drainage systems, substandard housing, the lack of early warning systems, and insufficient preparedness. To address these challenges, a comprehensive study was undertaken to analyze and map flood inundation using Geographic Information System (GIS) techniques, considering the various factors that contribute to flash flood hazard, and to develop effective mitigation strategies. Key factors considered in the analysis include slope, drainage density, elevation, curve number, rainfall patterns, land-use/cover classes, and soil data. These variables were computed on ArcGIS software platforms: Sentinel-2 satellite imagery (10 m resolution) was used for land-use/cover classification, slope, elevation, and drainage density data were generated from the 12.5 m resolution ALOS PALSAR DEM, and the remaining data were obtained from the Ethiopian Meteorological Institute. By integrating and standardizing the collected data in GIS and employing the analytic hierarchy process (AHP) technique, the study delineated flash flood hazard (FFH) zones and generated a suitability map for urban agriculture. The FFH model identified four levels of risk in Bishoftu City: very high (2,106.4 ha), high (10,464.4 ha), moderate (1,444.44 ha), and low (0.52 ha), accounting for 15.02%, 74.7%, 10.1%, and 0.004% of the total area, respectively. The results underscore the vulnerability of many residential areas in Bishoftu City, particularly the previously developed central areas. Accurate spatial representation of flood-prone areas and potential agricultural zones is crucial for designing effective flood mitigation and agricultural production plans. The findings of this study emphasize the importance of flood risk mapping in raising public awareness, demonstrating vulnerability, strengthening financial resilience, protecting the environment, and informing policy decisions. Given the susceptibility of Bishoftu City to flash floods, it is recommended that the municipality prioritize urban agriculture adaptation, proper settlement planning, and drainage network design. Keywords: remote sensing, flash flood hazards, Bishoftu, GIS
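The hazard zoning described above rests on AHP-derived weights for the contributing factors. The sketch below illustrates the AHP step only: it derives priority weights from a pairwise comparison matrix via the principal eigenvector, checks the consistency ratio, and applies a weighted overlay to one raster cell. The example matrix, the four factor names, and the cell scores are placeholders, not the weights or factors exactly as used in the study.

```python
import numpy as np

# Saaty's random consistency index for matrix sizes 1..7
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}

def ahp_weights(pairwise):
    """Priority vector from the principal eigenvector plus the consistency ratio."""
    a = np.asarray(pairwise, dtype=float)
    n = a.shape[0]
    eigvals, eigvecs = np.linalg.eig(a)
    k = np.argmax(eigvals.real)                 # principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                                # normalized weights
    ci = (eigvals[k].real - n) / (n - 1)        # consistency index
    return w, ci / RI[n]                        # weights, consistency ratio

if __name__ == "__main__":
    # Hypothetical 4-factor comparison: slope, drainage density, rainfall, land use
    m = [[1,     2,   3,   4],
         [1 / 2, 1,   2,   3],
         [1 / 3, 1 / 2, 1, 2],
         [1 / 4, 1 / 3, 1 / 2, 1]]
    weights, cr = ahp_weights(m)
    print("weights:", np.round(weights, 3), "consistency ratio:", round(cr, 3))
    # Weighted overlay for one raster cell with factor scores rated 1 (low) to 5 (high)
    scores = np.array([4, 3, 5, 2])
    print("hazard index:", float(weights @ scores))
```

A consistency ratio below about 0.1 is normally taken as acceptable before the weights are applied to the reclassified factor rasters.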
Procedia PDF Downloads 38466 The Effect of Zeolite and Fertilizers on Yield and Qualitative Characteristics of Cabbage in the Southeast of Kazakhstan
Authors: Tursunay Vassilina, Aigerim Shibikeyeva, Adilet Sakhbek
Abstract:
Research has been carried out to study the influence of modified zeolite fertilizers on the quantitative and qualitative indicators of the cabbage variety Nezhenka. The use of zeolite and mineral fertilizers had a positive effect on both the yield and the quality indicators of the studied crop; the maximum yield increase from fertilizers was 16.5 t/ha. Application of both zeolite and fertilizer increased the dry matter, sugar, and vitamin C content of the cabbage heads, and the cabbage contained an amount of nitrates that is safe for human health. Among vegetable crops, cabbage has both food and feed value. One of the limiting factors in the sale of vegetable crops is the degradation of soil fertility due to depletion of nutrient reserves, erosion processes, and non-compliance with fertilizer application technologies. Natural zeolites are used as additives to mineral fertilizers for field application, which makes it possible to reduce fertilizer doses to minimal quantities; zeolites improve the agrophysical and agrochemical properties of the soil and the quality of plant products. The research was conducted as a field experiment with three replications on dark chestnut soil in 2023. The soil of the experimental plot (pH 7.2-7.3) is dark chestnut; the humus content in the arable layer is 2.15%, gross nitrogen 0.098%, and phosphorus and potassium 0.225% and 2.4%, respectively. The object of the study was the late cabbage variety Nezhenka. The fertilizer scheme for cabbage was: 1. Control (without fertilizers); 2. Zeolite, 2 t/ha; 3. N45P45K45; 4. N90P90K90; 5. Zeolite, 2 t/ha + N45P45K45; 6. Zeolite, 2 t/ha + N90P90K90. Yield was recorded manually on a plot-by-plot basis. In the plant samples, the following were determined: dry matter content by the thermostatic method (at 105 ºC), sugar content by the Bertrand titration method, nitrate content with 1% diphenylamine solution, and vitamin C by the titrimetric method with acid solution. The results established that the cabbage yield was highest, at 42.2 t/ha, in the Zeolite, 2 t/ha + N90P90K90 treatment. When the biochemical composition of the white cabbage was determined, the dry matter content was 9.5% and increased in the fertilized treatments. The total sugar content increased slightly with the use of zeolite (5.1%) and the modified zeolite fertilizer (5.5%); the vitamin C content ranged from 17.5 to 18.16%, compared with 17.21% in the control. The amount of nitrates in the produce increased with increasing doses of nitrogen fertilizers and decreased with the use of zeolite and the modified zeolite fertilizer, but did not exceed the maximum permissible concentration. Based on the research conducted, it can be concluded that the application of zeolite and fertilizers leads to a significant increase in yield compared with the unfertilized treatment and contributes to the production of cabbage with good, high-quality indicators. Keywords: cabbage, dry matter, nitrates, total sugar, yield, vitamin C
Procedia PDF Downloads 73