Search results for: slow sand filter
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2208


228 Using Soil Texture Field Observations as Ordinal Qualitative Variables for Digital Soil Mapping

Authors: Anne C. Richer-De-Forges, Dominique Arrouays, Songchao Chen, Mercedes Roman Dobarco

Abstract:

Most digital soil mapping (DSM) products rely on machine learning (ML) prediction models and/or on pedotransfer functions (PTF) whose calibration data come from soil analyses performed in laboratories. However, many other observations (often qualitative, nominal, or ordinal) could be used as proxies for lab measurements or as input data for ML or PTF predictions. DSM and ML are briefly described with some examples taken from the literature. We then explore the potential of an ordinal qualitative variable, the hand-feel soil texture (HFST), which estimates the mineral particle-size distribution (PSD): % of clay (0-2 µm), silt (2-50 µm) and sand (50-2000 µm) in 15 classes. The PSD can also be determined by laboratory measurements (LAST) to obtain the exact proportions of these particle sizes. However, due to cost constraints, HFST observations are much more numerous and spatially dense than LAST ones. Soil texture (ST) is a very important soil parameter to map, as it controls many soil properties and functions. An essential question therefore arises: is it possible to use HFST as a proxy for LAST in the calibration and/or validation of DSM predictions of ST? To answer this question, the first step is to compare HFST with LAST on a representative set where both types of information are available. This comparison was made on ca. 17,400 samples representative of a French region (34,000 km²). The accuracy of HFST was assessed, and each HFST class was characterized by a probability distribution function (PDF) of its LAST values. This makes it possible to randomly replace HFST observations by LAST values drawn from the previously calculated PDFs, resulting in a very large increase in the number of observations available for the calibration or validation of PTF and ML predictions. Some preliminary results are shown. First, the comparison between HFST classes and LAST analyses showed that accuracies could be considered very good compared to other studies.
The causes of some inconsistencies were explored, and most of them were well explained by other soil characteristics. We then show some examples applying these relationships and the enlarged dataset to several issues related to DSM. The first issue is: do the established PDFs enable HFST class observations to be used to improve LAST soil texture prediction? For this objective, we replaced all topsoil HFST observations by values drawn from the PDFs (100 replicates). Results were promising for the PTF we tested (a PTF predicting soil water holding capacity). For the question related to the ML prediction of LAST soil texture over the region, we made the same kind of replacement but implemented a 10-fold cross-validation using points where LAST values were available. We obtained only preliminary results, but they were rather promising. We then show another example illustrating the potential of using HFST as validation data. As HFST observations are very numerous in many countries, these promising results pave the way to an important improvement of DSM products throughout the world.
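
The PDF-based replacement step described above can be sketched as a simple resampling procedure. The class names, LAST values and helper function below are purely illustrative assumptions, not the authors' data or code:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical paired calibration data: for each hand-feel texture (HFST)
# class, the lab-measured (LAST) clay contents (%) observed on paired
# samples. Class names and values are illustrative only.
paired_last = {
    "sandy loam": np.array([8.0, 10.5, 12.0, 9.5, 11.0]),
    "clay loam":  np.array([28.0, 31.5, 30.0, 33.0, 29.5]),
}

def sample_last(hfst_class, n=1):
    """Draw LAST values from the empirical distribution of the given class."""
    values = paired_last[hfst_class]
    return rng.choice(values, size=n, replace=True)

# Replace an HFST-only field survey by plausible LAST values; repeating this
# replacement (the abstract mentions 100 replicates) yields replicated
# calibration sets for the PTF or ML model.
survey = ["sandy loam", "clay loam", "sandy loam"]
replicate = np.concatenate([sample_last(c) for c in survey])
print(replicate.shape)  # one LAST value per HFST observation
```

Repeating the draw propagates the within-class LAST variability into the calibration set instead of collapsing each HFST class to a single fixed value.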

Keywords: digital soil mapping, improvement of digital soil mapping predictions, potential of using hand-feel soil texture, soil texture prediction

Procedia PDF Downloads 201
227 Nascent Federalism in Nepal: An Observational Review in its Evolution

Authors: C. Shekhar Parajulee

Abstract:

Nepal practiced a centralized unitary governing system for a long time and moved to a federal system after the promulgation of the new constitution on 20 September 2015. This marked a major paradigm shift in governance. There are now three levels of government: one federal government at the center, seven provincial governments and 753 local governments. Federalism refers to a political governing system with multiple tiers of government working together in coordination; it is preferred for combining self-rule with shared rule. Though it has opened the door to people's rights, political stability, state restructuring, and sustainable peace and development, there are many prospects and challenges for its proper implementation. This research analyzes the discourses of federalism implementation in Nepal with special reference to one of the seven provinces, Gandaki. Federalism is a new phenomenon in Nepali politics, and informed debates on it are required for its sound evolution; this research will add value in that regard. Moreover, tracking its evolution and exploring the attitudes and behaviors of key actors and stakeholders in a new experiment with a new governing system is also important. The administrative and political system of Gandaki province will be critically examined in terms of service delivery and development. Besides demonstrating the performance of the provincial government and assembly, the study will analyze the inter-governmental relations of Gandaki with the other two tiers of government. For this research, people from provincial and local governments (elected representatives and government employees), provincial assembly members, academicians, civil society leaders and journalists are being interviewed, and the interview findings will be analyzed alongside published documents. Simply adopting a federal structure is not a solution in itself. As with the other provincial governments, Gandaki also had to start from scratch.
It gradually took the shape of a government and has been functioning sluggishly. The provincial government faces many challenges ahead, which have badly hindered its plans and actions. Additionally, fundamental laws, infrastructure and human resources are found to be insufficient at the sub-national level. Lack of clarity over jurisdictions is another main challenge. The Nepali Constitution assumes cooperation, coexistence and coordination as the fundamental principles of federalism, which, unfortunately, appear to be lacking among the three tiers of government despite their efforts. Though the devolution of power to sub-national governments is essential for the successful implementation of federalism, it has apparently been delayed by the centralized mentality of the bureaucracy as well as of political leaders. This research will highlight the reasons for the delay in the implementation of federalism. There may be multiple underlying reasons for the slow pace of implementation, and identifying them is very difficult. Moreover, the federal spirit appears to be absent in the main players of today's political system, which is a big irony. There are therefore some doubts about whether the federal system in Nepal is merely a keepsake or a substantive arrangement.

Keywords: federalism, inter-governmental relations, Nepal, provincial government

Procedia PDF Downloads 176
226 Polypyrrole as Bifunctional Materials for Advanced Li-S Batteries

Authors: Fang Li, Jiazhao Wang, Jianmin Ma

Abstract:

The practical application of Li-S batteries is hampered by the poor cycling stability caused by electrolyte-dissolved lithium polysulfides. Dual functionalities, i.e., strong chemical adsorption and high conductivity, are highly desirable in an ideal host material for a sulfur-based cathode. Polypyrrole (PPy), a conductive polymer, has been widely studied as a matrix for sulfur cathodes due to its high conductivity and strong chemical interaction with soluble polysulfides. Thus, a novel cathode structure consisting of a free-standing sulfur-polypyrrole cathode and a polypyrrole-coated separator was designed for flexible Li-S batteries. The PPy materials interact strongly with dissolved polysulfides, which suppresses the shuttle effect and improves cycling stability. In addition, the synthesized PPy film with a rough surface acts as a current collector, which improves the adhesion of sulfur materials and restrains volume expansion, enhancing structural stability during cycling. To further enhance cycling stability, a PPy-coated separator was also applied, which confines polysulfides to the cathode side to alleviate the shuttle effect. Moreover, the PPy layer coated on the commercial separator is much lighter than other reported interlayers. A soft-packaged flexible Li-S battery was designed and fabricated to test the practical application of the designed cathode and separator; it could power a device consisting of 24 light-emitting diode (LED) lights. The soft-packaged flexible battery still shows relatively stable cycling performance after repeated bending, indicating its potential for flexible batteries. A novel vapor-phase deposition method was also applied to prepare a uniform polypyrrole layer coated on a sulfur/graphene aerogel composite.
The polypyrrole layer simultaneously acts as host and adsorbent for efficient suppression of polysulfide dissolution through strong chemical interaction. Density functional theory (DFT) calculations reveal that polypyrrole can trap lithium polysulfides through stronger bonding energies. In addition, the deflation of the sulfur/graphene hydrogel during the vapor-phase deposition process enhances the contact of sulfur with the matrix, resulting in high sulfur utilization and good rate capability. As a result, the synthesized polypyrrole-coated sulfur/graphene aerogel composite delivers specific discharge capacities of 1167 mAh g⁻¹ and 409.1 mAh g⁻¹ at 0.2 C and 5 C, respectively. The capacity is maintained at 698 mAh g⁻¹ at 0.5 C after 500 cycles, corresponding to an ultra-slow decay rate of 0.03% per cycle.

Keywords: polypyrrole, strong chemical interaction, long-term stability, Li-S batteries

Procedia PDF Downloads 122
225 Extrudable Foamed Concrete: General Benefits in Prefabrication and Comparison in Terms of Fresh Properties and Compressive Strength with Classic Foamed Concrete

Authors: D. Falliano, G. Ricciardi, E. Gugliandolo

Abstract:

Foamed concrete belongs to the category of lightweight concrete. It is characterized by a density generally ranging from 200 to 2000 kg/m³ and typically comprises cement, water, preformed foam, fine sand and, possibly, fine particles such as fly ash or silica fume. The foam component mixed into the cement paste gives rise to a system of air voids in the cementitious matrix. The peculiar characteristics of foamed concrete elements can be summarized as follows: 1) lightness, which allows the dimensions of the resisting frame structure to be reduced and is advantageous for refurbishment or seismic retrofitting in seismically vulnerable areas; 2) thermal insulating properties, especially at low densities; 3) good fire resistance compared to ordinary concrete; 4) improved workability; 5) cost-effectiveness due to the use of rather simple constituents that are easily available locally. Classic foamed concrete cannot be extruded, as it lacks dimensional stability in the green state, and this severely limits the possibility of industrializing it through a simple and cost-effective process characterized by flexibility and high production capacity. In fact, the viscosity-enhancing agents (VEA) used to extrude traditional concrete cause the air bubbles in foamed concrete to collapse, so that it is impossible to extrude a lightweight product. These requirements have suggested the study of a particular additive that modifies the rheology of the fresh foamed concrete paste by increasing cohesion and viscosity and, at the same time, stabilizes the bubbles in the cementitious matrix, in order to allow dimensional stability in the green state and, consequently, the extrusion of a lightweight product. There are plans to submit the additive's formulation for patenting.
In addition to the general benefits of the extrusion process, extrudable foamed concrete allows other limits to be overcome: the elimination of formworks and an expanded application spectrum, due to the possibility of extrusion over a density range of 200 to 2000 kg/m³, which allows the prefabrication of both structural and non-structural elements. This contribution also presents the significant differences between extrudable and classic foamed concrete fresh properties in terms of slump. Plastic air content, plastic density, hardened density and compressive strength were also evaluated. The outcomes show that there are no substantial differences between the compressive strengths of extrudable and classic foamed concrete.

Keywords: compressive strength, extrusion, foamed concrete, fresh properties, plastic air content, slump

Procedia PDF Downloads 156
224 Microplastics Accumulation and Abundance Standardization for Fluvial Sediments: Case Study for the Tena River

Authors: Mishell E. Cabrera, Bryan G. Valencia, Anderson I. Guamán

Abstract:

Human dependence on plastic products has led to global pollution by plastic particles ranging in size from 0.001 to 5 millimeters, called microplastics (hereafter, MPs). The abundance of MPs is used as an indicator of pollution. However, reports of pollution (abundance of MPs) in river sediments do not consider that the accumulation of sediments and MPs depends on the energy of the river. That is, the abundance of MPs will be underestimated if the sediments analyzed come from places where the river flows with high energy, and overestimated if the sediment comes from places where the river flows with less energy. This bias can generate an error greater than 300% of the MPs value reported for the same river, and it should increase when comparisons are made between two rivers with different characteristics. Sections where the river flows with higher energy allow sands to be deposited and limit the accumulation of MPs, while sections where the same river has lower energy allow fine sediments such as clays and silts to be deposited and should facilitate the accumulation of MP particles. That is, the abundance of MPs in the same river is under-represented when the sediment analyzed is sand and over-represented if the sediment analyzed is silt or clay. The present investigation establishes a protocol that incorporates sample granulometry to calibrate MP quantification and eliminate this over- or under-representation bias (hereafter, granulometric bias). A total of 30 samples were collected, five in each of six work zones. The slope of the sampling points was less than 8 degrees (low-slope areas according to the Van Zuidam slope classification). During sampling, blanks were used to estimate possible contamination by MPs. Samples were dried at 60 degrees Celsius for three days.
A flotation technique was employed to isolate the MPs using sodium metatungstate solution with a density of 2 g/mL. For organic matter digestion, 30% hydrogen peroxide and Fenton's reagent were used at a ratio of 6:1 for 24 hours. The samples were stained with Rose Bengal at a concentration of 200 mg/L and subsequently dried in an oven at 60 degrees Celsius for 1 hour before being identified and photographed under a stereomicroscope (eyepiece magnification 10x, zoom magnification 4x, objective lens magnification 0.35x) for analysis in ImageJ. A total of 630 MP fibers were identified, mainly red, black, blue, and transparent, with an overall average length of 474.310 µm and an overall median length of 368.474 µm. The particle size of the 30 samples was determined using 100 g per sample and sieves with the following apertures: 2 mm, 1 mm, 500 µm, 250 µm, 125 µm and 63 µm. This sieving allowed a visual evaluation and a more precise quantification of the microplastics present. At the same time, the weight of sediment in each fraction was measured, revealing a clear trend: as the amount of sediment in the < 63 µm fraction increases, a significant increase in the number of MP particles is observed.
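
One plausible form of the granulometric correction is to express MP abundance relative to the mass share of the fine (< 63 µm) fraction, so that samples from high- and low-energy reaches become comparable. The function, sieve labels and sample values below are illustrative assumptions, not the paper's actual calibration:

```python
# Illustrative granulometric standardization of microplastic (MP) abundance.
def standardized_abundance(mp_count, fraction_weights_g):
    """fraction_weights_g maps sieve-fraction label -> sediment mass (g)."""
    total = sum(fraction_weights_g.values())
    fine = fraction_weights_g.get("<63um", 0.0)
    bulk_abundance = mp_count / total   # particles per g of bulk sediment
    fine_share = fine / total           # mass proportion of mud (silt + clay)
    # Scale bulk abundance by the inverse of the fine share (guard against 0)
    return bulk_abundance / fine_share if fine_share > 0 else float("nan")

# Hypothetical 100 g sample split across the sieve fractions listed above
sample = {"2mm": 5.0, "1mm": 10.0, "500um": 20.0, "250um": 30.0,
          "125um": 20.0, "<63um": 15.0}
print(round(standardized_abundance(21, sample), 3))  # → 1.4
```

A sand-dominated sample (small fine share) is scaled up and a mud-dominated sample scaled down, which is the direction of correction the abstract's argument requires.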

Keywords: microplastics, pollution, sediments, Tena River

Procedia PDF Downloads 54
223 In vitro Susceptibility of Isolated Shigella flexneri and Shigella dysenteriae to the Ethanolic Extracts of Trachyspermum ammi and Peganum harmala

Authors: Ibrahim Siddig Hamid, Ikram Mohamed Eltayeb

Abstract:

Trachyspermum ammi, of the family Apiaceae, is used traditionally for the treatment of gastrointestinal ailments, lack of appetite and bronchial problems, as well as being used as an antiseptic, antimicrobial, antipyretic and febrifuge and in the treatment of typhoid fever. Peganum harmala, of the family Zygophyllaceae, has been reported to have antibacterial activity and is used to treat depression and recurring fevers; it is also used to kill algae, bacteria, intestinal parasites and molds. In Sudan, the combination of the two plants is traditionally used for the treatment of bacillary dysentery. Bacillary dysentery is caused by one or more species of Shigella bacteria, mainly Shigella dysenteriae and Shigella flexneri, and is mainly found in hot countries like Sudan with poor hygiene and sanitation. Bacillary dysentery causes sudden onset of high fever and chills, abdominal pain, cramps and bloating, urgency to pass stool, weight loss, and dehydration, and if left untreated it can lead to serious complications, including delirium, convulsions and coma; a serious infection like this can be fatal within 24 hours. The objective of this study is to investigate the in vitro susceptibility of Sh. flexneri and Sh. dysenteriae to T. ammi and P. harmala. The plants were extracted with 96% ethanol using a Soxhlet apparatus. The antimicrobial activity of the extracts was investigated by the disc diffusion method. The discs were prepared by soaking sterilized filter-paper discs in 20 microliters of serially diluted solutions of each plant extract (100, 50, 25, 12.5 and 6.25 mg/ml) and placing them on Mueller-Hinton agar plates inoculated separately with bacterial suspension. The plates were incubated for 24 hours at 37°C, and the minimum inhibitory concentration of each extract, i.e., the lowest concentration that inhibited bacterial growth, was determined.
The results showed high antimicrobial activity of the T. ammi extract, with inhibition zone diameters ranging from 18-20 mm; its minimum inhibitory concentration was found to be 25 mg/ml against the two Shigella species. The P. harmala extract was found to have only a slight antibacterial effect against the two bacteria. These results justify the Sudanese traditional use of the Trachyspermum ammi plant for the treatment of bacillary dysentery.

Keywords: harmala, peganum, shigella, trachyspermum

Procedia PDF Downloads 216
222 Human Identification Using Local Roughness Patterns in Heartbeat Signal

Authors: Md. Khayrul Bashar, Md. Saiful Islam, Kimiko Yamashita, Yano Midori

Abstract:

Despite some progress in human authentication, conventional biometrics (e.g., facial features, fingerprints, retinal scans, gait, voice patterns) are not robust against falsification because they are neither confidential nor secret to an individual. As a non-invasive tool, the electrocardiogram (ECG) has recently shown great potential in human recognition due to its unique rhythms, which reflect the variability of human heart structures (chest geometry, sizes, and positions). Moreover, the ECG has a real-time vitality characteristic that signifies live signs, ensuring that a legitimate individual is identified. However, the detection accuracy of current ECG-based methods is not sufficient due to the high variability of an individual's heartbeats at different instances in time. These variations may occur due to muscle flexure, changes in mental or emotional state, and changes in sensor position or long-term baseline shift during the recording of the ECG signal. In this study, a new method is proposed for human identification based on the extraction of the local roughness of ECG heartbeat signals. First, the ECG signal is preprocessed using a second-order band-pass Butterworth filter with cut-off frequencies of 0.00025 and 0.04. A number of local binary patterns are then extracted by moving a neighborhood window along the ECG signal. At each instant of the ECG signal, the pattern is formed by comparing the ECG intensities at neighboring time points with the central intensity in the moving window. Binary weights are then multiplied with the pattern to obtain the local roughness description of the signal. Finally, histograms are constructed that describe the heartbeat signals of the individual subjects in the database. One advantage of the proposed feature is that, unlike conventional methods, it does not depend on accurate detection of the QRS complex.
Supervised recognition methods were then designed, using minimum-distance-to-mean and Bayesian classifiers, to identify authentic human subjects. An experiment with sixty (60) ECG signals from sixty adult subjects in the PTB database of the National Metrology Institute of Germany showed that the proposed method is promising compared with a conventional interval- and amplitude-feature-based method.
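
The moving-window comparison described above resembles a one-dimensional local binary pattern. A minimal sketch follows, in which the window radius, the power-of-two weighting and the toy signal are assumptions, not the authors' exact settings:

```python
import numpy as np

def local_roughness_histogram(sig, radius=2):
    """1-D local-binary-pattern style descriptor of a heartbeat signal.

    At each sample, the `radius` neighbours on both sides are compared with
    the centre value; the resulting bits are weighted by powers of two and
    the codes are histogrammed into a normalised descriptor.
    """
    n_bits = 2 * radius
    weights = 2 ** np.arange(n_bits)
    codes = []
    for i in range(radius, len(sig) - radius):
        # neighbours left and right of the centre sample
        neighbours = np.r_[sig[i - radius:i], sig[i + 1:i + radius + 1]]
        bits = (neighbours >= sig[i]).astype(int)
        codes.append(int((bits * weights).sum()))
    hist, _ = np.histogram(codes, bins=np.arange(2 ** n_bits + 1))
    return hist / hist.sum()  # normalised histogram = subject descriptor

beat = np.sin(np.linspace(0, 2 * np.pi, 50))  # toy "heartbeat"
h = local_roughness_histogram(beat)
print(h.shape)
```

Because the codes depend only on local intensity orderings, the descriptor is insensitive to where the QRS complex falls, which is consistent with the advantage the abstract claims.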

Keywords: human identification, ECG biometrics, local roughness patterns, supervised classification

Procedia PDF Downloads 383
221 Characterization of Volatiles Botrytis cinerea in Blueberry Using Solid Phase Micro Extraction, Gas Chromatography Mass Spectrometry

Authors: Ahmed Auda, Manjree Agarwala, Giles Hardya, Yonglin Rena

Abstract:

Botrytis cinerea is a major pest of many plants. It can attack a wide range of plant parts: buds, flowers, leaves, stems, and fruit. However, B. cinerea can be confused with other diseases that cause the same damage. There are many species of Botrytis and more than one strain of each. Botrytis may infect the foliage of nursery stock stored through winter in damp conditions, and there are no known resistant plants. Botrytis must have a nutrient or food source before it infests the plant; nutrients leaking from wounded plant parts or dying tissue, such as old flower petals, provide the required nutrients. From this food base, the fungus becomes more aggressive and invades healthy tissue, and a dark to light brown rot forms in the diseased tissue. High-humidity conditions support the growth of this fungus. We suppose that selection pressure can act on the morphological and neurophysiological filter properties of the receiver and on both the biochemical and the physiological regulation of the signal. Communication is implied when signal and receiver evolve toward a more and more specific match. On the other hand, receivers may respond to portions of an odor bouquet that is released to the environment not as an (intentional) signal but as an unavoidable consequence of metabolic activity or tissue damage. Each year, Botrytis species can cause considerable economic losses to plant crops. Even with the application of strict quarantine and control measures, these fungi can still find their way into crops and cause the imposition of onerous restrictions on exports. Blueberry fruit mould caused by fungal infection usually results in major losses during post-harvest storage. Therefore, the management of infection in the early stages of disease development is necessary to minimize losses. The overall purpose of this study is to develop sensitive, cheap, quick and robust diagnostic techniques for the detection of B. cinerea in blueberry.
The specific aim was to investigate the performance of volatile organic compounds (VOCs) in the detection and discrimination of blueberry fruits infected by fungal pathogens, with an emphasis on Botrytis, in the early post-harvest storage stage.

Keywords: botrytis cinerea, blueberry, GC/MS, VOCs

Procedia PDF Downloads 219
220 Evaluation of Rhizobia for Nodulation, Shoot and Root Biomass from Host Range Studies Using Soybean, Common Bean, Bambara Groundnut and Mung Bean

Authors: Sharon K. Mahlangu, Mustapha Mohammed, Felix D. Dakora

Abstract:

Rural households in Africa depend largely on legumes as a source of high-protein food due to N₂ fixation by rhizobia when they infect plant roots. However, the legume/rhizobia symbiosis can exhibit some level of specificity, such that some legumes may be selectively nodulated by only a particular group of rhizobia. In contrast, some legumes are highly promiscuous and are nodulated by a wide range of rhizobia. Little is known about the nodulation promiscuity of bacterial symbionts from wild legumes such as Aspalathus linearis, especially whether they can nodulate cultivated grain legumes such as cowpea and Kersting's groundnut. Determining the host range of the symbionts of wild legumes can potentially reveal novel rhizobial strains that can be used to increase nitrogen fixation in cultivated legumes. In this study, bacteria were isolated and tested for their ability to induce root nodules on their homologous hosts. Seeds were surface-sterilized with alcohol and sodium hypochlorite and planted in sterile sand in plastic pots. The pot surface was covered with sterile non-absorbent cotton wool to avoid contamination. The plants were watered alternately with nitrogen-free nutrient solution and sterile water. Three replicate pots were used per isolate. The plants were grown for 90 days in a naturally lit glasshouse and assessed for nodulation (nodule number and nodule biomass) and shoot biomass. Seven isolates from each of Kersting's groundnut and cowpea and two from Rooibos tea plants were tested for their ability to nodulate soybean, mung bean, common bean and Bambara groundnut. The results showed that, of the cowpea isolates, VUSA55 and VUSA42 could nodulate all the test host plants, followed by VUSA48, which nodulated cowpea, Bambara groundnut and soybean. The two isolates from Rooibos tea plants nodulated Bambara groundnut, soybean and common bean; isolate L1R3.3.1 also nodulated mung bean.
There was a greater accumulation of shoot biomass when cowpea isolate VUSA55 nodulated common bean. Isolate VUSA55 produced the highest shoot biomass, followed by VUSA42 and VUSA48. The two Kersting's groundnut isolates, MGSA131 and MGSA110, produced average shoot biomass. The two Rooibos tea isolates induced the highest accumulation of biomass in Bambara groundnut, followed by common bean. The results suggest that inoculating these agriculturally important grain legumes with cowpea isolates can contribute to improved soil fertility, especially soil nitrogen levels.

Keywords: legumes, nitrogen fixation, nodulation, rhizobia

Procedia PDF Downloads 194
219 Grain Size Statistics and Depositional Pattern of the Ecca Group Sandstones, Karoo Supergroup in the Eastern Cape Province, South Africa

Authors: Christopher Baiyegunhi, Kuiwu Liu, Oswald Gwavava

Abstract:

Grain-size analysis is a vital sedimentological tool used to unravel the hydrodynamic conditions and the mode of transportation and deposition of detrital sediments. In this study, detailed grain-size analysis was carried out on thirty-five sandstone samples from the Ecca Group in the Eastern Cape Province of South Africa. Grain-size statistical parameters, bivariate analysis, linear discriminant functions, Passega diagrams and log-probability curves were used to reveal the depositional processes, sedimentation mechanisms and hydrodynamic energy conditions and to discriminate different depositional environments. The grain-size parameters show that most of the sandstones are very fine to fine grained, moderately well sorted, mostly near-symmetrical and mesokurtic in nature. The abundance of very fine to fine grained sandstones indicates the dominance of low-energy environments. The bivariate plots show that the samples are mostly grouped, except for the Prince Albert samples, which show a scattered trend due either to a mixture of two modes in equal proportion in bimodal sediments or to good sorting in unimodal sediments. The linear discriminant function (LDF) analysis is dominantly indicative of turbidity-current deposits in shallow marine environments for samples from the Prince Albert, Collingham and Ripon Formations, while the samples from the Fort Brown Formation are fluvial (deltaic) deposits. The graphic mean values show the dominance of fine sand-size particles, which points to relatively low-energy conditions of deposition. In addition, the LDF results point to low-energy conditions during the deposition of the Prince Albert, Collingham and part of the Ripon Formation (Pluto Vale and Wonderfontein Shale Members), whereas the Trumpeters Member of the Ripon Formation and the overlying Fort Brown Formation accumulated under high-energy conditions.
The CM pattern shows a clustered distribution of sediments in the PQ and QR segments, indicating that the sediments were deposited mostly by suspension, rolling/saltation, and graded suspension. The plots also show that the sediments were mainly deposited by turbidity currents. Visher diagrams show the variability of the hydraulic depositional conditions for the Permian Ecca Group sandstones. Saltation was the major transport process, although suspension and traction also played a role during deposition; the sediments were mainly in saltation and suspension before being deposited.
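
The graphic statistical parameters referred to above are conventionally the Folk and Ward (1957) measures computed from percentiles of the cumulative grain-size curve. A minimal sketch follows; the cumulative curve is hypothetical, not one of the Ecca samples:

```python
import numpy as np

def folk_ward_stats(phi, cum_percent):
    """Graphic mean and sorting (Folk & Ward, 1957) from a cumulative curve.

    `phi` are grain sizes in phi units and `cum_percent` the cumulative
    percentage finer at each size; both must be in increasing order.
    """
    def p(q):
        # phi value at cumulative percentile q, by linear interpolation
        return float(np.interp(q, cum_percent, phi))

    graphic_mean = (p(16) + p(50) + p(84)) / 3.0
    inclusive_sorting = (p(84) - p(16)) / 4.0 + (p(95) - p(5)) / 6.6
    return graphic_mean, inclusive_sorting

# Hypothetical cumulative curve of a fine-grained sandstone
phi = np.array([1.0, 2.0, 2.5, 3.0, 3.5, 4.0])
cum = np.array([2.0, 10.0, 30.0, 60.0, 85.0, 98.0])
m, s = folk_ward_stats(phi, cum)
print(round(m, 2), round(s, 2))
```

A graphic mean between 2 and 3 φ corresponds to fine sand, which is how numerical values such as these map onto the verbal classes (fine grained, moderately well sorted) quoted in the abstract.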

Keywords: grain size analysis, hydrodynamic condition, depositional environment, Ecca Group, South Africa

Procedia PDF Downloads 456
218 Product Life Cycle Assessment of Generatively Designed Furniture for Interiors Using Robot Based Additive Manufacturing

Authors: Andrew Fox, Qingping Yang, Yuanhong Zhao, Tao Zhang

Abstract:

Furniture is a very significant subdivision of architecture and its inherent interior design activities. The furniture industry has developed from an artisan-driven craft, whose forerunners saw themselves manifested in their crafts and treasured a sense of pride in the creativity of their designs, into what is these days largely an anonymous, collective, mass-produced output. Although it is a very conservative industry, there is great potential for the implementation of collaborative digital technologies, allowing a reconfigured artisan experience to be reawakened in a new and exciting form. The furniture manufacturing industry in general has been slow to adopt new design methodologies using artificial-intelligence and rule-based generative design. This tardiness has meant the loss of potential to enhance its capabilities in producing sustainable, flexible, and mass-customizable 'right first-time' designs. This paper aims to demonstrate a concept methodology for the creation of alternative and inspiring aesthetic structures for robot-based additive manufacturing (RBAM). These technologies enable the economic creation of previously unachievable structures, which traditionally would not have been commercially viable to manufacture. The integration of these technologies with the computing power of generative design provides tools for practitioners to create concepts well beyond the insight of even the most accomplished traditional design teams. This paper addresses the problem by introducing generative design methodologies employing the Autodesk Fusion 360 platform. Examination of alternative methods for its use has the potential to significantly reduce the estimated 80% of environmental impact that is committed at the initial design phase.
Though predominantly a design methodology, generative design combined with RBAM has the potential to leverage many lean manufacturing and quality assurance benefits, enhancing the efficiency and agility of modern furniture manufacturing. Through a case-study examination of a furniture artifact, the results will be compared to a traditionally designed and manufactured product using the Ecochain Mobius product life cycle assessment (LCA) platform. This will highlight the benefits of both generative design and robot-based additive manufacturing from the standpoints of environmental impact and manufacturing efficiency. These step changes in design methodology and environmental assessment have the potential to revolutionise the design-to-manufacturing workflow, giving momentum to the concept of a pre-industrial model of manufacturing, with the global demand for a circular economy and bespoke sustainable design at its heart.

Keywords: robot, manufacturing, generative design, sustainability, circular economy, product life cycle assessment, furniture

Procedia PDF Downloads 120
217 From Mimetic to Mnemonic: On the Simultaneous Rise of Language and Religion

Authors: Dmitry Usenco

Abstract:

The greatest paradox about the origin of language is the fact that, while language is always taught by adults to children, it can never be learnt properly unless its acquisition occurs during childhood. The question that naturally arises in that respect is: How could language be taught for the first time by a non-speaker, i.e., by someone who did not have the opportunity to master it as a child? Yet the above paradox appears less intractable if we hypothesise that language was originally introduced not as a means of communication but as a relatively modest training/playing technique used to develop the learners' mimetic skills. Its communicative and expressive properties could have been discovered and exploited later, upon the learners' reaching adulthood. The importance of mimesis in children's development is universally recognised. Its most common forms are onomatopoeia and mime, which consist in reproducing the sounds and imitating the shapes/movements of externally observed objects. However, in some cases, neither of these exercises is adequate to the task. An object, especially an inanimate one, may emit no characteristic sounds, making onomatopoeia problematic. In other cases, it may have no easily reproducible shape, while its movements may depend on the specific way of our interacting with it. On such occasions, onomatopoeia and mime can perhaps be supplemented, or even replaced, by movements of the tongue which can metonymically represent certain aspects of our interaction with the object. This is especially evident with consonants: e.g., a fricative sound can designate the subject's relatively slow approach to the object or vice versa, while a plosive one can express the relatively abrupt process of grabbing/sticking or parrying/bouncing.
From that point of view, a protoword can be regarded as a sophisticated gesture of the tongue, but also as a mnemonic sequence that contains encoded instructions about the way to handle the object. When this originally subjective link between the object and its mimetic/mnemonic representation eventually installs itself in the collective mind (however small the community might be at first), the initially nameless object acquires a name, and the first word is created. (Discussing the difference between proper and common names is beyond the scope of this paper.) In its very beginning, this word has two major applications. It can be used for interhuman communication because it allows us to invoke the presence of a currently absent object. It can also be used for designating, expressing, and memorising our interaction with the object itself. The first usage gives rise to language, the second to religion. By the act of naming, we attach to the object a mental ('spiritual') dimension which has an independent existence in our collective mind. By referring to the name (idea/demon/soul) of the object, we perform our first act of spirituality, our first religious observance. This is the beginning of animism, arguably the most ancient form of religion. To conclude: the rise of religion is simultaneous with the emergence of language in human evolution.

Keywords: language, religion, origin, acquisition, childhood, adulthood, play, representation, onomatopoeia, mime, gesture, consonant, simultaneity, spirituality, animism

Procedia PDF Downloads 56
216 Chemical Pollution of Water: Waste Water, Sewage Water, and Pollutant Water

Authors: Nabiyeva Jamala

Abstract:

According to its use and purpose, water is divided into drinking, mineral, industrial, technical and thermal-energetic types. Drinking water must comply with sanitary requirements and norms in its organoleptic, physical and chemical properties. Mineral water must comply with the norms for components having therapeutic properties. Industrial water must meet the normative requirements of the industrial field in which it is used. Technical water should be suitable for use in agriculture, household applications and irrigation, and must meet the corresponding normative requirements. Thermal-energetic water is used in the national economy and comprises thermal and energy water. Water is a filter and accumulator of all types of pollutants entering the environment; this is explained by its ability to dissolve mineral and gaseous compounds and by its regular circulation. Environmentally clean, pure, non-toxic water is vital for the normal life activity of humans, animals and other living beings. Chemical pollutants enter water basins mainly with wastewater from the non-ferrous and ferrous metallurgy, oil, gas, chemical, coal, pulp-and-paper and timber-processing industries and make them unusable. Wastewater from the chemical, electric power, woodworking and machine-building industries also plays a large role in the pollution of water sources. Chlorine compounds, phenols and chloride-containing substances have a strongly toxic, even lethal, effect on organisms when mixed with water. Heavy metals such as lead, cadmium, mercury, nickel, copper, selenium, chromium and tin cause poisoning in humans, animals and other living beings when mixed with water: selenium causes liver disease, mercury damages the nervous system, and cadmium causes kidney disease.
Pollution of the world's ocean waters and other water basins with oil and oil products is one of the most dangerous environmental problems facing humanity today. Mixing even the smallest amount of oil or its products into drinking water gives it a bad, unpleasant smell. One ton of oil spreading over water creates a layer that covers a water surface of about 2.6 km². As a result, light penetration, photosynthesis and the oxygen supply of the water are weakened, posing a great danger to the lives of living beings.

Keywords: chemical pollutants, wastewater, SSAM, polyacrylamide

Procedia PDF Downloads 51
215 Polymer Nanocomposite Containing Silver Nanoparticles for Wound Healing

Authors: Patrícia Severino, Luciana Nalone, Daniele Martins, Marco Chaud, Classius Ferreira, Cristiane Bani, Ricardo Albuquerque

Abstract:

Hydrogels produced with polymers have been used in the development of dressings for wound treatment and tissue revitalization. Our study of polymer nanocomposites containing silver nanoparticles shows antimicrobial activity and applications in wound healing. The effects are linked to slow oxidation and the liberation of Ag⁺ into the biological environment. Furthermore, penetration of the bacterial cell membrane and metabolic disruption through cell cycle disarrangement also contribute to microbial cell death. The antimicrobial activity of silver has been known for many years, and previous reports show that low silver concentrations are safe for human use. This work aims to develop a hydrogel using natural polymers (sodium alginate and gelatin) combined with silver nanoparticles for wound healing, with antimicrobial properties in cutaneous lesions. The hydrogel development utilized different sodium alginate to gelatin proportions (20:80, 50:50 and 80:20). The incorporation of silver nanoparticles was evaluated at concentrations of 1.0, 2.0 and 4.0 mM. The physico-chemical properties of the formulation were evaluated using ultraviolet-visible (UV-Vis) absorption spectroscopy, Fourier transform infrared (FTIR) spectroscopy, differential scanning calorimetry (DSC), and thermogravimetric (TG) analysis. The morphological characterization was made using transmission electron microscopy (TEM). A human fibroblast (L929) viability assay was performed, along with a minimum inhibitory concentration (MIC) assessment and an in vivo cicatrizant test. The UV-Vis results suggested that sodium alginate and gelatin in the 80:20 proportion with 4 mM AgNO₃ yielded the best hydrogel formulation. The nanoparticle absorption spectra showed a maximum band around 430-450 nm, which suggests a spheroidal form. The TG curve exhibited two weight-loss events. DSC indicated one endothermic peak at 230-250 °C, due to sample fusion.
The polymers acted as stabilizers of the nanoparticles, defining their size and shape. The human fibroblast (L929) viability assay gave 105% cell viability for the negative control, while gelatin presented 96% viability, alginate:gelatin (80:20) 96.66%, and alginate 100.33%. The sodium alginate:gelatin (80:20) formulation exhibited significant antimicrobial activity, with minimal bacterial growth at 1.06 mg·mL⁻¹ against Pseudomonas aeruginosa and 0.53 mg·mL⁻¹ against Staphylococcus aureus. The in vivo results showed a significant reduction in wound surface area: on the seventh day, the hydrogel-nanoparticle formulation had reduced the total injury area by 81.14%, while the control reached a 45.66% reduction. The results suggest that the silver-hydrogel nanoformulation has potential for wound dressing therapeutics.

Keywords: nanocomposite, wound healing, hydrogel, silver nanoparticle

Procedia PDF Downloads 85
214 Reduction of Residual Stress by Variothermal Processing and Validation via Birefringence Measurement Technique on Injection Molded Polycarbonate Samples

Authors: Christoph Lohr, Hanna Wund, Peter Elsner, Kay André Weidenmann

Abstract:

Injection molding is one of the most commonly used techniques in industrial polymer processing. In the conventional injection molding process, the liquid polymer is injected into the cavity of the mold, where it immediately starts hardening at the cooled walls. To compensate for the shrinkage, which is caused predominantly by this immediate cooling, holding pressure is applied. Throughout the process, residual stresses are produced by the temperature difference between the polymer melt and the injection mold and by the frozen-in orientation of the polymer chains caused by the high process pressures and injection speeds. These residual stresses often weaken or change the structural behavior of the parts or lead to deformation of components. One solution to reduce the residual stresses is variothermal processing: the mold is heated, e.g. near or above the glass transition temperature of the polymer, the polymer is injected, and before the mold is opened and the part ejected, the mold is cooled. For the next cycle, the mold is heated again and the procedure repeats. The rapid heating and cooling of the mold are realized indirectly by convection of heated and cooled liquid (here: water) pumped through fluid channels underneath the mold surface. In this paper, the influence of variothermal processing on residual stresses is analyzed with samples at a larger scale (500 mm x 250 mm x 4 mm). In addition, the influence of functional elements, such as abrupt changes in wall thickness, bosses, and ribs, on the residual stress is examined. The polycarbonate samples are therefore produced by both variothermal and isothermal processing. The melt is injected into a heated mold, whose temperature in our case varies between 70 °C and 160 °C. After the filling of the cavity, the closed mold is cooled down to between 70 °C and 100 °C. The pressure and temperature inside the mold are monitored and evaluated with cavity sensors.
The residual stresses of the produced samples are visualized by birefringence, exploiting the effect of stress on the refractive index of the polymer. The colorful spectrum is revealed by placing the sample between a polarized light source and a second polarization filter. To show the effect of the processing on the reduction of residual stress, the birefringence images of the isothermally and variothermally produced samples are compared and evaluated. In this comparison, the variothermally produced samples show fewer maxima of each color spectrum than the isothermally produced samples, indicating that the residual stress of the variothermally produced samples is lower.

Keywords: birefringence, injection molding, polycarbonate, residual stress, variothermal processing

Procedia PDF Downloads 261
213 An Optimal Control Method for Reconstruction of Topography in Dam-Break Flows

Authors: Alia Alghosoun, Nabil El Moçayd, Mohammed Seaid

Abstract:

Modeling dam-break flows over non-flat beds requires an accurate representation of the topography, which is the main source of uncertainty in the model. Therefore, developing robust and accurate techniques for reconstructing topography in this class of problems would reduce the uncertainty in the flow system. In many hydraulic applications, experimental techniques have been widely used to measure the bed topography. In practice, experimental work in hydraulics can be very demanding in both time and cost. Meanwhile, computational hydraulics has served as an alternative to laboratory and field experiments. Unlike the forward problem, the inverse problem is used to identify the bed parameters from given experimental data. In this case, the shallow water equations used for modeling the hydraulics need to be rearranged so that the model parameters can be evaluated from measured data. However, this approach is not always possible, and it suffers from stability restrictions. In the present work, we propose an adaptive optimal control technique to numerically identify the underlying bed topography from a given set of free-surface observation data. In this approach, a minimization function is defined to iteratively determine the model parameters. The proposed technique can be interpreted as a fractional-step scheme. In the first stage, the forward problem is solved to determine the measurable parameters from known data. In the second stage, an adaptive-control ensemble Kalman filter is implemented to assimilate the observation data and obtain an accurate estimate of the topography. The main features of this method are, on the one hand, the ability to handle different complex geometries with no need to rearrange the original model into an explicit form, and on the other hand, strong stability for simulations of flows in different regimes containing shocks or discontinuities over any geometry.
Numerical results are presented for a dam-break flow problem over a non-flat bed using different solvers for the shallow water equations. The robustness of the proposed method is investigated using different numbers of loops, sensitivity parameters, initial samples, and locations of observations. The obtained results demonstrate the high reliability and accuracy of the proposed techniques.
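The analysis step of the second stage can be sketched in a few lines. The following is a minimal, generic NumPy illustration of an ensemble Kalman filter update, not the authors' solver: the forward model here is a stand-in linear map, and all names (`enkf_update`, `observe`) are invented for the sketch. In a real application, the observation operator would run a shallow-water solver over the candidate bed and return the predicted free-surface elevations.

```python
import numpy as np

def enkf_update(ensemble, observe, y_obs, obs_noise_std, rng):
    """One ensemble Kalman filter analysis step.

    ensemble: (n_ens, n_param) array of candidate bed-topography parameters
    observe:  maps one parameter vector to predicted free-surface data
    y_obs:    (n_obs,) measured free-surface elevations
    """
    n_ens = ensemble.shape[0]
    # Forward-model predictions for every ensemble member
    preds = np.array([observe(m) for m in ensemble])        # (n_ens, n_obs)

    A = ensemble - ensemble.mean(axis=0)                    # parameter anomalies
    Y = preds - preds.mean(axis=0)                          # prediction anomalies

    # Ensemble cross- and auto-covariances, plus observation noise
    C_my = A.T @ Y / (n_ens - 1)
    C_yy = Y.T @ Y / (n_ens - 1) + obs_noise_std**2 * np.eye(len(y_obs))
    K = C_my @ np.linalg.inv(C_yy)                          # Kalman gain

    # Update each member against a perturbed copy of the observations
    perturbed = y_obs + rng.normal(0.0, obs_noise_std, size=(n_ens, len(y_obs)))
    return ensemble + (perturbed - preds) @ K.T

# Toy demo: recover a 3-parameter "bed" from noisy linear observations
rng = np.random.default_rng(0)
true_bed = np.array([0.5, 1.0, 0.2])
H = rng.normal(size=(6, 3))          # stand-in for the shallow-water forward model
y = H @ true_bed
ens = rng.normal(size=(200, 3))      # initial parameter samples
for _ in range(20):                  # iterate the analysis step
    ens = enkf_update(ens, lambda m: H @ m, y, 0.01, rng)
print(np.round(ens.mean(axis=0), 2))
```

The ensemble mean converges toward the true bed parameters while the ensemble spread shrinks, which is what makes the approach attractive for nonlinear solvers: no rearrangement of the forward model is ever needed.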

Keywords: erodible beds, finite element method, finite volume method, nonlinear elasticity, shallow water equations, stresses in soil

Procedia PDF Downloads 113
212 Extraction of Scandium (Sc) from an Ore with Functionalized Nanoporous Silicon Adsorbent

Authors: Arezoo Rahmani, Rinez Thapa, Juha-Matti Aalto, Petri Turhanen, Jouko Vepsalainen, Vesa-Pekka Lehto, Joakim Riikonen

Abstract:

Production of scandium (Sc) is a complicated process because Sc is found only in low concentrations in ores, far lower than those of other metals. Therefore, the utilization of typical extraction processes such as solvent extraction is problematic for scandium. Adsorption/desorption methods can be used, but it is challenging to prepare materials with good selectivity, high adsorption capacity, and high stability. Efficient and environmentally friendly methods for Sc extraction are therefore needed. In this study, a nanoporous composite material was developed for extracting Sc from an Sc ore. The nanoporous composite offers several advantageous properties: a large surface area, high chemical and mechanical stability, fast diffusion of metals in the material, and the possibility of constructing a filter out of the material with good flow-through properties. The nanoporous silicon material was produced by first stabilizing the surfaces with a silicon carbide layer and then functionalizing them with bisphosphonates that act as metal chelators. The surface area and porosity of the material were characterized by N₂ adsorption, and the morphology was studied by scanning electron microscopy (SEM). The bisphosphonate content of the material was studied by thermogravimetric analysis (TGA). The concentration of metal ions in the adsorption/desorption experiments was measured with inductively coupled plasma mass spectrometry (ICP-MS). The maximum capacity of the material, obtained from the adsorption isotherm, was 25 µmol/g Sc at pH 1 and 45 µmol/g Sc at pH 3. The selectivity of the material towards Sc in artificial solutions containing several metal ions was studied at pH 1 and pH 3. The results show good selectivity of the nanoporous composite towards adsorption of Sc.
Scandium was adsorbed less efficiently from the solution leached from the Sc ore because of excessive amounts of iron (Fe), aluminum (Al) and titanium (Ti), which disturbed the adsorption process. For example, the concentration of Fe was more than 4500 ppm, while the concentration of Sc was only 3 ppm, approximately 1500 times lower. Precipitation methods were developed to lower the concentrations of the metals other than Sc. The optimal pH for precipitation was found to be pH 4: the concentrations of Fe, Al and Ti were decreased by 99%, 70% and 99.6%, respectively, while the concentration of Sc decreased by only 22%. Despite the large reduction in the concentration of other metals, more work is needed to further increase the relative concentration of Sc compared with other metals so that it can be efficiently extracted using the developed nanoporous composite material. Nevertheless, the developed material may provide an affordable, efficient and environmentally friendly method to extract Sc on a large scale.
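Maximum capacities such as the 45 µmol/g value quoted above are typically extracted by fitting an adsorption isotherm to equilibrium data. The abstract does not state which isotherm model was used, so the following is only a hedged illustration of the common linearized Langmuir fit: the equilibrium concentrations and the affinity constant `K_L` are invented for the example, with only the pH 3 capacity taken from the text.

```python
import numpy as np

# Hypothetical equilibrium data (Ce in mM, qe in umol/g), illustrative only,
# not the measured values from the study.
Ce = np.array([0.05, 0.1, 0.2, 0.5, 1.0, 2.0])
q_max_true, K_L = 45.0, 8.0                      # 45 umol/g from the pH 3 result
qe = q_max_true * K_L * Ce / (1.0 + K_L * Ce)    # ideal Langmuir response

# Linearized Langmuir: Ce/qe = Ce/q_max + 1/(K_L * q_max),
# so a straight-line fit of Ce/qe against Ce yields both parameters.
slope, intercept = np.polyfit(Ce, Ce / qe, 1)
q_max_fit = 1.0 / slope
K_L_fit = slope / intercept
print(round(q_max_fit, 1), round(K_L_fit, 1))    # recovers 45.0 and 8.0
```

On real data the points scatter around the line, and the fitted `q_max` is the plateau capacity reported as the maximum adsorption capacity.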

Keywords: adsorption, nanoporous silicon, ore solution, scandium

Procedia PDF Downloads 124
211 The Design of a Computer Simulator to Emulate Pathology Laboratories: A Model for Optimising Clinical Workflows

Authors: M. Patterson, R. Bond, K. Cowan, M. Mulvenna, C. Reid, F. McMahon, P. McGowan, H. Cormican

Abstract:

This paper outlines the design of a simulator that allows the optimisation of clinical workflows through a pathology laboratory and improves the laboratory's efficiency in the processing, testing, and analysis of specimens. Pathologists often have difficulty pinpointing and anticipating issues in the clinical workflow until tests are running late or in error; it can be difficult to pinpoint the cause and even more difficult to predict issues before they arise. For example, they often have no indication of how many samples are going to be delivered to the laboratory that day or at a given hour. If scenarios could be modelled using past information and known variables, it would be possible for pathology laboratories to initiate resource preparations, e.g. printing specimen labels or assigning a sufficient number of technicians. This would expedite the clinical workload and processes and improve the overall efficiency of the laboratory. The simulator design visualises the workflow of the laboratory, i.e. the clinical tests being ordered, the specimens arriving, current tests being performed, results being validated and reports being issued. The simulator depicts the movement of specimens through this process, as well as the number of specimens at each stage. This movement is visualised using an animated flow diagram that is updated in real time. A traffic-light colour-coding system is used to indicate the level of flow through each stage (green for normal flow, orange for slow flow, and red for critical flow). This allows pathologists to see clearly where there are issues and bottlenecks in the process. Graphs are also used to indicate the status of specimens at each stage of the process; for example, a graph could show the percentage of specimen tests that are on time, potentially late, running late and in error.
Clicking on potentially late samples displays more detailed information about those samples, the tests that still need to be performed on them and their urgency level, allowing any issues to be resolved quickly. In the case of potentially late samples, this could help to ensure that critically needed results are delivered on time. The simulator will be created as a single-page web application. Various web technologies will be used to create the flow diagram showing the workflow of the laboratory. JavaScript will be used to program the logic, animate the movement of samples through each of the stages and generate the status graphs in real time. This live information will be extracted from an Oracle database. As well as being used in a real laboratory situation, the simulator could also be used for training purposes: 'bots' would control the flow of specimens through each step of the process. Like existing software-agent technology, these bots would be configurable in order to simulate different situations that may arise in a laboratory, such as an emerging epidemic. The bots could then be turned on and off to allow trainees to complete the tasks required at each step of the process, for example validating test results.
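The traffic-light coding described above maps each stage's throughput to a colour. The paper implements this in JavaScript inside the web application; the following Python sketch only illustrates the classification logic, and the numeric thresholds are assumptions, since the abstract does not specify cut-offs.

```python
from dataclasses import dataclass

# Illustrative thresholds; the paper does not give numeric cut-offs,
# so these ratios are assumptions for the sketch.
SLOW_RATIO, CRITICAL_RATIO = 0.8, 0.5

@dataclass
class Stage:
    name: str
    processed: int   # specimens completed in the current window
    expected: int    # specimens the stage should have completed

def traffic_light(stage: Stage) -> str:
    """Map a workflow stage's throughput to the simulator's colour code."""
    if stage.expected == 0:
        return "green"
    ratio = stage.processed / stage.expected
    if ratio >= SLOW_RATIO:
        return "green"      # normal flow
    if ratio >= CRITICAL_RATIO:
        return "orange"     # slow flow
    return "red"            # critical flow: a bottleneck worth inspecting

stages = [Stage("reception", 95, 100), Stage("testing", 60, 100),
          Stage("validation", 30, 100)]
print([(s.name, traffic_light(s)) for s in stages])
```

In the real simulator, the same classification would be recomputed on each real-time update and drive the colour of the corresponding node in the animated flow diagram.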

Keywords: laboratory-process, optimization, pathology, computer simulation, workflow

Procedia PDF Downloads 263
210 Hardware Implementation on Field Programmable Gate Array of Two-Stage Algorithm for Rough Set Reduct Generation

Authors: Tomasz Grzes, Maciej Kopczynski, Jaroslaw Stepaniuk

Abstract:

The rough sets theory developed by Prof. Z. Pawlak is one of the tools that can be used in intelligent systems for data analysis and processing. Banking, medicine, image recognition and security are among the possible fields of utilization. In all these fields, the amount of collected data is increasing quickly, and with this increase, computation speed becomes the critical factor. Data reduction is one of the solutions to this problem, and removing redundancy in rough sets can be achieved with the reduct. Many algorithms for generating the reduct have been developed, but most of them are software-only implementations and therefore have many limitations: a microprocessor uses a fixed word length and consumes a lot of time for both fetching and processing instructions and data, so software-based implementations are relatively slow. Hardware systems do not have these limitations and can process the data faster than software. A reduct is a subset of the condition attributes that preserves the discernibility of the objects; for a given decision table there can be more than one reduct. The core is the set of all indispensable condition attributes: none of its elements can be removed without affecting the classification power of the full set of condition attributes, and every reduct contains all the attributes from the core. In this paper, a hardware implementation of a two-stage greedy algorithm to find one reduct is presented. The decision table is used as input; the output of the algorithm is a superreduct, i.e. a reduct with some additional removable attributes. The first stage of the algorithm calculates the core using the discernibility matrix. The second stage generates the superreduct by enriching the core with the most common attributes, i.e., attributes that are most frequent in the decision table.
The algorithm described above has two disadvantages: i) it generates a superreduct instead of a reduct, and ii) the additional first stage may be unnecessary if the core is empty. For systems focused on fast computation of the reduct, however, the first disadvantage is not the key problem. The core calculation can be achieved with a combinational logic block and thus adds relatively little time to the whole process. The algorithm presented in this paper was implemented on a Field Programmable Gate Array (FPGA) as a digital device consisting of blocks that process the data in a single step. Calculating the core is done by comparators connected to a block called a 'singleton detector', which detects whether the input word contains only a single 'one'. Calculating the number of occurrences of an attribute is performed in a combinational block made up of a cascade of adders. The superreduct generation process is iterative and thus needs a sequential circuit to control the calculations. For research purposes, the algorithm was also implemented in the C language and run on a PC, and the execution times of the reduct calculation in hardware and software were compared. The results show an increase in the speed of data processing.
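For readers unfamiliar with the two-stage procedure, a software sketch (analogous in spirit to the authors' C reference implementation, though not taken from it) makes the logic concrete: the core is read off the singleton entries of the discernibility matrix, and the superreduct is grown greedily by attribute frequency until every discernibility entry is covered.

```python
from collections import Counter

def discernibility(table, decision):
    """Entries of the discernibility matrix: for each pair of objects with
    different decisions, the set of condition attributes that tell them apart."""
    entries = []
    n = len(table)
    for i in range(n):
        for j in range(i + 1, n):
            if decision[i] != decision[j]:
                diff = {a for a in range(len(table[i]))
                        if table[i][a] != table[j][a]}
                if diff:
                    entries.append(diff)
    return entries

def two_stage_superreduct(table, decision):
    entries = discernibility(table, decision)
    # Stage 1: the core is the union of all singleton entries
    # (the role of the hardware 'singleton detector')
    core = {next(iter(e)) for e in entries if len(e) == 1}
    # Stage 2: greedily add the most frequent attribute among the
    # still-uncovered entries (the role of the adder cascade)
    reduct = set(core)
    uncovered = [e for e in entries if not (e & reduct)]
    while uncovered:
        freq = Counter(a for e in uncovered for a in e)
        reduct.add(freq.most_common(1)[0][0])
        uncovered = [e for e in uncovered if not (e & reduct)]
    return core, reduct

# Toy decision table: 3 condition attributes per object, one decision column
table = [(0, 0, 1), (0, 1, 1), (1, 0, 0), (1, 1, 0)]
decision = [0, 1, 0, 1]
core, reduct = two_stage_superreduct(table, decision)
print(sorted(core), sorted(reduct))   # attribute 1 alone discerns all pairs
```

The FPGA version evaluates each of these steps in parallel combinational logic rather than nested loops, which is where the reported speed-up comes from.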

Keywords: data reduction, digital systems design, field programmable gate array (FPGA), reduct, rough set

Procedia PDF Downloads 196
209 Advancing Food System Resilience by Pseudocereals Utilization

Authors: Yevheniia Varyvoda, Douglas Taren

Abstract:

At the aggregate level, climate variability, the rising number of active violent conflicts, the globalization and industrialization of agriculture, the loss of diversity in crop species, the increase in demand for agricultural production, and the adoption of healthy and sustainable dietary patterns are exacerbating factors of food system destabilization. The importance of pseudocereals in fueling and sustaining resilient food systems is recognized by leading organizations working to end hunger, particularly for their critical capability to diversify livelihood portfolios and provide plant-sourced healthy nutrition in the face of systemic shocks and stresses. Amaranth, buckwheat, and quinoa are the most promising and widely used pseudocereals for ensuring food system resilience under climate change due to their high nutritional profile, good digestibility, palatability, medicinal value, abiotic stress tolerance, pest and disease resistance, rapid growth rate, adaptability to marginal and degraded lands, high genetic variability, low input requirements, and income generation capacity. The study provides the rationale and examples for advancing the resilience of local and regional food systems by scaling up the utilization of amaranth, buckwheat, and quinoa along all components of food systems to architect indirect nutrition interventions and climate-smart approaches. Thus, this study aims to explore the drivers of ancient pseudocereal utilization, the potential resilience benefits that can be derived from using them, and the challenges and opportunities for pseudocereal utilization within the food system components. The PSALSAR framework for conducting systematic reviews and meta-analyses in environmental science research was used to answer these research questions.
Nevertheless, the utilization of pseudocereals has been slow for a number of reasons: the increased production of commercial major staples such as maize, rice, wheat, soybean, and potato; displacement due to pressure from imported crops; lack of knowledge about value-adding practices in the food supply chain; limited technical knowledge and awareness of nutritional and health benefits; absence of marketing channels; and limited access to extension services and information about resilient crops. The success of climate-resilient pathways based on pseudocereal utilization underlines the importance of co-designed activities that use modern technologies, high-value traditional knowledge of underutilized crops, and a strong acknowledgment of cultural norms to increase community-level economic and food system resilience.

Keywords: resilience, pseudocereals, food system, climate change

Procedia PDF Downloads 59
208 Indirect Genotoxicity of Diesel Engine Emission: An in vivo Study Under Controlled Conditions

Authors: Y. Landkocz, P. Gosset, A. Héliot, C. Corbière, C. Vendeville, V. Keravec, S. Billet, A. Verdin, C. Monteil, D. Préterre, J-P. Morin, F. Sichel, T. Douki, P. J. Martin

Abstract:

Air pollution produced by automobile traffic is one of the main sources of pollutants in the urban atmosphere and is largely due to the exhausts of diesel-powered vehicles. In 2012, the International Agency for Research on Cancer, which is part of the World Health Organization, classified diesel engine exhaust as carcinogenic to humans (Group 1), based on sufficient evidence that exposure is associated with an increased risk of lung cancer. Among the strategies aimed at limiting exhausts in order to address the health impact of automobile pollution, filtration of the emissions and the use of biofuels are being developed, but their toxicological impact is largely unknown. Diesel exhausts are complex mixtures of toxic substances that are difficult to study from a toxicological point of view, owing to the necessary characterization of the pollutants, sampling difficulties, potential synergy between the compounds, and the wide variety of biological effects. Here, we studied the potential indirect genotoxicity of diesel engine emissions through on-line exposure of rats in inhalation chambers to a subchronic, high but realistic dose. Following exposure to standard gasoil, with or without rapeseed methyl ester, sampled either upstream or downstream of a particle filter, or to a control treatment, the rats were sacrificed and their lungs collected. The following indirect genotoxic parameters were measured: (i) telomerase activity and telomere length, associated with rTERT and rTERC gene expression by RT-qPCR, on frozen lungs; (ii) γH2AX quantification, representing double-strand DNA breaks, by immunohistochemistry on formalin-fixed, paraffin-embedded (FFPE) lung samples.
These preliminary results will then be associated with the global cellular response analyzed by pan-genomic microarrays, monitoring of oxidative stress, and quantification of primary DNA lesions, in order to identify biological markers associated with a potential pro-carcinogenic response to diesel or biodiesel, with or without particle filters, in a relevant in vivo exposure system.

Keywords: diesel exhaust exposed rats, γH2AX, indirect genotoxicity, lung carcinogenicity, telomerase activity, telomeres length

Procedia PDF Downloads 372
207 Metabolic Profiling in Breast Cancer Applying Micro-Sampling of Biological Fluids and Analysis by Gas Chromatography – Mass Spectrometry

Authors: Mónica P. Cala, Juan S. Carreño, Roland J.W. Meesters

Abstract:

Recently, collection of biological fluids on special filter papers has become a popular micro-sampling technique. In particular, the dried blood spot (DBS) micro-sampling technique has gained much attention and is currently applied in various life science research areas. As a result of this popularity, DBS not only competes intensively with venous blood sampling but is now widely applied in numerous bioanalytical assays, in particular in the screening of inherited metabolic diseases, pharmacokinetic modeling, and therapeutic drug monitoring. Recently, micro-sampling techniques were also introduced into "omics" areas, including metabolomics. For a metabolic profiling study, we applied micro-sampling of biological fluids (blood and plasma) from healthy controls and from women with breast cancer. From blood samples, dried blood and plasma samples were prepared by spotting 8 µL of sample onto pre-cut 5-mm paper disks, followed by drying of the disks for 100 minutes. The dried disks were then extracted with 100 µL of methanol. From liquid blood and plasma samples, 40 µL were deproteinized with methanol, followed by centrifugation and collection of the supernatants. Supernatants and extracts were evaporated to dryness under nitrogen gas, and the residues were derivatized with O-methoxyamine and MSTFA. The C17:0 methyl ester in heptane (10 ppm) was used as internal standard. Deconvolution and alignment of full-scan (m/z 50-500) MS data were done with AMDIS and SpectConnect (http://spectconnect.mit.edu) software, respectively. Statistical data analysis was done by principal component analysis (PCA) using R software. The results obtained from our preliminary study indicate that the use of dried blood/plasma on paper disks could be a powerful new tool in metabolic profiling. Many of the metabolites observed in plasma (liquid/dried) were also positively identified in whole blood samples (liquid/dried).
Whole blood could be a potential substitute matrix for plasma in metabolomic profiling studies, and micro-sampling techniques could likewise serve for the collection of samples in clinical studies. It was concluded that the separation of the different sample methodologies (liquid vs. dried) observed by PCA was due to the different sample treatment protocols applied. More experiments need to be done to confirm these observations, and a more rigorous validation of these micro-sampling techniques is needed. The novelty of our approach lies in the application of different biological-fluid micro-sampling techniques for metabolic profiling.
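
The PCA step described above can be sketched as follows. This is a minimal illustration in Python with numpy (the study itself used R), on a made-up intensity matrix: the sample labels, feature values, and clustering outcome are hypothetical, chosen only to show how samples with similar treatment protocols separate in PC space.

```python
import numpy as np

# Hypothetical intensity matrix: rows = samples (different micro-sampling
# preparations), columns = aligned GC-MS metabolite features.
X = np.array([
    [10.2, 5.1, 3.3, 8.0],   # liquid blood
    [10.0, 5.3, 3.1, 8.2],   # dried blood spot
    [ 4.1, 9.8, 7.2, 2.9],   # liquid plasma
    [ 4.3, 9.5, 7.0, 3.1],   # dried plasma spot
])

# PCA via singular value decomposition of the mean-centered matrix.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * s                       # sample coordinates (PC scores)
explained = s**2 / np.sum(s**2)     # variance explained per component

# Samples prepared with similar protocols land close together on PC1,
# which is how a separation of methodologies shows up in a PCA plot.
print(np.round(scores[:, :2], 2))
print(np.round(explained[:2], 3))
```

In this toy matrix the two blood rows project to one side of PC1 and the two plasma rows to the other, with PC1 carrying nearly all the variance.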

Keywords: biofluids, breast cancer, metabolic profiling, micro-sampling

Procedia PDF Downloads 391
206 Leadership and Entrepreneurship in Higher Education: Fostering Innovation and Sustainability

Authors: Naziema Begum Jappie

Abstract:

Leadership and entrepreneurship in higher education have become critical components in navigating the evolving landscape of academia in the 21st century. This abstract explores the multifaceted relationship between leadership and entrepreneurship within the realm of higher education, emphasizing their roles in fostering innovation and sustainability. Higher education institutions, often characterized as slow-moving and resistant to change, are facing unprecedented challenges. Globalization, rapid technological advancements, changing student demographics, and financial constraints necessitate a reimagining of traditional models. Leadership in higher education must embrace entrepreneurial thinking to effectively address these challenges. Entrepreneurship in higher education involves cultivating a culture of innovation, risk-taking, and adaptability. Visionary leaders who promote entrepreneurship within their institutions empower faculty and staff to think creatively, seek new opportunities, and engage with external partners. These entrepreneurial efforts lead to the development of novel programs, research initiatives, and sustainable revenue streams. Innovation in curriculum and pedagogy is a central aspect of leadership and entrepreneurship in higher education. Forward-thinking leaders encourage faculty to experiment with teaching methods and technology, fostering a dynamic learning environment that prepares students for an ever-changing job market. Entrepreneurial leadership also facilitates the creation of interdisciplinary programs that address emerging fields and societal challenges. Collaboration is key to entrepreneurship in higher education. Leaders must establish partnerships with industry, government, and non-profit organizations to enhance research opportunities, secure funding, and provide real-world experiences for students. 
Entrepreneurial leaders leverage their institutions' resources to build networks that extend beyond campus boundaries, strengthening their positions in the global knowledge economy. Financial sustainability is a pressing concern for higher education institutions. Entrepreneurial leadership involves diversifying revenue streams through innovative fundraising campaigns, partnerships, and alternative educational models. Leaders who embrace entrepreneurship are better equipped to navigate budget constraints and ensure the long-term viability of their institutions. In conclusion, leadership and entrepreneurship are intertwined elements essential to the continued relevance and success of higher education institutions. Visionary leaders who champion entrepreneurship foster innovation, enhance the student experience, and secure the financial future of their institutions. As academia continues to evolve, leadership and entrepreneurship will remain indispensable tools in shaping the future of higher education. This abstract underscores the importance of these concepts and their potential to drive positive change within the higher education landscape.

Keywords: entrepreneurship, higher education, innovation, leadership

Procedia PDF Downloads 47
205 Terrestrial Laser Scans to Assess Aerial LiDAR Data

Authors: J. F. Reinoso-Gordo, F. J. Ariza-López, A. Mozas-Calvache, J. L. García-Balboa, S. Eddargani

Abstract:

The quality of DEMs may depend on several factors, such as the data source, the capture method, the type of processing used to derive them, or the cell size of the DEM. The two most important capture methods for producing regional-sized DEMs are photogrammetry and LiDAR; DEMs covering entire countries have been obtained with these methods. The quality of these DEMs has traditionally been evaluated by the national cartographic agencies through punctual sampling focused on the vertical component. For this type of evaluation there are standards such as NMAS and the ASPRS Positional Accuracy Standards for Digital Geospatial Data. However, it seems more appropriate to carry out this evaluation by means of a method that takes into account the superficial nature of the DEM, so that its sampling is superficial rather than punctual. This work is part of the research project "Functional Quality of Digital Elevation Models in Engineering", in which it is necessary to control the quality of a DEM whose data source is an experimental LiDAR flight with a density of 14 points per square meter, which we call the Point Cloud Product (PCpro). The present work describes the data capture on the ground and the post-processing tasks up to obtaining the point cloud that will be used as reference (PCref) to evaluate the quality of the PCpro. Each PCref consists of a 50 x 50 m patch obtained by registering 4 different scan stations. The area studied was the Spanish region of Navarra, which covers an area of 10,391 km2; 30 homogeneously distributed patches were necessary to sample the entire surface. The patches were captured using a Leica BLK360 terrestrial laser scanner mounted on a pole that reached heights of up to 7 meters; the scanner was mounted in an inverted position so that the characteristic shadow circle that appears when the scanner is in its direct position does not exist. 
To ensure that the accuracy of the PCref is greater than that of the PCpro, the georeferencing of the PCref was carried out with real-time GNSS, and its positioning accuracy was better than 4 cm; this is much better than the altimetric mean square error estimated for the PCpro (<15 cm). The DEM of interest is the one corresponding to the bare earth, so it was necessary to apply a filter to eliminate vegetation and auxiliary elements such as poles, tripods, etc. After the post-processing tasks, the PCref is ready to be compared with the PCpro using different techniques: cloud to cloud, or DEM to DEM after a resampling process.
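
The surface-based (DEM-to-DEM) comparison mentioned above can be sketched as follows: after resampling both point clouds to a common grid, vertical differences are evaluated over every cell of the patch rather than at a handful of control points. The grids and elevation values below are hypothetical illustration data, not measurements from the Navarra patches; only the 15 cm figure is taken from the text.

```python
import math

# Hypothetical 3x3 elevation grids (metres): a reference DEM derived from
# the terrestrial-laser patch (PCref) and the product DEM from the LiDAR
# flight (PCpro), resampled to the same cells.
dem_ref = [[100.00, 100.10, 100.22],
           [100.05, 100.15, 100.30],
           [100.12, 100.20, 100.35]]
dem_pro = [[100.04, 100.06, 100.30],
           [100.00, 100.22, 100.25],
           [100.18, 100.14, 100.41]]

# Per-cell vertical differences over the whole patch surface.
diffs = [p - r for row_r, row_p in zip(dem_ref, dem_pro)
         for r, p in zip(row_r, row_p)]
rmse = math.sqrt(sum(d * d for d in diffs) / len(diffs))
bias = sum(diffs) / len(diffs)

print(f"vertical RMSE = {rmse:.3f} m, mean bias = {bias:+.3f} m")
assert rmse < 0.15  # the altimetric error budget quoted for the PCpro
```

The same differencing could be applied cloud to cloud by pairing each PCpro point with its nearest PCref neighbour instead of gridding first.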

Keywords: data quality, DEM, LiDAR, terrestrial laser scanner, accuracy

Procedia PDF Downloads 83
204 Multi-Objective Optimization (Pareto Sets) and Multi-Response Optimization (Desirability Function) of Microencapsulation of Emamectin

Authors: Victoria Molina, Wendy Franco, Sergio Benavides, José M. Troncoso, Ricardo Luna, José R. Pérez-Correa

Abstract:

Emamectin benzoate (EB) is a crystalline antiparasitic that belongs to the avermectin family. It is one of the most common treatments used in Chile to control Caligus rogercresseyi in Atlantic salmon. However, sea lice acquire resistance to EB when exposed to sublethal doses. The low solubility rate of EB and its degradation at the acidic pH of the fish digestive tract are the causes of the slow absorption of EB in the intestine. To protect EB from degradation and enhance its absorption, specific microencapsulation technologies must be developed. Amorphous solid dispersion techniques such as spray drying (SD) and ionic gelation (IG) seem adequate for this purpose. Recently, Soluplus® (SOL) has been used to increase the solubility rate of several drugs with characteristics similar to those of EB. In addition, alginate (ALG) is a polymer widely used in IG for biomedical applications. Regardless of the encapsulation technique, the quality of the obtained microparticles is evaluated with the following responses: yield (Y%), encapsulation efficiency (EE%), and loading capacity (LC%). It is also important to know the percentage of EB released from the microparticles in gastric (GD%) and intestinal (ID%) digestions. In this work, we microencapsulated EB with SOL (EB-SD) and with ALG (EB-IG) using SD and IG, respectively. Quality microencapsulation responses and in vitro gastric and intestinal digestions at pH 3.35 and 7.8, respectively, were obtained. A central composite design was used to find the optimum microencapsulation variables (amount of EB, amount of polymer, and feed flow). In each formulation, the behavior of these variables was predicted with statistical models. Then, response surface methodology was used to find the combination of factors that allowed a lower EB release under gastric conditions while permitting a greater release during intestinal digestion. Two approaches were used to determine this. 
These were the desirability approach (DA) and multi-objective optimization (MOO) with multi-criteria decision making (MCDM). Both microencapsulation techniques allowed the integrity of EB to be maintained at acidic pH, given the small amount of EB released in the gastric medium, while EB-IG microparticles showed greater EB release during intestinal digestion. For EB-SD, the optimal conditions obtained with MOO plus MCDM yielded a good compromise among the microencapsulation responses. In addition, using these conditions, it is possible to reduce microparticle costs thanks to a 60% reduction in EB with respect to the optimal amount proposed by DA. For EB-IG, the optimization techniques used (DA and MOO) yielded solutions with different advantages and limitations. Applying DA, costs can be reduced by 21%, while Y, GD, and ID showed values 9.5%, 84.8%, and 2.6% lower than the best condition. In turn, MOO yielded better microencapsulation responses, but at a higher cost. Overall, EB-SD with the operating conditions selected by MOO seems the best option, since a good compromise between costs and encapsulation responses was obtained.
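
The desirability approach combines competing responses into a single score to be maximized. A minimal sketch of Derringer-type desirability functions is given below; the acceptance ranges and measured response values are hypothetical illustrations, not the study's data, and the real analysis would evaluate this score over the fitted response-surface models.

```python
# Derringer-type individual desirabilities, each scaled to [0, 1].
def d_max(y, lo, hi):
    """Desirability for a response to maximize (e.g. Y%, EE%, ID%)."""
    if y <= lo: return 0.0
    if y >= hi: return 1.0
    return (y - lo) / (hi - lo)

def d_min(y, lo, hi):
    """Desirability for a response to minimize (e.g. GD%)."""
    if y <= lo: return 1.0
    if y >= hi: return 0.0
    return (hi - y) / (hi - lo)

def overall(ds):
    """Composite desirability: geometric mean of the individual ones."""
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))

# Hypothetical formulation: high yield, low gastric release, high
# intestinal release (ranges are illustrative acceptance limits).
D = overall([d_max(75.0, 50.0, 90.0),   # yield Y%
             d_min(8.0, 0.0, 30.0),     # gastric release GD%
             d_max(70.0, 40.0, 95.0)])  # intestinal release ID%
print(round(D, 3))
```

The geometric mean is what makes DA strict: if any single response is unacceptable (desirability 0), the overall score collapses to 0, whereas MOO with MCDM instead exposes the whole Pareto set of trade-offs before a compromise is chosen.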

Keywords: microencapsulation, multiple decision-making criteria, multi-objective optimization, Soluplus®

Procedia PDF Downloads 106
203 Reactivities of Turkish Lignites during Oxygen Enriched Combustion

Authors: Ozlem Uguz, Ali Demirci, Hanzade Haykiri-Acma, Serdar Yaman

Abstract:

Lignitic coal holds its position as Turkey's most important indigenous energy source for generating energy in thermal power plants. Hence, efficient and environmentally friendly use of lignite in electricity generation is of great importance, and clean coal technologies have been planned to mitigate emissions and provide more efficient burning in power plants. In this context, oxygen enriched combustion (oxy-combustion) is regarded as one of the clean coal technologies; it is based on burning with oxygen concentrations higher than that in air. Since most Turkish coals are low rank with high mineral matter content, the unburnt carbon trapped in ash is unfortunately high, and it leads to significant losses in the overall efficiency of thermal plants. Besides, the necessity of burning huge amounts of these low-calorific-value lignites to obtain the desired amount of energy also results in the formation of large amounts of ash that is rich in unburnt carbon. Oxygen enriched combustion technology makes it possible to increase the burning efficiency through the complete burning of almost all of the carbon content of the fuel. This also contributes to the protection of air quality, and emission levels drop considerably. The aim of this study is to investigate the unburnt carbon content and the burning reactivities of several different lignite samples under oxygen enriched conditions. For this reason, the combined effects of temperature and the oxygen/nitrogen ratio of the burning atmosphere were investigated and interpreted. To do this, Turkish lignite samples from the Adıyaman-Gölbaşı and Kütahya-Tunçbilek regions were first characterized by proximate and ultimate analyses, and their burning profiles were derived from DTA (Differential Thermal Analysis) curves. 
Then, these lignites were subjected to a slow burning process in a horizontal tube furnace at different temperatures (200ºC, 400ºC, and 600ºC for the Adıyaman-Gölbaşı lignite and 200ºC, 450ºC, and 800ºC for the Kütahya-Tunçbilek lignite) under atmospheres having O₂+N₂ proportions of 21%O₂+79%N₂, 30%O₂+70%N₂, 40%O₂+60%N₂, and 50%O₂+50%N₂. These burning temperatures were specified based on the burning profiles derived from the DTA curves. The residues obtained from these burning tests were also analyzed by proximate and ultimate analyses to detect the unburnt carbon content along with the unused energy potential. The reactivity of these lignites was calculated using several methodologies. The burning yield under the air condition (21%O₂+79%N₂) was used as a benchmark to compare the effectiveness of the oxygen enriched conditions. It was concluded that the oxygen enriched combustion method enhanced the combustion efficiency and lowered the unburnt carbon content of the ash. Combustion of low-rank coals under oxygen enriched conditions was found to be a promising way to improve the efficiency of lignite-firing energy systems. However, a cost-benefit analysis should be considered for a better justification of this method, since the use of more oxygen brings a non-negligible additional cost.
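
One common way to turn the proximate/ultimate analyses of fuel and residue into a single efficiency figure is the ash-tracer estimate of carbon burnout, sketched below. This is not necessarily the methodology the authors used, and all mass fractions are hypothetical values, not the measured Adıyaman-Gölbaşı or Kütahya-Tunçbilek results; the sketch assumes the ash is conserved through combustion.

```python
def carbon_burnout(c_fuel, ash_fuel, c_res, ash_res):
    """Fraction of the fuel's carbon that actually burned.

    c_fuel, ash_fuel: carbon and ash mass fractions of the raw lignite.
    c_res, ash_res:   carbon and ash mass fractions of the residue.
    Assumes ash passes through combustion unchanged (ash balance).
    """
    residue_per_fuel = ash_fuel / ash_res   # kg residue per kg fuel
    unburnt_c = c_res * residue_per_fuel    # unburnt carbon per kg fuel
    return 1.0 - unburnt_c / c_fuel

# Hypothetical comparison: air (21% O2) vs. an enriched (50% O2)
# atmosphere at the same furnace temperature.
b_air = carbon_burnout(0.45, 0.30, 0.12, 0.80)
b_oxy = carbon_burnout(0.45, 0.30, 0.03, 0.95)
print(f"burnout in air: {b_air:.2%}, oxygen-enriched: {b_oxy:.2%}")
```

With these illustrative numbers the enriched atmosphere raises burnout from 90% to about 98%, mirroring the qualitative conclusion above that enrichment lowers the unburnt carbon trapped in the ash.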

Keywords: coal, energy, oxygen enriched combustion, reactivity

Procedia PDF Downloads 256
202 Clastic Sequence Stratigraphy of Late Jurassic to Early Cretaceous Formations of Jaisalmer Basin, Rajasthan

Authors: Himanshu Kumar Gupta

Abstract:

The Jaisalmer Basin is a part of the Rajasthan basin in northwestern India. The presence of five major unconformities/hiatuses of varying span, i.e., at the top of the Archean basement, Cambrian, Jurassic, Cretaceous, and Eocene, has created the foundation for constructing a sequence stratigraphic framework. Based on basin-forming tectonic events and their impact on sedimentation processes, three first-order sequences have been identified in the Rajasthan Basin. These are the Proterozoic-Early Cambrian rift sequence, the Permian to Middle-Late Eocene shelf sequence, and the Pleistocene-Recent sequence related to the Himalayan Orogeny. The Permian to Middle Eocene first-order sequence is further subdivided into three second-order sequences, i.e., a Permian to Late Jurassic sequence, an Early to Late Cretaceous sequence, and a Paleocene to Middle-Late Eocene sequence. In this study, the Late Jurassic to Early Cretaceous sequence was identified, and log-based interpretation of smaller-order T-R cycles was carried out. A log profile from the eastern margin to the western margin (up to the Shahgarh depression) was taken. The depositional environments penetrated by the wells, interpreted from the log signatures, gave three major facies associations: the blocky and coarsening-upward (funnel shape), the blocky and fining-upward (bell shape), and the erratic (zig-zag) facies, representing distributary mouth bar, distributary channel, and marine mud facies, respectively. The Late Jurassic formation (Baisakhi-Bhadasar) and the Early Cretaceous formation (Pariwar) show fewer T-R cycles at shallower bathymetry and a higher number of T-R cycles at deeper bathymetry. The shallowest well has 3 T-R cycles in the Baisakhi-Bhadasar and 2 T-R cycles in the Pariwar, whereas the deeper well has 4 T-R cycles in the Baisakhi-Bhadasar and 8 T-R cycles in the Pariwar Formation. The maximum flooding surfaces observed from the stratigraphic analysis indicate a major shale break (high shale content). 
The study area is dominated by an alternation of shale and sand lithologies, which occur in an approximate ratio of 70:30. A seismo-geological cross section was prepared to understand the stratigraphic thickness variation and the structural disposition of the strata. The formations are quite thick in the west, and their thickness reduces traversing towards the east. The folded and faulted strata indicate compressional tectonics followed by extensional tectonics. Our interpretation, supported by seismic data up to the second-order sequence, indicates that the Late Jurassic sequence is a Highstand Systems Tract (Baisakhi-Bhadasar formations) and the Early Cretaceous sequence is a Regressive to Lowstand Systems Tract (Pariwar Formation).

Keywords: Jaisalmer Basin, sequence stratigraphy, system tract, T-R cycle

Procedia PDF Downloads 113
201 Quality by Design in the Optimization of a Fast HPLC Method for Quantification of Hydroxychloroquine Sulfate

Authors: Pedro J. Rolim-Neto, Leslie R. M. Ferraz, Fabiana L. A. Santos, Pablo A. Ferreira, Ricardo T. L. Maia-Jr., Magaly A. M. Lyra, Danilo A F. Fonte, Salvana P. M. Costa, Amanda C. Q. M. Vieira, Larissa A. Rolim

Abstract:

Initially developed as an antimalarial agent, hydroxychloroquine (HCQ) sulfate is often used as a slow-acting antirheumatic drug in the treatment of disorders of connective tissue. The United States Pharmacopeia (USP) 37 provides a reversed-phase HPLC method for the quantification of HCQ. However, this method was not reproducible, producing asymmetric peaks in a long analysis time. The asymmetry of the peak may cause an incorrect calculation of the concentration of the sample. Furthermore, the analysis time is unacceptable, especially regarding the routine of a pharmaceutical industry. The aim of this study was to develop a fast, easy, and efficient method for the quantification of HCQ sulfate by High Performance Liquid Chromatography (HPLC) based on the Quality by Design (QbD) methodology. This method was optimized in terms of peak symmetry using the response surface graphic as the Design of Experiments (DoE) and the tailing factor (TF) as the indicator for the Design Space (DS). The reference method used was that described in USP 37 for the quantification of the drug. For the optimized method, a 3³ factorial design was proposed, based on the QbD concepts. The DS was created with the TF (in a range between 0.98 and 1.2) in order to demonstrate the ideal analytical conditions. Changes were made in the composition of the USP mobile phase (USP-MP): USP-MP:methanol (90:10 v/v, 80:20 v/v, and 70:30 v/v), in the flow rate (0.8, 1.0, and 1.2 mL.min-1), and in the oven temperature (30, 35, and 40ºC). The USP method allowed the quantification of the drug only in a long run (40-50 minutes). In addition, the method uses a high flow rate (1.5 mL.min-1), which increases the consumption of expensive HPLC-grade solvents. The main problem observed was the TF value (1.8), which would be acceptable only if the drug were not a racemic mixture, since poor co-elution of the isomers can make peak integration unreliable. 
Therefore, optimization was undertaken in order to reduce the analysis time, aiming at a better peak resolution and TF. For the optimized method, analysis of the response surface plot made it possible to confirm the ideal analytical conditions: 45°C, 0.8 mL.min-1, and 80:20 USP-MP:methanol. The optimized HPLC method enabled the quantification of HCQ sulfate with a high-resolution peak, showing a TF value of 1.17. This promotes good co-elution of the isomers of HCQ, ensuring an accurate quantification of the raw material as a racemic mixture. The method also proved to be approximately 18 times faster than the reference method, using a lower flow rate and thereby further reducing solvent consumption and, consequently, the analysis cost. Thus, an analytical method for the quantification of HCQ sulfate was optimized using the QbD methodology. This method proved to be faster and more efficient than the USP method regarding the retention time and, especially, the peak resolution. The better peak shape in the chromatogram supports the implementation of the method for quantification of the drug as a racemic mixture, not requiring the separation of the isomers.
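
The tailing factor used above as the Design Space indicator is defined by the USP as TF = W / (2f), where W is the full peak width at 5% of peak height and f is the distance from the leading edge to the peak apex at that height. The sketch below illustrates the calculation; the retention times are hypothetical, chosen only so that the two examples reproduce the TF values quoted in the text (1.8 for the USP method, 1.17 for the optimized one).

```python
def tailing_factor(t_front, t_apex, t_back):
    """USP tailing factor TF = W / (2 * f).

    t_front, t_back: retention times where the peak crosses 5% of its
    height on the leading and trailing sides; t_apex: time of the apex.
    """
    width = t_back - t_front      # W, full width at 5% height
    front = t_apex - t_front      # f, front half-width at 5% height
    return width / (2.0 * front)

# A perfectly symmetric peak gives TF = 1.0; tailing gives TF > 1.
tf_usp = tailing_factor(4.00, 4.50, 5.80)   # strongly tailing peak
tf_opt = tailing_factor(4.00, 4.50, 5.17)   # nearly symmetric peak
print(round(tf_usp, 2), round(tf_opt, 2))
```

With the DS acceptance window of 0.98-1.2 stated above, the first peak fails and the second passes.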

Keywords: analytical method, hydroxychloroquine sulfate, quality by design, surface area graphic

Procedia PDF Downloads 615
200 Clinical Manifestations, Pathogenesis and Medical Treatment of Stroke Caused by Basic Mitochondrial Abnormalities (Mitochondrial Encephalopathy, Lactic Acidosis, and Stroke-like Episodes, MELAS)

Authors: Wu Liching

Abstract:

Aim: This case aims to discuss the pathogenesis, clinical manifestations, and medical treatment of strokes caused by mitochondrial gene mutations. Methods: Ischemic stroke caused by a mitochondrial gene defect was diagnosed by means of next-generation sequencing for the detection of mitochondrial DNA variants, together with imaging examination, neurological examination, and the medical history; this study took cases diagnosed with acute cerebral infarction in the neurology ward of a medical center in northern Taiwan as its subjects. Result: This case is a 49-year-old married woman with a rare disease, an ischemic stroke induced by a mitochondrial gene mutation. She has severe hearing impairment requiring hearing aids and a history of diabetes. During the patient's hospitalization, blood tests showed a serum lactate of 7.72 mmol/L and a CSF lactate of 5.9 mmol/L. Together with the relevant medical history, neurological evaluation showed changes in consciousness and cognition and slowed language expression, and brain magnetic resonance imaging showed a subacute bilateral temporal lobe infarction, an atypical pattern of stroke. The mitochondrial DNA carries the known pathogenic mutation m.3243A>G, with a heteroplasmy level of 24.6%. This variant is recorded in MITOMAP as a pathogenic locus associated with Mitochondrial Encephalopathy, Lactic Acidosis, and Stroke-like episodes (MELAS), Leigh syndrome, and other diseases, and in ClinVar as Pathogenic (dbSNP: rs199474657), so the case was diagnosed as a stroke caused by a rare mitochondrial gene mutation. After medical treatment, no further seizures occurred during hospitalization. After interventional rehabilitation, the patient's limb weakness, poor language function, and cognitive impairment all improved significantly. 
Conclusion: Mitochondrial disorders can also be associated with abnormalities in psychological, neurological, cerebral cortical, and autonomic functions, as well as with internal medical diseases. The differential diagnoses therefore cover a wide range, and these disorders are not easy to diagnose. After neurological evaluation, medical history collection, imaging, and rare-disease serological examination, an atypical ischemic stroke caused by a rare mitochondrial gene mutation was diagnosed. We hope that through this case, the diagnosis of cerebral infarction caused by mitochondrial gene variants will become more familiar to clinical medical staff, and that this case report may help to improve the clinical diagnosis and treatment of patients with similar clinical symptoms in the future.

Keywords: acute stroke, MELAS, lactic acidosis, mitochondrial disorders

Procedia PDF Downloads 50
199 To Assess the Knowledge, Awareness and Factors Associated With Diabetes Mellitus in Buea, Cameroon

Authors: Franck kem Acho

Abstract:

Diabetes mellitus is a chronic metabolic disorder and a fast-growing global problem with huge social, health, and economic consequences. It is estimated that in 2010 there were globally 285 million people (approximately 6.4% of the adult population) suffering from this disease, and this number is estimated to increase to 430 million in the absence of better control or cure. An ageing population and obesity are the two main reasons for the increase. Diabetes mellitus is a chronic, heterogeneous metabolic disorder with a complex pathogenesis. It is characterized by elevated blood glucose levels, or hyperglycemia, which results from abnormalities in either insulin secretion or insulin action, or both. Hyperglycemia manifests in various forms with a varied presentation and results in carbohydrate, fat, and protein metabolic dysfunctions. Long-term hyperglycemia often leads to various microvascular and macrovascular diabetic complications, which are mainly responsible for diabetes-associated morbidity and mortality, and hyperglycemia serves as the primary biomarker for the diagnosis of diabetes as well. Furthermore, it has been shown that almost 50% of putative diabetics are not diagnosed until 10 years after the onset of the disease; hence, the real prevalence of global diabetes must be far higher than reported. This study was conducted in a locality to assess the level of knowledge and awareness of diabetes mellitus and the risk factors associated with people living with the disease. A month before the screening was conducted, it was advertised in selected churches, on the local community radio, and on relevant WhatsApp groups. A general health talk was delivered by the head of the screening unit to all attendees, who were educated on the procedure to be carried out, its benefits, and any possible discomforts, after which each attendee's consent was obtained. 
Participants with any leads to diabetes were evaluated for the screening by taking an adequate history and performing physical examinations for symptoms such as excessive thirst, increased urination, tiredness, hunger, unexplained weight loss, feeling irritable or having other mood changes, blurry vision, slow-healing sores, and frequent infections, such as gum, skin, and vaginal infections. Of the 94 participants, 78 were female and 16 were male, and 70.21% of the participants with diabetes were between the ages of 60 and 69 years. The study found that only 10.63% of respondents declared a good level of knowledge of diabetes. Of the 3 symptoms of diabetes analyzed in this study, high blood sugar (58.5%) and chronic fatigue (36.17%) were the most recognized. Of the 4 diabetes risk factors analyzed in this study, obesity (21.27%) and an unhealthy diet (60.63%) were the most recognized, while only 10.6% of respondents indicated tobacco use. The diabetic foot was the most recognized diabetes complication (50.57%), but some participants indicated vision problems (30.8%) or cardiovascular diseases (20.21%) as diabetes complications.

Keywords: diabetes mellitus, non-communicable disease, general health talk, hyperglycemia

Procedia PDF Downloads 40