Search results for: surface flow
544 The One, the Many, and the Doctrine of Divine Simplicity: Variations on Simplicity in Essentialist and Existentialist Metaphysics
Authors: Mark Wiebe
Abstract:
One of the tasks contemporary analytic philosophers have focused on (e.g., Wolterstorff, Alston, Plantinga, Hasker, and Crisp) is the analysis of certain medieval metaphysical frameworks. This growing body of scholarship has helped clarify and prevent distorted readings of medieval and ancient writers. However, as scholars like Dolezal, Duby, and Brower have pointed out, these analyses have been incomplete or inaccurate in some instances, e.g., with regard to analogical speech or the doctrine of divine simplicity (DDS). Additionally, contributors to this work frequently express opposing claims or fail to note substantial differences between ancient and medieval thinkers. This is the case regarding the comparison between Thomas Aquinas and others. Anton Pegis and Étienne Gilson have argued along this line that Thomas’ metaphysical framework represents a fundamental shift. Gilson describes Thomas’ metaphysics as a turn from a form of “essentialism” to “existentialism.” One should argue that this shift distinguishes Thomas from many Analytic philosophers as well as from other classical defenders of the DDS. Moreover, many of the objections Analytic Philosophers make against Thomas presume the same metaphysical principles undergirding the above-mentioned form of essentialism. This weakens their force against Thomas’ positions. In order to demonstrate these claims, it will be helpful to consider Thomas’ metaphysical outlook alongside that of two other prominent figures: Augustine and Ockham. One area of their thinking which brings their differences to the surface has to do with how each relates to Platonic and Neo-Platonic thought. More specifically, it is illuminating to consider whether and how each distinguishes or conceives essence and existence. It is also useful to see how each approaches the Platonic conflicts between essence and individuality, unity and intelligibility. In both of these areas, Thomas stands out from Augustine and Ockham. Although Augustine and Ockham diverge in many ways, both ultimately identify being with particularity and pit particularity against both unity and intelligibility. Contrastingly, Thomas argues that being is distinct from and prior to essence. Being (i.e., Being in itself) rather than essence or form must therefore serve as the ground and ultimate principle for the existence of everything in which being and essence are distinct. Additionally, since change, movement, and addition improve and give definition to finite being, multitude and distinction are, therefore, principles of being rather than non-being. Consequently, each creature imitates and participates in God’s perfect Being in its own way; the perfection of each genus exists pre-eminently in God without being at odds with God’s simplicity, God has knowledge, power, and will, and these and the many other terms assigned to God refer truly to the being of God without being either meaningless or synonymous. The existentialist outlook at work in these claims distinguishes Thomas in a noteworthy way from his contemporaries and predecessors as much as it does from many of the analytic philosophers who have objected to his thought. This suggests that at least these kinds of objections do not apply to Thomas’ thought.Keywords: theology, philosophy of religion, metaphysics, philosophy
543 Nutrition Budgets in Uganda: Research to Inform Implementation
Authors: Alexis D'Agostino, Amanda Pomeroy
Abstract:
Background: Resource availability is essential to effective implementation of national nutrition policies. To this end, the SPRING Project has collected and analyzed budget data from government ministries in Uganda, international donors, and other nutrition implementers to provide data for the first time on what funding is actually allocated to implement nutrition activities named in the national nutrition plan. Methodology: USAID’s SPRING Project used the Uganda Nutrition Action Plan (UNAP) as the starting point for budget analysis. Thorough desk reviews of public budgets from government, donors, and NGOs were mapped to activities named in the UNAP and validated by key informants (KIs) across the stakeholder groups. By relying on nationally-recognized and locally-created documents, SPRING provided a familiar basis for discussions to increase credibility and local ownership of findings. Among other things, the KIs validated the amount, source, and type (specific or sensitive) of funding. When only high-level budget data were available, KIs provided rough estimates of the percentage of allocations that were actually nutrition-relevant, allowing creation of confidence intervals around some funding estimates. Results: After validating data and narrowing in on estimates of funding to nutrition-relevant programming, researchers applied a formula to estimate overall nutrition allocations. In line with guidance by the SUN Movement and its three-step process, nutrition-specific funding was counted at 100% of its allocation amount, while nutrition-sensitive funding was counted at 25%. The vast majority of nutrition funding in Uganda is off-budget, with over 90 percent of all nutrition funding provided outside of the government system. Overall allocations are split nearly evenly between nutrition-specific and nutrition-sensitive activities. In FY 2013/14, the two-year study’s baseline year, on- and off-budget funding for nutrition was estimated to be around 60 million USD. While the 60 million USD allocations compare favorably to the 66 million USD estimate of the cost of the UNAP, not all activities are sufficiently funded. Those activities with a focus on behavior change were the most underfunded. In addition, accompanying qualitative research suggested that donor funding for nutrition activities may shift government funding into other areas of work, making it difficult to estimate the sustainability of current nutrition investments. Conclusions: Beyond providing figures, these estimates can be used together with the qualitative results of the study to explain how and why these amounts were allocated for particular activities and not others, examine the negotiation process that occurred, and suggest options for improving the flow of finances to UNAP activities for the remainder of the policy tenure. By the end of the PBN study, several years of nutrition budget estimates will be available to compare changes in funding over time. Halfway through SPRING’s work, there is evidence that country stakeholders have begun to feel ownership over the ultimate findings, and some ministries are requesting increased technical assistance in nutrition budgeting. Ultimately, these data can be used within organizations to advocate for more and improved nutrition funding and to improve targeting of nutrition allocations.
Keywords: budget, nutrition, financing, scale-up
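The weighting step described in the abstract can be illustrated with a short worked example. The allocations below are hypothetical placeholder figures, not SPRING data; only the 100%/25% weights for nutrition-specific and nutrition-sensitive funding follow the SUN Movement guidance cited above. A minimal Python sketch:

# Hypothetical allocations (USD) mapped to UNAP activities; amounts are placeholders.
allocations = [
    {"activity": "micronutrient supplementation",  "type": "specific",  "amount": 12_000_000},
    {"activity": "agriculture extension",          "type": "sensitive", "amount": 80_000_000},
    {"activity": "behaviour change communication", "type": "specific",  "amount": 3_000_000},
]
WEIGHTS = {"specific": 1.00, "sensitive": 0.25}   # SUN Movement three-step weighting

total = sum(WEIGHTS[a["type"]] * a["amount"] for a in allocations)
print(f"estimated nutrition allocation: {total:,.0f} USD")   # 12M + 0.25*80M + 3M = 35,000,000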
542 Broad Survey of Fine Root Traits to Investigate the Root Economic Spectrum Hypothesis and Plant-Fire Dynamics Worldwide
Authors: Jacob Lewis Watts, Adam F. A. Pellegrini
Abstract:
Prairies, grasslands, and forests cover an expansive portion of the world’s surface and contribute significantly to Earth’s carbon cycle. The largest driver of carbon dynamics in some of these ecosystems is fire. As the global climate changes, most fire-dominated ecosystems will experience increased fire frequency and intensity, leading to increased carbon flux into the atmosphere and soil nutrient depletion. The plant communities associated with different fire regimes are important for reassimilation of carbon lost during fire and soil recovery. More frequent fires promote conservative plant functional traits aboveground; however, belowground fine root traits are poorly explored and arguably more important drivers of ecosystem function as the primary interface between the soil and plant. The root economic spectrum (RES) hypothesis describes single-dimensional covariation among important fine-root traits along a range of plant strategies from acquisitive to conservative – parallel to the well-established leaf economic spectrum (LES). However, because of the paucity of root trait data, the complex nature of the rhizosphere, and the phylogenetic conservatism of root traits, it is unknown whether the RES hypothesis accurately describes plant nutrient and water acquisition strategies. This project utilizes plants grown under common garden conditions in the Cambridge University Botanic Garden and a meta-analysis of long-term fire manipulation experiments to examine the belowground physiological traits of fire-adapted and non-fire-adapted herbaceous species to 1) test the RES hypothesis and 2) describe the effect of fire regimes on fine root functional traits – which in turn affect carbon and nutrient cycling. A suite of morphological, chemical, and biological root traits (e.g. root diameter, specific root length, percent N, percent mycorrhizal colonization, etc.) of 50 herbaceous species were measured and tested for phylogenetic conservatism and RES dimensionality. Fire-adapted and non-fire-adapted plant traits were compared using phylogenetic PCA techniques. Preliminary evidence suggests that phylogenetic conservatism may weaken the single-dimensionality of the RES, suggesting that there may not be a single way that plants optimize nutrient and water acquisition and storage in the complex rhizosphere; additionally, fire-adapted species are expected to be more conservative than non-fire-adapted species, which may be indicative of slower carbon cycling with increasing fire frequency and intensity.
Keywords: climate change, fire regimes, root economic spectrum, fine roots
541 Acceleration of Adsorption Kinetics by Coupling Alternating Current with Adsorption Process onto Several Adsorbents
Authors: A. Kesraoui, M. Seffen
Abstract:
Applications of adsorption onto activated carbon for water treatment are well known. The process has been demonstrated to be widely effective for removing dissolved organic substances from wastewaters, but this treatment has a major drawback is the high operating cost. The main goal of our research work is to improve the retention capacity of Tunisian biomass for the depollution of industrial wastewater and retention of pollutants considered toxic. The biosorption process is based on the retention of molecules and ions onto a solid surface composed of biological materials. The evaluation of the potential use of these materials is important to propose as an alternative to the adsorption process generally expensive, used to remove organic compounds. Indeed, these materials are very abundant in nature and are low cost. Certainly, the biosorption process is effective to remove the pollutants, but it presents a slow kinetics. The improvement of the biosorption rates is a challenge to make this process competitive with respect to oxidation and adsorption onto lignocellulosic fibers. In this context, the alternating current appears as a new alternative, original and a very interesting phenomenon in the acceleration of chemical reactions. Our main goal is to increase the retention acceleration of dyes (indigo carmine, methylene blue) and phenol by using a new alternative: alternating current. The adsorption experiments have been performed in a batch reactor by adding some of the adsorbents in 150 mL of pollutants solution with the desired concentration and pH. The electrical part of the mounting comprises a current source which delivers an alternating current voltage of 2 to 15 V. It is connected to a voltmeter that allows us to read the voltage. In a 150 mL capacity cell, we plunged two zinc electrodes and the distance between two Zinc electrodes has been 4 cm. Thanks to alternating current, we have succeeded to improve the performance of activated carbon by increasing the speed of the indigo carmine adsorption process and reducing the treatment time. On the other hand, we have studied the influence of the alternating current on the biosorption rate of methylene blue onto Luffa cylindrica fibers and the hybrid material (Luffa cylindrica-ZnO). The results showed that the alternating current accelerated the biosorption rate of methylene blue onto the Luffa cylindrica and the Luffa cylindrica-ZnO hybrid material and increased the adsorbed amount of methylene blue on both adsorbents. In order to improve the removal of phenol, we performed the coupling between the alternating current and the biosorption onto two adsorbents: Luffa cylindrica and the hybrid material (Luffa cylindrica-ZnO). In fact, the alternating current has succeeded to improve the performance of adsorbents by increasing the speed of the adsorption process and the adsorption capacity and reduce the processing time.Keywords: adsorption, alternating current, dyes, modeling
540 Existential and Possessive Constructions in Modern Standard Arabic: Two Strategies Reflecting the Ontological (Non-)Autonomy of Located or Possessed Entities
Authors: Fayssal Tayalati
Abstract:
Although languages use very divergent constructional strategies, all existential constructions appear to invariably involve an implicit or explicit locative constituent. This locative constituent either surfaces as a true locative phrase or is realized as a possessor noun phrase. However, while much research focuses on the supposed underlying syntactic relation of locative and possessive existential constructions, not much is known about possible semantic factors that could govern the choice between these constructions. The main question that we address in this talk concerns the choice between the two related constructions in Modern Standard Arabic (MSA). Although both are used to express the existence of something somewhere, we can distinguish three contexts: First, for some types of entities, only the EL construction is possible (e.g. (1a) ṯammata raǧulun fī l-ḥadīqati vs. (1b) *(kāna) ladā l-ḥadīqati raǧulun). Second, for other types of entities, only the possessive construction is possible (e.g. (2a) ladā ṭ-ṭawilati šaklun dāʾiriyyun vs. (2b) *ṯammata šaklun dāʾiriyyun ladā/fī ṭ-ṭawilati). Finally, for still other entities, both constructions can be found (e.g. (3a) ṯammata ḥubbun lā yūṣafu ladā ǧārī li-zawǧati-hi and (3b) ladā ǧārī ḥubbun lā yūṣafu li-zawǧati-hi). The data, covering a range of ontologically different entities (concrete objects, events, body parts, dimensions, essential qualities, feelings, etc.), show that the choice between the existential locative and the possessive constructions is closely linked to the conceptual autonomy of the existential theme with respect to its location or to the whole that it is a part of. The construction with ṯammata is the only possible one to express the existence of a fully autonomous (i.e. non-dependent) entity (concrete objects (e.g. 1) and abstract objects such as events, especially the ones that Grimshaw called ‘simple events’). The possessive construction with (kāna) ladā is the only one used to express the existence of fully non-autonomous (i.e. fully dependent on a whole) entities (body parts, dimensions (e.g. 2), essential qualities). The two constructions alternate when the existential theme is conceptually dependent but separable from the whole, either because it has an autonomous (independent) existence apart from the given whole (spare parts of an object), or because it receives a relative autonomy in the speech through a modifier (accidental qualities, feelings (e.g. 3a, 3b), psychological states, among some other kinds of themes). In this case, the modifier expresses an approximate boundary on a scale, and provides relative autonomy to the entity. Finally, we will show that kinship terms (e.g. son), which at first sight may seem to constitute counterexamples to our hypothesis, are nonetheless consistent with it. The ontological (non-)autonomy of located or possessed entities is also reflected by morpho-syntactic properties, among them the use and the choice of determiners, pluralisation and the behavior of entities in the context of associative anaphora.
Keywords: existence, possession, autonomous entities, non-autonomous entities
539 Frankie Adams’s Sexuality in The Member of the Wedding: Focusing on Musical References
Authors: Saori Iwatsuka
Abstract:
In The Member of the Wedding, Carson McCullers starts with the words, “It happened,” without telling the reader what happens to a twelve-year-old protagonist, Frankie Adams. The reader feels confused and incomprehensible. However, he or she later realizes that the confusing phrase is connected to the scene where Frankie feels “the thing happened” after listening to the melodic lines of jazz and blues. Yet, the reader cannot really comprehend what happens to Frankie and feels puzzled till the end. And the story ends with Frankie’s words, “I am simply mad about . . .” Implying her queer desire for her new friend Mary Littlejohn, McCullers never tells the reader whom Frankie is mad about. Despite McCullers’s ambiguous way of depicting Frankie’s sexuality, recent critics and reviewers have come to discuss her sexuality as anti-heterosexual because Frankie expresses her hatred for Barney, whom she has had some type of sexual encounter, and feels wrong with her brother Jarvis’s wedding. After giving up her sexual desire for Jarvis’s bride, Janice, Frankie changes her name to Frances, becomes engrossed with Michelangelo, and enjoys reading Tennyson’s poetry with Mary. Michelangelo and Tennyson are well-known homosexual artists, which suggests that Frankie has an anti-heterosexual orientation. As McCullers does not precisely describe Frankie’s sexuality, the reader can only assume it by connecting fragmentary descriptions. However, this discussion is more clarified to show Frankie’s sexuality because analyzing the musical references of jazz and blues and interpreting them from a musicological viewpoint will illuminate it. In her works, McCullers frequently uses musical references and descriptions, which have a significant and psychological impact on the protagonists and portrays their bodily reactions to the impact to reveal what the reader cannot see on the surface. Thus, in this story, too, Frankie’s bodily reaction to music is portrayed to cue her feelings. After seeing the chimney swifts, known as monogamous birds, Frankie feels “a jazz sadness,” quivers her nerves and stiffens her heart. After listening to Berenice’s “dark jazz voice,” Frankie feels dizzy and throws a knife because Berenice’s voice jazzes (excites) her heart that beats in her head. Calming herself, she fantasizes that Jarvis, Jarvis’s bride, Janice, and herself are members of “the we of me.” Then in the evening, listening to the blues and jazz being played by a black horn player somewhere in her neighborhood, Frankie realizes “the thing happened” and discovers “a new feeling.” Following the musical references “jazz” and “blues” and examining them from the viewpoint of musicology and terminology leads the reader to explore what “it” is in “it happened” and what her “new feeling” is when “the thing happened” with the blues tune breaking off. Those discussions will illuminate Frankie’s sexuality. As McCullers does not clearly name her sexuality, this paper uses the word queer to express Frankie’s anti-sexual orientation.Keywords: jazz and blues, musical references, queer sexuality, “we of me”
538 Machine Learning Techniques for Estimating Ground Motion Parameters
Authors: Farid Khosravikia, Patricia Clayton
Abstract:
The main objective of this study is to evaluate the advantages and disadvantages of various machine learning techniques in forecasting ground-motion intensity measures given source characteristics, source-to-site distance, and local site condition. Intensity measures such as peak ground acceleration and velocity (PGA and PGV, respectively) as well as 5% damped elastic pseudospectral accelerations at different periods (PSA) are indicators of the strength of shaking at the ground surface. Estimating these variables for future earthquake events is a key step in seismic hazard assessment and potentially subsequent risk assessment of different types of structures. Typically, linear regression-based models, with pre-defined equations and coefficients, are used in ground motion prediction. However, due to the restrictions of the linear regression methods, such models may not capture more complex nonlinear behaviors that exist in the data. Thus, this study comparatively investigates potential benefits from employing other machine learning techniques as a statistical method in ground motion prediction, such as Artificial Neural Network, Random Forest, and Support Vector Machine. The algorithms are adjusted to quantify event-to-event and site-to-site variability of the ground motions by implementing them as random effects in the proposed models to reduce the aleatory uncertainty. All the algorithms are trained using a selected database of 4,528 ground motions, including 376 seismic events with magnitude 3 to 5.8, recorded over the hypocentral distance range of 4 to 500 km in Oklahoma, Kansas, and Texas since 2005. The choice of this database stems from the recent increase in the seismicity rate of these states, attributed to petroleum production and wastewater disposal activities, which necessitates further investigation of the ground motion models developed for these states. Accuracy of the models in predicting intensity measures, generalization capability of the models for future data, as well as usability of the models are discussed in the evaluation process. The results indicate the algorithms satisfy some physically sound characteristics such as magnitude scaling and distance dependency without requiring pre-defined equations or coefficients. Moreover, it is shown that, when sufficient data is available, all the alternative algorithms tend to provide more accurate estimates compared to the conventional linear regression-based method, and particularly, Random Forest outperforms the other algorithms. However, the conventional method is a better tool when limited data is available.
Keywords: artificial neural network, ground-motion models, machine learning, random forest, support vector machine
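As an illustration of the tree-based approach mentioned above, the following Python sketch fits a Random Forest to predict log(PGA) from magnitude, hypocentral distance, and a site-condition proxy. The data are synthetic stand-ins for the Oklahoma/Kansas/Texas records, and the random-effects treatment of event and site terms used in the study is not reproduced here.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
mag  = rng.uniform(3.0, 5.8, n)          # moment magnitude
rhyp = rng.uniform(4.0, 500.0, n)        # hypocentral distance, km
vs30 = rng.uniform(200.0, 800.0, n)      # site-condition proxy, m/s
# Toy attenuation relation plus noise, used only to generate a target variable.
ln_pga = (1.2 * mag - 1.5 * np.log(rhyp) - 0.002 * rhyp
          - 0.3 * np.log(vs30 / 400.0) + rng.normal(0.0, 0.5, n))

X = np.column_stack([mag, rhyp, vs30])
X_tr, X_te, y_tr, y_te = train_test_split(X, ln_pga, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out records:", round(model.score(X_te, y_te), 3))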
537 Decarbonising Urban Building Heating: A Case Study on the Benefits and Challenges of Fifth-Generation District Heating Networks
Authors: Mazarine Roquet, Pierre Dewallef
Abstract:
The building sector, both residential and tertiary, accounts for a significant share of greenhouse gas emissions. In Belgium, partly due to poor insulation of the building stock, but certainly because of the massive use of fossil fuels for heating buildings, this share reaches almost 30%. To reduce carbon emissions from urban building heating, district heating networks emerge as a promising solution as they offer various assets such as improving the load factor, integrating combined heat and power systems, and enabling energy source diversification, including renewable sources and waste heat recovery. However, mainly for the sake of simple operation, most existing district heating networks still operate at high or medium temperatures ranging between 120°C and 60°C (the so-called second- and third-generation district heating networks). Although these district heating networks offer energy savings in comparison with individual boilers, such temperature levels generally require the use of fossil fuels (mainly natural gas) with combined heat and power. The fourth-generation district heating networks improve the transport and energy conversion efficiency by decreasing the operating temperature to between 50°C and 30°C. Yet, to decarbonise building heating, one must increase the waste heat recovery and use mainly wind, solar or geothermal sources for the remaining heat supply. Fifth-generation networks operating between 35°C and 15°C offer the possibility to decrease even more the transport losses, to increase the share of waste heat recovery, and to use electricity from renewable resources through the use of heat pumps to generate low-temperature heat. The main objective of this contribution is to demonstrate, on a real-life test case, the benefits of replacing an existing third-generation network by a fifth-generation one and to decarbonise the heat supply of the building stock. The second objective of the study is to highlight the difficulties resulting from the use of a fifth-generation, low-temperature district heating network. To do so, a simulation model of the district heating network, including its regulation, is implemented in the modelling language Modelica. This model is applied to the test case of the heating network on the University of Liège's Sart Tilman campus, consisting of around sixty buildings. This model is validated with monitoring data and then adapted for low-temperature networks. A comparison of primary energy consumption as well as CO2 emissions is made between the two cases to underline the benefits in terms of energy independence and GHG emissions. To highlight the complexity of operating a low-temperature network, the difficulty of adapting the mass flow rate to the heat demand is considered. This shows the difficult balance between the thermal comfort and the electrical consumption of the circulation pumps. Several control strategies are considered and compared in terms of global energy savings. The developed model can be used to assess the potential for energy and CO2 emission savings when retrofitting an existing network or designing a new one.
Keywords: building simulation, fifth-generation district heating network, low-temperature district heating network, urban building heating
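The mass-flow/pumping trade-off mentioned above follows from the heat balance Q = ṁ·cp·ΔT: shrinking the supply-return temperature difference multiplies the required mass flow, and for a fixed pipe network the pump power grows roughly with the cube of that flow. The Python sketch below uses illustrative numbers, not values from the Sart Tilman model.

CP_WATER = 4186.0           # J/(kg.K)

def mass_flow(heat_demand_w, delta_t_k):
    """Mass flow needed to deliver a given heat demand: Q = m_dot * cp * dT."""
    return heat_demand_w / (CP_WATER * delta_t_k)

def relative_pump_power(m_dot, m_dot_ref):
    """For a fixed network, pressure drop ~ m_dot^2, so pump power ~ m_dot^3."""
    return (m_dot / m_dot_ref) ** 3

demand = 5e6                                 # 5 MW heat demand, arbitrary
m_gen3 = mass_flow(demand, delta_t_k=30.0)   # e.g. a 90/60 degC network
m_gen5 = mass_flow(demand, delta_t_k=10.0)   # e.g. a 25/15 degC network
print(f"mass flow, 3rd gen: {m_gen3:.1f} kg/s, 5th gen: {m_gen5:.1f} kg/s")
print(f"pump power ratio (5th/3rd): {relative_pump_power(m_gen5, m_gen3):.0f}x")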
536 Identification of Hub Genes in the Development of Atherosclerosis
Authors: Jie Lin, Yiwen Pan, Li Zhang, Zhangyong Xia
Abstract:
Atherosclerosis is a chronic inflammatory disease characterized by the accumulation of lipids, immune cells, and extracellular matrix in the arterial walls. This pathological process can lead to the formation of plaques that can obstruct blood flow and trigger various cardiovascular diseases such as heart attack and stroke. The underlying molecular mechanisms still remain unclear, although many studies have revealed the dysfunction of endothelial cells, recruitment and activation of monocytes and macrophages, and the production of pro-inflammatory cytokines and chemokines in atherosclerosis. This study aimed to identify hub genes involved in the progression of atherosclerosis and to analyze their biological function in silico, thereby enhancing our understanding of the disease’s molecular mechanisms. Through the analysis of microarray data, we examined the gene expression in media and neo-intima from plaques, as well as distant macroscopically intact tissue, across a cohort of 32 hypertensive patients. Initially, 112 differentially expressed genes (DEGs) were identified. Subsequent immune infiltration analysis indicated a predominant presence of 27 immune cell types in the atherosclerosis group, particularly noting an increase in monocytes and macrophages. In the weighted gene co-expression network analysis (WGCNA), 10 modules with a minimum of 30 genes were defined as key modules, with the blue, dark olive green, and sky-blue modules being the most significant. These modules corresponded respectively to monocyte, activated B cell, and activated CD4 T cell gene patterns, revealing a strong morphological-genetic correlation. From these three gene patterns (module morphology), a total of 2509 key genes (gene significance >0.2, module membership >0.8) were extracted. Six hub genes (CD36, DPP4, HMOX1, PLA2G7, PLN2, and ACADL) were then identified by intersecting the 2509 key genes and 102 DEGs with lipid-related genes from the GeneCards database. The discriminative power of the six hub genes was assessed with a robust classifier achieving an area under the curve (AUC) of 0.873 in the ROC plot, indicating excellent efficacy in differentiating between the disease and control groups. Moreover, PCA visualization demonstrated clear separation between the groups based on these six hub genes, suggesting their potential utility as classification features in predictive models. Protein-protein interaction (PPI) analysis highlighted DPP4 as the most interconnected gene. Within the constructed key gene-drug network, 462 drugs were predicted, with ursodeoxycholic acid (UDCA) being identified as a potential therapeutic agent for modulating DPP4 expression. In summary, our study identified critical hub genes implicated in the progression of atherosclerosis through comprehensive bioinformatic analyses. These findings not only advance our understanding of the disease but also pave the way for applying similar analytical frameworks and predictive models to other diseases, thereby broadening the potential for clinical applications and therapeutic discoveries.
Keywords: atherosclerosis, hub genes, drug prediction, bioinformatics
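The classification step reported above (ROC AUC of 0.873 on the six hub genes) can be sketched as follows. The expression values are simulated, and logistic regression is used as a stand-in for the unspecified "robust classifier"; only the gene names and an illustrative cohort size appear below.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

hub_genes = ["CD36", "DPP4", "HMOX1", "PLA2G7", "PLN2", "ACADL"]
rng = np.random.default_rng(1)
n_control, n_plaque = 32, 32            # group sizes are illustrative
# Simulated log-expression; the plaque group is shifted to create separable classes.
X = np.vstack([rng.normal(0.0, 1.0, (n_control, len(hub_genes))),
               rng.normal(0.8, 1.0, (n_plaque, len(hub_genes)))])
y = np.array([0] * n_control + [1] * n_plaque)

clf = LogisticRegression(max_iter=1000)
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
print(f"cross-validated ROC AUC on the six hub genes: {auc:.3f}")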
535 Single Cell RNA Sequencing Operating from Benchside to Bedside: An Interesting Entry into Translational Genomics
Authors: Leo Nnamdi Ozurumba-Dwight
Abstract:
Single-cell genomic analytical systems have proved to be a platform to isolate bulk cells into selected single cells for genomic, proteomic, and related metabolomic studies. This is enabling systematic investigations of the level of heterogeneity in a diverse and wide pool of cell populations. Single cell technologies, embracing techniques such as high parameter flow cytometry, single-cell sequencing, and high-resolution images are playing vital roles in these investigations on messenger ribonucleic acid (mRNA) molecules and related gene expressions in tracking the nature and course of disease conditions. This entails targeted molecular investigations on unit cells that help us understand cell behavoiur and expressions, which can be examined for their health implications on the health state of patients. One of the vital good sides of single-cell RNA sequencing (scRNA seq) is its probing capacity to detect deranged or abnormal cell populations present within homogenously perceived pooled cells, which would have evaded cursory screening on the pooled cell populations of biological samples obtained as part of diagnostic procedures. Despite conduction of just single-cell transcriptome analysis, scRNAseq now permits comparison of the transcriptome of the individual cells, which can be evaluated for gene expressional patterns that depict areas of heterogeneity with pharmaceutical drug discovery and clinical treatment applications. It is vital to strictly work through the tools of investigations from wet lab to bioinformatics and computational tooled analyses. In the precise steps for scRNAseq, it is critical to do thorough and effective isolation of viable single cells from the tissues of interest using dependable techniques (such as FACS) before proceeding to lysis, as this enhances the appropriate picking of quality mRNA molecules for subsequent sequencing (such as by the use of Polymerase Chain Reaction machine). Interestingly, scRNAseq can be deployed to analyze various types of biological samples such as embryos, nervous systems, tumour cells, stem cells, lymphocytes, and haematopoietic cells. In haematopoietic cells, it can be used to stratify acute myeloid leukemia patterns in patients, sorting them out into cohorts that enable re-modeling of treatment regimens based on stratified presentations. In immunotherapy, it can furnish specialist clinician-immunologist with tools to re-model treatment for each patient, an attribute of precision medicine. Finally, the good predictive attribute of scRNAseq can help reduce the cost of treatment for patients, thus attracting more patients who would have otherwise been discouraged from seeking quality clinical consultation help due to perceived high cost. This is a positive paradigm shift for patients’ attitudes primed towards seeking treatment.Keywords: immunotherapy, transcriptome, re-modeling, mRNA, scRNA-seq
534 The Impact of Shifting Trading Pattern from Long-Haul to Short-Sea to the Car Carriers’ Freight Revenues
Authors: Tianyu Wang, Nikita Karandikar
Abstract:
The uncertainty around cost, safety, and feasibility of the decarbonized shipping fuels has made it increasingly complex for the shipping companies to set pricing strategies and forecast their freight revenues going forward. The increase in the green fuel surcharges will ultimately influence the automobile’s consumer prices. The auto shipping demand (ton-miles) has been gradually shifting from long-haul to short-sea trade over the past years following the relocation of the original equipment manufacturer (OEM) manufacturing to regions such as South America and Southeast Asia. The objective of this paper is twofold: 1) to investigate the car-carriers freight revenue development over the years when the trade pattern is gradually shifting towards short-sea exports 2) to empirically identify the quantitative impact of such trade pattern shifting to mainly freight rate, but also vessel size, fleet size as well as Green House Gas (GHG) emission in Roll on-Roll Off (Ro-Ro) shipping. In this paper, a model of analyzing and forecasting ton-miles and freight revenues for the trade routes of AS-NA (Asia to North America), EU-NA (Europe to North America), and SA-NA (South America to North America) is established by deploying Automatic Identification System (AIS) data and the financial results of a selected car carrier company. More specifically, Wallenius Wilhelmsen Logistics (WALWIL), the Norwegian Ro-Ro carrier listed on Oslo Stock Exchange, is selected as the case study company in this paper. AIS-based ton-mile datasets of WALWIL vessels that are sailing into North America region from three different origins (Asia, Europe, and South America), together with WALWIL’s quarterly freight revenues as reported in trade segments, will be investigated and compared for the past five years (2018-2022). Furthermore, ordinary‐least‐square (OLS) regression is utilized to construct the ton-mile demand and freight revenue forecasting. The determinants of trade pattern shifting, such as import tariffs following the China-US trade war and fuel prices following the 0.1% Emission Control Areas (ECA) zone requirement after IMO2020 will be set as key variable inputs to the machine learning model. The model will be tested on another newly listed Norwegian Car Carrier, Hoegh Autoliner, to forecast its 2022 financial results and to validate the accuracy based on its actual results. GHG emissions on the three routes will be compared and discussed based on a constant emission per mile assumption and voyage distances. Our findings will provide important insights about 1) the trade-off evaluation between revenue reduction and energy saving with the new ton-mile pattern and 2) how the trade flow shifting would influence the future need for the vessel and fleet size.Keywords: AIS, automobile exports, maritime big data, trade flows
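A rough sketch of the OLS step described above: quarterly freight revenue on a route regressed on AIS-derived ton-miles, a trade-war tariff dummy, and fuel price, with the fitted coefficients then used for forecasting. All numbers are simulated placeholders, not WALWIL figures; route-level GHG would follow separately from a constant emission-per-mile factor multiplied by voyage distance.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(7)
quarters = 20                                   # roughly 2018-2022, quarterly
df = pd.DataFrame({
    "ton_miles":  rng.uniform(0.8, 1.5, quarters) * 1e9,        # from AIS tracks
    "tariff_war": (np.arange(quarters) >= 6).astype(int),        # post trade-war dummy
    "fuel_price": rng.uniform(350.0, 700.0, quarters),           # USD/tonne, illustrative
})
# Toy revenue in MUSD, generated only so the regression has a target to fit.
df["revenue"] = (40e-9 * df["ton_miles"] - 5.0 * df["tariff_war"]
                 + 0.01 * df["fuel_price"] + rng.normal(0.0, 2.0, quarters))

X = sm.add_constant(df[["ton_miles", "tariff_war", "fuel_price"]])
fit = sm.OLS(df["revenue"], X).fit()
print(fit.params)        # coefficients that would drive the forecast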
533 Enhanced Field Emission from Plasma Treated Graphene and 2D Layered Hybrids
Authors: R. Khare, R. V. Gelamo, M. A. More, D. J. Late, Chandra Sekhar Rout
Abstract:
Graphene emerges out as a promising material for various applications ranging from complementary integrated circuits to optically transparent electrode for displays and sensors. The excellent conductivity and atomic sharp edges of unique two-dimensional structure makes graphene a propitious field emitter. Graphene analogues of other 2D layered materials have emerged in material science and nanotechnology due to the enriched physics and novel enhanced properties they present. There are several advantages of using 2D nanomaterials in field emission based devices, including a thickness of only a few atomic layers, high aspect ratio (the ratio of lateral size to sheet thickness), excellent electrical properties, extraordinary mechanical strength and ease of synthesis. Furthermore, the presence of edges can enhance the tunneling probability for the electrons in layered nanomaterials similar to that seen in nanotubes. Here we report electron emission properties of multilayer graphene and effect of plasma (CO2, O2, Ar and N2) treatment. The plasma treated multilayer graphene shows an enhanced field emission behavior with a low turn on field of 0.18 V/μm and high emission current density of 1.89 mA/cm2 at an applied field of 0.35 V/μm. Further, we report the field emission studies of layered WS2/RGO and SnS2/RGO composites. The turn on field required to draw a field emission current density of 1μA/cm2 is found to be 3.5, 2.3 and 2 V/μm for WS2, RGO and the WS2/RGO composite respectively. The enhanced field emission behavior observed for the WS2/RGO nanocomposite is attributed to a high field enhancement factor of 2978, which is associated with the surface protrusions of the single-to-few layer thick sheets of the nanocomposite. The highest current density of ~800 µA/cm2 is drawn at an applied field of 4.1 V/μm from a few layers of the WS2/RGO nanocomposite. Furthermore, first-principles density functional calculations suggest that the enhanced field emission may also be due to an overlap of the electronic structures of WS2 and RGO, where graphene-like states are dumped in the region of the WS2 fundamental gap. Similarly, the turn on field required to draw an emission current density of 1µA/cm2 is significantly low (almost half the value) for the SnS2/RGO nanocomposite (2.65 V/µm) compared to pristine SnS2 (4.8 V/µm) nanosheets. The field enhancement factor β (~3200 for SnS2 and ~3700 for SnS2/RGO composite) was calculated from Fowler-Nordheim (FN) plots and indicates emission from the nanometric geometry of the emitter. The field emission current versus time plot shows overall good emission stability for the SnS2/RGO emitter. The DFT calculations reveal that the enhanced field emission properties of SnS2/RGO composites are because of a substantial lowering of work function of SnS2 when supported by graphene, which is in response to p-type doping of the graphene substrate. Graphene and 2D analogue materials emerge as a potential candidate for future field emission applications.Keywords: graphene, layered material, field emission, plasma, doping
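For readers unfamiliar with the Fowler-Nordheim (FN) analysis referenced above, the field enhancement factor β is normally extracted from the slope of the FN plot, ln(J/E²) versus 1/E, as β = -B·φ^(3/2)/slope. The sketch below uses simulated current densities and an assumed work function of 5 eV (a typical value for RGO-based emitters), not the measured data of the study.

import numpy as np

B_FN = 6.83e3                 # V * eV^(-3/2) / um, second Fowler-Nordheim constant
phi = 5.0                     # assumed emitter work function, eV
beta_true = 3200              # enhancement factor used to simulate the data

E = np.linspace(2.2, 4.1, 8)                                    # applied field, V/um
J = 1e-2 * E**2 * np.exp(-B_FN * phi**1.5 / (beta_true * E))    # A/cm^2, toy prefactor

# FN plot: ln(J/E^2) versus 1/E is a straight line with slope -B*phi^1.5/beta.
slope, intercept = np.polyfit(1.0 / E, np.log(J / E**2), 1)
beta_fit = -B_FN * phi**1.5 / slope
print(f"field enhancement factor beta ~ {beta_fit:.0f}")        # recovers ~3200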
532 Unsupervised Detection of Burned Area from Remote Sensing Images Using Spatial Correlation and Fuzzy Clustering
Authors: Tauqir A. Moughal, Fusheng Yu, Abeer Mazher
Abstract:
Land-cover and land-use change information are important because of their practical uses in various applications, including deforestation, damage assessment, disasters monitoring, urban expansion, planning, and land management. Therefore, developing change detection methods for remote sensing images is an important ongoing research agenda. However, detection of change through optical remote sensing images is not a trivial task due to many factors including the vagueness between the boundaries of changed and unchanged regions and spatial dependence of the pixels to its neighborhood. In this paper, we propose a binary change detection technique for bi-temporal optical remote sensing images. As in most of the optical remote sensing images, the transition between the two clusters (change and no change) is overlapping and the existing methods are incapable of providing the accurate cluster boundaries. In this regard, a methodology has been proposed which uses the fuzzy c-means clustering to tackle the problem of vagueness in the changed and unchanged class by formulating the soft boundaries between them. Furthermore, in order to exploit the neighborhood information of the pixels, the input patterns are generated corresponding to each pixel from bi-temporal images using 3×3, 5×5 and 7×7 window. The between images and within image spatial dependence of the pixels to its neighborhood is quantified by using Pearson product moment correlation and Moran’s I statistics, respectively. The proposed technique consists of two phases. At first, between images and within image spatial correlation is calculated to utilize the information that the pixels at different locations may not be independent. Second, fuzzy c-means technique is used to produce two clusters from input feature by not only taking care of vagueness between the changed and unchanged class but also by exploiting the spatial correlation of the pixels. To show the effectiveness of the proposed technique, experiments are conducted on multispectral and bi-temporal remote sensing images. A subset (2100×1212 pixels) of a pan-sharpened, bi-temporal Landsat 5 thematic mapper optical image of Los Angeles, California, is used in this study which shows a long period of the forest fire continued from July until October 2009. Early forest fire and later forest fire optical remote sensing images were acquired on July 5, 2009 and October 25, 2009, respectively. The proposed technique is used to detect the fire (which causes change on earth’s surface) and compared with the existing K-means clustering technique. Experimental results showed that proposed technique performs better than the already existing technique. The proposed technique can be easily extendable for optical hyperspectral images and is suitable for many practical applications.Keywords: burned area, change detection, correlation, fuzzy clustering, optical remote sensing
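The two-phase idea described above can be sketched in a few lines of Python: per-pixel window features from the two acquisitions plus a Pearson-correlation term for between-image dependence, followed by a two-cluster fuzzy c-means on those features. The within-image Moran's I term and the real Landsat subset are omitted; the images below are random placeholders.

import numpy as np

def window_features(img1, img2, w=3):
    """Stack the w x w neighbourhoods of both dates plus their Pearson correlation."""
    r = w // 2
    p1 = np.pad(img1, r, mode="reflect")
    p2 = np.pad(img2, r, mode="reflect")
    feats = []
    for i in range(img1.shape[0]):
        for j in range(img1.shape[1]):
            n1 = p1[i:i + w, j:j + w].ravel()
            n2 = p2[i:i + w, j:j + w].ravel()
            corr = np.corrcoef(n1, n2)[0, 1]          # between-image dependence
            feats.append(np.concatenate([n1, n2, [corr]]))
    return np.asarray(feats)

def fuzzy_cmeans(X, c=2, m=2.0, iters=100):
    """Plain fuzzy c-means returning the soft membership matrix (n_samples x c)."""
    rng = np.random.default_rng(0)
    U = rng.dirichlet(np.ones(c), size=len(X))
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)) * (1.0 / d ** (2 / (m - 1))).sum(axis=1, keepdims=True))
    return U

rng_img = np.random.default_rng(4)
t1 = rng_img.random((40, 40))       # stand-ins for the July and October acquisitions
t2 = rng_img.random((40, 40))
memberships = fuzzy_cmeans(window_features(t1, t2))
change_map = memberships.argmax(axis=1).reshape(t1.shape)
print("cluster sizes (change / no change):", np.bincount(change_map.ravel()))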
531 Curcumin Nanomedicine: A Breakthrough Approach for Enhanced Lung Cancer Therapy
Authors: Shiva Shakori Poshteh
Abstract:
Lung cancer is a highly prevalent and devastating disease, representing a significant global health concern with profound implications for healthcare systems and society. Its high incidence, mortality rates, and late-stage diagnosis contribute to its formidable nature. To address these challenges, nanoparticle-based drug delivery has emerged as a promising therapeutic strategy. Curcumin (CUR), a natural compound derived from turmeric, has garnered attention as a potential nanomedicine for lung cancer treatment. Nanoparticle formulations of CUR offer several advantages, including improved drug delivery efficiency, enhanced stability, controlled release kinetics, and targeted delivery to lung cancer cells. CUR exhibits a diverse array of effects on cancer cells. It induces apoptosis by upregulating pro-apoptotic proteins, such as Bax and Bak, and downregulating anti-apoptotic proteins, such as Bcl-2. Additionally, CUR inhibits cell proliferation by modulating key signaling pathways involved in cancer progression. It suppresses the PI3K/Akt pathway, crucial for cell survival and growth, and attenuates the mTOR pathway, which regulates protein synthesis and cell proliferation. CUR also interferes with the MAPK pathway, which controls cell proliferation and survival, and modulates the Wnt/β-catenin pathway, which plays a role in cell proliferation and tumor development. Moreover, CUR exhibits potent antioxidant activity, reducing oxidative stress and protecting cells from DNA damage. Utilizing CUR as a standalone treatment is limited by poor bioavailability, lack of targeting, and degradation susceptibility. Nanoparticle-based delivery systems can overcome these challenges. They enhance CUR’s bioavailability, protect it from degradation, and improve absorption. Further, Nanoparticles enable targeted delivery to lung cancer cells through surface modifications or ligand-based targeting, ensuring sustained release of CUR to prolong therapeutic effects, reduce administration frequency, and facilitate penetration through the tumor microenvironment, thereby enhancing CUR’s access to cancer cells. Thus, nanoparticle-based CUR delivery systems promise to improve lung cancer treatment outcomes. This article provides an overview of lung cancer, explores CUR nanoparticles as a treatment approach, discusses the benefits and challenges of nanoparticle-based drug delivery, and highlights prospects for CUR nanoparticles in lung cancer treatment. Future research aims to optimize these delivery systems for improved efficacy and patient prognosis in lung cancer.Keywords: lung cancer, curcumin, nanomedicine, nanoparticle-based drug delivery
530 Efficient Computer-Aided Design-Based Multilevel Optimization of the LS89
Authors: A. Chatel, I. S. Torreguitart, T. Verstraete
Abstract:
The paper deals with a single point optimization of the LS89 turbine using an adjoint optimization and defining the design variables within a CAD system. The advantage of including the CAD model in the design system is that higher level constraints can be imposed on the shape, allowing the optimized model or component to be manufactured. However, CAD-based approaches restrict the design space compared to node-based approaches where every node is free to move. In order to preserve a rich design space, we develop a methodology to refine the CAD model during the optimization and to create the best parameterization to use at each stage. This study presents a methodology to progressively refine the design space, which combines parametric effectiveness with a differential evolutionary algorithm in order to create an optimal parameterization. In this manuscript, we show that by doing the parameterization at the CAD level, we can impose higher level constraints on the shape, such as the axial chord length, the trailing edge radius and G2 geometric continuity between the suction side and pressure side at the leading edge. Additionally, the adjoint sensitivities are filtered, and only smooth shapes are produced during the optimization process. The use of algorithmic differentiation for the CAD kernel and grid generator allows computing the grid sensitivities to machine accuracy and avoids the limited arithmetic precision and the truncation error of finite differences. Then, the parametric effectiveness is computed to rate the ability of a set of CAD design parameters to produce the design shape change dictated by the adjoint sensitivities. During the optimization process, the design space is progressively enlarged using the knot insertion algorithm, which allows introducing new control points whilst preserving the initial shape. The position of the inserted knots is generally assumed. However, this assumption can hinder the creation of better parameterizations that would allow producing more localized shape changes where the adjoint sensitivities dictate. To address this, we propose using a differential evolutionary algorithm to maximize the parametric effectiveness by optimizing the location of the inserted knots. This allows the optimizer to gradually explore larger design spaces and to use an optimal CAD-based parameterization during the course of the optimization. The method is tested on the LS89 turbine cascade, and large aerodynamic improvements in the entropy generation are achieved whilst keeping the exit flow angle fixed. The trailing edge radius and axial chord length are kept fixed as manufacturing constraints. The optimization results show that the multilevel optimizations were more efficient than the single level optimization, even though they used the same number of design variables at the end of the multilevel optimizations. Furthermore, the multilevel optimization where the parameterization is created using the optimal knot positions results in a more efficient strategy to reach a better optimum than the multilevel optimization where the position of the knots is arbitrarily assumed.
Keywords: adjoint, CAD, knots, multilevel, optimization, parametric effectiveness
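The knot-insertion step referred to above has the useful property that it adds a control point (a new design variable) without changing the curve. A minimal sketch using SciPy's B-spline routines on a generic 2D profile (not the LS89 geometry) is given below; the knot location 0.37 is arbitrary.

import numpy as np
from scipy import interpolate

theta = np.linspace(0.0, np.pi, 20)
pts = np.column_stack([np.cos(theta), 0.3 * np.sin(theta)])   # generic blade-like arc
tck, u = interpolate.splprep(pts.T, s=0, k=3)                 # cubic B-spline through the points

tck_refined = interpolate.insert(0.37, tck, m=1)              # one new knot, one new control point

uu = np.linspace(0.0, 1.0, 200)
before = np.array(interpolate.splev(uu, tck))
after = np.array(interpolate.splev(uu, tck_refined))
print("control points per coordinate:", len(tck[1][0]), "->", len(tck_refined[1][0]))
print("max geometric change after insertion:", np.abs(before - after).max())   # ~1e-16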
529 Plasma Chemical Gasification of Solid Fuel with Mineral Mass Processing
Authors: V. E. Messerle, O. A. Lavrichshev, A. B. Ustimenko
Abstract:
Currently and in the foreseeable future (up to 2100), the global economy is oriented to the use of organic fuel, mostly, solid fuels, the share of which constitutes 40% in the generation of electric power. Therefore, the development of technologies for their effective and environmentally friendly application represents a priority problem nowadays. This work presents the results of thermodynamic and experimental investigations of plasma technology for processing of low-grade coals. The use of this technology for producing target products (synthesis gas, hydrogen, technical carbon, and valuable components of mineral mass of coals) meets the modern environmental and economic requirements applied to basic industrial sectors. The plasma technology of coal processing for the production of synthesis gas from the coal organic mass (COM) and valuable components from coal mineral mass (CMM) is highly promising. Its essence is heating the coal dust by reducing electric arc plasma to the complete gasification temperature, when the COM converts into synthesis gas, free from particles of ash, nitrogen oxides and sulfur. At the same time, oxides of the CMM are reduced by the carbon residue, producing valuable components, such as technical silicon, ferrosilicon, aluminum and carbon silicon, as well as microelements of rare metals, such as uranium, molybdenum, vanadium, titanium. Thermodynamic analysis of the process was made using a versatile computation program TERRA. Calculations were carried out in the temperature range 300 - 4000 K and a pressure of 0.1 MPa. Bituminous coal with the ash content of 40% and the heating value 16,632 kJ/kg was taken for the investigation. The gaseous phase of coal processing products includes, basically, a synthesis gas with a concentration of up to 99 vol.% at 1500 K. CMM components completely converts from the condensed phase into the gaseous phase at a temperature above 2600 K. At temperatures above 3000 K, the gaseous phase includes, basically, Si, Al, Ca, Fe, Na, and compounds of SiO, SiH, AlH, and SiS. The latter compounds dissociate into relevant elements with increasing temperature. Complex coal conversion for the production of synthesis gas from COM and valuable components from CMM was investigated using a versatile experimental plant the main element of which was plug and flow plasma reactor. The material and thermal balances helped to find the integral indicators for the process. Plasma-steam gasification of the low-grade coal with CMM processing gave the synthesis gas yield 95.2%, the carbon gasification 92.3%, and coal desulfurization 95.2%. The reduced material of the CMM was found in the slag in the form of ferrosilicon as well as silicon and iron carbides. The maximum reduction of the CMM oxides was observed in the slag from the walls of the plasma reactor in the areas with maximum temperatures, reaching 47%. The thusly produced synthesis gas can be used for synthesis of methanol, or as a high-calorific reducing gas instead of blast-furnace coke as well as power gas for thermal power plants. Reduced material of CMM can be used in metallurgy.Keywords: gasification, mineral mass, organic mass, plasma, processing, solid fuel, synthesis gas, valuable components
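As a back-of-the-envelope illustration of how the integral indicators quoted above follow from the material balance, the degree of carbon gasification is the ratio of carbon leaving in the product gas to carbon fed with the coal, and the syngas concentration is the CO+H2 share of the product gas. All stream values in the sketch below are hypothetical, not the measured ones.

M_C = 12.0                               # molar mass of carbon, g/mol

coal_feed_kg_h  = 100.0                  # hypothetical feed rate
carbon_fraction = 0.45                   # carbon in the coal organic mass, assumed
gas_out_mol_h   = {"CO": 3.1e3, "H2": 3.3e3, "CO2": 0.1e3, "CH4": 0.05e3}

carbon_in_feed = coal_feed_kg_h * carbon_fraction * 1e3 / M_C            # mol C/h
carbon_in_gas = gas_out_mol_h["CO"] + gas_out_mol_h["CO2"] + gas_out_mol_h["CH4"]
X_c = carbon_in_gas / carbon_in_feed                                     # gasification degree

syngas_frac = (gas_out_mol_h["CO"] + gas_out_mol_h["H2"]) / sum(gas_out_mol_h.values())
print(f"carbon gasification: {X_c:.1%}, syngas (CO+H2) in product gas: {syngas_frac:.1%}")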
528 Facies Sedimentology and Astronomic Calibration of the Reinech Member (Lutetian)
Authors: Jihede Haj Messaoud, Hamdi Omar, Hela Fakhfakh Ben Jemia, Chokri Yaich
Abstract:
The Upper Lutetian alternating marl–limestone succession of the Reineche Member was deposited over a warm shallow carbonate platform that permitted Nummulites proliferation. High-resolution studies of the 30-meter-thick Nummulites-bearing Reineche Member, cropping out in Central Tunisia (Jebel Siouf), have been undertaken on its pronounced cyclical sedimentary sequences in order to investigate the periodicity of the cycles and the related orbital-scale oceanic and climatic changes. The palaeoenvironmental and palaeoclimatic data are preserved in several proxies obtainable through high-resolution sampling and laboratory measurement and analysis, such as magnetic susceptibility (MS) and carbonate content, in conjunction with wireline logging tools. Time series analysis of the proxies permits establishing the cyclicity orders present in the studied intervals, which can be linked to orbital cycles. MS records provide high-resolution proxies for relative sea level change in Late Lutetian strata. The spectral analysis of MS fluctuations confirmed the orbital forcing by the presence of the complete suite of orbital frequencies: the precession of 23 ka, the obliquity of 41 ka, and notably the two modes of eccentricity of 100 and 405 ka. Given the two periodic sedimentary cycles detected by wavelet analysis of the proxy fluctuations, which coincide with the long-term 405 ka eccentricity cycle, the Reineche Member spanned about 0.8 Myr. Wireline logging tools such as gamma ray and sonic were used as proxies to decipher cyclicity and trends in sedimentation and to help identify and correlate units. They are used to constrain the highest-frequency cyclicity, which is modulated by a long-wavelength cycling apparently controlled by clay content. Interpreted as a result of variations in carbonate productivity, the marl–limestone couplets are suggested to represent the sedimentary response to the orbital forcing. The calculation of cycle durations through the Reineche Member serves as a geochronometer and permits the astronomical calibration of the geologic time scale. Furthermore, MS, coupled with carbonate content and fossil occurrences, provides strong evidence for combined detrital input and marine surface carbonate productivity cycles. These two synchronous processes were driven by the precession index and ‘fingerprinted’ in the basic marl–limestone couplets, modulated by orbital eccentricity.
Keywords: magnetic susceptibility, cyclostratigraphy, orbital forcing, spectral analysis, Lutetian
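The spectral step described above amounts to a periodogram of the magnetic-susceptibility depth series: dominant wavelengths are read off the spectrum, assigned to orbital periods, and the count of 405-kyr eccentricity cycles then gives the duration (two long-eccentricity cycles over the member is consistent with the quoted ~0.8 Myr). The series below is a synthetic mix of two sine waves, not the Jebel Siouf data.

import numpy as np
from scipy.signal import periodogram

dz = 0.05                                    # sampling interval, m
depth = np.arange(0.0, 30.0, dz)             # 30 m of section
ms = (1.0 * np.sin(2 * np.pi * depth / 15.0)      # long cycle, ~15 m wavelength
      + 0.4 * np.sin(2 * np.pi * depth / 3.7)     # shorter cycle
      + 0.1 * np.random.default_rng(3).normal(size=depth.size))

freq, power = periodogram(ms, fs=1.0 / dz)        # spectrum in cycles per metre
dominant_wavelengths = 1.0 / freq[np.argsort(power)[-3:]][::-1]
print("dominant wavelengths (m):", np.round(dominant_wavelengths, 2))

n_405_cycles = 30.0 / 15.0                        # two long-eccentricity cycles in 30 m
print("implied duration:", n_405_cycles * 405.0, "kyr")   # ~810 kyr, i.e. ~0.8 Myr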
527 Effects of Temperature and the Use of Bacteriocins on Cross-Contamination from Animal Source Food Processing: A Mathematical Model
Authors: Benjamin Castillo, Luis Pastenes, Fernando Cerdova
Abstract:
The contamination of food by microbial agents is a common problem in the industry, especially regarding the elaboration of animal source products. Incorrect handling of the machinery or of the raw materials can cause a decrease in production or an epidemiological outbreak due to intoxication. In order to improve food product quality, different methods have been used to reduce or, at least, to slow down the growth of pathogens, especially spoilage, infectious, or toxigenic bacteria. These methods are usually carried out under low temperatures and short processing times (abiotic agents), along with the application of antibacterial substances, such as bacteriocins (biotic agents), in a controlled and efficient way that fulfills the purpose of bacterial control without damaging the final product. Therefore, the objective of the present study is to design a secondary mathematical model that allows the prediction of both the biotic and abiotic factor impact associated with animal source food processing. In order to accomplish this objective, the authors propose a three-dimensional differential equation model, whose components are: bacterial growth, release, production and artificial incorporation of bacteriocins, and changes in pH levels of the medium. These three dimensions are constantly being influenced by the temperature of the medium. Secondly, this model is adapted to an idealized situation of cross-contamination in animal source food processing, with the study agents being both the animal product and the contact surface. Thirdly, the stochastic simulations and the parametric sensitivity analysis are compared with reference data. The main results obtained from the analysis and simulations of the mathematical model show that, although bacterial growth can be stopped at low temperatures, even lower ones are needed to eradicate it. However, this can be not only expensive but also counterproductive in terms of the quality of the raw materials, while, on the other hand, higher temperatures accelerate bacterial growth. In other respects, the use of bacteriocins is an effective alternative in the short and medium term. Moreover, a low pH level is an indicator of bacterial growth, since many spoilage bacteria are lactic acid producers. Lastly, the processing times are a secondary agent of concern when the rest of the aforementioned agents are under control. Our main conclusion is that adapting a mathematical model to the context of the industrial process can generate new tools that predict bacterial contamination, the impact of bacterial inhibition, and processing times. In addition, the proposed mathematical modeling provides a logistic input of broad application, which can be replicated for non-meat food products, other pathogens, or even contamination by cross-contact with allergenic foods.
Keywords: bacteriocins, cross-contamination, mathematical model, temperature
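Since the abstract does not give the equations, the sketch below is only one plausible illustrative reading of a three-dimensional model of this kind: logistic bacterial growth whose rate depends on temperature, a bacteriocin that is both produced and dosed and that inhibits the bacteria, and a pH that falls as acid-producing bacteria grow. All parameter values are arbitrary and do not come from the study.

import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, T):
    N, B, pH = y                                # bacteria (CFU/g), bacteriocin (AU/g), pH
    mu = 0.5 * np.exp(0.1 * (T - 4.0))          # growth rate increases with temperature T (degC)
    dN = mu * N * (1.0 - N / 1e8) - 0.005 * B * N     # logistic growth minus bacteriocin kill
    dB = 1e-8 * N + 0.05 - 0.01 * B                   # production + constant dosing - decay
    dpH = -1e-9 * mu * N * (pH - 4.0)                 # acidification as the population grows
    return [dN, dB, dpH]

sol = solve_ivp(rhs, (0.0, 72.0), [1e3, 0.0, 6.5], args=(10.0,))   # 72 h at 10 degC
print(f"bacteria after 72 h: {sol.y[0, -1]:.2e} CFU/g, final pH: {sol.y[2, -1]:.2f}")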
Procedia PDF Downloads 144
526 Numerical Analysis and Parametric Study of Granular Anchor Pile on Expansive Soil Using Finite Element Method: Case of Addis Ababa, Bole Sub-City
Authors: Abdurahman Anwar Shfa
Abstract:
Addis Ababa is among the fastest-growing urban areas in the country. There are many new constructions of public and private condominiums and large numbers of new low-rise residential buildings. However, the widespread heave problems of expansive soil in the city have become a major difficulty for the construction sector, especially for low-rise buildings, causing problems such as distortion and cracking of floor slabs, cracks in grade beams and walls, jammed or misaligned doors and windows, and failure of blocks supporting grade beams. Hence an attractive and economical design solution is required for this type of problem. This research therefore presents a recent innovation, the granular anchor pile system, for reducing the heave effect of expansive soil. The objective is a numerical investigation of the behavior of the granular anchor pile under heave using finite element analysis with the PLAXIS 3D program, studying the effect of different parameters such as pile length, pile diameter, and pile grouping, with a prescribed displacement of 10% of the pile diameter applied at the center of the granular anchor pile. An additional objective is to examine the suitability of the granular anchor pile as an alternative solution for heave problems in expansive soils, mostly for low-rise buildings in Addis Ababa City and especially in Bole Sub-City, by considering factors such as the local availability of construction materials, construction economy, installation conditions, environmental benefit, time consumption, and pile performance. The results show that the performance of the pile improves as its length increases, owing to the increase in the self-weight of the pile and in the friction mobilized at the pile-soil interface. The uplift capacity of the pile group decreases when the pile diameter and the spacing between piles are increased at fixed group dimensions, because the number of piles in the group is reduced. However, a few cases show that the uplift capacity increases with increasing pile diameter for a constant number of piles in the group, with increasing spacing between piles, and for single piles; this is due, respectively, to the increased self-weight and surface area of the pile group and to the reduced stress overlap in the soil between piles. According to the suitability analysis, the granular anchor pile is practical to apply to the actual expansive-soil problem of low-rise buildings constructed in the country because of its convenience under all the considerations above. Keywords: expansive soil, granular anchor pile, PLAXIS, suitability analysis
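For orientation only, the back-of-envelope Python sketch below estimates the uplift resistance of a single granular anchor pile as self-weight plus friction mobilized along the pile-soil interface, which is consistent with the trend reported (longer piles perform better); it is not the PLAXIS 3D model, and all soil parameters and geometries are assumed.

```python
# Back-of-envelope sketch (not the PLAXIS 3D analysis): uplift resistance of a
# single granular anchor pile approximated as pile self-weight plus friction
# mobilised along the pile-soil interface. All parameters are assumed.
import math

def uplift_capacity(L, D, gamma_fill=20.0, gamma_soil=17.0, K=1.0, delta_deg=30.0):
    """L, D in metres; unit weights in kN/m^3; returns capacity in kN."""
    shaft_area = math.pi * D * L
    weight = gamma_fill * math.pi * D**2 / 4 * L
    # average horizontal stress along the shaft ~ K * gamma_soil * L / 2
    friction = K * gamma_soil * L / 2 * math.tan(math.radians(delta_deg)) * shaft_area
    return weight + friction

for L in (2.0, 3.0, 4.0):            # longer piles: more self-weight and friction
    for D in (0.3, 0.4):
        print(f"L={L} m, D={D} m -> ~{uplift_capacity(L, D):.1f} kN")
```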
Procedia PDF Downloads 35
525 Selective Extraction of Lithium from Native Geothermal Brines Using Lithium-ion Sieves
Authors: Misagh Ghobadi, Rich Crane, Karen Hudson-Edwards, Clemens Vinzenz Ullmann
Abstract:
Lithium is recognized as the critical energy metal of the 21st century, comparable in importance to coal in the 19th century and oil in the 20th century, and is often termed 'white gold'. Current global demand for lithium, estimated at 0.95-0.98 million metric tons (Mt) of lithium carbonate equivalent (LCE) annually in 2024, is projected to rise to 1.87 Mt by 2027 and 3.06 Mt by 2030. Despite anticipated short-term stability in supply and demand, meeting the forecasted 2030 demand will require the lithium industry to develop an additional capacity of 1.42 Mt of LCE annually, exceeding current planned and ongoing efforts. Brine resources constitute nearly 65% of global lithium reserves, underscoring the importance of exploring lithium recovery from underutilized sources, especially geothermal brines. However, conventional lithium extraction from brine deposits faces challenges due to its time-intensive process, low efficiency (30-50% lithium recovery), unsuitability for low lithium concentrations (<300 mg/l), and notable environmental impacts. Addressing these challenges, direct lithium extraction (DLE) methods have emerged as promising technologies capable of economically extracting lithium even from low-concentration brines (>50 mg/l) with high recovery rates (75-98%). However, most studies (70%) have focused on synthetic rather than native (natural) brines, with limited application of these approaches in real-world case studies or industrial settings. This study aims to bridge this gap by investigating a geothermal brine sample collected from a real case-study site in the UK. A Mn-based lithium-ion sieve (LIS) adsorbent was synthesized and employed to selectively extract lithium from the sample brine. Adsorbents with a Li:Mn molar ratio of 1:1 demonstrated superior lithium selectivity and adsorption capacity. Furthermore, the pristine Mn-based adsorbent was modified through transition-metal doping, resulting in enhanced lithium selectivity and adsorption capacity. The modified adsorbent exhibited a higher separation factor for lithium over major co-existing cations such as Ca, Mg, Na, and K, with separation factors exceeding 200. The adsorption behaviour was well described by the Langmuir model, indicating monolayer adsorption, and the kinetics followed a pseudo-second-order mechanism, suggesting chemisorption at the solid surface. Thermodynamically, negative ΔG° values and positive ΔH° and ΔS° values were observed, indicating the spontaneity and endothermic nature of the adsorption process. Keywords: adsorption, critical minerals, DLE, geothermal brines, geochemistry, lithium, lithium-ion sieves
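The following Python sketch shows how the two models named in the abstract are typically fitted; the Langmuir isotherm and pseudo-second-order forms are standard, but the data points and starting guesses here are hypothetical, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qmax, KL):
    """Equilibrium uptake qe (mg/g) vs equilibrium concentration Ce (mg/L)."""
    return qmax * KL * Ce / (1 + KL * Ce)

def pso(t, qe, k2):
    """Pseudo-second-order kinetics: q(t) = k2*qe^2*t / (1 + k2*qe*t)."""
    return k2 * qe**2 * t / (1 + k2 * qe * t)

# Hypothetical isotherm points (Ce in mg/L, qe in mg/g)
Ce = np.array([5, 15, 30, 60, 120, 250])
qe = np.array([6.1, 13.4, 19.8, 25.2, 29.0, 31.5])
(qmax, KL), _ = curve_fit(langmuir, Ce, qe, p0=(35, 0.02))

# Hypothetical kinetic points (t in h, uptake in mg/g)
t = np.array([0.5, 1, 2, 4, 8, 24])
qt = np.array([8.0, 13.5, 20.1, 25.6, 29.3, 31.8])
(qe_fit, k2), _ = curve_fit(pso, t, qt, p0=(32, 0.01))

print(f"Langmuir: qmax ~ {qmax:.1f} mg/g, KL ~ {KL:.3f} L/mg")
print(f"PSO: qe ~ {qe_fit:.1f} mg/g, k2 ~ {k2:.4f} g/(mg*h)")
```

A good Langmuir fit (monolayer adsorption) together with pseudo-second-order kinetics is the usual evidence cited for chemisorption on lithium-ion sieves.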
Procedia PDF Downloads 46
524 Evaluation of Iron Application Method to Remediate Coastal Marine Sediment
Authors: Ahmad Seiar Yasser
Abstract:
Sediment is an important habitat for organisms and acts as a storehouse for nutrients in aquatic ecosystems. Hydrogen sulfide is produced by microorganisms in the water column and sediments; it is highly toxic and fatal to benthic organisms. Iron, however, has the capacity to regulate sulfide formation by poising the redox sequence and by forming insoluble iron sulfide and pyrite compounds. We therefore conducted two experiments aimed at evaluating the remediation efficiency of iron application to organically enriched sediments and at improving the sediment environment. The experiments were carried out in the laboratory using intact sediment cores taken from Mikawa Bay, Japan, every month from June to September 2017 and in October 2018. In Experiment 1, after the cores were collected, iron powder or iron hydroxide was applied to the surface sediment at 5 g/m2 or 5.6 g/m2, respectively. In Experiment 2, we experimentally investigated the removal of hydrogen sulfide using steelmaking slag of two grain sizes (2 mm or less, and 2 to 5 mm). Both experiments were conducted in the laboratory under the same boundary conditions. The overlying water was replaced with deoxygenated filtered seawater, and the cores were sealed with a top cap to maintain anoxic conditions, with a stirrer gently circulating the overlying water. The incubation experiments comprised three treatments, including the control; each treatment was replicated and conducted at the same temperature as the in-situ conditions. Water samples were collected at appropriate time intervals to measure the dissolved sulfide concentration in the overlying water by the methylene blue method, and sediment quality was analyzed after completion of the experiment. After the 21-day incubation, the experiments using iron powder and ferric hydroxide revealed that the application of these iron-containing materials significantly reduced the sulfide release flux from the sediment into the overlying water. The average dissolved sulfide concentration in the overlying water of the treatment groups was significantly decreased (p = .0001), whereas no significant change was observed in the control group after the 21-day incubation. The application of iron to the sediment is therefore a promising method to remediate contaminated sediments in a eutrophic water body, although ferric hydroxide has the better hydrogen sulfide removal effect. The experiments using steelmaking slag also clarified that capping with slag of either grain size (2 mm or less, and 2 to 5 mm) is an effective technique for remediating organically enriched bottom sediments containing hydrogen sulfide, because it induces a chemical reaction between Fe and the sulfides in the sediments that does not occur under natural conditions; the finer slag (2 mm or less) has the better hydrogen sulfide removal effect. For economic reasons as well, the application of steelmaking slag to the sediment is a promising method to remediate contaminated sediments in a eutrophic water body. Keywords: sedimentary, H2S, iron, iron hydroxide
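A minimal sketch of the flux calculation implied by the abstract (change in dissolved sulfide mass in the overlying water per unit sediment area per day) is given below; the concentrations, core volume, and area are assumed values for illustration.

```python
def sulfide_flux(c_start_umol_l, c_end_umol_l, volume_l, area_m2, days):
    """Dissolved sulfide release flux in umol S m^-2 d^-1 (positive = release)."""
    delta_mass = (c_end_umol_l - c_start_umol_l) * volume_l   # umol accumulated
    return delta_mass / (area_m2 * days)

# e.g. control core vs iron-amended core over a 21-day incubation (values assumed)
print(sulfide_flux(2.0, 180.0, 1.5, 0.008, 21))   # untreated: large positive flux
print(sulfide_flux(2.0, 15.0, 1.5, 0.008, 21))    # iron-treated: strongly reduced
```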
Procedia PDF Downloads 163
523 The Interventricular Septum as a Site for Implantation of Electrocardiac Devices - Clinical Implications of Topography and Variation in Position
Authors: Marcin Jakiel, Maria Kurek, Karolina Gutkowska, Sylwia Sanakiewicz, Dominika Stolarczyk, Jakub Batko, Rafał Jakiel, Mateusz K. Hołda
Abstract:
Proper imaging of the interventricular septum during endocavitary lead implantation is essential for a successful procedure. The interventricular septum lies oblique to the three main body planes and forms angles of 44.56° ± 7.81°, 45.44° ± 7.81°, and 62.49° (IQR 58.84°-68.39°) with the sagittal, frontal, and transverse planes, respectively. The optimal left anterior oblique (LAO) projection, in which the septum is aligned along the radiation beam, is obtained at an angle of 53.24° ± 9.08°, while the best visualization of the septal surface in the right anterior oblique (RAO) projection is obtained at an angle of 45.44° ± 7.81°. In addition, the RAO angle (p=0.003) and the septal slope to the transverse plane (p=0.002) are larger in the male group, whereas the LAO angle (p=0.003) and the dihedral angle that the septum forms with the sagittal plane (p=0.003) are smaller compared with the female group. Analyzing the optimal RAO angle in cross-sections at the level of the anterior and posterior junctions of the septum with the free wall of the right ventricle, we obtain slightly smaller angle values, i.e., 41.11° ± 8.51° and 43.94° ± 7.22°, respectively. As the septum is directed leftward in the apical region, the optimal RAO angle for this area decreases (16.49° ± 7.07°) and does not show significant differences between the male and female groups (p=0.23). Within the right ventricular apex there is a cavity formed by the apical segment of the interventricular septum and the free wall of the right ventricle, with a depth of 12.35 mm (IQR 11.07-13.51 mm). The length of the septum measured in the longitudinal four-chamber section is 73.03 mm ± 8.06 mm, while the septal wall of the left ventricle in the apical region already lies outside the right ventricle over a length of 10.06 mm (IQR 8.86-11.07 mm). Both of these lengths are significantly larger in the male group (p<0.001). For proper imaging of the septum from the right ventricular side, an oblique position of the visualization devices is necessary. Correct determination of the RAO and LAO angles during the procedure improves the intervention, and appropriate modification of the field of view when moving toward the anterior, posterior, and apical parts of the septum will help avoid complications. Overlooking the change in direction of the interventricular septum in the apical region and the marked decrease in the RAO angle can result in implantation of the lead into the free wall of the right ventricle, with less effective pacing and even complications such as wall perforation and cardiac tamponade. The demonstrated sex differences can also be helpful in setting the correct projections. A necessary addition to this analysis will be a description of the area of the ventricular septum, on which we are currently working using autopsy material. Keywords: anatomical variability, angle, electrocardiological procedure, interventricular septum
Procedia PDF Downloads 99
522 Computational Team Dynamics and Interaction Patterns in New Product Development Teams
Authors: Shankaran Sitarama
Abstract:
New Product Development (NPD) is invariably a team effort and involves effective teamwork. An NPD team has members from different disciplines coming together and working through the different phases, all the way from the conceptual design phase to production and product roll-out. Creativity and innovation are among the key factors of successful NPD. Team members going through the different phases of NPD interact and work closely, yet challenge each other during the design phases to brainstorm ideas and later converge to work together. These two traits require the teams to balance divergent and convergent thinking simultaneously. The team dynamics invariably result in conflicts among team members. While some amount of conflict (ideational conflict) is desirable in NPD teams for group creativity, relational conflicts (discord among members) can be detrimental to teamwork. Team communication clearly reflects these tensions and team dynamics. In this research, team communication (emails) between the members of NPD teams is considered for analysis. The email communication is processed through a semantic analysis algorithm (latent semantic analysis, LSA) to analyze the content of communication, and a semantic similarity analysis yields a social network graph that depicts the communication among team members based on the content of that communication. The amount of communication (content, not frequency of communication) defines the interaction strength between members, and a social network adjacency matrix is thus obtained for the team. Standard social network analysis techniques based on the adjacency matrix (AM) and the dichotomized adjacency matrix (DAM), obtained by thresholding at the network density, yield network graphs and network metrics such as centrality. The social network graphs are then rendered for visual representation using a metric multidimensional scaling (MMDS) algorithm for node placement, and arcs connecting the nodes (representing team members) are drawn. The distance between nodes in the placement represents the tie strength between members: stronger tie strengths render nodes closer. The overall visual representation of the social network graph provides a clear picture of the team's interactions. This research reveals four distinct patterns of team interaction that are clearly identifiable in the visual representation of the social network graph and have clearly defined computational schemes: the Central Member Pattern (CMP), the Subgroup and Aloof Member Pattern (SAP), the Isolate Member Pattern (IMP), and the Pendant Member Pattern (PMP). Each of these patterns has a team-dynamics implication in terms of the conflict level in the team. For instance, the isolate member pattern clearly points to a near breakdown in communication with that member, and hence a possibly high conflict level, whereas the subgroup or aloof member pattern points to non-uniform information flow in the team and a moderate level of conflict. These pattern classifications of teams are then compared and correlated with the actual level of conflict in the teams, as indicated by the team members through an elaborate self-evaluation, team reflection, and feedback form, and the results show a good correlation. Keywords: team dynamics, team communication, team interactions, social network analysis, SNA, new product development, latent semantic analysis, LSA, NPD teams
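A hedged sketch of this kind of pipeline, using standard scikit-learn and networkx calls (LSA via truncated SVD, cosine similarity as tie strength, dichotomization at a density-style threshold, centrality, and an MDS layout), is given below; the email corpus and threshold choice are assumptions, not the authors' data or exact procedure.

```python
import numpy as np
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.manifold import MDS

# Hypothetical per-member email text (in practice, concatenated email bodies)
emails = {
    "alice": "design review of the new housing concept and interface tolerances",
    "bob":   "tooling plan and production ramp schedule for the housing",
    "carol": "brainstorm of alternative concepts for the housing interface",
    "dave":  "status update on budget approvals",
}
members = list(emails)

X = TfidfVectorizer().fit_transform(emails.values())
Z = TruncatedSVD(n_components=2, random_state=0).fit_transform(X)   # LSA space
A = cosine_similarity(Z)                        # adjacency: content-based tie strength
np.fill_diagonal(A, 0.0)

cut = A[A > 0].mean()                           # simple density-style threshold
DAM = (A >= cut).astype(int)                    # dichotomized adjacency matrix

G = nx.relabel_nodes(nx.from_numpy_array(DAM), dict(enumerate(members)))
print(nx.degree_centrality(G))                  # central vs isolate / pendant members

D = 1.0 - A                                     # dissimilarity: closer = stronger tie
np.fill_diagonal(D, 0.0)
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(D)   # node placement for the graph drawing
```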
Procedia PDF Downloads 70
521 A 500 MWₑ Coal-Fired Power Plant Operated under Partial Oxy-Combustion: Methodology and Economic Evaluation
Authors: Fernando Vega, Esmeralda Portillo, Sara Camino, Benito Navarrete, Elena Montavez
Abstract:
The European Union aims to strongly reduce its CO₂ emissions from the energy and industrial sectors by 2030. The energy sector contributes more than two-thirds of the CO₂ emission share derived from anthropogenic activities. Although efforts in the energy production sector are mainly focused on the use of renewables, carbon capture and storage (CCS) remains a frontline option to reduce CO₂ emissions from industrial processes, particularly from fossil-fuel power plants and cement production. Among the most feasible and near-to-market CCS technologies, namely post-combustion and oxy-combustion, partial oxy-combustion is a novel concept that can potentially reduce the overall energy requirements of the CO₂ capture process. This technology consists of using a higher oxygen content in the oxidizer, which increases the CO₂ concentration of the flue gas once the fuel is burnt. The CO₂ is then separated from the flue gas downstream by means of a conventional CO₂ chemical absorption process. The production of a more CO₂-concentrated flue gas should enhance CO₂ absorption into the solvent, leading to further reductions in solvent flow-rate, equipment size, and the energy penalty related to solvent regeneration. This work evaluates a portfolio of CCS technologies applied to fossil-fuel power plants. For this purpose, an economic evaluation methodology was developed in detail to determine the main economic parameters of CO₂ emission removal, such as the levelized cost of electricity (LCOE) and the CO₂ captured and avoided costs. ASPEN Plus™ software was used to simulate the main units of the power plant and to solve the energy and mass balances. Capital and investment costs were determined from the purchased cost of equipment, together with engineering costs and project and process contingencies; the annual capital cost and the operating and maintenance costs were then obtained. A complete energy balance was performed to determine the net power produced in each case. The baseline case is a supercritical 500 MWe coal-fired power plant using anthracite as fuel, without any CO₂ capture system. Four cases were proposed: conventional post-combustion capture, oxy-combustion, and partial oxy-combustion using two levels of oxygen-enriched air (40%v/v and 75%v/v). A CO₂ chemical absorption process using monoethanolamine (MEA) was used as the CO₂ separation process, whereas the O₂ requirement was met using a conventional air separation unit (ASU) based on Linde's cryogenic process. Results showed a 15% reduction in the total investment cost of the CO₂ separation process when partial oxy-combustion was used. Oxygen-enriched air production also almost halved the investment costs required for the ASU in comparison with the oxy-combustion cases. Partial oxy-combustion has a significant impact on the performance of both the CO₂ separation and O₂ production technologies, and it can lead to further energy reductions as new developments in both CO₂ and O₂ separation processes become available. Keywords: carbon capture, cost methodology, economic evaluation, partial oxy-combustion
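The two headline metrics can be written down compactly; the Python sketch below uses the standard LCOE and cost-of-CO₂-avoided formulas with wholly hypothetical plant figures, not the paper's ASPEN Plus results.

```python
def lcoe(capex, fcr, fixed_om, var_om, fuel, net_mwh):
    """LCOE in $/MWh: annualised capital (capex * fixed charge rate) plus O&M
    and fuel costs, divided by annual net generation."""
    return (capex * fcr + fixed_om + var_om + fuel) / net_mwh

def co2_avoided_cost(lcoe_cap, lcoe_ref, e_cap, e_ref):
    """$/tCO2 avoided; e_* are specific emissions in tCO2 per net MWh."""
    return (lcoe_cap - lcoe_ref) / (e_ref - e_cap)

# Hypothetical figures for a 500 MWe plant (500 MW x ~7500 full-load hours)
ref = lcoe(capex=1.2e9, fcr=0.09, fixed_om=3.0e7, var_om=1.5e7, fuel=9.0e7,
           net_mwh=500 * 7500)
cap = lcoe(capex=2.0e9, fcr=0.09, fixed_om=4.5e7, var_om=2.5e7, fuel=9.0e7,
           net_mwh=400 * 7500)      # energy penalty reduces net output

print(f"LCOE reference ~ {ref:.1f} $/MWh, with capture ~ {cap:.1f} $/MWh")
print(f"CO2 avoided cost ~ {co2_avoided_cost(cap, ref, 0.10, 0.80):.1f} $/tCO2")
```

The avoided cost divides the LCOE increase by the drop in specific emissions, which is why reducing the capture energy penalty (as partial oxy-combustion intends) lowers both terms of the metric.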
Procedia PDF Downloads 148
520 Numerical Investigation on Transient Heat Conduction through Brine-Spongy Ice
Authors: S. R. Dehghani, Y. S. Muzychka, G. F. Naterer
Abstract:
The ice accretion of salt water on cold substrates creates brine-spongy ice, a mixture of pure ice and liquid brine. A real case of formation of this type of ice is superstructure icing, which occurs on marine vessels and offshore structures in cold and harsh conditions. Transient heat transfer through this medium causes phase changes between the brine pockets and the pure ice. Salt rejection during transient heat conduction increases the salinity of the brine pockets until they reach a local equilibrium state. In this process, heat passing through the medium does more than change the sensible heat of the ice and brine pockets; latent heat plays an important role and affects the mechanism of heat transfer. In this study, a new analytical model for evaluating heat transfer through brine-spongy ice is suggested. This model considers heat transfer together with partial solidification and melting. The properties of brine-spongy ice are obtained from the properties of liquid brine and pure ice. A numerical solution using the Method of Lines discretizes the medium into a set of ordinary differential equations. Boundary conditions are chosen from one of the applicable cases of this type of ice: one side is considered a thermally insulated surface, and the other side is assumed to be suddenly exposed to a constant-temperature boundary. All cases are evaluated at temperatures between -20 °C and the freezing point of brine-spongy ice, and solutions are computed for salinities from 5 to 60 ppt. Time steps and space intervals are chosen to maintain a stable and fast solution. The variation of temperature, brine volume fraction, and brine salinity versus time are the most important outputs of this study. Results show that transient heat conduction through brine-spongy ice can create a wide range of brine-pocket salinities, from the initial salinity up to 180 ppt. The rate of temperature variation is found to be slower for high-salinity cases. The maximum rate of heat transfer occurs at the start of the simulation and decreases as time passes. Brine pockets are smaller in portions closer to the colder side than near the warmer side. At the start of the solution, the numerical scheme tends to develop instabilities because of the sharp variation of temperature at the start of the process; adjusting the intervals resolves this instability. The analytical model, together with the numerical scheme, is capable of predicting the thermal behavior of brine-spongy ice. This model and its numerical solutions are important for modeling the freezing of salt water and ice accretion on cold structures. Keywords: method of lines, brine-spongy ice, heat conduction, salt water
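A minimal Method of Lines sketch for 1D transient conduction with the stated boundary conditions (one insulated side, one suddenly cooled side) is shown below; it uses constant properties and omits the latent-heat and phase-change terms of the brine-spongy-ice model, and all property values are assumed.

```python
import numpy as np
from scipy.integrate import solve_ivp

L, n = 0.05, 51                 # 5 cm slab, 51 nodes (assumed)
dx = L / (n - 1)
alpha = 1.1e-6                  # thermal diffusivity, m^2/s (assumed constant)
T_cold, T_init = -20.0, -2.0    # suddenly applied cold boundary; initial temperature

def rhs(t, T):
    """Method of Lines: central differences in space, ODEs in time."""
    dT = np.empty_like(T)
    dT[0] = 0.0                                   # Dirichlet node held at T_cold
    dT[1:-1] = alpha * (T[2:] - 2*T[1:-1] + T[:-2]) / dx**2
    dT[-1] = alpha * 2*(T[-2] - T[-1]) / dx**2    # insulated (zero-flux) boundary
    return dT

T0 = np.full(n, T_init)
T0[0] = T_cold
sol = solve_ivp(rhs, (0, 3600), T0, method="BDF", t_eval=[600, 1800, 3600])
print(sol.y[n // 2])            # mid-depth temperature after 10, 30, and 60 minutes
```

In the full model, alpha would depend on the local brine volume fraction, and a latent-heat source term would be added wherever brine pockets partially freeze or melt.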
Procedia PDF Downloads 217
519 ReactorDesign App: An Interactive Software for Self-Directed Explorative Learning
Authors: Chia Wei Lim, Ning Yan
Abstract:
The subject of reactor design, dealing with the transformation of chemical feedstocks into more valuable products, constitutes the central idea of chemical engineering. Despite its importance, the way it is taught to chemical engineering undergraduates has stayed virtually the same over the past several decades, even as the chemical industry increasingly leans on software for the design and daily monitoring of chemical plants. As a result, a learning gap has widened as chemical engineering graduates transition from university to industry, since they are not exposed to effective platforms that relate the fundamental concepts taught in lectures to industrial applications. While the success of technology enhanced learning (TEL) has been demonstrated in various chemical engineering subjects, TEL in the teaching of reactor design appears to focus on the simulation of reactor processes, as opposed to arguably more important ideas such as the selection and optimization of reactor configuration for different types of reactions. This presents an opportunity to use the readily available, easy-to-use MATLAB App platform to create an educational tool that aids the learning of fundamental concepts of reactor design and links these concepts to the industrial context. Here, interactive software for the learning of reactor design has been developed to narrow the learning gap experienced by chemical engineering undergraduates. Dubbed the ReactorDesign App, it enables students to design reactors governed by complex design equations for industrial applications without becoming bogged down in tedious mathematical steps. With the aid of extensive visualization features, the concepts covered in lectures are explicitly applied, allowing students to understand how these fundamental concepts are used in the industrial context and equipping them for their careers. In addition, the software leverages the easily accessible MATLAB App platform to encourage self-directed learning. It is useful for reinforcing the concepts taught, complementing homework assignments, and aiding exam revision; accordingly, students are able to identify any lapses in understanding and clarify them. In terms of topics, the app incorporates the design of different types of isothermal and non-isothermal reactors, in line with the lecture content and industrial relevance. The main features include the design of single reactors, such as batch reactors (BR), continuously stirred tank reactors (CSTR), plug flow reactors (PFR), and recycle reactors (RR), as well as multiple reactors consisting of any combination of ideal reactors. A version of the app, together with some guiding questions to aid explorative learning, was released to the undergraduates taking the reactor design module. A survey was conducted to assess its effectiveness, and an overwhelmingly positive response was received, with 89% of the respondents agreeing or strongly agreeing that the app has "helped [them] with understanding the unit" and 87% agreeing or strongly agreeing that the app "offers learning flexibility" compared with the conventional lecture-tutorial learning framework. In conclusion, the interactive ReactorDesign App has been developed to encourage self-directed explorative learning of the subject and to demonstrate the industrial applications of the taught design concepts. Keywords: explorative learning, reactor design, self-directed learning, technology enhanced learning
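To make the underlying design equations concrete, the sketch below sizes a CSTR and a PFR for a first-order liquid-phase reaction at a target conversion; it illustrates the textbook calculations such an app automates (here in Python for consistency with the other examples), not the app's MATLAB source, and the kinetic data are assumed.

```python
from scipy.integrate import quad

k = 0.25          # 1/min, assumed first-order rate constant
v0 = 10.0         # L/min, volumetric flow rate
CA0 = 2.0         # mol/L, feed concentration of A
X = 0.90          # target conversion
FA0 = v0 * CA0    # mol/min, molar feed rate of A

# CSTR design equation: V = F_A0 * X / (-r_A at exit), with -r_A = k*CA0*(1 - X)
V_cstr = FA0 * X / (k * CA0 * (1 - X))

# PFR design equation: V = F_A0 * integral_0^X dX / (-r_A)
V_pfr = FA0 * quad(lambda x: 1.0 / (k * CA0 * (1 - x)), 0.0, X)[0]

print(f"CSTR volume ~ {V_cstr:.0f} L, PFR volume ~ {V_pfr:.0f} L")
# For positive-order kinetics the PFR needs less volume than the CSTR,
# the kind of comparison the app lets students explore interactively.
```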
Procedia PDF Downloads 93
518 Investigating the Influence of Solidification Rate on the Microstructural, Mechanical and Physical Properties of Directionally Solidified Al-Mg Based Multicomponent Eutectic Alloys Containing High Mg Alloys
Authors: Fatih Kılıç, Burak Birol, Necmettin Maraşlı
Abstract:
The directional solidification process is generally used for homogeneous compound production, single crystal growth, refining (zone refining), and similar processes. The two most important parameters that control eutectic structures are the temperature gradient and the growth rate, which are called the solidification parameters. The solidification behavior and microstructure characteristics are an interesting topic because of their effects on the properties and performance of alloys with eutectic compositions. The solidification behavior of multicomponent and multiphase systems is an important factor in determining various properties of these materials. Research has mostly addressed the solidification of pure materials or alloys containing two phases; there are very few studies in the literature on multiphase reactions and microstructure formation of multicomponent alloys during solidification. It is therefore important to study the microstructure formation and the thermodynamic, thermophysical, and microstructural properties of these alloys. Production is difficult owing to the easy oxidation of magnesium, and therefore there is no comprehensive study concerning alloys containing high Mg (> 30 wt.% Mg). With increasing Mg content in Al alloys, the specific weight decreases and the strength shows a slight increase, while ductility decreases owing to the formation of the β-Al8Mg5 phase. For this reason, the production, examination, and development of high-Mg-containing alloys will initiate the production of new advanced engineering materials. The original value of this research lies in obtaining high-Mg-containing (> 30% Mg) Al-based multicomponent alloys by melting under vacuum; controlled directional solidification at various growth rates under a constant temperature gradient; and establishing relationships between the solidification rate and the microstructural, mechanical, electrical, and thermal properties. Therefore, within the scope of this research, several ternary and quaternary Al alloy compositions containing > 30% Mg were determined, and the effects of the directional solidification rate on the mechanical, electrical, and thermal properties of these alloys are investigated. The influence of the growth rate on the microstructure parameters, microhardness, tensile strength, electrical conductivity, and thermal conductivity of directionally solidified high-Mg-containing Al-32.2Mg-0.37Si, Al-30Mg-12Zn, Al-32Mg-1.7Ni, Al-32.2Mg-0.37Fe, Al-32Mg-1.7Ni-0.4Si, and Al-33.3Mg-0.35Si-0.11Fe (wt.%) alloys is studied over a wide range of growth rates (50-2500 µm/s) at a fixed temperature gradient.
The work is planned as follows: (a) directional solidification of the Al-Mg based Al-Mg-Si, Al-Mg-Zn, Al-Mg-Ni, Al-Mg-Fe, Al-Mg-Ni-Si, and Al-Mg-Si-Fe alloys over a wide range of growth rates (50-2500 µm/s) at a constant temperature gradient in a Bridgman-type solidification system; (b) analysis of the microstructure parameters of the directionally solidified alloys using optical light microscopy and scanning electron microscopy (SEM); (c) measurement of the microhardness and tensile strength of the directionally solidified alloys; (d) measurement of the electrical conductivity by the four-point probe technique at room temperature; and (e) measurement of the thermal conductivity by the linear heat flow method at room temperature. Keywords: directional solidification, electrical conductivity, high Mg containing multicomponent Al alloys, microhardness, microstructure, tensile strength, thermal conductivity
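For the two room-temperature measurements listed in (d) and (e), the standard data-reduction formulas are sketched below with assumed instrument readings; they are illustrative only and not the study's data.

```python
import math

def resistivity_bulk_4pp(V, I, s):
    """Collinear four-point probe on a bulk sample: rho = 2*pi*s*V/I (ohm*m)."""
    return 2 * math.pi * s * V / I

def thermal_conductivity_linear(Q, L, A, dT):
    """Steady-state linear heat flow: k = Q*L/(A*dT) (W/(m*K))."""
    return Q * L / (A * dT)

rho = resistivity_bulk_4pp(V=1.2e-4, I=1.0, s=1.0e-3)        # assumed readings
k = thermal_conductivity_linear(Q=5.0, L=0.02, A=4.0e-4, dT=3.0)
print(f"rho ~ {rho:.2e} ohm*m (sigma ~ {1/rho:.2e} S/m), k ~ {k:.1f} W/(m*K)")
```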
Procedia PDF Downloads 260
517 Topological Language for Classifying Linear Chord Diagrams via Intersection Graphs
Authors: Michela Quadrini
Abstract:
Chord diagrams occur throughout mathematics, from the study of RNA to knot theory. They are widely used in the theory of knots and links for studying finite type invariants, whereas in molecular biology one important motivation for studying chord diagrams is the problem of RNA structure prediction. An RNA molecule is a linear polymer, referred to as the backbone, consisting of four types of nucleotides. Each nucleotide is represented by a point, whereas each chord of the diagram stands for one Watson-Crick base-pair interaction between two nonconsecutive nucleotides. A chord diagram is an oriented circle with a set of n pairs of distinct points, considered up to orientation-preserving diffeomorphisms of the circle. A linear chord diagram (LCD) is a special kind of graph obtained by cutting the oriented circle of a chord diagram: it consists of a line segment, called its backbone, to which a number of chords with distinct endpoints are attached. There is a natural fattening of any linear chord diagram: the backbone lies on the real axis, while all the chords lie in the upper half-plane. Each linear chord diagram has a natural genus, that of its associated surface. To each chord diagram and linear chord diagram one can associate its intersection graph: a graph whose vertices correspond to the chords of the diagram and whose edges represent chord intersections. Such an intersection graph carries a great deal of information about the diagram. Our goal is to define an LCD equivalence class in terms of the identity of intersection graphs, on which many chord diagram invariants depend. For studying these invariants, we introduce a new representation of linear chord diagrams based on a set of appropriate topological operators that permits modeling an LCD in terms of the relations among its chords. This set is composed of crossing, nesting, and concatenation. The crossing operator is able to generate the whole space of linear chord diagrams, and a multiple context-free grammar is defined that uniquely generates each LCD, starting from a linear chord diagram and adding a chord for each production of the grammar. In other words, it allows one to associate a unique algebraic term with each linear chord diagram, while the remaining operators allow the term to be rewritten through a set of appropriate rewriting rules. These rules define an LCD equivalence class in terms of the identity of intersection graphs. Starting from a modeled RNA molecule and its linear chord diagram, some authors have proposed a topological classification and folding. Our LCD equivalence class could contribute to the RNA folding problem, leading to the definition of an algorithm that calculates the free energy of the molecule more accurately than existing ones. Such an LCD equivalence class could also be useful for obtaining a more accurate estimate of the link between the crossing number and the topological genus, and for studying the relations among other invariants. Keywords: chord diagrams, linear chord diagram, equivalence class, topological language
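A minimal sketch of the intersection-graph construction (two chords are adjacent exactly when their endpoints interleave along the backbone) is given below; it does not implement the grammar or rewriting rules, and the example diagram is arbitrary.

```python
import networkx as nx

def intersection_graph(chords):
    """chords: list of (a, b) endpoint positions along the backbone, with a < b."""
    G = nx.Graph()
    G.add_nodes_from(range(len(chords)))
    for i, (a, b) in enumerate(chords):
        for j, (c, d) in enumerate(chords):
            # two chords cross exactly when their endpoints interleave
            if i < j and (a < c < b < d or c < a < d < b):
                G.add_edge(i, j)
    return G

# Example LCD with three chords: chord 0 crosses chords 1 and 2,
# while chord 2 is nested inside chord 1 (nesting and concatenation add no edge).
lcd = [(1, 4), (2, 6), (3, 5)]
print(sorted(intersection_graph(lcd).edges()))   # [(0, 1), (0, 2)]
```

Two LCDs would then be placed in the same equivalence class when their intersection graphs, built this way, are identical.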
Procedia PDF Downloads 201
516 Quantification of the Non-Registered Electrical and Electronic Equipment for Domestic Consumption and Enhancing E-Waste Estimation: A Case Study on TVs in Vietnam
Authors: Ha Phuong Tran, Feng Wang, Jo Dewulf, Hai Trung Huynh, Thomas Schaubroeck
Abstract:
The fast increase in volume and the complex composition of waste electrical and electronic equipment (e-waste) have made it one of the most problematic waste streams worldwide. Precise information on its size at the national, regional, and global levels has therefore been highlighted as a prerequisite for a proper management system. However, obtaining it is a very challenging task, especially in developing countries, where both a formal e-waste management system and the statistical data needed for e-waste estimation, i.e., data on the production, sale, and trade of electrical and electronic equipment (EEE), are often lacking. Moreover, there is an inflow of non-registered electrical and electronic equipment that 'invisibly' enters the domestic EEE market and is then used for domestic consumption. The non-registered, invisible, and (in most cases) illicit nature of this flow makes it difficult or even impossible to capture in any statistical system, so the e-waste generated from it is often not counted in current e-waste estimations based on statistical market data. This study therefore focuses on enhancing e-waste estimation in developing countries and proposes a calculation pathway to quantify the magnitude of the non-registered EEE inflow. An advanced input-output analysis model (the Sale-Stock-Lifespan model) is integrated into the calculation procedure. In general, the Sale-Stock-Lifespan model helps to improve the quality of the input data for modeling (i.e., it consolidates data to create a more accurate lifespan profile and models a dynamic lifespan to take its changes over time into account), through which the quality of the e-waste estimation can be improved. To demonstrate these objectives, a case study on televisions (TVs) in Vietnam was carried out. The results show that the amount of waste TVs in Vietnam has increased four-fold since 2000, and this upward trend is expected to continue: in 2035, a total of 9.51 million TVs are predicted to be discarded. Moreover, the estimation of the non-registered TV inflow shows that it may on average have contributed about 15% of the total TVs sold on the Vietnamese market over the period 2002 to 2013. To address the uncertainties associated with the estimation models and input data, a sensitivity analysis was applied. The results show that both the waste estimate and the non-registered inflow estimate depend on two parameters: the number of TVs in use per household and the lifespan. In particular, with a 1% increase in the TV in-use rate, the average market share of the non-registered inflow in the period 2002-2013 increases by 0.95%; it decreases from 27% to 15% when the constant unadjusted lifespan is replaced by the dynamic adjusted lifespan. The effect of these two parameters on the amount of waste TVs generated each year is more complex and non-linear over time. To conclude, despite remaining uncertainty, this study is the first attempt to apply the Sale-Stock-Lifespan model to improve e-waste estimation in developing countries and to quantify the non-registered EEE inflow to domestic consumption. It can be further improved in the future with more knowledge and data. Keywords: e-waste, non-registered electrical and electronic equipment, TVs, Vietnam
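The general sales-plus-lifespan logic behind this kind of estimate can be sketched as a discrete convolution of past sales with a lifespan distribution; the Python example below assumes a Weibull lifespan and an interpolated sales series purely for illustration and is not the authors' calibrated Sale-Stock-Lifespan model.

```python
import numpy as np

def discard_prob(ages, shape=2.2, scale=9.0):
    """P(a unit is discarded during year of age 'a'), from an assumed Weibull lifespan."""
    cdf = lambda a: 1.0 - np.exp(-((a / scale) ** shape))
    return np.array([cdf(a + 1) - cdf(a) for a in ages])

years = np.arange(2000, 2036)
# Assumed annual TV sales (units/yr), loosely interpolated for illustration only
sales = np.interp(years, [2000, 2013, 2035], [0.8e6, 2.5e6, 4.0e6])

p = discard_prob(np.arange(years.size))
# waste[t] = sum_i sales[i] * P(lifespan = t - i): a discrete convolution
waste = np.array([np.sum(sales[:t + 1] * p[:t + 1][::-1]) for t in range(years.size)])

for y in (2010, 2020, 2035):
    print(y, f"~{waste[years == y][0] / 1e6:.2f} million TVs discarded")
```

In the actual model, the sales term would be corrected upward by the estimated non-registered inflow, and the lifespan distribution would be consolidated from survey data and allowed to shift over time.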
Procedia PDF Downloads 246
515 Monitoring and Improving Performance of Soil Aquifer Treatment System and Infiltration Basins Performance: North Gaza Emergency Sewage Treatment Plant as Case Study
Authors: Sadi Ali, Yaser Kishawi
Abstract:
As part of Palestine, the Gaza Strip (365 km2, 1.8 million inhabitants) is a semi-arid zone that relies solely on the Coastal Aquifer. The coastal aquifer is the only source of water, with only 5-10% of it suitable for human use; this barely covers the domestic and agricultural needs of the Gaza Strip. The Palestinian Water Authority strategy is to develop a non-conventional water resource from treated wastewater to irrigate 1,500 hectares and serve over 100,000 inhabitants. A new WWTP project is to replace the old, overloaded Biet Lahia WWTP. The project consists of three parts: phase A (a pressure line and 9 infiltration basins - IBs), phase B (a new WWTP), and phase C (a Recovery and Reuse Scheme - RRS - to capture the spreading plume). Phase A has been functioning since April 2009, and since then a monitoring plan has been implemented to track the infiltration rate (I.R.) of the 9 basins. Nearly 23 million m3 of partially treated wastewater had been infiltrated by June 2014. It is important to maintain an acceptable rate so that the basins can handle the incoming quantities (currently 10,000 m3 are pumped and infiltrated daily). The methodology applied was to review and analyze the collected data, including the I.R.s, the wastewater quality, and the drying-wetting schedule of the basins. One of the main findings is the relation between the total suspended solids (TSS) at BLWWTP and the I.R. at the basins. From April 2009, the basins achieved an average I.R. of about 2.5 m/day. The records then showed a decreasing trend, reaching a low of 0.42 m/day in June 2013, accompanied by an increase in the TSS concentration at the source to above 200 mg/l. Reducing the TSS concentration (by cleaning the wastewater source ponds at the Biet Lahia WWTP site) directly improved the I.R.: over the following 6 months it rose from 0.42 m/day to 0.66 m/day and then to nearly 1.0 m/day as the average of the last 3 months of 2013. The wetting-drying scheme of the basins (3 days wetting and 7 days drying) was also monitored, along with rainfall. Despite the difficulty of applying this scheme precisely, the flow to each basin was controlled to improve the I.R. The drying-wetting regime affected the I.R. of individual basins and thus the overall system rate, which was recorded and assessed. Ploughing of the infiltration basins was also recommended at certain times to maintain the infiltration level, since it breaks up the clogging layer that impedes infiltration. It is recommended to maintain a proper quality of the infiltrated wastewater to ensure acceptable performance of the IBs. Continual maintenance of the settling ponds at BLWWTP, continual ploughing of the basins, and the application of soil treatment techniques at the IBs will improve the I.R.s. When the new WWTP is operational, a high-standard effluent (TSS 20 mg/l, BOD 20 mg/l, and TN 15 mg/l) will be infiltrated, which will further enhance the I.R.s of the IBs owing to the lower organic load. Keywords: SAT, wastewater quality, soil remediation, North Gaza
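For reference, the reported I.R. values in m/day correspond to infiltrated volume per wetted basin area per day; the small sketch below shows the calculation with an assumed basin area, which is not taken from the study.

```python
def infiltration_rate(volume_m3_per_day, wetted_area_m2):
    """Average infiltration rate in m/day."""
    return volume_m3_per_day / wetted_area_m2

# e.g. the current 10,000 m3/day spread over an assumed ~10,000 m2 of active basin area
print(f"{infiltration_rate(10_000, 10_000):.2f} m/day")
# A falling rate at constant daily inflow therefore signals progressive clogging,
# consistent with the reported decline from ~2.5 to ~0.42 m/day.
```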
Procedia PDF Downloads 234