Search results for: array code
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2160

390 Development of Scenarios for Sustainable Next Generation Nuclear System

Authors: Muhammad Minhaj Khan, Jaemin Lee, Suhong Lee, Jinyoung Chung, Johoo Whang

Abstract:

The Republic of Korea has been facing a severe storage crisis from nuclear waste generation, as its At-Reactor (AR) temporary storage sites are about to reach saturation. Since the country is densely populated, at 491.78 persons per square kilometer, construction of a high-level waste repository will not be a feasible option. To tackle the waste generation problem, which is increasing at rates of 350 tHM/yr and 380 tHM/yr for its 20 PWRs and 4 PHWRs respectively, this study focuses on advancing current nuclear power plants to sustainable and ecological GEN-IV nuclear systems that burn TRUs (Pu and MAs). First, calculations were made to estimate the generation of SNF, including Pu and MA, from the PWR and PHWR NPPs using the IAEA code Nuclear Fuel Cycle Simulation System (NFCSS) for 2016, 2030 (including the saturation period of each site over 2024-2028), 2089, and 2109, since the number of NPPs will increase owing to the high import cost of non-nuclear energy sources. Second, in order to produce environmentally sustainable nuclear energy systems, four scenarios for burning the plutonium and MAs are analyzed, concentrating on burning MA alone, or MA and Pu together, in SFR, LFR, and KALIMER-600 burner reactors after recycling the spent oxide fuel from the PWRs through the pyroprocessing technology developed by the Korea Atomic Energy Research Institute (KAERI), which promises sustainable future benefits by minimizing HLW generation with regard to waste amount, decay heat, and activity. Finally, concentrating on the front- and back-end fuel cycles of the open PWR cycle and the closed Pyro-SFR cycle, respectively, an overall assessment is made that evaluates the quantitative as well as economic competitiveness of SFR metallic fuel against the PWR once-through nuclear fuel cycle.
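As a rough, illustrative sketch (not the NFCSS mass-flow simulation itself), the annual arisings quoted above can be projected linearly; the function name and the assumption of constant rates and a fixed reactor fleet are ours, not the authors':

```python
def cumulative_snf(start_year, end_year, rate_pwr=350.0, rate_phwr=380.0):
    """Linearly project spent-fuel arisings in tonnes of heavy metal (tHM).

    Assumes the constant rates quoted in the abstract (350 tHM/yr from the
    20 PWRs, 380 tHM/yr from the 4 PHWRs) and no change in the fleet --
    a simplification; NFCSS models the full fuel-cycle mass flow.
    """
    years = end_year - start_year
    return years * (rate_pwr + rate_phwr)

# Arisings accumulated between 2016 and 2030 under these assumptions:
total_thm = cumulative_snf(2016, 2030)
```

A full scenario study would instead track per-reactor discharge schedules and the Pu/MA content of each batch, which is what the NFCSS code provides.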

Keywords: GEN IV nuclear fuel cycle, nuclear waste, waste sustainability, transmutation

Procedia PDF Downloads 348
389 Rapid Soil Classification Using Computer Vision with Electrical Resistivity and Soil Strength

Authors: Eugene Y. J. Aw, J. W. Koh, S. H. Chew, K. E. Chua, P. L. Goh, Grace H. B. Foo, M. L. Leong

Abstract:

This paper presents the evaluation of various soil testing methods, such as the four-probe soil electrical resistivity method and the cone penetration test (CPT), that can complement a newly developed rapid soil classification scheme using computer vision, to improve the accuracy and productivity of on-site classification of excavated soil. In Singapore, excavated soils from the local construction industry are transported to Staging Grounds (SGs) to be reused as fill material for land reclamation. Excavated soils are mainly categorized into two groups (“Good Earth” and “Soft Clay”) based on particle size distribution (PSD) and water content (w) from soil investigation reports and on-site visual surveys, so that proper treatment and usage can be exercised. However, this process is time-consuming and labor-intensive; thus, a rapid classification method is needed at the SGs. Four-probe soil electrical resistivity and CPT were evaluated for their feasibility as suitable additions to the computer vision system, to further develop this innovative, non-destructive, and instantaneous classification method. The computer vision technique comprises soil image acquisition using an industrial-grade camera; image processing and analysis via calculation of Grey Level Co-occurrence Matrix (GLCM) textural parameters; and decision-making using an Artificial Neural Network (ANN). It was found from a previous study that the ANN model, coupled with the apparent electrical resistivity of soil (ρ), can classify soils into “Good Earth” and “Soft Clay” in less than a minute, with an accuracy of 85% based on selected representative soil images. To further improve the technique, the following three items were targeted to be added to the computer vision scheme: ρ, measured using a set of four probes arranged in a Wenner array; the soil strength, measured using a modified mini cone penetrometer; and w, measured using a set of time-domain reflectometry (TDR) probes.
Laboratory proof-of-concept was conducted through a series of seven tests with three types of soils: “Good Earth”, “Soft Clay”, and a mix of the two. Validation was performed against the PSD and w of each soil type obtained from conventional laboratory tests. The results show that the ρ, w, and CPT measurements can be collectively analyzed to classify soils into “Good Earth” or “Soft Clay” and are feasible as complementary methods to the computer vision system.
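The texture step of the pipeline can be sketched minimally in Python; the 4x4 toy image, the single (1, 0) pixel offset, and the use of GLCM contrast alone are our illustrative simplifications of the full set of textural parameters fed to the ANN:

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=8):
    """Grey Level Co-occurrence Matrix for a single pixel offset,
    normalized so its entries sum to 1 (a minimal sketch)."""
    g = np.zeros((levels, levels), dtype=float)
    h, w = image.shape
    for y in range(h - dy):
        for x in range(w - dx):
            g[image[y, x], image[y + dy, x + dx]] += 1
    return g / g.sum()

def contrast(g):
    """GLCM contrast, one of the classic Haralick textural parameters."""
    i, j = np.indices(g.shape)
    return float(((i - j) ** 2 * g).sum())

# Toy 4-level "soil image": four smooth patches, one per grey level.
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
g = glcm(img, levels=4)
feature = contrast(g)  # would be one input, among others, to the ANN
```

In practice several GLCM statistics (contrast, homogeneity, energy, correlation) over multiple offsets would form the feature vector passed to the classifier.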

Keywords: computer vision technique, cone penetration test, electrical resistivity, rapid and non-destructive, soil classification

Procedia PDF Downloads 231
388 Comparison of Steel and Composite Analysis of a Multi-Storey Building

Authors: Çiğdem Avcı Karataş

Abstract:

Mitigation of structural damage caused by earthquakes and reduction of fatalities are among the main concerns of engineers in seismically active zones of the world. To achieve this aim, many technologies have been developed in recent decades and applied in the construction and retrofit of structures. On the one hand, Turkey is well known as a country of high seismicity; on the other hand, steel-composite structures appear competitive today in this country by comparison with other types of structures, for example all-steel or concrete structures. Composite construction is the dominant form of construction for the multi-storey building sector. The reason composite construction is often so effective can be expressed simply: concrete is good in compression and steel is good in tension. By joining the two materials together structurally, these strengths can be exploited to produce a highly efficient design. The reduced self-weight of composite elements has a knock-on effect, reducing the forces in the elements supporting them, including the foundations. The floor depth reductions achievable with composite construction can also provide significant benefits in terms of the costs of services and the building envelope. The scope of this paper covers analysis, materials take-off, cost analysis, and economic comparison of a multi-storey building with composite and steel frames. The aim of this work is to show that designing load-carrying systems as composite is more economical than designing them as steel. Design of the nine-storey building under consideration is carried out according to the 2007 Turkish Earthquake Code, using static and dynamic analysis methods. For the analyses of the steel and composite systems, plastic analysis methods have been used; the steel system analyses have been checked in compliance with EC3 and the composite system analyses in compliance with EC4.
At the end of the comparisons, it is revealed that the composite load-carrying system is more economical than the steel load-carrying system, considering both the materials to be used in the load-carrying system and the workmanship to be spent on the job.

Keywords: composite analysis, earthquake, steel, multi-storey building

Procedia PDF Downloads 566
387 Historical Evolution of Islamic Law and Its Application to the Islamic Finance

Authors: Malik Imtiaz Ahmad

Abstract:

The prime sources of Islamic Law, or Shariah, are the Quran and the Sunnah, and it is applied to the personal and public affairs of Muslims. Islamic law is deemed to be divine and furnishes a complete code of conduct based upon universal values to build honesty, trust, righteousness, piety, charity, and social justice. The primary focus of this paper was to examine the development of Islamic jurisprudence (Fiqh) over time and its relevance to the field of Islamic finance. This encompassed a comprehensive analysis of the historical context, key legal principles, and their application in contemporary financial systems adhering to Islamic principles. This study aimed to elucidate the deep-rooted connection between Islamic law and finance, offering valuable insights for practitioners and policymakers in the Islamic finance sector. Understanding the historical context and legal underpinnings is crucial for ensuring the compliance and ethicality of modern financial systems adhering to Islamic principles. Fintech solutions are a developing field that accelerates the digitalization of Islamic finance products and services toward the harmonization of global investors' mandates. Through this study, we focus on institutional governance that will improve Sharia compliance, efficiency, transparency in decision-making, and Islamic finance's contribution to humanity through the SDGs program. The research paper employed an extensive literature review, historical analysis, examination of legal principles, and case studies to trace the evolution of Islamic law and its contemporary application in Islamic finance, providing a concise yet comprehensive understanding of this intricate relationship. Through these research methodologies, the aim was to provide a comprehensive and insightful exploration of the historical evolution of Islamic law and its relevance to contemporary Islamic finance, thereby contributing to a deeper understanding of this unique and growing sector of the global financial industry.

Keywords: sharia, sequencing Islamic jurisprudence, Islamic congruent marketing, social development goals of Islamic finance

Procedia PDF Downloads 64
386 A Team-Based Learning Game Guided by a Social Robot

Authors: Gila Kurtz, Dan Kohen Vacs

Abstract:

Social robotics (SR) is an emerging field striving to deploy computers capable of resembling human shapes and mimicking human movements, gestures, and behaviors. The evolving capability of social robots to interact with humans offers groundbreaking ways to create learning and training opportunities. Studies show that social robots can offer instructional experiences that foster creativity, entertainment, enjoyment, and curiosity. These added values are essential for empowering instructional opportunities as gamified learning experiences. We present our project, which deploys an activity experienced in an escape room and aimed at team-based learning scaffolded by a social robot, NAO. An escape room is a well-known approach for gamified activities built around a simulated scenario experienced by teams of participants. Usually, the simulation takes place in a physical environment where participants must complete a series of challenges in a limited amount of time. During this experience, players learn something about the assigned topic of the room. In the current learning simulation, students must "save the nation" by locating sensitive information that has been stolen and stored in a vault with four locks. Team members have to look for hints and solve riddles mediated by NAO. Each solution provides a unique code for opening one of the four locks. NAO is also used to provide ongoing feedback on the team's performance. We captured the proceedings of our activity and used the recording to conduct an evaluation study among ten experts in related areas. The experts were interviewed on their overall assessment of the learning activity and their perception of the added value of the robot. The results were very encouraging regarding the feasibility of NAO serving as a motivational tutor in adults' collaborative game-based learning. We believe that this study marks the first step toward a template for developing innovative team-based training using escape rooms supported by a humanoid robot.

Keywords: social robot, NAO, learning, team based activity, escape room

Procedia PDF Downloads 66
385 Machine Learning for Exoplanetary Habitability Assessment

Authors: King Kumire, Amos Kubeka

Abstract:

The synergy of machine learning and advancing astronomical technology is giving rise to a new space age, marked by better habitability assessments. To initiate this discussion, it should be recorded for definition purposes that the symbiotic relationship between astronomy and improved computing has been code-named the Cis-Astro gateway concept. The name has been unashamedly borrowed from the cis-lunar gateway template and its associated Lagrange points, which act as an orbital bridge from our planet Earth to the Moon. However, for this study, the scientific audience is invited to bridge toward the discovery of new habitable planets. It is imperative to state that cosmic probes of this magnitude can be utilized as the starting nodes of the astrobiological search for galactic life. This research can also assist by acting as a navigation system for future space telescope launches through the delimitation of target exoplanets. The findings and the associated platforms can be harnessed as building blocks for modeling climate change on planet Earth. The possibility that the human genus may exhaust the resources of planet Earth, or that some catastrophe may render the Earth uninhabitable for humans, explains the need to find an alternative planet to inhabit. The scientific community, through interdisciplinary discussions of the International Astronautical Federation, so far has the common position that engineers can reduce space mission costs by constructing a stable cis-lunar orbit infrastructure for refueling and other associated in-orbit servicing activities. Similarly, the Cis-Astro gateway can be envisaged as a budget optimization technique that models extra-solar bodies and can facilitate the scoping of future mission rendezvous. It should be registered as well that this broad and voluminous catalog of exoplanets shall be narrowed along the way using machine learning filters.
The gist of this topic revolves around the indirect economic rationale of establishing a habitability scoping platform.
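A coarse pre-filter of the kind the abstract alludes to ("narrowed along the way using machine learning filters") might start with simple physical cuts before any learned model is applied; the flux and radius thresholds below are illustrative assumptions, not values from the study:

```python
def habitability_filter(planets, flux_range=(0.35, 1.75), max_radius=1.6):
    """Keep planets whose incident stellar flux (Earth = 1) lies in a
    nominal habitable-zone band and whose radius (Earth radii) suggests
    a rocky composition. Thresholds are illustrative assumptions only."""
    lo, hi = flux_range
    return [p for p in planets
            if lo <= p["flux"] <= hi and p["radius"] <= max_radius]

# Invented mini-catalog for illustration:
catalog = [
    {"name": "hot-giant", "flux": 600.0, "radius": 11.0},
    {"name": "temperate-rocky", "flux": 1.1, "radius": 1.2},
    {"name": "cold-icy", "flux": 0.01, "radius": 2.3},
]
shortlist = habitability_filter(catalog)
```

A trained classifier would then score the survivors on richer features (stellar type, eccentricity, atmospheric indicators) rather than hard cuts.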

Keywords: machine-learning, habitability, exoplanets, supercomputing

Procedia PDF Downloads 84
383 Environmental Photodegradation of Tralkoxydim Herbicide and Its Formulation in Natural Waters

Authors: María José Patiño-Ropero, Manuel Alcamí, Al Mokhtar Lamsabhi, José Luis Alonso-Prados, Pilar Sandín-España

Abstract:

Tralkoxydim, commercialized under different trade names, among them Splendor® (25% active ingredient), is a cyclohexanedione herbicide used in wheat and barley fields for the post-emergence control of annual winter grass weeds. Due to their physicochemical properties, herbicides belonging to this family are known to be susceptible to reaching natural waters, where different degradation pathways can take place. Photolysis represents one of the main routes of abiotic degradation of these herbicides in water. This transformation pathway can lead to the formation of unknown by-products, which could be more toxic and/or persistent than the active substances themselves. Therefore, there is a growing need to understand the science behind such dissipation routes, which is key to estimating the persistence of these compounds and ensuring an accurate assessment of their environmental behavior. However, to the best of our knowledge, no information regarding the photochemical behavior of tralkoxydim under natural conditions in an aqueous environment has been available in the literature until now. This work has focused on investigating the photochemical behavior of the tralkoxydim herbicide and its commercial formulation (Splendor®) in ultrapure, river, and spring water using simulated solar radiation. Besides, the evolution of the degradation products detected in the samples has been studied. A reversed-phase HPLC-DAD (high-performance liquid chromatography with diode array detection) method was developed to follow the kinetic evolution and to obtain the half-lives. In both cases, the degradation rate of the active ingredient tralkoxydim was lower in natural waters than in ultrapure water, following the order river water < spring water < ultrapure water, with first-order half-lives of 5.1 h, 2.7 h, and 1.1 h, respectively.
These findings indicate that the photolytic behavior of the active ingredient is largely affected by the water composition, whose components can exert an inner filter effect. In addition, the tralkoxydim herbicide and its formulation showed the same half-life for each of the types of water studied, showing that the presence of adjuvants in the commercial formulation has no effect on the degradation rate of the active ingredient. HPLC-MS (high-performance liquid chromatography with mass spectrometry) experiments were performed to study the by-products deriving from the photodegradation of tralkoxydim in water. Accordingly, three compounds were tentatively identified. These results provide a better understanding of the behavior of the tralkoxydim herbicide in natural waters and its fate in the environment.
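The reported half-lives imply first-order photolysis kinetics; a small sketch (function names are ours) converts them to rate constants and remaining fractions:

```python
import math

def rate_constant(half_life_h):
    """First-order rate constant k (1/h) from a measured half-life:
    k = ln(2) / t_half."""
    return math.log(2) / half_life_h

def remaining_fraction(half_life_h, t_h):
    """Fraction of herbicide left after t_h hours: C/C0 = exp(-k t)."""
    return math.exp(-rate_constant(half_life_h) * t_h)

# Half-lives reported above (river, spring, ultrapure water), after 4 h:
fractions = {water: remaining_fraction(t_half, 4.0)
             for water, t_half in [("river", 5.1),
                                   ("spring", 2.7),
                                   ("ultrapure", 1.1)]}
```

The ordering of the computed fractions mirrors the abstract's finding that natural-water components slow photodegradation relative to ultrapure water.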

Keywords: by-products, natural waters, photodegradation, tralkoxydim herbicide

Procedia PDF Downloads 87
382 Metadiscourse in EFL, ESP and Subject-Teaching Online Courses in Higher Education

Authors: Maria Antonietta Marongiu

Abstract:

Propositional information in discourse is made coherent, intelligible, and persuasive through metadiscourse. The linguistic and rhetorical choices that writers/speakers make to organize and negotiate content matter are intended to help relate a text to its context. Besides, they help the audience to connect to and interpret a text according to the values of a specific discourse community. Based on these assumptions, this work aims to analyse the use of metadiscourse in the spoken performance of teachers in online EFL, ESP, and subject-teacher courses taught in English to non-native learners in higher education. In point of fact, the global spread of Covid-19 forced universities to transition their in-class courses to online delivery. This has inevitably placed a heavier interactional responsibility on the instructor compared to in-class courses. Accordingly, online delivery needs greater structuring as regards establishing the reader/listener's resources for text understanding and negotiation. Indeed, in online as well as in in-class courses, lessons are social acts which take place in contexts where interlocutors, as members of a community, affect the ways ideas are presented and understood. Following Hyland's Interactional Model of Metadiscourse (2005), this study intends to investigate Teacher Talk in online academic courses during the Covid-19 lockdown in Italy. The selected corpus includes the transcripts of online EFL and ESP courses and of subject-teacher courses taught in English. The objective of the investigation is, firstly, to ascertain the presence of metadiscourse in the form of interactive devices (to guide the listener through the text) and interactional features (to involve the listener in the subject).
Previous research on metadiscourse in academic discourse, in college students' presentations in EAP (English for Academic Purposes) lessons, as well as in online teaching methodology courses and MOOC (Massive Open Online Courses) has shown that instructors use a vast array of metadiscoursal features intended to express the speakers’ intentions and standing with respect to discourse. Besides, they tend to use directions to orient their listeners and logical connectors referring to the structure of the text. Accordingly, the purpose of the investigation is also to find out whether metadiscourse is used as a rhetorical strategy by instructors to control, evaluate and negotiate the impact of the ongoing talk, and eventually to signal their attitudes towards the content and the audience. Thus, the use of metadiscourse can contribute to the informative and persuasive impact of discourse, and to the effectiveness of online communication, especially in learning contexts.

Keywords: discourse analysis, metadiscourse, online EFL and ESP teaching, rhetoric

Procedia PDF Downloads 126
381 Predicting Radioactive Waste Glass Viscosity, Density and Dissolution with Machine Learning

Authors: Joseph Lillington, Tom Gout, Mike Harrison, Ian Farnan

Abstract:

The vitrification of high-level nuclear waste within borosilicate glass and its incorporation within a multi-barrier repository deep underground is widely accepted as the preferred disposal method. However, for this to happen, any safety case will require validation that the initially localized radionuclides will not be considerably released into the near/far-field. Therefore, accurate mechanistic models are necessary to predict glass dissolution, and these should be robust to a variety of incorporated waste species and leaching test conditions, particularly given substantial variations across international waste-streams. Here, machine learning is used to predict glass material properties (viscosity, density) and glass leaching model parameters from large-scale industrial data. A variety of different machine learning algorithms have been compared to assess performance. Density was predicted solely from composition, whereas viscosity additionally considered temperature. To predict suitable glass leaching model parameters, a large simulated dataset was created by coupling MATLAB and the chemical reactive-transport code HYTEC, considering the state-of-the-art GRAAL model (glass reactivity with allowance for the alteration layer). The trained models were then applied to the large-scale industrial, experimental data to identify potentially appropriate model parameters. Results indicate that ensemble methods can accurately predict viscosity as a function of temperature and composition across all three industrial datasets. Glass density prediction shows reliable learning performance, with predictions primarily lying within the experimental uncertainty of the test data. Furthermore, machine learning can predict the behavior of glass dissolution model parameters, demonstrating potential value in GRAAL model development and in assessing suitable model parameters for large-scale industrial glass dissolution data.
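As a toy stand-in for the ensemble regressors named above (which the authors trained on composition as well as temperature), a bootstrap-aggregated 1-nearest-neighbor predictor on synthetic temperature-viscosity pairs illustrates the bagging idea; all data and names here are invented for illustration:

```python
import random

def one_nn_predict(train, x):
    """Predict with the single nearest training point (1-NN regressor)."""
    return min(train, key=lambda s: abs(s[0] - x))[1]

def bagged_predict(train, x, n_models=25, seed=0):
    """Bootstrap-aggregated 1-NN: average the predictions of models
    fit on resampled copies of the training data (the bagging idea
    behind ensemble regressors such as random forests)."""
    rng = random.Random(seed)
    preds = []
    for _ in range(n_models):
        boot = [rng.choice(train) for _ in train]  # bootstrap resample
        preds.append(one_nn_predict(boot, x))
    return sum(preds) / n_models

# Invented (temperature, viscosity) pairs -- illustrative only:
train = [(900, 120.0), (1000, 60.0), (1100, 30.0), (1200, 15.0)]
pred = bagged_predict(train, 1050)
```

Averaging over resampled models smooths the prediction between neighboring training points, which is why ensembles tend to generalize better than any single weak learner.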

Keywords: machine learning, predictive modelling, pattern recognition, radioactive waste glass

Procedia PDF Downloads 114
380 Strategic Policy Formulation to Ensure the Atlantic Forest Regeneration

Authors: Ramon F. B. da Silva, Mateus Batistella, Emilio Moran

Abstract:

Although there are two Forest Transition (FT) pathways, economic development and forest scarcity, many contexts shape the model of FT observed in each particular region. This means that local conditions, such as relief, soil quality, historic land use/cover, public policies, the engagement of society in compliance with legal regulations, and the action of enforcement agencies, represent dimensions which, combined, create contexts that enable forest regeneration. From this perspective, we can understand the regeneration process of native vegetation cover in the Paraíba Valley (Atlantic Forest biome), ongoing since the 1960s. This research analyzed public information, land use/cover maps, and environmental public policies, and interviewed 17 stakeholders from federal and state agencies, municipal environmental and agricultural departments, civil society, and farmers, aiming to comprehend the contexts behind the forest regeneration in the Paraíba Valley, Sao Paulo State, Brazil. The first policy to protect forest vegetation was the Forest Code No. 4771 of 1965, but this legislation did not promote the increase of forest, just the control of deforestation; this was not enough for the Atlantic Forest biome, which reached its peak of degradation in 1985 (8% of Atlantic Forest remnants). We concluded that Brazilian environmental legislation acted in a strategic way to promote the increase of forest cover (102% regeneration between 1985 and 2011) from 1993, when Federal Decree No. 750 declared the initial and advanced stages of secondary succession protected against any kind of exploitation or degradation, ensuring the forest regeneration process. Strategic policy formulation was also observed in Sao Paulo State Law No. 6171 of 1988, which prohibited the use of fire to manage the agricultural landscape, triggering a process of forest regeneration in former pasture areas.

Keywords: forest transition, land abandonment, law enforcement, rural economic crisis

Procedia PDF Downloads 551
379 Expression of DNMT Enzymes-Regulated miRNAs Involving in Epigenetic Event of Tumor and Margin Tissues in Patients with Breast Cancer

Authors: Fatemeh Zeinali Sehrig

Abstract:

Background: miRNAs play an important role in the post-transcriptional regulation of genes, including genes involved in DNA methylation (DNMTs), and are also important regulators of oncogenic pathways. The study of microRNAs and DNMTs in breast cancer allows the development of targeted treatments and early detection of this cancer. Methods and Materials: Institutional guidelines, including ethical approval and informed consent, were followed under the supervision of the Ethics Committee (ethics code: IR.IAU.TABRIZ.REC.1401.063) of Tabriz Azad University, Tabriz, Iran. In this study, tissues from 100 patients with breast cancer and tissues from 100 healthy women were collected from Noor Nejat Hospital in Tabriz. The basic characteristics of the patients with breast cancer included: 1) tumor grade (Grade 3 = 5%, Grade 2 = 87.5%, Grade 1 = 7.5%); 2) lymph node involvement (yes = 87.5%, no = 12.5%); 3) family cancer history (yes = 47.5%, no = 41.3%, unknown = 11.2%); 4) abortion history (yes = 36.2%). In silico methods (data gathering, processing, and network building): Gene Expression Omnibus (GEO), a high-throughput genomic database, was queried for miRNA expression profiles in breast cancer. For the experimental protocol, tissue processing, total RNA isolation, complementary DNA (cDNA) synthesis, and quantitative real-time PCR (qRT-PCR) analysis were performed. Results: In the present study, we found significant (p-value < 0.05) changes in the expression levels of miRNAs and DNMTs in patients with breast cancer. In bioinformatics studies, the GEO microarray dataset, consistent with the qPCR results, showed decreased expression of miRNAs and increased expression of DNMTs in breast cancer. Conclusion: According to the results of the present study, which showed decreased expression of miRNAs and increased expression of DNMTs in breast cancer, these genes can be used as important diagnostic and therapeutic biomarkers in breast cancer.
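Relative expression from qRT-PCR data is conventionally computed with the Livak 2^-ΔΔCt method; the Ct values below are invented for illustration and are not the study's data:

```python
def relative_expression(ct_target_sample, ct_ref_sample,
                        ct_target_control, ct_ref_control):
    """Livak 2^-ddCt relative quantification for qRT-PCR data.

    Ct is the amplification cycle at which the signal crosses threshold;
    a HIGHER Ct means LOWER expression. The target gene is normalized to
    a reference gene in both the sample and the control tissue.
    """
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_control = ct_target_control - ct_ref_control
    return 2.0 ** -(d_ct_sample - d_ct_control)

# Illustrative only: a miRNA amplifying 3 cycles later in tumor tissue
# (relative to the reference gene) shows an 8-fold expression decrease.
fold_change = relative_expression(30.0, 18.0, 27.0, 18.0)
```

A fold change below 1 corresponds to the decreased miRNA expression reported in the results; the same formula applied to DNMT Ct values would yield values above 1 for their increased expression.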

Keywords: gene expression omnibus, microarray dataset, breast cancer, miRNA, DNMT (DNA methyltransferases)

Procedia PDF Downloads 25
378 Numerical Study on the Effects of Truncated Ribs on Film Cooling with Ribbed Cross-Flow Coolant Channel

Authors: Qijiao He, Lin Ye

Abstract:

To evaluate the effect of ribs on the internal flow structure in the film hole and on the film cooling performance of the outer surface, this numerical study investigates the effects of rib configuration on film cooling performance with a ribbed cross-flow coolant channel. The baseline smooth case and three ribbed cases, comprising a continuous rib case and two cross-truncated rib cases with different arrangements, are studied. The distributions of adiabatic film cooling effectiveness and heat transfer coefficient are obtained under blowing ratios of 0.5 and 1.0, respectively. A commercial steady RANS (Reynolds-averaged Navier-Stokes) code with the realizable k-ε turbulence model and enhanced wall treatment was used for the numerical simulations. The numerical model is validated against available experimental data. The two cross-truncated rib cases produce approximately the same cooling effectiveness as the smooth case under the lower blowing ratio. The continuous rib case significantly outperforms the other cases. With the increase of the blowing ratio, the cases with ribs are inferior to the smooth case, especially in the upstream region. The cross-truncated rib I case produces the highest cooling effectiveness among the studied ribbed channel cases. It is found that film cooling effectiveness deteriorates with the increase of the spiral intensity of the cross-flow inside the film hole. Lower spiral intensity leads to better film coverage and thus better cooling effectiveness. The distinct relative merits among the cases at different blowing ratios are explained by this dominant mechanism. With regard to the heat transfer coefficient, the smooth case has higher heat transfer intensity than the ribbed cases under the studied blowing ratios. The laterally averaged heat transfer coefficient of the cross-truncated rib I case is higher than that of the cross-truncated rib II case.
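The two quantities the study reports against blowing ratio can be written down directly; a minimal sketch using the standard definitions (symbols and values are ours, not the study's):

```python
def blowing_ratio(rho_c, u_c, rho_inf, u_inf):
    """Blowing ratio M = (rho_c * u_c) / (rho_inf * u_inf):
    coolant mass flux over mainstream mass flux."""
    return (rho_c * u_c) / (rho_inf * u_inf)

def adiabatic_effectiveness(t_aw, t_inf, t_c):
    """Adiabatic film cooling effectiveness
    eta = (T_inf - T_aw) / (T_inf - T_c); 1 means the adiabatic wall
    sits at the coolant temperature (perfect film coverage)."""
    return (t_inf - t_aw) / (t_inf - t_c)

# Illustrative values only (not from the study):
m = blowing_ratio(1.2, 10.0, 1.2, 20.0)            # -> 0.5
eta = adiabatic_effectiveness(400.0, 500.0, 300.0)  # -> 0.5
```

The study's maps of eta over the cooled surface come from such a definition evaluated at every wall cell of the RANS solution.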

Keywords: cross-flow, cross-truncated rib, film cooling, numerical simulation

Procedia PDF Downloads 132
377 Polycode Texts in Communication of Antisocial Groups: Functional and Pragmatic Aspects

Authors: Ivan Potapov

Abstract:

Background: The aim of this paper is to investigate poly code texts in the communication of youth antisocial groups. Nowadays, the notion of a text has numerous interpretations. Besides all the approaches to defining a text, we must take into account semiotic and cultural-semiotic ones. Rapidly developing IT, world globalization, and new ways of coding of information increase the role of the cultural-semiotic approach. However, the development of computer technologies leads also to changes in the text itself. Polycode texts play a more and more important role in the everyday communication of the younger generation. Therefore, the research of functional and pragmatic aspects of both verbal and non-verbal content is actually quite important. Methods and Material: For this survey, we applied the combination of four methods of text investigation: not only intention and content analysis but also semantic and syntactic analysis. Using these methods provided us with information on general text properties, the content of transmitted messages, and each communicants’ intentions. Besides, during our research, we figured out the social background; therefore, we could distinguish intertextual connections between certain types of polycode texts. As the sources of the research material, we used 20 public channels in the popular messenger Telegram and data extracted from smartphones, which belonged to arrested members of antisocial groups. Findings: This investigation let us assert that polycode texts can be characterized as highly intertextual language unit. Moreover, we could outline the classification of these texts based on communicants’ intentions. The most common types of antisocial polycode texts are a call to illegal actions and agitation. What is more, each type has its own semantic core: it depends on the sphere of communication. However, syntactic structure is universal for most of the polycode texts. 
Conclusion: Polycode texts play an important role in online communication. The results of this investigation demonstrate that in some social groups the use of these texts has a destructive influence on the younger generation and clearly needs further research.

Keywords: text, polycode text, internet linguistics, text analysis, context, semiotics, sociolinguistics

Procedia PDF Downloads 130
376 Shear Behavior of Reinforced Concrete Beams Cast with Recycled Coarse Aggregate

Authors: Salah A. Aly, Mohammed A. Ibrahim, Mostafa M. khttab

Abstract:

The amount of construction and demolition (C&D) waste has increased considerably over the last few decades. From the viewpoint of environmental preservation and effective utilization of resources, crushing C&D concrete waste to produce coarse aggregate (CA) at different replacement percentages for the production of new concrete is one common means of achieving a more environment-friendly concrete. The investigation presented herein was conducted in two phases. In the first phase, the materials were selected and their physical, mechanical, and chemical characteristics were evaluated. Different concrete mixes were designed, with the Recycled Concrete Aggregate (RCA) ratio as the investigated parameter. The mechanical properties of all mixes were evaluated based on compressive strength and workability results. Accordingly, two mixes were chosen for the next phase. In the second phase, the structural behavior of the concrete beams was studied. Sixteen beams were cast to investigate the effects of the RCA ratio, the shear span-to-depth ratio, and the location and reinforcement of openings on the shear behavior of the tested specimens. All beams were designed to fail in shear. Test results of the compressive strength of concrete indicated that replacement of natural aggregate by up to 50% recycled concrete aggregate in mixtures with 350 kg/m³ cement content led to an increase in concrete compressive strength. Moreover, the tensile strength and the modulus of elasticity of the specimens with RCA have values very close to those with natural aggregates. The ultimate shear strength of beams with RCA is very close to that of beams with natural aggregates, indicating the possibility of using RCA as a partial replacement to produce structural concrete elements.
The validity of both the Egyptian Code for the Design and Implementation of Concrete Structures (ECCS 203-2007) and the American Concrete Institute (ACI) 318-2011 code for estimating the shear strength of the tested RCA beams was investigated. It was found that the code procedures give conservative estimates of shear strength.
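For context on the code comparison above, the simplified SI-unit concrete shear capacity of ACI 318M-11 (Eq. 11-3) can be sketched in a few lines of Python. This is only an illustrative calculation, not the authors' actual procedure, and the beam dimensions below are hypothetical:

```python
import math

def aci_simple_vc(fc_mpa, bw_mm, d_mm, lam=1.0):
    """Simplified ACI 318M-11 concrete shear capacity (Eq. 11-3), in newtons.

    fc_mpa: specified concrete compressive strength (MPa)
    bw_mm:  web width (mm); d_mm: effective depth (mm)
    lam:    lightweight-concrete factor (1.0 for normal-weight concrete)
    """
    return 0.17 * lam * math.sqrt(fc_mpa) * bw_mm * d_mm

# Hypothetical beam: f'c = 30 MPa, bw = 200 mm, d = 400 mm
vc_kn = aci_simple_vc(30.0, 200.0, 400.0) / 1000.0
print(f"Vc ≈ {vc_kn:.1f} kN")
```

"Conservative" in the abstract's sense means that the measured ultimate shear strengths of the RCA beams exceeded such code estimates.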

Keywords: construction and demolition (C&D) waste, coarse aggregate (CA), recycled coarse aggregates (RCA), opening

Procedia PDF Downloads 389
375 Optimization of Polymerase Chain Reaction Condition to Amplify Exon 9 of PIK3CA Gene in Preventing False Positive Detection Caused by Pseudogene Existence in Breast Cancer

Authors: Dina Athariah, Desriani Desriani, Bugi Ratno Budiarto, Abinawanto Abinawanto, Dwi Wulandari

Abstract:

Breast cancer is regulated by many genes. Defects in the PIK3CA gene, especially at the exon 9 hotspot positions (E542K and E545K), induce early transformation of breast cells. Early detection of breast cancer based on the mutation profile of this hotspot region can be hampered by the existence of a pseudogene, marked by a substitution at base 1658 (E545A) and a deletion at 1659, as previously proven in several cancers. To the best of the authors' knowledge, no studies have so far reported the pseudogene phenomenon in breast cancer. Here, we report PCR optimization to obtain the true exon 9 of the PIK3CA gene rather than its pseudogene, hence increasing the validity of the data. Material and methods: two genomic DNA samples, coded Dev and En, were used in this experiment. Two primer pairs were designed for the standard PCR method, with PCR product sizes of 200 bp and 400 bp. Another primer set was designed for nested PCR followed by DNA sequencing. For nested PCR, we optimized the annealing temperatures of the first and second PCR runs and the number of cycles of the first run (15 versus 25). Result: standard PCR using both designed primer pairs failed to detect the true PIK3CA gene; a substitution at 1658 and a deletion at 1659 in the sequence chromatogram of the PCR product indicated the pseudogene. Meanwhile, nested PCR under the optimum conditions (annealing temperature of 55°C for the first round and 60.7°C for the second round, with 15 PCR cycles) could detect the true PIK3CA gene. The Dev sample was identified as wild type, while the En sample contained one substitution mutation at position 545 of exon 9, changing the amino acid from E to K. In conclusion, the pseudogene also exists in breast cancer, and the application of the optimized nested PCR in this study could detect the true exon 9 of the PIK3CA gene.

Keywords: breast cancer, exon 9, hotspot mutation, PIK3CA, pseudogene

Procedia PDF Downloads 238
374 The Role of Establishing Zakat-Based Finance in Alleviating Poverty in the Muslim World

Authors: Khan Md. Abdus Subhan, Rabeya Bushra

Abstract:

The management of Intellectual Property (IP) in museums can be complex and challenging, as it requires balancing access and control. On the one hand, museums must ensure that they have the necessary permissions to display works in their collections and make them accessible to the public. On the other hand, they must also protect the rights of the creators and owners of works and ensure that they are not infringing on IP rights. Intellectual property has become an increasingly important aspect of museum operations in the digital age. Museums hold a vast array of cultural assets in their collections, many of which have significant value as IP assets. The balanced management of IP in museums can help generate additional revenue and promote cultural heritage while also protecting the rights of the museum and its collections. Digital technologies have greatly impacted the way museums manage IP, providing new opportunities for revenue generation through e-commerce and licensing while also presenting new challenges related to IP protection and management. Museums must take a comprehensive approach to IP management, leveraging digital technologies, protecting IP rights, and engaging in licensing and e-commerce activities to maximize income and strengthen national economies through sound management of cultural institutions. Overall, the balanced management of IP in museums is crucial for ensuring the sustainability of museum operations and for preserving cultural heritage for future generations. By taking a balanced approach to identifying museum IP assets, museums can generate revenues and secure their financial sustainability to ensure the long-term preservation of their cultural heritage. We can divide IP assets in museums into two kinds: collection IP and museum-generated IP. Certain museums become confused and lose sight of their mission when trying to leverage collections-based IP.
This was the case at the German State Museum in Berlin when the museum made 100 replicas of the Nefertiti bust, marked the replicas "all rights reserved to the Berlin Museum", and issued a certificate to prevent any person or institution from reproducing a replica of this bust. The implications of IP in museums are far-reaching and can have significant impacts on the preservation of cultural heritage, the dissemination of information, and the development of educational programs. As such, it is important for museums to have a comprehensive understanding of IP laws and regulations and to manage IP properly to avoid legal liability, damage to reputation, and loss of revenue. This research aims to highlight the importance and role of intellectual property in museums and to provide some illustrative examples.

Keywords: zakat, economic development, Muslim world, poverty alleviation

Procedia PDF Downloads 33
373 Stable Isotope Ratios Data for Tracing the Origin of Greek Olive Oils and Table Olives

Authors: Efthimios Kokkotos, Kostakis Marios, Beis Alexandros, Angelos Patakas, Antonios Avgeris, Vassilios Triantafyllidis

Abstract:

H, C, and O stable isotope ratios were measured in different olive oils and table olives originating from different regions of Greece. In particular, the stable isotope ratios of olive oils produced in the Lakonia region (Peloponnese, South Greece) from different varieties, i.e., cvs 'Athinolia' and 'Koroneiki', were determined. Additionally, stable isotope ratios were measured in different table olives (cvs 'Koroneiki' and 'Kalamon') produced in the Messinia region. The aim of this study was to provide sufficient isotope ratio data for each variety and region of origin that could be used in discriminative studies of olive oils and table olives produced from different varieties in other regions. In total, 97 samples of olive oil (cvs 'Athinolia' and 'Koroneiki') and 67 samples of table olives (cvs 'Kalamon' and 'Koroneiki') collected during two consecutive sampling periods (2021-2022 and 2022-2023) were measured. The C, H, and O isotope ratios were measured using Isotope Ratio Mass Spectrometry (IRMS), and the results obtained were analyzed using chemometric techniques. The isotope ratio measurements were expressed in permille (‰) using the delta (δ) notation (δ = Rsample/Rstandard - 1, where Rsample and Rstandard represent the isotope ratios of the sample and the standard). Results indicate that the stable isotope ratios of C, H, and O were -28.5 ± 0.45‰, -142.83 ± 2.82‰, and 25.86 ± 0.56‰ versus -29.78 ± 0.71‰, -143.62 ± 1.4‰, and 26.32 ± 0.55‰ in olive oils produced in the Lakonia region from the 'Athinolia' and 'Koroneiki' varieties, respectively. The C, H, and O values for table olives originating from the Messinia region were -28.58 ± 0.63‰, -138.09 ± 3.27‰, and 25.45 ± 0.62‰ versus -29.41 ± 0.59‰, -137.67 ± 1.15‰, and 24.37 ± 0.6‰ for 'Kalamon' and 'Koroneiki' olives, respectively.
Acknowledgments: This research has been co-financed by the European Regional Development Fund of the European Union and Greek national funds through the Operational Program Competitiveness, Entrepreneurship and Innovation, under the call RESEARCH—CREATE—INNOVATE (Project code: T2EDK-02637; MIS 5075094, Title: ‘Innovative Methodological Tools for Traceability, Certification and Authenticity Assessment of Olive Oil and Olives’).
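The delta notation above is a one-line computation. A minimal sketch in Python: the standard ratio is the accepted VPDB ¹³C/¹²C value, while the sample ratio is hypothetical, chosen to land near the -28.5‰ mean reported for 'Athinolia' oils:

```python
def delta_permille(r_sample, r_standard):
    """Delta notation: delta = (Rsample / Rstandard - 1), reported in permille."""
    return (r_sample / r_standard - 1.0) * 1000.0

# 13C/12C ratio of the VPDB reference standard
R_VPDB = 0.0112372
# Hypothetical olive-oil sample ratio (illustrative only)
r_sample = 0.0109169
print(f"d13C = {delta_permille(r_sample, R_VPDB):.1f} permille")
```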

Keywords: olive oil, table olives, Isotope ratio, IRMS, geographical origin

Procedia PDF Downloads 50
372 The Role of Architectural Firms in Enhancing Building Energy Efficiency in Emerging Countries: Processes and Tools Evaluation of Architectural Firms in Egypt

Authors: Mahmoud F. Mohamadin, Ahmed Abdel Malek, Wessam Said

Abstract:

Achieving energy-efficient architecture in general, and in emerging countries in particular, is a challenging process that requires the contribution of various governmental, institutional, and individual entities. The role of architectural design is essential in this process, as it is one of the earliest steps on the road to sustainability. Architectural firms have a moral and professional responsibility to respond to these challenges and deliver buildings that consume less energy. This study aims to evaluate the design processes and tools in practice at Egyptian architectural firms, based on a limited survey, to investigate whether their processes and methods can lead to projects that meet the Egyptian Code of Energy Efficiency Improvement. A case study of twenty architectural firms in Cairo was selected and categorized according to scale: large, medium, and small. A questionnaire was designed and distributed to the firms, and personal meetings with the firms' representatives took place. The questionnaire addressed three main points: the design processes adopted, the usage of performance-based simulation tools, and the usage of BIM tools for energy efficiency purposes. The results of the study revealed that only a small percentage of the large-scale firms have clear strategies for building energy efficiency in their building designs; even then, the application is limited to certain project types or depends on the client's request. The percentage among medium-scale firms is much lower, and such strategies are almost absent in small-scale firms. This demonstrates the urgent need to raise the awareness of the Egyptian architectural design community of the great importance of implementing these methods starting from the early stages of building design. Finally, the study proposes recommendations for such firms to be able to create a healthy built environment and improve the quality of life in emerging countries.

Keywords: architectural firms, emerging countries, energy efficiency, performance-based simulation tools

Procedia PDF Downloads 281
371 Optimizing Fire Tube Boiler Design for Efficient Saturated Steam Production: A Cost-Minimization Approach

Authors: Yoftahe Nigussie Worku

Abstract:

This report unveils a meticulous project focused on the design intricacies of a Fire Tube Boiler tailored for the efficient generation of saturated steam. The overarching objective is to produce 2000 kg/h of saturated steam at 12-bar design pressure, achieved through the development of an advanced fire tube boiler. This design is meticulously crafted to harmonize cost-effectiveness and parameter refinement, with a keen emphasis on material selection for component parts, construction materials, and production methods throughout the analytical phases. The analytical process involves iterative calculations, utilizing pertinent formulas to optimize design parameters, including the selection of tube diameters and overall heat transfer coefficients. The boiler configuration incorporates two passes, a strategic choice influenced by tube and shell size considerations. The utilization of No. 6 heavy fuel oil, with a higher heating value of 44,000 kJ/kg and a lower heating value of 41,300 kJ/kg, results in a fuel consumption of 140.37 kg/h. The boiler achieves an impressive heat output of 1610 kW with an efficiency rating of 85.25%. The fluid flow pattern within the boiler adopts a cross-flow arrangement strategically chosen for inherent advantages. Internally, the welding of the tube sheet to the shell, secured by gaskets and welds, ensures structural integrity. The shell design adheres to European Standard code sections for pressure vessels, encompassing considerations for weight, supplementary accessories (lifting lugs, openings, ends, manhole), and detailed assembly drawings. This research represents a significant stride in optimizing fire tube boiler technology, balancing efficiency and safety considerations in the pursuit of enhanced saturated steam production.
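A quick consistency check on the figures above (a sketch, not the authors' design calculation): multiplying the fuel consumption by the lower heating value reproduces the quoted 1610 kW:

```python
# Fuel heat input = fuel mass flow * lower heating value
m_fuel = 140.37            # fuel consumption, kg/h (from the abstract)
lhv = 41300.0              # lower heating value of No. 6 fuel oil, kJ/kg
power_kw = m_fuel * lhv / 3600.0   # kJ/h -> kW
print(f"fuel heat input ≈ {power_kw:.0f} kW")
```

This suggests the 1610 kW figure is the fuel heat input; the steam-side output would then be this value times the 85.25% efficiency.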

Keywords: fire tube, saturated steam, material selection, efficiency

Procedia PDF Downloads 74
370 E4D-MP: Time-Lapse Multiphysics Simulation and Joint Inversion Toolset for Large-Scale Subsurface Imaging

Authors: Zhuanfang Fred Zhang, Tim C. Johnson, Yilin Fang, Chris E. Strickland

Abstract:

A variety of geophysical techniques are available to image the opaque subsurface with little or no contact with the soil. It is common to conduct time-lapse surveys of different types at a given site for improved subsurface imaging. Regardless of the chosen survey methods, processing the massive amount of survey data is often a challenge. Currently available software applications are generally based on one-dimensional assumptions and designed for desktop personal computers. Hence, they are usually incapable of imaging three-dimensional (3D) processes/variables in the subsurface at reasonable spatial scales, and the maximum amount of data that can be inverted simultaneously is often very small due to the limitations of personal computers. High-performance, integrative software that enables near-real-time joint processing of multiple geophysical methods is therefore needed. E4D-MP enables the integration and inversion of time-lapse, large-scale survey data from geophysical methods. Using supercomputing capability and parallel computation algorithms, E4D-MP is capable of processing data across vast spatiotemporal scales in near real time. The main code and the modules of E4D-MP for inverting individual or combined data sets of time-lapse 3D electrical resistivity, spectral induced polarization, and gravity surveys have been developed and demonstrated for subsurface imaging. E4D-MP provides the capability of imaging the processes (e.g., liquid or gas flow, solute transport, cavity development) and subsurface properties (e.g., rock/soil density, conductivity) critical for successful environmental engineering efforts such as environmental remediation, carbon sequestration, geothermal exploration, and mine land reclamation, among others.

Keywords: gravity survey, high-performance computing, sub-surface monitoring, electrical resistivity tomography

Procedia PDF Downloads 152
369 Magnetic Properties of Nickel Oxide Nanoparticles in Superparamagnetic State

Authors: Navneet Kaur, S. D. Tiwari

Abstract:

Superparamagnetism is an interesting phenomenon observed in small particles of magnetic materials. It arises due to a reduction in particle size. In the superparamagnetic state, as the thermal energy overcomes the magnetic anisotropy energy, the magnetic moment vector of a particle flips its magnetization direction between states of minimum energy. Superparamagnetic nanoparticles have been attracting researchers due to many applications such as information storage, magnetic resonance imaging, biomedical applications, and sensors. For information storage, thermal fluctuations lead to loss of data, so nanoparticles should have a high blocking temperature; to achieve this, nanoparticles should have a high magnetic moment and magnetic anisotropy constant. In this work, the magnetic anisotropy constant of an antiferromagnetic nanoparticle system is determined. Magnetic studies on nanoparticles of NiO (nickel oxide) are well reported. This antiferromagnetic nanoparticle system has a high blocking temperature and a magnetic anisotropy constant of order 10⁵ J/m³. The magnetic study of NiO nanoparticles in the superparamagnetic region is presented. NiO particles of two different sizes, i.e., 6 and 8 nm, were synthesized using the chemical route. These particles were characterized by X-ray diffraction, transmission electron microscopy, and superconducting quantum interference device magnetometry. The magnetization vs. applied magnetic field and temperature data for both samples confirm their superparamagnetic nature. The blocking temperatures for 6 and 8 nm particles are found to be 200 and 172 K, respectively. Magnetization vs. applied magnetic field data of NiO are fitted to an appropriate magnetic expression using a non-linear least-squares fit method. The roles of particle size distribution and magnetic anisotropy are taken into account in the magnetization expression. The source code is written in the Python programming language.
This fitting provides the magnetic anisotropy constant for NiO along with the other magnetic fit parameters. The estimated particle size distribution matches well with the transmission electron micrographs. The values of the magnetic anisotropy constant for 6 and 8 nm particles are found to be 1.42 × 10⁵ and 1.20 × 10⁵ J/m³, respectively. The obtained magnetic fit parameters are verified using the Néel model. It is concluded that the effect of magnetic anisotropy should not be ignored while studying the magnetization process of nanoparticles.
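The fitting procedure described can be sketched with a single-moment Langevin model in pure Python. The paper's actual expression additionally folds in a particle size distribution and anisotropy, and the moment value, field range, and brute-force scan below are illustrative stand-ins for the non-linear least-squares fit:

```python
import math

KB = 1.380649e-23  # Boltzmann constant, J/K

def langevin(x):
    """Langevin function L(x) = coth(x) - 1/x, with the small-x limit x/3."""
    if abs(x) < 1e-6:
        return x / 3.0
    return 1.0 / math.tanh(x) - 1.0 / x

def magnetization(b, mu, ms, temperature):
    """Superparamagnetic M(B) for a single particle moment mu (J/T).
    No size distribution or anisotropy, unlike the full model in the paper."""
    return ms * langevin(mu * b / (KB * temperature))

# Synthetic, noise-free M(B) data for a hypothetical uncompensated moment
# of ~3e-21 J/T (~320 Bohr magnetons), then a brute-force least-squares
# scan for mu as a stand-in for the non-linear fit.
mu_true, ms, temp = 3.0e-21, 1.0, 300.0
fields = [0.25 * i for i in range(1, 21)]          # applied field B, tesla
data = [magnetization(b, mu_true, ms, temp) for b in fields]

def sse(mu):
    return sum((magnetization(b, mu, ms, temp) - d) ** 2
               for b, d in zip(fields, data))

candidates = [mu_true * k / 100.0 for k in range(50, 151)]
best_mu = min(candidates, key=sse)
print(f"fitted moment ≈ {best_mu:.2e} J/T")
```

In practice a library routine (e.g. a Levenberg-Marquardt fitter) would replace the grid scan; the sketch only shows the model-plus-residual structure of such a fit.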

Keywords: anisotropy, superparamagnetic, nanoparticle, magnetization

Procedia PDF Downloads 129
368 Thermodynamic Analysis and Experimental Study of Agricultural Waste Plasma Processing

Authors: V. E. Messerle, A. B. Ustimenko, O. A. Lavrichshev

Abstract:

A large amount of manure and its irrational use negatively affect the environment. As compared with biomass fermentation, plasma processing of manure makes it possible to intensify the process of obtaining fuel gas, which consists mainly of synthesis gas (CO + H₂), and to increase plant productivity by 150-200 times. This is achieved due to the high temperature in the plasma reactor and a multiple reduction in waste processing time. This paper examines the plasma processing of biomass using the example of dried mixed animal manure (dung with a moisture content of 30%). Characteristic composition of dung, wt.%: H₂O – 30, C – 29.07, H – 4.06, O – 32.08, S – 0.26, N – 1.22, P₂O₅ – 0.61, K₂O – 1.47, CaO – 0.86, MgO – 0.37. The thermodynamic code TERRA was used to numerically analyze dung plasma gasification and pyrolysis in the temperature range 300-3,000 K and at a pressure of 0.1 MPa for the following thermodynamic systems: 100% dung + 25% air (plasma gasification) and 100% dung + 25% nitrogen (plasma pyrolysis). Calculations were conducted to determine the composition of the gas phase, the degree of carbon gasification, and the specific energy consumption of the processes. At the optimum temperature of 1,500 K, which provides both complete gasification of dung carbon and the maximum yield of combustible components (99.4 vol.% during gasification and 99.5 vol.% during pyrolysis), as well as decomposition of toxic furan, dioxin, and benzo(a)pyrene compounds, the following combustible gas compositions were obtained, vol.%: CO – 29.6, H₂ – 35.6, CO₂ – 5.7, N₂ – 10.6, H₂O – 17.9 (gasification) and CO – 30.2, H₂ – 38.3, CO₂ – 4.1, N₂ – 13.3, H₂O – 13.6 (pyrolysis). The specific energy consumption of gasification and pyrolysis of dung at 1,500 K is 1.28 and 1.33 kWh/kg, respectively.
An installation with a DC plasma torch with a rated power of 100 kW and a plasma reactor with a dung capacity of 50 kg/h was used for dung processing experiments. The dung was gasified in an air (or nitrogen during pyrolysis) plasma jet, which provided a mass-average temperature in the reactor volume of at least 1,600 K. The organic part of the dung was gasified, and the inorganic part of the waste was melted. For pyrolysis and gasification of dung, the specific energy consumption was 1.5 kWh/kg and 1.4 kWh/kg, respectively. The maximum temperature in the reactor reached 1,887 K. At the outlet of the reactor, a gas of the following composition was obtained, vol.%: СO – 25.9, H₂ – 32.9, СO₂ – 3.5, N₂ – 37.3 (pyrolysis in nitrogen plasma); СO – 32.6, H₂ – 24.1, СO₂ – 5.7, N₂ – 35.8 (air plasma gasification). The specific heat of combustion of the combustible gas formed during pyrolysis and plasma-air gasification of agricultural waste is 10,500 and 10,340 kJ/kg, respectively. Comparison of the integral indicators of dung plasma processing showed satisfactory agreement between the calculation and experiment.

Keywords: agricultural waste, experiment, plasma gasification, thermodynamic calculation

Procedia PDF Downloads 35
367 Zero Energy Buildings in Hot-Humid Tropical Climates: Boundaries of the Energy Optimization Grey Zone

Authors: Nakul V. Naphade, Sandra G. L. Persiani, Yew Wah Wong, Pramod S. Kamath, Avinash H. Anantharam, Hui Ling Aw, Yann Grynberg

Abstract:

Achieving zero-energy targets in existing buildings is known to be a difficult task, requiring deep cuts in building energy consumption that in many cases clash with the functional necessities of the building wherever on-site energy generation is unable to match the overall energy consumption. Between the building's consumption optimization limit and the energy target stretches a case-specific optimization grey zone, which requires tailored intervention and enhanced user commitment. In view of the future adoption of more stringent energy-efficiency targets in hot-humid tropical climates, this study aims to define the energy optimization grey zone by assessing the energy-efficiency limit of state-of-the-art, typical mid- and high-rise fully air-conditioned office buildings, through the integration of currently available technologies. Energy models of two code-compliant generic office-building typologies were developed as a baseline: a 20-storey 'high-rise' and a 7-storey 'mid-rise'. Design iterations carried out on the energy models with advanced, market-ready technologies in lighting, envelope, plug-load management, and ACMV systems and controls led to a representative energy model of the current maximum technical potential. The simulations showed that ZEB targets could be achieved in fully air-conditioned buildings of, on average, seven floors or fewer only by compromising on energy-intense facilities (such as full air-conditioning, unlimited power supply, standard user behaviour, etc.). This paper argues that drastic changes must be made in tropical buildings to span the energy optimization grey zone and achieve zero energy. Fully air-conditioned areas must be rethought, while smart technologies must be integrated with aggressive involvement and motivation of the users to synchronize with the new system's energy savings goal.

Keywords: energy simulation, office building, tropical climate, zero energy buildings

Procedia PDF Downloads 180
366 The Efficacy of Thymbra spicata Ethanolic Extract and its Main Component Carvacrol on In vitro Model of Metabolically-Associated Dysfunctions

Authors: Farah Diab, Mohamad Khalil, Francesca Storace, Francesca Baldini, Piero Portincasa, Giulio Lupidi, Laura Vergani

Abstract:

Thymbra spicata is a thyme-like plant belonging to the Lamiaceae family that shows a wide distribution, especially in the eastern Mediterranean region. Leaves of T. spicata contain large amounts of phenols such as phenolic acids (rosmarinic acid), phenolic monoterpenes (carvacrol), and flavonoids. In Lebanon, T. spicata is currently used as a culinary herb in salads and infusions, as well as for traditional medicinal purposes. Carvacrol (5-isopropyl-2-methylphenol), the most abundant polyphenol in the organic extract and essential oils, has a great array of pharmacological properties; in fact, carvacrol is widely employed as a food additive and nutraceutical agent. Our aim is to investigate the beneficial effects of T. spicata ethanolic extract (TE) and its main component, carvacrol, using in vitro models of hepatic steatosis and endothelial dysfunction. As a further point, we investigated if and how the binding of carvacrol to albumin, the physiological transporter for drugs in the blood, might be altered by the presence of high levels of fatty acids (FAs), thus impairing carvacrol bio-distribution in vivo. As models, hepatic FaO cells treated with exogenous FAs such as oleate and palmitate were used to mimic hepatic steatosis, while endothelial HECV cells exposed to hydrogen peroxide served as a model of endothelial dysfunction. In these models, we measured lipid accumulation, free radical production, lipoperoxidation, and nitric oxide release before and after treatment with carvacrol. The binding of carvacrol to albumin with and without high levels of long-chain FAs was assessed by absorption and emission spectroscopies.
Our findings show that: (i) both TE and carvacrol counteracted lipid accumulation in hepatocytes by decreasing the intracellular and extracellular lipid contents in steatotic FaO cells; (ii) both decreased oxidative stress in endothelial cells by significantly reducing lipoperoxidation and free radical production, as well as attenuating nitric oxide release; (iii) high levels of circulating FAs reduced the binding of carvacrol to albumin. The beneficial effects of TE and carvacrol on both hepatic and endothelial cells point to a nutraceutical potential. However, high levels of circulating FAs, such as those occurring in metabolic disorders, might hinder carvacrol transport, bio-distribution, and pharmacodynamics.

Keywords: carvacrol, endothelial dysfunction, fatty acids, non-alcoholic fatty liver diseases, serum albumin

Procedia PDF Downloads 187
365 Nanofluidic Cell for Resolution Improvement of Liquid Transmission Electron Microscopy

Authors: Deybith Venegas-Rojas, Sercan Keskin, Svenja Riekeberg, Sana Azim, Stephanie Manz, R. J. Dwayne Miller, Hoc Khiem Trieu

Abstract:

Liquid Transmission Electron Microscopy (TEM) is a growing area with a broad range of applications from physics and chemistry to materials engineering and biology, in which it is possible to image unseen phenomena in situ. For this, a nanofluidic device is used to insert the nanoflow with the sample inside the microscope, keeping the liquid encapsulated against the high vacuum. In recent years, Si3N4 windows have been widely used because of their mechanical stability and low imaging contrast. Nevertheless, the pressure difference between the fluid inside and the vacuum in the TEM causes the windows to bulge. This increases the imaged fluid volume, which decreases the signal-to-noise ratio (SNR), limiting the achievable spatial resolution. With the proposed device, the membrane is reinforced with a microstructure capable of withstanding higher pressure differences and almost completely removing the bulging. A theoretical study is presented with Finite Element Method (FEM) simulations, which provide a deep understanding of the mechanical conditions of the membrane and prove the effectiveness of this novel concept. Bulging and von Mises stress were studied for different membrane dimensions, geometries, materials, and thicknesses. The device was microfabricated from a thin wafer coated with thin layers of SiO2 and Si3N4. After the lithography process, these layers were etched (reactive ion etching and buffered oxide etch (BOE), respectively). After that, the microstructure was etched (deep reactive ion etching). Then the backside SiO2 was etched (BOE), and the array of free-standing micro-windows was obtained. Additionally, a Pyrex wafer was patterned with windows and inlets/outlets and bonded (anodic bonding) to the Si side to facilitate handling of the thin wafer. Finally, a thin spacer was sputtered and patterned with microchannels and trenches to guide the nanoflow with the samples.
This approach considerably reduces the common bulging problem of the window, improving the SNR, contrast, and spatial resolution, substantially increasing the mechanical stability of the windows, and allowing a larger viewing area. These developments lead to a wider range of applications of liquid TEM, expanding the spectrum of possible experiments in the field.
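The severity of the bulging problem can be illustrated with classical small-deflection plate theory (a rough sketch with hypothetical window dimensions, not the paper's FEM analysis):

```python
# Center deflection of a clamped square plate under uniform pressure
# (small-deflection Kirchhoff theory):
#   w_max = 0.00126 * p * a**4 / D,   D = E * t**3 / (12 * (1 - nu**2))
# All dimensions below are assumed but typical for Si3N4 TEM windows.
E = 250e9       # Young's modulus of Si3N4, Pa (assumed)
nu = 0.23       # Poisson's ratio (assumed)
t = 50e-9       # membrane thickness, m
a = 50e-6       # window side length, m
p = 101325.0    # ~1 atm pressure difference against the TEM vacuum, Pa

D = E * t**3 / (12.0 * (1.0 - nu**2))     # flexural rigidity, N*m
w_max = 0.00126 * p * a**4 / D            # linear-theory center deflection, m
print(f"w_max ≈ {w_max * 1e6:.0f} um, vs. thickness t = {t * 1e9:.0f} nm")
```

The linear estimate (hundreds of micrometers) vastly exceeds the 50 nm thickness, so such a window operates deep in the large-deflection, membrane-stress regime; real windows bulge far less, which is exactly why an FEM treatment and a reinforcing microstructure are needed.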

Keywords: liquid cell, liquid transmission electron microscopy, nanofluidics, nanofluidic cell, thin films

Procedia PDF Downloads 251
364 Nanowire Substrate to Control Differentiation of Mesenchymal Stem Cells

Authors: Ainur Sharip, Jose E. Perez, Nouf Alsharif, Aldo I. M. Bandeas, Enzo D. Fabrizio, Timothy Ravasi, Jasmeen S. Merzaban, Jürgen Kosel

Abstract:

Bone marrow-derived human mesenchymal stem cells (MSCs) are attractive candidates for tissue engineering and regenerative medicine due to their ability to differentiate into osteoblasts, chondrocytes, or adipocytes. Differentiation is influenced by biochemical and biophysical stimuli provided by the microenvironment of the cell. Thus, altering the mechanical characteristics of a cell culture scaffold can directly influence a cell's microenvironment and lead to stem cell differentiation. Mesenchymal stem cells were cultured on densely packed, vertically aligned magnetic iron nanowires (NWs), and the effects of the NWs on cytoskeleton rearrangement and differentiation were studied. An electrochemical deposition method was employed to fabricate NWs in nanoporous alumina templates, followed by a partial release to reveal the NW array. This created a cell growth substrate with free-standing NWs. The Fe NWs had a length of 2-3 µm and an average diameter of 33 nm. Mechanical stimuli generated by the physical movement of these iron NWs in response to a magnetic field can stimulate osteogenic differentiation. Induction of osteogenesis was estimated using an osteogenic marker, osteopontin, and the reduction of the stem cell markers CD73 and CD105. MSCs were grown on the NWs, and fluorescence microscopy was employed to monitor the expression of the markers. A magnetic field with an intensity of 250 mT and a frequency of 0.1 Hz was applied for 12 hours/day over periods of one and two weeks. The magnetically activated substrate enhanced the osteogenic differentiation of the MSCs compared to culture conditions without a magnetic field. Quantification of the osteopontin signal revealed approximately a seven-fold increase in the expression of this protein after two weeks of culture.
Immunostaining against CD73 and CD105 showed expression of these markers at the earlier time point (two days) and a considerable reduction after one week of exposure to the magnetic field. Overall, these results demonstrate the use of a magnetic NW substrate to stimulate the osteogenic differentiation of MSCs. This method significantly decreases the time needed to induce osteogenic differentiation compared to commercial biochemical methods, such as osteogenic differentiation kits, which usually require more than two weeks. Contact-free stimulation of MSC differentiation using a magnetic field has potential uses in tissue engineering, regenerative medicine, and bone formation therapies.

Keywords: cell substrate, magnetic nanowire, mesenchymal stem cell, stem cell differentiation

Procedia PDF Downloads 192
363 An Evolutionary Approach for QAOA for Max-Cut

Authors: Francesca Schiavello

Abstract:

This work aims to create a hybrid algorithm combining the Quantum Approximate Optimization Algorithm (QAOA) with an Evolutionary Algorithm (EA) in place of traditional gradient-based optimization. The QAOA was introduced in 2014, when it outperformed the best known classical algorithm for Max-Cut at the time. Whilst classical algorithms have since improved and returned to being faster and more efficient, this was a huge milestone for quantum computing, and the original work is often used as a benchmark and as a foundation for exploring QAOA variants. Alongside other famous algorithms such as Grover's and Shor's, it highlights the potential that quantum computing holds, and presents the prospect of a real quantum advantage: if the hardware continues to improve, this could constitute a revolutionary era. Given that the hardware is not there yet, many scientists are working on the software side in the hope of future progress. Some of the major limitations holding back quantum computing are the quality of qubits and the noisy interference they generate when computing solutions, the barren plateaus that hinder the optimization search in the parameter space, and the limited number of available qubits, which restricts the scale of problems that can be solved. These three issues are intertwined and motivate the use of EAs in this work. Firstly, EAs do not rely on gradient-based or linear optimization methods to search the parameter space, and because of this freedom from gradients, they should suffer less from barren plateaus. Secondly, because the algorithm searches through a population of candidate solutions, it can be parallelized to speed up the search and optimization. 
The evaluation of the cost function, as in many other algorithms, is notoriously slow, and the ability to parallelize it can drastically improve the competitiveness of the QAOA with respect to purely classical algorithms. Thirdly, because of the nature and structure of EAs, solutions can be carried forward in time, making them more robust to noise and uncertainty. Preliminary results show that the EA attached to the QAOA can perform on par with the traditional QAOA using the COBYLA optimizer, a linear-approximation-based method, and in some instances it even finds a better Max-Cut. Whilst the final objective of the work is an algorithm that consistently beats the original QAOA, or its variants, through either speedups or solution quality, this initial result is promising and shows the potential of EAs in this field. Further tests need to be performed on an array of different graphs, with the parallelization aspect of the work commencing in October 2023 and tests on real hardware scheduled for early 2024.
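The population-based search idea can be illustrated classically. The sketch below is a minimal evolutionary search for a maximum cut, assuming a simple truncation-selection EA with one-point crossover and bit-flip mutation; it is not the authors' QAOA hybrid (whose ansatz and hyperparameters are not given in the abstract), and all names and parameter values are illustrative:

```python
import random

def cut_value(assignment, edges):
    """Number of edges crossing the partition encoded by a 0/1 assignment."""
    return sum(1 for u, v in edges if assignment[u] != assignment[v])

def evolve_max_cut(n_nodes, edges, pop_size=30, generations=100, mut_rate=0.1, seed=0):
    rng = random.Random(seed)
    # Initial population: random 0/1 partitions of the node set.
    pop = [[rng.randint(0, 1) for _ in range(n_nodes)] for _ in range(pop_size)]
    for _ in range(generations):
        # Truncation selection: keep the better half of the population.
        pop.sort(key=lambda a: cut_value(a, edges), reverse=True)
        elite = pop[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            p1, p2 = rng.sample(elite, 2)
            cx = rng.randrange(1, n_nodes)  # one-point crossover
            child = p1[:cx] + p2[cx:]
            # Bit-flip mutation on each node's side of the cut.
            child = [b ^ 1 if rng.random() < mut_rate else b for b in child]
            children.append(child)
        pop = elite + children
    best = max(pop, key=lambda a: cut_value(a, edges))
    return best, cut_value(best, edges)

# Example: complete bipartite graph K_{2,3}; its maximum cut uses all 6 edges.
edges = [(0, 2), (0, 3), (0, 4), (1, 2), (1, 3), (1, 4)]
best, value = evolve_max_cut(5, edges)
print(value)  # 6
```

In the hybrid setting described in the abstract, the genome would instead encode the QAOA angles, and the fitness would be the measured cut expectation — exactly the cost evaluation that can be parallelized across the population.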

Keywords: evolutionary algorithm, max cut, parallel simulation, quantum optimization

Procedia PDF Downloads 57
362 The Negative Implications of Childhood Obesity and Malnutrition on Cognitive Development

Authors: Stephanie Remedios, Linda Veronica Rios

Abstract:

Background. Pediatric obesity is a serious health problem linked to multiple physical diseases and ailments, including diabetes, heart disease, and joint issues. While research has shown that pediatric obesity can bring about an array of physical illnesses, less is known about how the condition affects children’s cognitive development. With childhood overweight and obesity prevalence rates on the rise, it is essential to understand the scope of their cognitive consequences. The present literature review tested the hypothesis that poor physical health, such as childhood obesity or malnutrition, negatively impacts a child’s cognitive development. Methodology. A systematic review was conducted to determine the relationship between poor physical health and lower cognitive functioning in children ages 4-16. Electronic databases were searched for studies from the past ten years. The following databases were used: Science Direct, FIU Libraries, and Google Scholar. Inclusion criteria consisted of peer-reviewed academic articles written in English from 2012 to 2022 that analyzed the relationship of childhood malnutrition and obesity to cognitive development. A total of 17,000 articles were obtained, of which 16,987 were excluded for not addressing the cognitive implications exclusively, leaving 13 retained articles. Results. Research suggested a significant connection between diet and cognitive development. Both diet and physical activity are strongly correlated with higher cognitive functioning. Cognitive domains explored in this work included learning, memory, attention, inhibition, and impulsivity. IQ scores were also considered objective representations of overall cognitive performance. Studies showed that physical activity benefits cognitive development, primarily executive functioning and language development. 
Additionally, children suffering from pediatric obesity or malnutrition were found to score 3-10 points lower on IQ tests than healthy, same-aged children. Conclusion. This review provides evidence that physical activity and overall physical health, including an appropriate diet and nutritional intake, have beneficial effects on cognitive outcomes. The primary conclusion is that childhood obesity and malnutrition have detrimental effects on cognitive development in children, primarily on learning outcomes. Assuming childhood obesity and malnutrition rates continue their current trend, it is essential to understand the complete physical and psychological implications of obesity and malnutrition in pediatric populations. Given the limitations encountered in our research, further studies are needed to evaluate the areas of cognition affected during childhood.

Keywords: childhood malnutrition, childhood obesity, cognitive development, cognitive functioning

Procedia PDF Downloads 114
361 Investigating the Effect of Using Amorphous Silica Ash Obtained from Rice Husk as a Partial Replacement of Ordinary Portland Cement on the Mechanical and Microstructure Properties of Cement Paste and Mortar

Authors: Aliyu Usman, Muhaammed Bello Ibrahim, Yusuf D. Amartey, Jibrin M. Kaura

Abstract:

This research investigates the effect of using amorphous silica ash (ASA) obtained from rice husk as a partial replacement for ordinary Portland cement (OPC) on the mechanical and microstructure properties of cement paste and mortar. ASA was used to replace OPC at 3, 5, 8, and 10 percent; these partial replacements were used to produce Cement-ASA paste and Cement-ASA mortar. ASA was found to contain all the major oxides found in cement with the exception of alumina: SiO2 (91.5%), CaO (2.84%), and Fe2O3 (1.96%), with a loss on ignition (LOI) of 9.18%. It also contains other minor oxides found in cement. The consistency of the Cement-ASA paste was found to increase with increasing ASA replacement, and likewise the setting time and soundness of the paste increased with increasing ASA replacement. The tests on hardened mortar were destructive in nature: flexural strength tests on prismatic beams (40 mm x 40 mm x 160 mm) at 2, 7, 14, and 28 days of curing, and compressive strength tests on cubes (40 mm x 40 mm, using the auxiliary steel platens) at the same ages. The flexural and compressive strengths of the Cement-ASA mortar were found to increase with curing time and decrease with cement replacement by ASA. The 5 percent replacement of cement with ASA attained the highest strength at all curing ages, and all the percentage replacements attained the target compressive strength of 6 N/mm2 at 28 days. The drying shrinkage of the Cement-ASA mortar increased with curing time; at all curing ages it was greater than that of the control specimen, and all values exceeded the code recommendation of less than 0.03%. 
Scanning electron microscopy (SEM) was used to study the Cement-ASA mortar microstructure and to examine the hydration products and morphology.
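The reported oxide figures can be checked in a few lines against a standard pozzolan criterion. The sketch below assumes the ASTM C618 requirement that SiO2 + Al2O3 + Fe2O3 total at least 70 percent; that threshold is an assumption of this sketch, not a value stated in the abstract:

```python
# Reported oxide composition of the amorphous silica ash (ASA), in percent.
asa_oxides = {"SiO2": 91.5, "CaO": 2.84, "Fe2O3": 1.96, "Al2O3": 0.0}  # no alumina detected
loss_on_ignition = 9.18  # percent, as reported

# Assumed pozzolan criterion (ASTM C618): SiO2 + Al2O3 + Fe2O3 >= 70 percent.
pozzolanic_sum = asa_oxides["SiO2"] + asa_oxides["Al2O3"] + asa_oxides["Fe2O3"]
meets_oxide_requirement = pozzolanic_sum >= 70.0
print(round(pozzolanic_sum, 2), meets_oxide_requirement)  # 93.46 True
```

On this assumed criterion the oxide sum of 93.46 percent clears the threshold comfortably, although the reported LOI of 9.18 percent sits close to typical specification limits.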

Keywords: amorphous silica ash, cement mortar, cement paste, scanning electron microscope

Procedia PDF Downloads 430