Search results for: homogenisation of elastic material properties
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 13942

Search results for: homogenisation of elastic material properties

922 Generation and Migration of CO₂ in the Bahi Sandstone Reservoir within the Ennaga Sub Basin, Sirte Basin, Libya

Authors: Moaawia Abdulgader Gdara

Abstract:

This work presents a study of carbon dioxide generation and migration in the Bahi sandstone reservoir over the EPSA 120/136 (conc 72), En Naga Sub Basin, Sirte Basin, Libya. The Lower Cretaceous Bahi Sandstone is the result of deposition that occurred between the start of the Cretaceous rifting that formed the area's Horsts, Grabens, and Cenomanian marine transgression. Bahi sediments were derived mainly from those Nubian sediments exposed on the structurally higher blocks, transported short distances into newly forming depocenters such as the En Naga Sub-basin, and were deposited by continental processes over the Sirte Unconformity (pre-Late Cretaceous surface). Bahi Sandstone facies are recognized in the En Naga Sub-basin within different lithofacies distributed over this sub-base. One of the two lithofacies recognized in the Bahi is a very fine to very coarse, subangular to angular, pebbly, and occasionally conglomeratic quartz sandstone, which is commonly described as being compacted but friable. This sandstone may contain pyrite, minor kaolinite. This facies was encountered at 11,042 feet in F1-72 well and at 9,233 feet in L1-72. Good, reservoir quality sandstones are associated with paleotopographic highs within the sub-basin and around its margins where winnowing and/or deflationary processes occurred. The second Bahi Lithofacies is a thinly bedded sequence dominated by shales and siltstones with subordinate sandstones and carbonates. The sandstones become more abundant with depth. This facies was encountered at 12,580 feet in P1 -72 and at 11,850 feet in G1a -72. This argillaceous sequence is likely the Bahi sandstone's lateral facies equivalent deposited in paleotopographic lows, which received finer grained material. The Bahi sandstones are generally described as a good reservoir rock, which after prolific production tests for the drilled wells that makes Bahi sandstones the principal reservoir rocks for CO₂ where large volumes of CO₂ gas have been discovered in the Bahi Formation on and near EPSA 120/136, (conc 72). CO₂ occurs in this area as a result of the igneous activity of the Al Harouge Al Aswad complex. Igneous extrusive have been pierced in the subsurface and are exposed at the surface. Bahi CO₂ prospectivity is thought to be excellent in the central to western areas of EPSA 120/136 (CONC 72), where there are better reservoir quality sandstones associated with Paleostructural highs. Condensate and gas prospectivity increases to the east as the CO₂ prospectivity decreases with distance away from the Al Haruj Al Aswad igneous complex. To date, it has not been possible to accurately determine the volume of these strategically valuable reserves, although there are positive indications that they are very large. Three main structures (Barrut I, En Naga A, and En Naga O) are thought to be prospective for the lower Cretaceous Bahi sandstone development. These leads are the most attractive on EPSA 120/136 for the deep potential.

Keywords: En Naga Sub Basin, Al Harouge Al Aswad’s Igneous Complex, carbon dioxide generation and migration in the Bahi sandstone reservoir, lower cretaceous Bahi sandstone

Procedia PDF Downloads 106
921 Assessing Prescribed Burn Severity in the Wetlands of the Paraná River -Argentina

Authors: Virginia Venturini, Elisabet Walker, Aylen Carrasco-Millan

Abstract:

Latin America stands at the front of climate change impacts, with forecasts projecting accelerated temperature and sea level rises compared to the global average. These changes are set to trigger a cascade of effects, including coastal retreat, intensified droughts in some nations, and heightened flood risks in others. In Argentina, wildfires historically affected forests, but since 2004, wetland fires have emerged as a pressing concern. By 2021, the wetlands of the Paraná River faced a dangerous situation. In fact, during the year 2021, a high-risk scenario was naturally formed in the wetlands of the Paraná River, in Argentina. Very low water levels in the rivers, and excessive standing dead plant material (fuel), triggered most of the fires recorded in the vast wetland region of the Paraná during 2020-2021. During 2008 fire events devastated nearly 15% of the Paraná Delta, and by late 2021 new fires burned more than 300,000 ha of these same wetlands. Therefore, the goal of this work is to explore remote sensing tools to monitor environmental conditions and the severity of prescribed burns in the Paraná River wetlands. Thus, two prescribed burning experiments were carried out in the study area (31°40’ 05’’ S, 60° 34’ 40’’ W) during September 2023. The first experiment was carried out on Sept. 13th, in a plot of 0.5 ha which dominant vegetation were Echinochloa sp., and Thalia, while the second trial was done on Sept 29th in a plot of 0.7 ha, next to the first burned parcel; here the dominant vegetation species were Echinochloa sp. and Solanum glaucophyllum. Field campaigns were conducted between September 8th and November 8th to assess the severity of the prescribed burns. Flight surveys were conducted utilizing a DJI® Inspire II drone equipped with a Sentera® NDVI camera. Then, burn severity was quantified by analyzing images captured by the Sentera camera along with data from the Sentinel 2 satellite mission. This involved subtracting the NDVI images obtained before and after the burn experiments. The results from both data sources demonstrate a highly heterogeneous impact of fire within the patch. Mean severity values obtained with drone NDVI images of the first experience were about 0.16 and 0.18 with Sentinel images. For the second experiment, mean values obtained with the drone were approximately 0.17 and 0.16 with Sentinel images. Thus, most of the pixels showed low fire severity and only a few pixels presented moderated burn severity, based on the wildfire scale. The undisturbed plots maintained consistent mean NDVI values throughout the experiments. Moreover, the severity assessment of each experiment revealed that the vegetation was not completely dry, despite experiencing extreme drought conditions.

Keywords: prescribed-burn, severity, NDVI, wetlands

Procedia PDF Downloads 69
920 An Experimental Study of Scalar Implicature Processing in Chinese

Authors: Liu Si, Wang Chunmei, Liu Huangmei

Abstract:

A prominent component of the semantic versus pragmatic debate, scalar implicature (SI) has been gaining great attention ever since it was proposed by Horn. The constant debate is between the structural and pragmatic approach. The former claims that generation of SI is costless, automatic, and dependent mostly on the structural properties of sentences, whereas the latter advocates both that such generation is largely dependent upon context, and that the process is costly. Many experiments, among which Katsos’s text comprehension experiments are influential, have been designed and conducted in order to verify their views, but the results are not conclusive. Besides, most of the experiments were conducted in English language materials. Katsos conducted one off-line and three on-line text comprehension experiments, in which the previous shortcomings were addressed on a certain extent and the conclusion was in favor of the pragmatic approach. We intend to test the results of Katsos’s experiment in Chinese scalar implicature. Four experiments in both off-line and on-line conditions to examine the generation and response time of SI in Chinese "yixie" (some) and "quanbu (dou)" (all) will be conducted in order to find out whether the structural or the pragmatic approach could be sustained. The study mainly aims to answer the following questions: (1) Can SI be generated in the upper- and lower-bound contexts as Katsos confirmed when Chinese language materials are used in the experiment? (2) Can SI be first generated, then cancelled as default view claimed or can it not be generated in a neutral context when Chinese language materials are used in the experiment? (3) Is SI generation costless or costly in terms of processing resources? (4) In line with the SI generation process, what conclusion can be made about the cognitive processing model of language meaning? Is it a parallel model or a linear model? Or is it a dynamic and hierarchical model? According to previous theoretical debates and experimental conflicts, presumptions could be made that SI, in Chinese language, might be generated in the upper-bound contexts. Besides, the response time might be faster in upper-bound than that found in lower-bound context. SI generation in neutral context might be the slowest. At last, a conclusion would be made that the processing model of SI could not be verified by either absolute structural or pragmatic approaches. It is, rather, a dynamic and complex processing mechanism, in which the interaction of language forms, ad hoc context, mental context, background knowledge, speakers’ interaction, etc. are involved.

Keywords: cognitive linguistics, pragmatics, scalar implicture, experimental study, Chinese language

Procedia PDF Downloads 361
919 NFTs, between Opportunities and Absence of Legislation: A Study on the Effect of the Rulings of the OpenSea Case

Authors: Andrea Ando

Abstract:

The development of the blockchain has been a major innovation in the technology field. It opened the door to the creation of novel cyberassets and currencies. In more recent times, the non-fungible tokens have started to be at the centre of media attention. Their popularity has been increasing since 2021, and they represent the latest in the world of distributed ledger technologies and cryptocurrencies. It seems more and more likely that NFTs will play a more important role in our online interactions. They are indeed increasingly taking part in the arts and technology sectors. Their impact on society and the market is still very difficult to define, but it is very likely that there will be a turning point in the world of digital assets. There are some examples of their peculiar behaviour and effect in our contemporary tech-market: the former CEO of the famous social media site Twitter sold an NFT of his first tweet for around £2,1 million ($2,5 million), or the National Basketball Association has created a platform to sale unique moment and memorabilia from the history of basketball through the non-fungible token technology. Their growth, as imaginable, paved the way for civil disputes, mostly regarding their position under the current intellectual property law in each jurisdiction. In April 2022, the High Court of England and Wales ruled in the OpenSea case that non-fungible tokens can be considered properties. The judge, indeed, concluded that the cryptoasset had all the indicia of property under common law (National Provincial Bank v. Ainsworth). The research has demonstrated that the ruling of the High Court is not providing enough answers to the dilemma of whether minting an NFT is a violation or not of intellectual property and/or property rights. Indeed, if, on the one hand, the technology follows the framework set by the case law (e.g., the 4 criteria of Ainsworth), on the other hand, the question that arises is what is effectively protected and owned by both the creator and the purchaser. Then the question that arises is whether a person has ownership of the cryptographed code, that it is indeed definable, identifiable, intangible, distinct, and has a degree of permanence, or what is attached to this block-chain, hence even a physical object or piece of art. Indeed, a simple code would not have any financial importance if it were not attached to something that is widely recognised as valuable. This was demonstrated first through the analysis of the expectations of intellectual property law. Then, after having laid the foundation, the paper examined the OpenSea case, and finally, it analysed whether the expectations were met or not.

Keywords: technology, technology law, digital law, cryptoassets, NFTs, NFT, property law, intellectual property law, copyright law

Procedia PDF Downloads 89
918 Towards an Environmental Knowledge System in Water Management

Authors: Mareike Dornhoefer, Madjid Fathi

Abstract:

Water supply and water quality are key problems of mankind at the moment and - due to increasing population - in the future. Management disciplines like water, environment and quality management therefore need to closely interact, to establish a high level of water quality and to guarantee water supply in all parts of the world. Groundwater remediation is one aspect in this process. From a knowledge management perspective it is only possible to solve complex ecological or environmental problems if different factors, expert knowledge of various stakeholders and formal regulations regarding water, waste or chemical management are interconnected in form of a knowledge base. In general knowledge management focuses the processes of gathering and representing existing and new knowledge in a way, which allows for inference or deduction of knowledge for e.g. a situation where a problem solution or decision support are required. A knowledge base is no sole data repository, but a key element in a knowledge based system, thus providing or allowing for inference mechanisms to deduct further knowledge from existing facts. In consequence this knowledge provides decision support. The given paper introduces an environmental knowledge system in water management. The proposed environmental knowledge system is part of a research concept called Green Knowledge Management. It applies semantic technologies or concepts such as ontology or linked open data to interconnect different data and information sources about environmental aspects, in this case, water quality, as well as background material enriching an established knowledge base. Examples for the aforementioned ecological or environmental factors threatening water quality are among others industrial pollution (e.g. leakage of chemicals), environmental changes (e.g. rise in temperature) or floods, where all kinds of waste are merged and transferred into natural water environments. Water quality is usually determined with the help of measuring different indicators (e.g. chemical or biological), which are gathered with the help of laboratory testing, continuous monitoring equipment or other measuring processes. During all of these processes data are gathered and stored in different databases. Meanwhile the knowledge base needs to be established through interconnecting data of these different data sources and enriching its semantics. Experts may add their knowledge or experiences of previous incidents or influencing factors. In consequence querying or inference mechanisms are applied for the deduction of coherence between indicators, predictive developments or environmental threats. Relevant processes or steps of action may be modeled in form of a rule based approach. Overall the environmental knowledge system supports the interconnection of information and adding semantics to create environmental knowledge about water environment, supply chain as well as quality. The proposed concept itself is a holistic approach, which links to associated disciplines like environmental and quality management. Quality indicators and quality management steps need to be considered e.g. for the process and inference layers of the environmental knowledge system, thus integrating the aforementioned management disciplines in one water management application.

Keywords: water quality, environmental knowledge system, green knowledge management, semantic technologies, quality management

Procedia PDF Downloads 221
917 Syngas From Polypropylene Gasification in a Fluidized Bed

Authors: Sergio Rapagnà, Alessandro Antonio Papa, Armando Vitale, Andre Di Carlo

Abstract:

In recent years the world population has enormously increased the use of plastic products for their living needs, in particular for transporting and storing consumer goods such as food and beverage. Plastics are widely used in the automotive industry, in construction of electronic equipment, clothing and home furnishings. Over the last 70 years, the annual production of plastic products has increased from 2 million tons to 460 million tons. About 20% of the last quantity is mismanaged as waste. The consequence of this mismanagement is the release of plastic waste into the terrestrial and marine environments which represents a danger to human health and the ecosystem. Recycling all plastics is difficult because they are often made with mixtures of polymers that are incompatible with each other and contain different additives. The products obtained are always of lower quality and after two/three recycling cycles they must be eliminated either by thermal treatment to produce heat or disposed of in landfill. An alternative to these current solutions is to obtain a mixture of gases rich in H₂, CO and CO₂ suitable for being profitably used for the production of chemicals with consequent savings fossil sources. Obtaining a hydrogen-rich syngas can be achieved by gasification process using the fluidized bed reactor, in presence of steam as the fluidization medium. The fluidized bed reactor allows the gasification process of plastics to be carried out at a constant temperature and allows the use of different plastics with different compositions and different grain sizes. Furthermore, during the gasification process the use of steam increase the gasification of char produced by the first pyrolysis/devolatilization process of the plastic particles. The bed inventory can be made with particles having catalytic properties such as olivine, capable to catalyse the steam reforming reactions of heavy hydrocarbons normally called tars, with a consequent increase in the quantity of gases produced. The plant is composed of a fluidized bed reactor made of AISI 310 steel, having an internal diameter of 0.1 m, containing 3 kg of olivine particles as a bed inventory. The reactor is externally heated by an oven up to 1000 °C. The hot producer gases that exit the reactor, after being cooled, are quantified using a mass flow meter. Gas analyzers are present to measure instantly the volumetric composition of H₂, CO, CO₂, CH₄ and NH₃. At the conference, the results obtained from the continuous gasification of polypropylene (PP) particles in a steam atmosphere at temperatures of 840-860 °C will be presented.

Keywords: gasification, fluidized bed, hydrogen, olivine, polypropyle

Procedia PDF Downloads 27
916 Fabrication of Carbon Nanoparticles and Graphene Using Pulsed Laser Ablation

Authors: Davoud Dorranian, Hajar Sadeghi, Elmira Solati

Abstract:

Carbon nanostructures in various forms were synthesized using pulsed laser ablation of a graphite target in different liquid environment. The beam of a Q-switched Nd:YAG laser of 1064-nm wavelength at 7-ns pulse width is employed to irradiate the solid target in water, acetone, alcohol, and cetyltrimethylammonium bromide (CTAB). Then the effect of the liquid environment on the characteristic of carbon nanostructures produced by laser ablation was investigated. The optical properties of the carbon nanostructures were examined at room temperature by UV–Vis-NIR spectrophotometer. The crystalline structure of the carbon nanostructures was analyzed by X-ray diffraction (XRD). The morphology of samples was investigated by field emission scanning electron microscope (FE-SEM). Transmission electron microscope (TEM) was employed to investigate the form of carbon nanostructures. Raman spectroscopy was used to determine the quality of carbon nanostructures. Results show that different carbon nanostructures such as nanoparticles and few-layer graphene were formed in various liquid environments. The UV-Vis-NIR absorption spectra of samples reveal that the intensity of absorption peak of nanoparticles in alcohol is higher than the other liquid environments due to the larger number of nanoparticles in this environment. The red shift of the absorption peak of the sample in acetone confirms that produced carbon nanoparticles in this liquid are averagely larger than the other medium. The difference in the intensity and shape of the absorption peak indicated the effect of the liquid environment in producing the nanoparticles. The XRD pattern of the sample in water indicates an amorphous structure due to existence the graphene sheets. X-ray diffraction pattern shows that the degree of crystallinity of sample produced in CTAB is higher than the other liquid environments. Transmission electron microscopy images reveal that the generated carbon materials in water are graphene sheet and in the other liquid environments are graphene sheet and spherical nanostructures. According to the TEM images, we have the larger amount of carbon nanoparticles in the alcohol environment. FE-SEM micrographs indicate that in this liquids sheet like structures are formed however in acetone, produced sheets are adhered and these layers overlap with each other. According to the FE-SEM micrographs, the surface morphology of the sample in CTAB was coarser than that without surfactant. From Raman spectra, it can be concluded the distinct shape, width, and position of the graphene peaks and corresponding graphite source.

Keywords: carbon nanostructures, graphene, pulsed laser ablation, graphite

Procedia PDF Downloads 315
915 Resonant Fluorescence in a Two-Level Atom and the Terahertz Gap

Authors: Nikolai N. Bogolubov, Andrey V. Soldatov

Abstract:

Terahertz radiation occupies a range of frequencies somewhere from 100 GHz to approximately 10 THz, just between microwaves and infrared waves. This range of frequencies holds promise for many useful applications in experimental applied physics and technology. At the same time, reliable, simple techniques for generation, amplification, and modulation of electromagnetic radiation in this range are far from been developed enough to meet the requirements of its practical usage, especially in comparison to the level of technological abilities already achieved for other domains of the electromagnetic spectrum. This situation of relative underdevelopment of this potentially very important range of electromagnetic spectrum is known under the name of the 'terahertz gap.' Among other things, technological progress in the terahertz area has been impeded by the lack of compact, low energy consumption, easily controlled and continuously radiating terahertz radiation sources. Therefore, development of new techniques serving this purpose as well as various devices based on them is of obvious necessity. No doubt, it would be highly advantageous to employ the simplest of suitable physical systems as major critical components in these techniques and devices. The purpose of the present research was to show by means of conventional methods of non-equilibrium statistical mechanics and the theory of open quantum systems, that a thoroughly studied two-level quantum system, also known as an one-electron two-level 'atom', being driven by external classical monochromatic high-frequency (e.g. laser) field, can radiate continuously at much lower (e.g. terahertz) frequency in the fluorescent regime if the transition dipole moment operator of this 'atom' possesses permanent non-equal diagonal matrix elements. This assumption contradicts conventional assumption routinely made in quantum optics that only the non-diagonal matrix elements persist. The conventional assumption is pertinent to natural atoms and molecules and stems from the property of spatial inversion symmetry of their eigenstates. At the same time, such an assumption is justified no more in regard to artificially manufactured quantum systems of reduced dimensionality, such as, for example, quantum dots, which are often nicknamed 'artificial atoms' due to striking similarity of their optical properties to those ones of the real atoms. Possible ways to experimental observation and practical implementation of the predicted effect are discussed too.

Keywords: terahertz gap, two-level atom, resonant fluorescence, quantum dot, resonant fluorescence, two-level atom

Procedia PDF Downloads 271
914 Optimal Pricing Based on Real Estate Demand Data

Authors: Vanessa Kummer, Maik Meusel

Abstract:

Real estate demand estimates are typically derived from transaction data. However, in regions with excess demand, transactions are driven by supply and therefore do not indicate what people are actually looking for. To estimate the demand for housing in Switzerland, search subscriptions from all important Swiss real estate platforms are used. These data do, however, suffer from missing information—for example, many users do not specify how many rooms they would like or what price they would be willing to pay. In economic analyses, it is often the case that only complete data is used. Usually, however, the proportion of complete data is rather small which leads to most information being neglected. Also, the data might have a strong distortion if it is complete. In addition, the reason that data is missing might itself also contain information, which is however ignored with that approach. An interesting issue is, therefore, if for economic analyses such as the one at hand, there is an added value by using the whole data set with the imputed missing values compared to using the usually small percentage of complete data (baseline). Also, it is interesting to see how different algorithms affect that result. The imputation of the missing data is done using unsupervised learning. Out of the numerous unsupervised learning approaches, the most common ones, such as clustering, principal component analysis, or neural networks techniques are applied. By training the model iteratively on the imputed data and, thereby, including the information of all data into the model, the distortion of the first training set—the complete data—vanishes. In a next step, the performances of the algorithms are measured. This is done by randomly creating missing values in subsets of the data, estimating those values with the relevant algorithms and several parameter combinations, and comparing the estimates to the actual data. After having found the optimal parameter set for each algorithm, the missing values are being imputed. Using the resulting data sets, the next step is to estimate the willingness to pay for real estate. This is done by fitting price distributions for real estate properties with certain characteristics, such as the region or the number of rooms. Based on these distributions, survival functions are computed to obtain the functional relationship between characteristics and selling probabilities. Comparing the survival functions shows that estimates which are based on imputed data sets do not differ significantly from each other; however, the demand estimate that is derived from the baseline data does. This indicates that the baseline data set does not include all available information and is therefore not representative for the entire sample. Also, demand estimates derived from the whole data set are much more accurate than the baseline estimation. Thus, in order to obtain optimal results, it is important to make use of all available data, even though it involves additional procedures such as data imputation.

Keywords: demand estimate, missing-data imputation, real estate, unsupervised learning

Procedia PDF Downloads 285
913 Modeling and Simulation of Primary Atomization and Its Effects on Internal Flow Dynamics in a High Torque Low Speed Diesel Engine

Authors: Muteeb Ulhaq, Rizwan Latif, Sayed Adnan Qasim, Imran Shafi

Abstract:

Diesel engines are most efficient and reliable in terms of efficiency, reliability and adaptability. Most of the research and development up till now have been directed towards High-Speed Diesel Engine, for Commercial use. In these engines objective is to optimize maximum acceleration by reducing exhaust emission to meet international standards. In high torque low-speed engines the requirement is altogether different. These types of Engines are mostly used in Maritime Industry, Agriculture industry, Static Engines Compressors Engines etc. Unfortunately due to lack of research and development, these engines have low efficiency and high soot emissions and one of the most effective way to overcome these issues is by efficient combustion in an engine cylinder, the fuel spray atomization process plays a vital role in defining mixture formation, fuel consumption, combustion efficiency and soot emissions. Therefore, a comprehensive understanding of the fuel spray characteristics and atomization process is of a great importance. In this research, we will examine the effects of primary breakup modeling on the spray characteristics under diesel engine conditions. KH-ACT model is applied to cater the effect of aerodynamics in an engine cylinder and also cavitations and turbulence generated inside the injector. It is a modified form of most commonly used KH model, which considers only the aerodynamically induced breakup based on the Kelvin–Helmholtz instability. Our model is extensively evaluated by performing 3-D time-dependent simulations on Open FOAM, which is an open source flow solver. Spray characteristics like Spray Penetration, Liquid length, Spray cone angle and Souter mean diameter (SMD) were validated by comparing the results of Open Foam and Matlab. Including the effects of cavitation and turbulence enhances primary breakup, leading to smaller droplet sizes, decrease in liquid penetration, and increase in the radial dispersion of spray. All these properties favor early evaporation of fuel which enhances Engine efficiency.

Keywords: Kelvin–Helmholtz instability, open foam, primary breakup, souter mean diameter, turbulence

Procedia PDF Downloads 212
912 Determining the Effective Substance of Cottonseed Extract on the Treatment of Leishmaniasis

Authors: Mehrosadat Mirmohammadi, Sara Taghdisi, Ali Padash, Mohammad Hossein Pazandeh

Abstract:

Gossypol, a yellowish anti-nutritional compound found in cotton plants, exists in various plant parts, including seeds, husks, leaves, and stems. Chemically, gossypol is a potent polyphenolic aldehyde with antioxidant and therapeutic properties. However, its free form can be toxic, posing risks to both humans and animals. Initially, we extracted gossypol from cotton seeds using n-hexane as a solvent (yield: 84.0 ± 4.0%). We also obtained cotton seed and cotton boll extracts via Soxhlet extraction (25:75 hydroalcoholic ratio). These extracts, combined with cornstarch, formed four herbal medicinal formulations. Ethical approval allowed us to investigate their effects on Leishmania-caused skin wounds, comparing them to glucantime (local ampoule). Herbal formulas outperformed the control group (ethanol only) in wound treatment (p-value 0.05). The average wound diameter after two months did not significantly differ between plant extract ointments and topical glucantime. Notably, cotton boll extract with 1% extra gossypol crystal showed the best therapeutic effect. We extracted gossypol from cotton seeds using n-hexane via Soxhlet extraction. Saponification, acidification, and recrystallization steps followed. FTIR, UV-Vis, and HPLC analyses confirmed the product’s identity. Herbal medicines from cotton seeds effectively treated chronic wounds compared to the ethanol-only control group. Wound diameter differed significantly between extract ointments and glucantime injections. It seems that due to the presence of large amounts of fat in the oil, the extraction of gossypol from it faces many obstacles. The extraction of this compound with our technique showed that extraction from oil has a higher efficiency, perhaps because of the preparation of oil by cold pressing method, the possibility of losing this compound is much less than when extraction is done with Soxhlet. On the other hand, the gossypol in the oil is mostly bound to the protein, which somehow protects the gossypol until the last stage of the extraction process. Since this compound is very sensitive to light and heat, it was extracted as a derivative with acetic acid. Also, in the treatment section, it was found that the ointment prepared with the extract is more effective and Gossypol is one of the effective ingredients in the treatment. Therefore, gossypol can be extracted from the oil and added to the extract from which gossypol has been extracted to make an effective medicine with a certain dose.

Keywords: cottonseed, glucantime, gossypol, leishmaniasis

Procedia PDF Downloads 61
911 Entropy in a Field of Emergence in an Aspect of Linguo-Culture

Authors: Nurvadi Albekov

Abstract:

Communicative situation is a basis, which designates potential models of ‘constructed forms’, a motivated basis of a text, for a text can be assumed as a product of the communicative situation. It is within the field of emergence the models of text, that can be potentially prognosticated in a certain communicative situation, are designated. Every text can be assumed as conceptual system structured on the base of certain communicative situation. However in the process of ‘structuring’ of a certain model of ‘conceptual system’ consciousness of a recipient is able act only within the border of the field of emergence for going out of this border indicates misunderstanding of the communicative situation. On the base of communicative situation we can witness the increment of meaning where the synergizing of the informative model of communication, formed by using of the invariant units of a language system, is a result of verbalization of the communicative situation. The potential of the models of a text, prognosticated within the field of emergence, also depends on the communicative situation. The conception ‘the field of emergence’ is interpreted as a unit of the language system, having poly-directed universal structure, implying the presence of the core, the center and the periphery, including different levels of means of a functioning system of language, both in terms of linguistic resources, and in terms of extra linguistic factors interaction of which results increment of a text. The conception ‘field of emergence’ is considered as the most promising in the analysis of texts: oral, written, printed and electronic. As a unit of the language system field of emergence has several properties that predict its use during the study of a text in different levels. This work is an attempt analysis of entropy in a text in the aspect of lingua-cultural code, prognosticated within the model of the field of emergence. The article describes the problem of entropy in the field of emergence, caused by influence of the extra-linguistic factors. The increasing of entropy is caused not only by the fact of intrusion of the language resources but by influence of the alien culture in a whole, and by appearance of non-typical for this very culture symbols in the field of emergence. The borrowing of alien lingua-cultural symbols into the lingua-culture of the author is a reason of increasing the entropy when constructing a text both in meaning and in structuring level. It is nothing but artificial formatting of lexical units that violate stylistic unity of a phrase. It is marked that one of the important characteristics descending the entropy in the field of emergence is a typical similarity of lexical and semantic resources of the different lingua-cultures in aspects of extra linguistic factors.

Keywords: communicative situation, field of emergence, lingua-culture, entropy

Procedia PDF Downloads 362
910 Urban Livelihoods and Climate Change: Adaptation Strategies for Urban Poor in Douala, Cameroon

Authors: Agbortoko Manyigbe Ayuk Nkem, Eno Cynthia Osuh

Abstract:

This paper sets to examine the relationship between climate change and urban livelihood through a vulnerability assessment of the urban poor in Douala. Urban development in Douala places priority towards industrial and city-centre development with little focus on the urban poor in terms of housing units and areas of sustenance. With the high rate of urbanisation and increased land prices, the urban poor are forced to occupy marginal lands which are mainly wetlands, wastelands and along abandoned neighbourhoods prone to natural hazards. Due to climate change and its effects, these wetlands are constantly flooded thereby destroying homes, properties, and crops. Also, most of these urban dwellers have found solace in urban agriculture as a means for survival. However, since agriculture in tropical regions like Cameroon depends largely on seasonal rainfall, the changes in rainfall pattern has led to misplaced periods for crop planting and a huge wastage of resources as rainfall becomes very unreliable with increased temperature levels. Data for the study was obtained from both primary and secondary sources. Secondary sources included published materials related to climate change and vulnerability. Primary data was obtained through focus-group discussions with some urban farmers while a stratified sampling of residents within marginal lands was done. Each stratum was randomly sampled to obtain information on different stressors related to climate change and their effect on livelihood. Findings proved that the high rate of rural-urban migration into Douala has led to increased prevalence of the urban poor and their vulnerability to climate change as evident in their constant fight against flood from unexpected sea level rise and irregular rainfall pattern for urban agriculture. The study also proved that women were most vulnerable as they depended solely on urban agriculture and its related activities like retailing agricultural products in different urban markets which to them serves as a main source of income in the attainment of basic needs for the family. Adaptation measures include the constant use of sand bags, raised makeshifts as well as cultivation along streams, planting after evidence of constant rainfall has become paramount for sustainability.

Keywords: adaptation, Douala, Cameroon, climate change, development, livelihood, vulnerability

Procedia PDF Downloads 293
909 Alternative Fuel Production from Sewage Sludge

Authors: Jaroslav Knapek, Kamila Vavrova, Tomas Kralik, Tereza Humesova

Abstract:

The treatment and disposal of sewage sludge is one of the most important and critical problems of waste water treatment plants. Currently, 180 thousand tonnes of sludge dry matter are produced in the Czech Republic, which corresponds to approximately 17.8 kg of stabilized sludge dry matter / year per inhabitant of the Czech Republic. Due to the fact that sewage sludge contains a large amount of substances that are not beneficial for human health, the conditions for sludge management will be significantly tightened in the Czech Republic since 2023. One of the tested methods of sludge liquidation is the production of alternative fuel from sludge from sewage treatment plants and paper production. The paper presents an analysis of economic efficiency of alternative fuel production from sludge and its use for fluidized bed boiler with nominal consumption of 5 t of fuel per hour. The evaluation methodology includes the entire logistics chain from sludge extraction, through mechanical moisture reduction to about 40%, transport to the pelletizing line, moisture drying for pelleting and pelleting itself. For economic analysis of sludge pellet production, a time horizon of 10 years corresponding to the expected lifetime of the critical components of the pelletizing line is chosen. The economic analysis of pelleting projects is based on a detailed analysis of reference pelleting technologies suitable for sludge pelleting. The analysis of the economic efficiency of pellet is based on the simulation of cash flows associated with the implementation of the project over the life of the project. For the entered value of return on the invested capital, the price of the resulting product (in EUR / GJ or in EUR / t) is searched to ensure that the net present value of the project is zero over the project lifetime. The investor then realizes the return on the investment in the amount of the discount used to calculate the net present value. The calculations take place in a real business environment (taxes, tax depreciation, inflation, etc.) and the inputs work with market prices. At the same time, the opportunity cost principle is respected; waste disposal for alternative fuels includes the saved costs of waste disposal. The methodology also respects the emission allowances saved due to the displacement of coal by alternative (bio) fuel. Preliminary results of testing of pellet production from sludge show that after suitable modifications of the pelletizer it is possible to produce sufficiently high quality pellets from sludge. A mixture of sludge and paper waste has proved to be a more suitable material for pelleting. At the same time, preliminary results of the analysis of the economic efficiency of this sludge disposal method show that, despite the relatively low calorific value of the fuel produced (about 10-11 MJ / kg), this sludge disposal method is economically competitive. This work has been supported by the Czech Technology Agency within the project TN01000048 Biorefining as circulation technology.

Keywords: Alternative fuel, Economic analysis, Pelleting, Sewage sludge

Procedia PDF Downloads 135
908 Influence of Glass Plates Different Boundary Conditions on Human Impact Resistance

Authors: Alberto Sanchidrián, José A. Parra, Jesús Alonso, Julián Pecharromán, Antonia Pacios, Consuelo Huerta

Abstract:

Glass is a commonly used material in building; there is not a unique design solution as plates with a different number of layers and interlayers may be used. In most façades, a security glazing have to be used according to its performance in the impact pendulum. The European Standard EN 12600 establishes an impact test procedure for classification under the point of view of the human security, of flat plates with different thickness, using a pendulum of two tires and 50 kg mass that impacts against the plate from different heights. However, this test does not replicate the actual dimensions and border conditions used in building configurations and so the real stress distribution is not determined with this test. The influence of different boundary conditions, as the ones employed in construction sites, is not well taking into account when testing the behaviour of safety glazing and there is not a detailed procedure and criteria to determinate the glass resistance against human impact. To reproduce the actual boundary conditions on site, when needed, the pendulum test is arranged to be used "in situ", with no account for load control, stiffness, and without a standard procedure. Fracture stress of small and large glass plates fit a Weibull distribution with quite a big dispersion so conservative values are adopted for admissible fracture stress under static loads. In fact, test performed for human impact gives a fracture strength two or three times higher, and many times without a total fracture of the glass plate. Newest standards, as for example DIN 18008-4, states for an admissible fracture stress 2.5 times higher than the ones used for static and wing loads. Now two working areas are open: a) to define a standard for the ‘in situ’ test; b) to prepare a laboratory procedure that allows testing with more real stress distribution. To work on both research lines a laboratory that allows to test medium size specimens with different border conditions, has been developed. A special steel frame allows reproducing the stiffness of the glass support substructure, including a rigid condition used as reference. The dynamic behaviour of the glass plate and its support substructure have been characterized with finite elements models updated with modal tests results. In addition, a new portable impact machine is being used to get enough force and direction control during the impact test. Impact based on 100 J is used. To avoid problems with broken glass plates, the test have been done using an aluminium plate of 1000 mm x 700 mm size and 10 mm thickness supported on four sides; three different substructure stiffness conditions are used. A detailed control of the dynamic stiffness and the behaviour of the plate is done with modal tests. Repeatability of the test and reproducibility of results prove that procedure to control both, stiffness of the plate and the impact level, is necessary.

Keywords: glass plates, human impact test, modal test, plate boundary conditions

Procedia PDF Downloads 307
907 Characterization and Modelling of Groundwater Flow towards a Public Drinking Water Well Field: A Case Study of Ter Kamerenbos Well Field

Authors: Buruk Kitachew Wossenyeleh

Abstract:

Groundwater is the largest freshwater reservoir in the world. Like the other reservoirs of the hydrologic cycle, it is a finite resource. This study focused on the groundwater modeling of the Ter Kamerenbos well field to understand the groundwater flow system and the impact of different scenarios. The study area covers 68.9Km2 in the Brussels Capital Region and is situated in two river catchments, i.e., Zenne River and Woluwe Stream. The aquifer system has three layers, but in the modeling, they are considered as one layer due to their hydrogeological properties. The catchment aquifer system is replenished by direct recharge from rainfall. The groundwater recharge of the catchment is determined using the spatially distributed water balance model called WetSpass, and it varies annually from zero to 340mm. This groundwater recharge is used as the top boundary condition for the groundwater modeling of the study area. During the groundwater modeling using Processing MODFLOW, constant head boundary conditions are used in the north and south boundaries of the study area. For the east and west boundaries of the study area, head-dependent flow boundary conditions are used. The groundwater model is calibrated manually and automatically using observed hydraulic heads in 12 observation wells. The model performance evaluation showed that the root means the square error is 1.89m and that the NSE is 0.98. The head contour map of the simulated hydraulic heads indicates the flow direction in the catchment, mainly from the Woluwe to Zenne catchment. The simulated head in the study area varies from 13m to 78m. The higher hydraulic heads are found in the southwest of the study area, which has the forest as a land-use type. This calibrated model was run for the climate change scenario and well operation scenario. Climate change may cause the groundwater recharge to increase by 43% and decrease by 30% in 2100 from current conditions for the high and low climate change scenario, respectively. The groundwater head varies for a high climate change scenario from 13m to 82m, whereas for a low climate change scenario, it varies from 13m to 76m. If doubling of the pumping discharge assumed, the groundwater head varies from 13m to 76.5m. However, if the shutdown of the pumps is assumed, the head varies in the range of 13m to 79m. It is concluded that the groundwater model is done in a satisfactory way with some limitations, and the model output can be used to understand the aquifer system under steady-state conditions. Finally, some recommendations are made for the future use and improvement of the model.

Keywords: Ter Kamerenbos, groundwater modelling, WetSpass, climate change, well operation

Procedia PDF Downloads 152
906 Combating Corruption to Enhance Learner Academic Achievement: A Qualitative Study of Zimbabwean Public Secondary Schools

Authors: Onesmus Nyaude

Abstract:

The aim of the study was to investigate participants’ views on how corruption can be combated to enhance learner academic achievement. The study was undertaken on three select public secondary institutions in Zimbabwe. This study also focuses on exploring the various views of educators; parents and the learners on the role played by corruption in perpetuating the seemingly existing learner academic achievement disparities in various educational institutions. The study further interrogates and examines the nexus between the prevalence of corruption in schools and the subsequent influence on the academic achievement of learners. Corruption is considered a form of social injustice; hence in Zimbabwe, the general consensus is that it is perceived rife to the extent that it is overtaking the traditional factors that contributed to the poor academic achievement of learners. Coupled to this, have been the issue of gross abuse of power and some malpractices emanating from concealment of essential and official transactions in the conduct of business. Through proposing robust anti-corruption mechanisms, teaching and learning resources poured in schools would be put into good use. This would prevent the unlawful diversion and misappropriation of the resources in question which has always been the culture. This study is of paramount significance to curriculum planners, teachers, parents, and learners. The study was informed by the interpretive paradigm; thus qualitative research approaches were used. Both probability and non-probability sampling techniques were adopted in ‘site and participants’ selection. A representative sample of (150) participants was used. The study found that the majority of the participants perceived corruption as a social problem and a human right threat affecting the quality of teaching and learning processes in the education sector. It was established that corruption prevalence within institutions is as a result of the perpetual weakening of ethical values and other variables linked to upholding of ‘Ubuntu’ among general citizenry. It was further established that greediness and weak systems are major causes of rampant corruption within institutions of higher learning and are manifesting through abuse of power, bribery, misappropriation and embezzlement of material and financial resources. Therefore, there is great need to collectively address the problem of corruption in educational institutions and society at large. The study additionally concludes that successful combating of corruption will promote successful moral development of students as well as safeguarding their human rights entitlements. The study recommends the adoption of principles of good corporate governance within educational institutions in order to successfully curb corruption. The study further recommends the intensification of interventionist strategies and strengthening of systems in educational institutions as well as regular audits to overcome the problem associated with rampant corruption cases.

Keywords: academic achievement, combating, corruption, good corporate governance, qualitative study

Procedia PDF Downloads 243
905 Making the Neighbourhood: Analyzing Mapping Procedures to Deal with Plurality and Conflict

Authors: Barbara Roosen, Oswald Devisch

Abstract:

Spatial projects are often contested. Despite participatory trajectories in official spatial development processes, citizens engage often by their power to say no. Participatory mapping helps to produce more legible and democratic ways of decision-making. It has proven its value in producing a multitude of knowledges and views, for individuals and community groups and local stakeholders to imagine desired and undesired futures and to give them the rhetorical power to present their views throughout the development process. From this perspective, mapping works as a social process in which individuals and groups share their knowledge, learn from each other and negotiate their relationship with each other as well as with space and power. In this way, these processes eventually aim to activate communities to intervene in cooperation in real problems. However, these are fragile and bumpy processes, sometimes leading to (local) conflict and intractable situations. Heterogeneous subjectivities and knowledge that become visible during the mapping process and which are contested by members of the community, is often the first trigger. This paper discusses a participatory mapping project conducted in a residential subdivision in Flanders to provide a deeper understanding of how or under which conditions the mapping process could moderate discordant situations amongst inhabitants, local organisations and local authorities, towards a more constructive outcome. In our opinion, this implies a thorough documentation and presentation of the different steps of the mapping process to design and moderate an open and transparent dialogue. The mapping project ‘Make the Neighbourhood’, is set up in the aftermath of a socio-spatial design intervention in the neighbourhood that led to polarization within the community. To start negotiation between the diverse claims that came to the fore, we co-create a desired future map of the neighbourhood together with local organisations and inhabitants as a way to engage them in the development of a new spatial development plan for the area. This mapping initiative set up a new ‘common’ goal or concern, as a first step to bridge the gap that we experienced between different sociocultural groups, bottom-up and top-down initiatives and between professionals and non-professionals. An atlas of elements (materials), an atlas of actors with different roles and an atlas of ways of cooperation and organisation form the work and building material of the future neighbourhood map, assembled in two co-creation sessions. Firstly, we will consider how the mapping procedures articulate the plurality of claims and agendas. Secondly, we will elaborate upon how social relations and spatialities are negotiated and reproduced during the different steps of the map making. Thirdly, we will reflect on the role of the rules, format, and structure of the mapping process in moderating negotiations between much divided claims. To conclude, we will discuss the challenges of visualizing the different steps of mapping process as a strategy to moderate tense negotiations in a more constructive direction in the context of spatial development processes.

Keywords: conflict, documentation, participatory mapping, residential subdivision

Procedia PDF Downloads 209
904 Challenges of Blockchain Applications in the Supply Chain Industry: A Regulatory Perspective

Authors: Pardis Moslemzadeh Tehrani

Abstract:

Due to the emergence of blockchain technology and the benefits of cryptocurrencies, intelligent or smart contracts are gaining traction. Artificial intelligence (AI) is transforming our lives, and it is being embraced by a wide range of sectors. Smart contracts, which are at the heart of blockchains, incorporate AI characteristics. Such contracts are referred to as "smart" contracts because of the underlying technology that allows contracting parties to agree on terms expressed in computer code that defines machine-readable instructions for computers to follow under specific situations. The transmission happens automatically if the conditions are met. Initially utilised for financial transactions, blockchain applications have since expanded to include the financial, insurance, and medical sectors, as well as supply networks. Raw material acquisition by suppliers, design, and fabrication by manufacturers, delivery of final products to consumers, and even post-sales logistics assistance are all part of supply chains. Many issues are linked with managing supply chains from the planning and coordination stages, which can be implemented in a smart contract in a blockchain due to their complexity. Manufacturing delays and limited third-party amounts of product components have raised concerns about the integrity and accountability of supply chains for food and pharmaceutical items. Other concerns include regulatory compliance in multiple jurisdictions and transportation circumstances (for instance, many products must be kept in temperature-controlled environments to ensure their effectiveness). Products are handled by several providers before reaching customers in modern economic systems. Information is sent between suppliers, shippers, distributors, and retailers at every stage of the production and distribution process. Information travels more effectively when individuals are eliminated from the equation. The usage of blockchain technology could be a viable solution to these coordination issues. In blockchains, smart contracts allow for the rapid transmission of production data, logistical data, inventory levels, and sales data. This research investigates the legal and technical advantages and disadvantages of AI-blockchain technology in the supply chain business. It aims to uncover the applicable legal problems and barriers to the use of AI-blockchain technology to supply chains, particularly in the food industry. It also discusses the essential legal and technological issues and impediments to supply chain implementation for stakeholders, as well as methods for overcoming them before releasing the technology to clients. Because there has been little research done on this topic, it is difficult for industrial stakeholders to grasp how blockchain technology could be used in their respective operations. As a result, the focus of this research will be on building advanced and complex contractual terms in supply chain smart contracts on blockchains to cover all unforeseen supply chain challenges.

Keywords: blockchain, supply chain, IoT, smart contract

Procedia PDF Downloads 127
903 The Gaps of Environmental Criminal Liability in Armed Conflicts and Its Consequences: An Analysis under Stockholm, Geneva and Rome

Authors: Vivian Caroline Koerbel Dombrowski

Abstract:

Armed conflicts have always meant the ultimate expression of power and, at the same time, of lack of understanding among nations. Cities were destroyed, people were killed, assets were devastated. But these are not the only losses of a war: environmental damage amounts to immeasurable losses in the short, medium and long term, and no nation wants to bear that cost. Nations invest in military equipment, training and technical resources, but the environmental account still falls into gaps in international law. Considering such a generalization in rights protection, many nations are in imminent danger in a conflict if water is used as a mass weapon, especially considering important rivers such as the Jordan, Euphrates and Nile. The three principal international documents on the subject were analysed: the Stockholm Convention (1972), Additional Protocol I to the Geneva Conventions (1977) and the Rome Statute (1998). In addition, references from doctrine, especially scientific articles, were researched to substantiate, with consistent data, the extent of the damage, historical factors and decisions which have been successful. However, due to the lack of literature on this subject, the research tends to be exhaustive. From the study of the indicated material, it was noted that international law - humanitarian and environmental - calls for environmental protection in armed conflicts in some of its instruments, but these are generic and vague rules that do not define exactly what environmental damage is, nor set standards for measuring it. Taking into account the main conflicts of the twentieth century - World War II, the Vietnam War and the Gulf War - one must realize that the environmental consequences were far-reaching: landmines never deactivated, buried nuclear weapons, armaments and munitions destroyed in the soil, chemical weapons, not to mention the effects of some weapons when used (uranium, Agent Orange, etc.). Extending the analysis to more recent conflicts such as Afghanistan, it is proven that the effects on the health of the civilian population were catastrophic: cancer, birth defects, and deformities in newborns. There are few reports of nations that, in some way, repaired the damage caused to the environment as a result of conflict. In the context of contemporary conflicts, many nations fear that water resources will be used as weapons of mass destruction, because once contaminated - directly or indirectly - they can become a means of disguised genocide as a side effect of a military objective. In conclusion, it appears that the main international treaties governing the subject mention the concern for environmental protection; however, they leave vacant the normative specifications necessary for the effective prevention of environmental damage in armed conflict and, should such damage occur, for its repair. Still, it appears that there is no protection mechanism to safeguard natural resources and prevent them from becoming a weapon of mass destruction.

Keywords: armed conflicts, criminal liability, environmental damages, humanitarian law, mass weapon

Procedia PDF Downloads 420
902 Notes on Matter: Ibn Arabi, Bernard Silvestris, and Other Ghosts

Authors: Brad Fox

Abstract:

Between something and nothing, a bit of both, neither/nor, a figment of the imagination, the womb of the universe - questions of what matter is, where it exists and what it means continue to surge up from the bottom of our concepts and theories. This paper looks at divergences and convergences, intimations and mistranslations, in a lineage of thought that begins with Plato's Timaeus, travels through Arabic Spain and Syria, and finally ends up in the language of science. Up to the 13th century, philosophers in Christian France based such inquiries on a questionable and fragmented translation of the Timaeus by Calcidius, with a commentary that conflated the Platonic concept of khora ('space' or 'void') with Aristotle's hyle ('primal matter', derived from 'wood' as a building material). Both terms were translated by Calcidius as silva. For 700 years, this was the only source for philosophers of matter in the Latin-speaking world. Bernard Silvestris, in his Cosmographia, exemplifies the concepts developed before new translations from Arabic began to pour into the Latin world from such centers as the court of Toledo. Unlike their counterparts across the Pyrenees, 13th-century philosophers in Muslim Spain had access to a broad vocabulary for notions of primal matter. The prolific and visionary theologian, philosopher, and poet Muhyiddin Ibn Arabi could draw on the Ikhwan Al-Safa's 10th-century renderings of Aristotle, which translated the Greek hyle as the everyday Arabic word maddah, still used for building materials today. He also often used the simple transliteration of hyle as hayula, probably taken from Ibn Sina. The Prophet's son-in-law Ali spoke of dust in the air, invisible until it is struck by sunlight. Ibn Arabi adopted this dust - haba - as an expression for an original metaphysical substance, nonexistent but susceptible to manifesting forms. Ibn Arabi compares the dust to a phoenix, because we have heard about it and can conceive of it, but it has no existence unto itself and can be described only in similes. Elsewhere he refers to it as quwwa wa salahiyya - pure potentiality and readiness. The final portion of the paper will compare Bernard's and Ibn Arabi's notions of matter to the recent ontology developed by the theoretical physicist and philosopher Karen Barad. Reading Barad's work alongside that of Niels Bohr, it will argue that there is a rich resonance between Ibn Arabi's paradoxical conceptions of matter and the quantum vacuum fluctuations verified by recent lab experiments. The inseparability of matter and meaning in Barad recalls Ibn Arabi's original response to Ibn Rushd's question: Does revelation offer the same knowledge as rationality? 'Yes and No,' Ibn Arabi said, 'and between the yes and no spirit is divided from matter and heads are separated from bodies.' Ibn Arabi's double affirmation continues to offer insight into our relationship to momentary experience at its most fundamental level.

Keywords: Karen Barad, Muhyiddin Ibn Arabi, primal matter, Bernard Silvestris

Procedia PDF Downloads 427
901 DeepNIC: A Method to Transform Each Tabular Variable into an Independent Image Analyzable by Basic CNNs

Authors: Nguyen J. M., Lucas G., Ruan S., Digonnet H., Antonioli D.

Abstract:

Introduction: Deep Learning (DL) is a very powerful tool for analyzing image data, but for tabular data it cannot compete with machine learning methods like XGBoost. The research question becomes: can tabular data be transformed into images that can be analyzed by simple CNNs (Convolutional Neural Networks)? Will DL become the absolute tool for data classification? All current solutions consist in repositioning the variables in a 2D matrix using their correlation proximity, thereby obtaining an image whose pixels are the variables. We implement a technology, DeepNIC, that offers the possibility of obtaining an image for each variable, which can be analyzed by simple CNNs. Material and method: The 'ROP' (Regression OPtimized) model is a binary and atypical decision tree whose nodes are managed by a new artificial neuron, the Neurop. By positioning an artificial neuron in each node of the decision tree, it is possible to make an adjustment on a theoretically infinite number of variables at each node. From this new decision tree whose nodes are artificial neurons, we created the concept of a 'Random Forest of Perfect Trees' (RFPT), which departs from Breiman's concepts by assembling very large numbers of small trees with no classification errors. From the results of the RFPT, we developed a family of 10 statistical information criteria, the Nguyen Information Criteria (NICs), which evaluate the predictive quality of a variable in 3 dimensions: performance, complexity and multiplicity of solutions. A NIC is a probability that can be transformed into a grey level. The value of a NIC depends essentially on 2 super-parameters used in the Neurops. By varying these 2 super-parameters, we obtain a 2D matrix of probabilities for each NIC. We can combine these 10 NICs with the functions AND, OR, and XOR; the total number of combinations is greater than 100,000. In total, we obtain for each variable an image of at least 1166x1167 pixels. The intensity of the pixels is proportional to the probability of the associated NIC, and the colour depends on the associated NIC. This image actually contains considerable information about the ability of the variable to predict Y, depending on the presence or absence of other variables. A basic CNN model was trained for supervised classification. Results: The first results are impressive. Using the GSE22513 public data (an omic data set of markers of taxane sensitivity in breast cancer), DeepNIC outperformed other statistical methods, including XGBoost. We still need to generalize the comparison over several databases. Conclusion: The ability to transform any tabular variable into an image offers the possibility of merging image and tabular information in the same format. This opens up great perspectives in the analysis of metadata.
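As a rough illustration of the idea of turning per-variable probability criteria into a grey-level image for a basic CNN, the toy sketch below builds an image from randomly generated placeholder "NIC" maps combined with fuzzy AND/OR/XOR operators; it is not the DeepNIC algorithm itself, and the grid sizes, operators and values are assumptions for demonstration only.

```python
# Toy illustration (not the actual DeepNIC algorithm) of mapping per-variable
# probability criteria to a grey-level image that a basic CNN could ingest.
# The "NIC" maps are random placeholders indexed by two hypothetical
# super-parameters; real criteria would come from the Random Forest of Perfect Trees.
import numpy as np

rng = np.random.default_rng(0)
n_nics, param_grid = 10, 8                 # assumed grid of super-parameter values

# One probability map per NIC, indexed by the two super-parameters.
nic_maps = rng.random((n_nics, param_grid, param_grid))

# Combine NIC maps pairwise with simple fuzzy-logic operators (illustrative only).
def fuzzy_and(a, b): return a * b
def fuzzy_or(a, b):  return a + b - a * b
def fuzzy_xor(a, b): return a + b - 2 * a * b

tiles = [op(nic_maps[i], nic_maps[j])
         for i in range(n_nics) for j in range(n_nics)
         for op in (fuzzy_and, fuzzy_or, fuzzy_xor)]

# Assemble the tiles into a single image for this variable and convert the
# probabilities to 8-bit grey levels.
side = int(np.ceil(np.sqrt(len(tiles))))
image = np.zeros((side * param_grid, side * param_grid))
for k, tile in enumerate(tiles):
    r, c = divmod(k, side)
    image[r*param_grid:(r+1)*param_grid, c*param_grid:(c+1)*param_grid] = tile
grey = (image * 255).astype(np.uint8)       # one image per tabular variable
print(grey.shape)
```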

Keywords: tabular data, CNNs, NICs, DeepNICs, random forest of perfect trees, classification

Procedia PDF Downloads 125
900 Empirical Superpave Mix-Design of Rubber-Modified Hot-Mix Asphalt in Railway Sub-Ballast

Authors: Fernando M. Soto, Gaetano Di Mino

Abstract:

The design of an unmodified bituminous mixture and of three mixtures containing rubber aggregate added by a dry process (RUMAC) was evaluated, using an empirical-analytical approach based on experimental findings obtained in the laboratory with the volumetric mix design by gyratory compaction. A reference dense-graded bituminous sub-ballast mixture (3% air voids and a bitumen content of 4% by total weight of the mix) and three rubberized dry-process mixtures (1.5 to 3% rubber by total weight and 5-7% binder) were used, applying the Superpave mix design for level 3 (high-traffic) rail lines. The railway trackbed section analyzed comprised a 19 cm compacted granular layer, while a thickness of 12 cm was used for the sub-ballast. To evaluate the effect of increasing the specimen density (as a percentage of its theoretical maximum specific gravity), this article illustrates the results obtained from comparative analyses of the influence of varying the binder-rubber percentages in the sub-ballast layer mix design. This work demonstrates that rubberized blends containing crumb and ground rubber in bituminous asphalt mixtures behave at least as well as, or better than, conventional asphalt materials. Using the same methodology of volumetric compaction, the densification curves resulting from each mixture have been studied. The purpose is to obtain an optimum empirical parameter that multiplies the number of gyrations necessary to reach the same compaction energy as in conventional mixtures. The study provides experimental parameters obtained by an empirical-analytical method, evaluating the results of the gyratory compaction of HMA and rubber-aggregate blends. Extensive integrated research has been carried out to assess the suitability of rubber-modified hot-mix asphalt mixtures as a sub-ballast layer in railway underlayment trackbeds. Design optimization was conducted for each mixture and the volumetric properties were analyzed. An improved and complete manufacturing, compaction and curing process for these blends is also provided. By adopting this compaction increase parameter, called the 'beta' factor, rubber-modified mixtures are obtained with densification and workability as uniform as in the conventional mixtures. It is found that, considering the usual bearing capacity requirements in rail track, the optimal rubber content is 2% by weight (3.95% by volumetric substitution) with a binder content of 6%.
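As an illustration of how a gyration multiplier of this kind can be derived from densification curves, the sketch below fits the usual log-linear Superpave densification model, %Gmm = c1 + c2·log10(N), to two assumed curves and computes a 'beta'-type factor as the ratio of the gyrations needed to reach the same target density; the data points and target density are invented for illustration and are not the measurements reported here.

```python
# Hedged sketch: estimating a 'beta'-type gyration multiplier from two assumed
# gyratory densification curves. Data points and the target density are
# illustrative placeholders, not the paper's measurements.
import numpy as np

gyrations        = np.array([8, 25, 50, 100, 160, 205])                 # N
gmm_conventional = np.array([86.0, 90.5, 93.0, 95.5, 96.8, 97.5])       # %Gmm, reference HMA
gmm_rubberised   = np.array([84.5, 88.8, 91.2, 93.8, 95.2, 96.0])       # %Gmm, dry-process rubber mix

def fit_loglinear(n, gmm):
    """Fit %Gmm = c1 + c2*log10(N), the usual densification model."""
    c2, c1 = np.polyfit(np.log10(n), gmm, 1)
    return c1, c2

def gyrations_to_reach(target, c1, c2):
    """Invert the fitted curve: N such that %Gmm(N) = target."""
    return 10 ** ((target - c1) / c2)

target = 97.0                                  # same compaction level for both mixes
n_conv = gyrations_to_reach(target, *fit_loglinear(gyrations, gmm_conventional))
n_rub  = gyrations_to_reach(target, *fit_loglinear(gyrations, gmm_rubberised))
beta = n_rub / n_conv                          # multiplier on the design gyrations
print(f"N_conventional = {n_conv:.0f}, N_rubberised = {n_rub:.0f}, beta = {beta:.2f}")
```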

Keywords: empirical approach, rubber-asphalt, sub-ballast, superpave mix-design

Procedia PDF Downloads 368
899 Biflavonoids from Selaginellaceae as Epidermal Growth Factor Receptor Inhibitors and Their Anticancer Properties

Authors: Adebisi Adunola Demehin, Wanlaya Thamnarak, Jaruwan Chatwichien, Chatchakorn Eurtivong, Kiattawee Choowongkomon, Somsak Ruchirawat, Nopporn Thasana

Abstract:

The epidermal growth factor receptor (EGFR) is a transmembrane glycoprotein involved in cellular signalling processes, and its aberrant activity is crucial in the development of many cancers, such as lung cancer. Selaginellaceae are fern allies that have long been used in Chinese traditional medicine to treat various cancer types, especially lung cancer. Biflavonoids, the major secondary metabolites in Selaginellaceae, have numerous pharmacological activities, including anti-cancer and anti-inflammatory effects. For instance, amentoflavone induces a cytotoxic effect in a human NSCLC cell line via the inhibition of PARP-1. However, to the best of our knowledge, there are no studies on biflavonoids as EGFR inhibitors. Thus, this study aims to investigate the EGFR inhibitory activities of biflavonoids isolated from Selaginella siamensis and Selaginella bryopteris. Amentoflavone, tetrahydroamentoflavone, sciadopitysin, robustaflavone, robustaflavone-4-methylether, delicaflavone, and chrysocauloflavone were isolated from the ethyl acetate extract of the whole plants. The structures were determined using NMR spectroscopy and mass spectrometry. An in vitro study was conducted to evaluate their cytotoxicity against the A549, HepG2, and T47D human cancer cell lines using the MTT assay. In addition, a target-based assay was performed to investigate their EGFR inhibitory activity using a kinase inhibition assay. Finally, a molecular docking study was conducted to predict the binding modes of the compounds. Robustaflavone-4-methylether and delicaflavone showed the best cytotoxic activity on all the cell lines, with IC50 values (µM) of 18.9 ± 2.1 and 22.7 ± 3.3 on A549, respectively. Of these biflavonoids, delicaflavone showed the most potent EGFR inhibitory activity, with 84% relative inhibition at 0.02 nM using erlotinib as a positive control; robustaflavone-4-methylether showed 78% inhibition at 0.15 nM. The docking scores obtained from the molecular docking study correlated with the kinase inhibition assay: robustaflavone-4-methylether and delicaflavone had docking scores of 72.0 and 86.5, respectively. The inhibitory activity of delicaflavone appears to be linked to the C2”=C3” and 3-O-4”’ linkage pattern. Thus, this study suggests that the structural features of these compounds could serve as a basis for developing new EGFR-TK inhibitors.
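For context on how IC50 values of this kind are typically obtained, the generic sketch below fits a four-parameter logistic dose-response curve to MTT viability data; the concentrations and viabilities are invented placeholders, not the measurements behind the values reported above.

```python
# Generic sketch of IC50 estimation from MTT dose-response data using a
# four-parameter logistic fit. All numbers are invented placeholders.
import numpy as np
from scipy.optimize import curve_fit

conc_um   = np.array([1, 3, 10, 30, 100])                 # compound concentration (µM)
viability = np.array([0.95, 0.85, 0.62, 0.30, 0.10])      # fraction of viable cells

def four_pl(c, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1 + (c / ic50) ** hill)

popt, _ = curve_fit(four_pl, conc_um, viability, p0=[0.0, 1.0, 20.0, 1.0], maxfev=10000)
print(f"estimated IC50 ≈ {popt[2]:.1f} µM")
```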

Keywords: anticancer, biflavonoids, EGFR, molecular docking, Selaginellaceae

Procedia PDF Downloads 198
898 Handling, Exporting and Archiving Automated Mineralogy Data Using TESCAN TIMA

Authors: Marek Dosbaba

Abstract:

Within the mining sector, SEM-based Automated Mineralogy (AM) has been the standard application for quickly and efficiently handling mineral processing tasks. Over the last decade, the trend has been to analyze larger numbers of samples, often with a higher level of detail. This has necessitated a shift from interactive sample analysis performed by an operator using the SEM to an increased reliance on offline processing to analyze and report the data. In response to this trend, the TESCAN TIMA Mineral Analyzer is designed to quickly create a virtual copy of the studied samples, thereby preserving all the necessary information. Depending on the selected data acquisition mode, TESCAN TIMA can perform hyperspectral mapping and save an X-ray spectrum for each pixel or for each segment. This approach allows the user to browse through elemental distribution maps of all elements detectable by means of energy-dispersive spectroscopy. Re-evaluation of the existing data for the presence of previously unconsidered elements is possible without the need to repeat the analysis. Additional tiers of data, such as secondary electron or cathodoluminescence images, can also be recorded. To take full advantage of these information-rich datasets, TIMA utilizes a new archiving tool introduced by TESCAN. The dataset size can be reduced for long-term storage, and all information can be recovered on demand in case of renewed interest. TESCAN TIMA is optimized for network storage of its datasets because of the larger data storage capacity of servers compared to local drives, which also allows multiple users to access the data remotely. This goes hand in hand with support for remote control of the entire data acquisition process. TESCAN also brings a newly extended open-source data format that allows other applications to extract, process and report AM data. This offers the ability to link TIMA data to large databases feeding plant performance dashboards or geometallurgical models. The traditional tabular particle-by-particle or grain-by-grain export process is preserved and can be customized with scripts to include user-defined particle/grain properties.
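Because the tabular export can be post-processed with scripts, a minimal sketch of the kind of user-defined grain property one might add is given below; the column names and values are hypothetical and do not reflect the actual TESCAN TIMA export schema, and a real workflow would read the exported table (for example with pd.read_csv) rather than build it inline.

```python
# Hedged sketch: adding a user-defined grain property to an assumed
# particle-by-particle export. Column names and values are hypothetical,
# not the actual TIMA export schema.
import math
import pandas as pd

particles = pd.DataFrame({
    "particle_id":      [1, 2, 3, 4],
    "dominant_mineral": ["chalcopyrite", "pyrite", "quartz", "chalcopyrite"],
    "area_um2":         [120.0, 310.0, 95.0, 220.0],
    "perimeter_um":     [42.0, 75.0, 39.0, 60.0],
})

# Circularity = 4*pi*A / P^2 (1.0 for a perfect circle), a common shape descriptor.
particles["circularity"] = (
    4 * math.pi * particles["area_um2"] / particles["perimeter_um"] ** 2
)

# Example downstream summary: mean circularity per dominant mineral phase, the kind
# of derived table that could feed a geometallurgical model or plant dashboard.
print(particles.groupby("dominant_mineral")["circularity"].mean())
```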

Keywords: Tescan, electron microscopy, mineralogy, SEM, automated mineralogy, database, TESCAN TIMA, open format, archiving, big data

Procedia PDF Downloads 110
897 The Digital Microscopy in Organ Transplantation: Ergonomics of the Tele-Pathological Evaluation of Renal, Liver, and Pancreatic Grafts

Authors: Constantinos S. Mammas, Andreas Lazaris, Adamantia S. Mamma-Graham, Georgia Kostopanagiotou, Chryssa Lemonidou, John Mantas, Eustratios Patsouris

Abstract:

The process of building a better safety culture, methods of error analysis, and preventive measures starts with an understanding of the effects of human factors engineering on remote microscopic diagnosis in surgery, and especially in organ transplantation for the evaluation of the grafts. In the UK, a high percentage of solid organs arrive at the recipient hospitals and are considered injured or improper for transplantation. Digital microscopy adds information on a microscopic level about the grafts (G) in Organ Transplantation (OT) and may lead to a change in their management. Such a method would reduce the possibility that a diseased G arrives at the recipient hospital for implantation. Aim: The aim of this study is to analyze the ergonomics of digital microscopy (DM) based on virtual slides, on telemedicine systems (TS), for the tele-pathological evaluation (TPE) of grafts (G) in organ transplantation (OT). Material and Methods: By experimental simulation, the ergonomics of DM for the microscopic TPE of renal graft (RG), liver graft (LG) and pancreatic graft (PG) tissues was analyzed. This corresponded to the ergonomics of digital microscopy for TPE in OT, applying a virtual slide (VS) system for graft tissue image capture, for the remote diagnosis of possible microscopic inflammatory and/or neoplastic lesions. Experimentation included the development of an experimental telemedicine system (Exp.-TS), similar to an OTE-TS, for simulating the integrated VS-based microscopic TPE of RG, LG and PG. Simulation of DM for TS-based TPE was performed by two specialists on a total of 238 human renal graft (RG), 172 liver graft (LG) and 108 pancreatic graft (PG) digital microscopic tissue images, screened for inflammatory and neoplastic lesions, on the electronic spaces of the four TS used. Results: Statistical analysis of the specialists' answers about the ability to accurately diagnose the diseased RG, LG and PG tissues on the electronic space (ES) among the four TS (A, B, C, D) showed that DM on TS for TPE in OT performs best on the ES of a desktop, followed by the ES of the applied Exp.-TS. Tablet and mobile-phone ES seem significantly risky for the application of DM in OT (p<.001). Conclusion: To achieve the largest reduction in errors and adverse events affecting the quality of the grafts, it will take the application of human factors engineering to procurement, design, audit, and awareness-raising activities. Consequently, it will take an investment in new training, people, and other changes to management activities for DM in OT. The simulated VS-based TPE with DM of RG, LG and PG tissues after retrieval seems feasible and reliable, and depends on the size of the electronic space of the applied TS, for remotely preventing diseased grafts from being retrieved and/or sent to the recipient hospital, and for post-grafting and pre-transplant planning.
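As an illustration of the kind of test behind a comparison like this, the sketch below runs a chi-square contingency test on correct versus incorrect diagnoses across four electronic spaces; the counts are invented placeholders and are not the study's data.

```python
# Generic sketch: comparing correct vs. incorrect TPE diagnoses across four
# electronic spaces with a chi-square contingency test. Counts are invented
# placeholders, not the study data.
from scipy.stats import chi2_contingency

#           correct  incorrect
counts = [
    [230,   8],   # desktop
    [220,  18],   # experimental telemedicine system (Exp.-TS)
    [170,  68],   # tablet
    [150,  88],   # mobile phone
]
chi2, p, dof, _ = chi2_contingency(counts)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2e}")
```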

Keywords: digital microscopy, organ transplantation, tele-pathology, virtual slides

Procedia PDF Downloads 281
896 In vitro Regeneration of Neural Cells Using Human Umbilical Cord Derived Mesenchymal Stem Cells

Authors: Urvi Panwar, Kanchan Mishra, Kanjaksha Ghosh, ShankerLal Kothari

Abstract:

Background: The increasing prevalence of neurodegenerative diseases has become a global issue for medical science to manage. Adult neural stem cells are rare and require an invasive and painful procedure to obtain them from the central nervous system. Mesenchymal stem cell (MSC) therapies have shown remarkable application in the treatment of various cell injuries and cell loss. MSCs can be derived from various sources, such as adult tissues, human bone marrow, umbilical cord blood and cord tissue. MSCs from these sources have similar proliferation and differentiation capability, but human umbilical cord-derived mesenchymal stem cells (hUCMSCs) have proved to be more beneficial with respect to cell procurement, differentiation into other cells, preservation, and transplantation. Material and method: The human umbilical cord is easily obtainable and non-controversial compared to bone marrow and other adult tissues. The umbilical cord can be collected after delivery of the baby, and its tissue can be cultured using the explant culture method. Culture media such as DMEM/F12 + 10% FBS and DMEM/F12 + neural growth factors (bFGF, human noggin, B27), with antibiotics (streptomycin/gentamicin), were used to culture MSCs and to differentiate them into neural cells, respectively. The MSCs were characterised by flow cytometry for the surface markers CD90, CD73 and CD105 and by a colony-forming unit assay. The differentiated neural cells will be characterised by fluorescence markers for neurons, astrocytes, and oligodendrocytes; by quantitative PCR for the genes Nestin and NeuroD1; and by Western blotting for the GAP43 protein. Result and discussion: MSCs of high quality and number were isolated from the human umbilical cord via the explant culture method. The obtained MSCs were differentiated into neural cells such as neurons, astrocytes and oligodendrocytes. The differentiated neural cells can be used to treat neural injuries and neural cell loss by non-invasive administration via the cerebrospinal fluid (CSF) or blood. Moreover, the MSCs can also be delivered directly to injured sites, where they differentiate into neural cells. Therefore, the human umbilical cord is demonstrated to be an inexpensive and easily available source of MSCs. Moreover, hUCMSCs are a potential source for neural cell therapies and neural regeneration in cases of neural cell injury and loss. This line of research will be helpful for treating and managing neural cell damage and neurodegenerative diseases like Alzheimer's and Parkinson's. The study still has a long way to go, but it is a promising approach for many neural disorders for which no satisfactory management is currently available.

Keywords: bone marrow, cell therapy, explant culture method, flow cytometer, human umbilical cord, mesenchymal stem cells, neurodegenerative diseases, neuroprotective, regeneration

Procedia PDF Downloads 202
895 Sculpted Forms and Sensitive Spaces: Walking through the Underground in Naples

Authors: Chiara Barone

Abstract:

In Naples, the visible architecture is only what emerges from the underground. Caves and tunnels cross it in every direction, intertwining with each other. They are not natural caves but spaces built by removing what is superfluous in order to dig a form out of the material. Architects, as sculptors of space, do not determine the exterior, what surrounds the volume and in which the forms live, but an interior underground space, perceptive and sensitive, able to generate new emotions each time. It is an intracorporeal architecture linked to the body, not in its external relationships, but rather in what happens inside. The proposed work aims to reflect on the design of underground spaces in the Neapolitan city. The idea is to conceive of the underground as a spectacular museum of the city, an opportunity to learn in situ the history of the place along an unpredictable itinerary that crosses the caves and, at certain points, emerges, escaping from the world of shadows. Starting from the analysis and study of the many overlapping elements - the archaeological layer, the geological layer and the contemporary city above - it is possible to develop realistic alternatives for underground itineraries. The objective is to define minor paths that ensure continuity between the touristic flows and entire underground segments already investigated but now disconnected: open-air paths which plunge into the earth, retracing historical and preserved fragments. The visitor, in this way, passes from real spaces to sensitive spaces, in which the imaginary replaces real experience, running towards exciting and secret knowledge. To safeguard the complex framework of historical-artistic values, it is essential to use a multidisciplinary methodology based on a global approach. Moreover, it is essential to refer to similar design projects for the archaeological underground, capable of guiding action strategies, looking at similar conditions in other cities where the project has led to an enhancement of the heritage in the city. The research limits the field of investigation by choosing the historic center of Naples, applying bibliographic and theoretical research to a real place. First of all, it is necessary to deepen knowledge of the places, understanding the potential of the project as a link between what is below and what is above. Starting from a scientific approach, in which theory and practice are constantly intertwined through the architectural project, the major contribution is to provide possible alternative configurations for the underground space and its relationship with the city above, understanding how the condition of transition, as a passage between the below and the above, becomes structuring in the design process. Starting from the consideration of the underground as both a real physical place and a sensitive place, which engages human memory, imagination, and sensitivity, the research aims at identifying possible configurations and actions useful for future urban programs to make the underground, once again, a central part of the lived city.

Keywords: underground paths, invisible ruins, imaginary, sculpted forms, sensitive spaces, Naples

Procedia PDF Downloads 103
894 Hygro-Thermal Modelling of Timber Decks

Authors: Stefania Fortino, Petr Hradil, Timo Avikainen

Abstract:

Timber bridges have an excellent environmental performance, are economical, relatively easy to build and can have a long service life. However, the durability of these bridges is the main problem because of their exposure to outdoor climate conditions. The moisture content accumulated in wood over long periods, in combination with certain temperatures, may create conditions suitable for timber decay. In addition, moisture content variations affect the structural integrity, serviceability and loading capacity of timber bridges. Therefore, monitoring the moisture content in wood is important for the durability of the material and for the whole superstructure. The measurements obtained by the usual sensor-based techniques provide hygro-thermal data only at specific locations of the wood components. In this context, the monitoring can be assisted by numerical modelling to get more information on the hygro-thermal response of the bridges. This work presents a hygro-thermal model based on a multi-phase moisture transport theory to predict the distribution of moisture content, relative humidity and temperature in wood. Below the fibre saturation point, the multi-phase theory simulates three phenomena in cellular wood during moisture transfer: the diffusion of water vapour in the pores, the sorption of bound water and the diffusion of bound water in the cell walls. In the multi-phase model, the two water phases are separated, and the coupling between them is defined through a sorption rate. Furthermore, an average between the temperature-dependent adsorption and desorption isotherms is used. In previous works by some of the authors, this approach was found to be very suitable for studying moisture transport in uncoated and coated stress-laminated timber decks. Compared to those works, the hygro-thermal fluxes on the external surfaces now include the influence of the solar radiation absorbed over time, and consequently the temperatures on the surfaces exposed to the sun are higher. This affects the whole hygro-thermal response of the timber component. The multi-phase model, implemented in a user subroutine of the Abaqus FEM code, provides the distribution of the moisture content, the temperature and the relative humidity in a volume of the timber deck. As a case study, hygro-thermal data in wood are collected from the ongoing monitoring of the stress-laminated timber deck of the Tapiola Bridge in Finland, based on integrated humidity-temperature sensors, and the numerical results are found to be in good agreement with the measurements. The proposed model, used to assist the monitoring, can contribute to reducing the maintenance costs of bridges, as well as the cost of instrumentation, and to increasing safety.
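A drastically simplified, one-dimensional sketch of the two-phase idea (bound water and water vapour diffusing separately and coupled through a sorption rate) is given below; the diffusivities, sorption rate, toy isotherm and explicit finite-difference discretisation are placeholders chosen for illustration and do not reproduce the calibrated Abaqus user-subroutine model.

```python
# Simplified 1D sketch of the multi-phase idea: bound water (w_b) and water
# vapour (here tracked as relative humidity, rh) diffuse separately and are
# coupled through a sorption rate. All parameters are illustrative placeholders.
import numpy as np

nx, L = 50, 0.10                  # nodes, deck thickness [m]
dx, dt = L / (nx - 1), 10.0       # grid spacing [m], time step [s]
D_b, D_v = 2e-10, 1e-7            # assumed bound-water / vapour diffusivities [m^2/s]
k_sorp = 1e-4                     # assumed sorption rate coupling the phases [1/s]

def equilibrium_bound_water(rh):
    """Toy average sorption isotherm: equilibrium moisture content at a given RH."""
    return 0.25 * rh              # kg water per kg dry wood (placeholder)

w_b = np.full(nx, equilibrium_bound_water(0.6))   # initial state at RH = 60%
rh  = np.full(nx, 0.6)

def lap(u):
    """Second-difference approximation of d2u/dx2 (zero at the boundary nodes)."""
    return np.concatenate(([0.0], np.diff(u, 2), [0.0])) / dx**2

def step(w_b, rh, rh_outside):
    sorption = k_sorp * (equilibrium_bound_water(rh) - w_b)   # phase-coupling term
    w_b = w_b + dt * (D_b * lap(w_b) + sorption)
    rh  = rh  + dt * (D_v * lap(rh) - sorption / 0.25)        # vapour loses what is sorbed
    rh[0] = rh[-1] = rh_outside                               # ambient RH at both surfaces
    return w_b, rh

for hour in range(24):                                        # one humid day at RH = 85%
    for _ in range(int(3600 / dt)):
        w_b, rh = step(w_b, rh, rh_outside=0.85)
print(f"near-surface MC ≈ {w_b[1]:.3f} kg/kg, mid-depth MC ≈ {w_b[nx // 2]:.3f} kg/kg")
```

In the full model, a surface flux term for the absorbed solar radiation and the coupled temperature field would enter through the boundary conditions.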

Keywords: moisture content, multi-phase models, solar radiation, timber decks, FEM

Procedia PDF Downloads 175
893 Search for Compounds with Antimicrobial and Antifungal Activity in the Series of 1-(2-(1H-Tetrazol-5-yl)-R1-phenyl)-3-R2-phenyl(ethyl)ureas

Authors: O. Antypenko, I. Vasilieva, S. Kovalenko

Abstract:

The search for new, effective and less toxic antimicrobial agents is always topical. Tetrazole derivatives are interesting objects both for synthesis and for pharmacological screening. Some derivatives of tetrazole have demonstrated antimicrobial activity; namely, 5-phenyl-tetrazolo[1,5-c]quinazoline was effective against Staphylococcus aureus and Enterococcus faecalis (MIC = 250 mg/L). Besides, investigation of the antimicrobial activity of 9-bromo(chloro)-5-morpholin(piperidine)-4-yl-tetrazolo[1,5-c]quinazolines against Escherichia coli, Enterococcus faecalis, Pseudomonas aeruginosa and Staphylococcus aureus revealed that the sensitivity of Gram-positive bacteria to the compounds was higher than that of Gram-negative bacteria. Therefore, 31 of our previously synthesized derivatives of 1-(2-(1H-tetrazol-5-yl)-R1-phenyl)-3-R2-phenyl(ethyl)ureas were tested for their in vitro antibacterial activity against Gram-positive bacteria (Staphylococcus aureus ATCC 25923, Enterobacter aerogenes, Enterococcus faecalis ATCC 29212) and Gram-negative bacteria (Pseudomonas aeruginosa ATCC 9027, Escherichia coli ATCC 25922, Klebsiella pneumoniae 68), and for antifungal properties against Candida albicans ATCC 885653. The agar diffusion method was used for determination of the preliminary activity compared to well-known reference antimicrobials. All the compounds were dissolved in DMSO at a concentration of 100 μg/disk, using the inhibition zone diameter (IZD, mm) as a measure of antimicrobial activity. The most active turned out to be three structures that inhibited several bacterial strains: 1-ethyl-3-(5-fluoro-2-(1H-tetrazol-5-yl)phenyl)urea (1), 1-(4-bromo-2-(1H-tetrazol-5-yl)-phenyl)-3-(4-(trifluoromethyl)phenyl)urea (2) and 1-(4-chloro-2-(1H-tetrazol-5-yl)phenyl)-3-(3-(trifluoromethyl)phenyl)urea (3). The IZD (mm) was 40 (Escherichia coli) and 25 (Klebsiella pneumoniae) for compound 1; 12 (Pseudomonas aeruginosa), 15 (Staphylococcus aureus) and 10 (Enterococcus faecalis) for compound 2; and 25 (Staphylococcus aureus) and 15 (Enterococcus faecalis) for compound 3. The most sensitive to the activity of the substances were the Gram-negative bacteria Pseudomonas aeruginosa, while none of the compounds had an effect on Candida albicans. As for the reference drugs, amikacin (30 µg/disk) showed 27 mm and ceftazidime (30 µg/disk) 25 mm against Pseudomonas aeruginosa, which is, unfortunately, higher than the values for the studied 1-(2-(1H-tetrazol-5-yl)-R1-phenyl)-3-R2-phenyl(ethyl)ureas. The results obtained will be used for further purposeful optimization of the lead compounds into more effective antimicrobials, given the ever-mounting problem of microbial resistance.

Keywords: antimicrobial, antifungal, compounds, 1-(2-(1H-tetrazol-5-yl)-R1-phenyl)-3-R2-phenyl(ethyl)ureas

Procedia PDF Downloads 360