Search results for: turbulent flow structure
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 11918

668 Enhanced Field Emission from Plasma Treated Graphene and 2D Layered Hybrids

Authors: R. Khare, R. V. Gelamo, M. A. More, D. J. Late, Chandra Sekhar Rout

Abstract:

Graphene has emerged as a promising material for applications ranging from complementary integrated circuits to optically transparent electrodes for displays and sensors. The excellent conductivity and atomically sharp edges of its unique two-dimensional structure make graphene a propitious field emitter. Graphene analogues of other 2D layered materials have emerged in materials science and nanotechnology due to the enriched physics and novel enhanced properties they present. There are several advantages to using 2D nanomaterials in field-emission-based devices, including a thickness of only a few atomic layers, a high aspect ratio (the ratio of lateral size to sheet thickness), excellent electrical properties, extraordinary mechanical strength, and ease of synthesis. Furthermore, the presence of edges can enhance the tunneling probability for electrons in layered nanomaterials, similar to that seen in nanotubes. Here we report the electron emission properties of multilayer graphene and the effect of plasma (CO2, O2, Ar and N2) treatment. The plasma-treated multilayer graphene shows enhanced field emission behavior, with a low turn-on field of 0.18 V/μm and a high emission current density of 1.89 mA/cm2 at an applied field of 0.35 V/μm. Further, we report field emission studies of layered WS2/RGO and SnS2/RGO composites. The turn-on field required to draw a field emission current density of 1 μA/cm2 is found to be 3.5, 2.3 and 2 V/μm for WS2, RGO and the WS2/RGO composite, respectively. The enhanced field emission behavior observed for the WS2/RGO nanocomposite is attributed to a high field enhancement factor of 2978, which is associated with the surface protrusions of the single-to-few-layer-thick sheets of the nanocomposite. The highest current density, ~800 µA/cm2, is drawn at an applied field of 4.1 V/μm from a few layers of the WS2/RGO nanocomposite.
Furthermore, first-principles density functional calculations suggest that the enhanced field emission may also be due to an overlap of the electronic structures of WS2 and RGO, with graphene-like states appearing in the region of the WS2 fundamental gap. Similarly, the turn-on field required to draw an emission current density of 1 µA/cm2 is significantly lower (almost half the value) for the SnS2/RGO nanocomposite (2.65 V/µm) than for pristine SnS2 nanosheets (4.8 V/µm). The field enhancement factor β (~3200 for SnS2 and ~3700 for the SnS2/RGO composite), calculated from Fowler-Nordheim (FN) plots, indicates emission from the nanometric geometry of the emitter. The field emission current versus time plot shows good overall emission stability for the SnS2/RGO emitter. The DFT calculations reveal that the enhanced field emission properties of the SnS2/RGO composite arise from a substantial lowering of the work function of SnS2 when supported by graphene, in response to p-type doping of the graphene substrate. Graphene and its 2D analogue materials thus emerge as potential candidates for future field emission applications.
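The enhancement factor β above comes from the slope of the FN plot, ln(J/E²) versus 1/E, whose slope is -Bφ^(3/2)/β. The sketch below illustrates the extraction on synthetic data; the work function and the data points are illustrative assumptions, not values from the abstract.

```python
import numpy as np

# Fowler-Nordheim constant B in V * eV^(-3/2) * um^(-1)
B_FN = 6.83e3
phi = 5.0  # assumed work function of the emitter in eV (illustrative)

# Hypothetical field-emission data: applied field E (V/um) and current
# density J (arbitrary units), generated from a known beta for the demo.
E = np.array([2.0, 2.5, 3.0, 3.5, 4.0])
beta_true = 3200.0
J = E**2 * np.exp(-B_FN * phi**1.5 / (beta_true * E))

# FN plot: ln(J/E^2) vs 1/E is linear with slope -B * phi^(3/2) / beta
slope, intercept = np.polyfit(1.0 / E, np.log(J / E**2), 1)
beta = -B_FN * phi**1.5 / slope
print(round(beta))  # recovers the enhancement factor used to generate the data
```

On real measurements the fit would be done on the linear region of the FN plot only, and β scales inversely with the assumed work function.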

Keywords: graphene, layered material, field emission, plasma, doping

Procedia PDF Downloads 356
667 The Rise in Popularity of Online Islamic Fashion In Indonesia: An Economic, Political, and Socio-Anthropological Perspective

Authors: Cazadira Fediva Tamzil, Agung Sulthonaulia Utama

Abstract:

The rise in popularity of Indonesian Islamic fashion displayed and sold through social networking sites, especially Instagram, might seem at first glance like a commonplace and localized phenomenon. When analyzed critically, however, it reveals the relations between the global and local Indonesian economy, as well as a deep socio-anthropological dimension relating to religion, culture, class, work, and identity. Conducted using a qualitative methodology, with data collected through literature review and observation of various social networking sites, this research finds four things that lead to the aforementioned conclusion. First, the rise of online Islamic fashion retailers was triggered by the shift in the structure of the global and national Indonesian economy, as well as by the free access to information made possible by democratization in Indonesia and worldwide advances in technology. All of these factors combined gave rise to a large number of middle-class Indonesians with a strong consumer culture and entrepreneurial flair. Second, online Islamic fashion retailers are the new cultural trendsetters in society. All this shows how Indonesians are becoming increasingly pious, no longer adhere only to Western conceptions of luxury, and increasingly exploit Islam for commercial and status-acquiring purposes. Third, the online Islamic fashion retailers reveal a shift in the conception of 'work': social media has made work no longer confined to the toiling activities inside factories, but instead something that can be done from any location simply by posting words or pictures online that increase a fashion product's capital value.
Without realizing it, many celebrities and online retailers who promote Islamic fashion through social media on a daily basis are now also 'semi-free immaterial laborers' – a slight reconceptualization of Tiziana Terranova's concept of 'free labor' and Maurizio Lazzarato's 'immaterial labor', which refer to people who create economic value, and thus benefit capital, by producing immaterial things with only little compensation in return. Fourth, this research also shows that the diversity of Islamic fashion styles sold on Instagram reflects the polarized identity of Islam in Indonesia. In stark contrast with the theory that globalization always leads to the strengthening and unification of identity, this research shows how polarized the Islamic identity in Indonesia really is, even in the face of globalization.

Keywords: global economy, Indonesian online Islamic fashion, political relations, socio-anthropology

Procedia PDF Downloads 339
666 An Effective Modification to Multiscale Elastic Network Model and Its Evaluation Based on Analyses of Protein Dynamics

Authors: Weikang Gong, Chunhua Li

Abstract:

Dynamics plays an essential role in the exertion of protein function. The elastic network model (ENM), a harmonic-potential-based and cost-effective computational method, is a valuable and efficient tool for characterizing the intrinsic dynamical properties encoded in biomacromolecule structures and has been widely used to detect the large-amplitude collective motions of proteins. The Gaussian network model (GNM) and the anisotropic network model (ANM) are the two most often used ENM models, and many ENM variants have been proposed in recent years. Here, we propose a small but effective modification (denoted the modified mENM) to the multiscale ENM (mENM), in which the fitting of the weights of the Kirchhoff/Hessian matrices with the least squares method (LSM) is modified, since the original fitting neglects the details of pairwise interactions. We then compare it with the original mENM, the traditional ENM, and the parameter-free ENM (pfENM) in reproducing the dynamical properties of six representative proteins whose molecular dynamics (MD) trajectories are available at http://mmb.pcb.ub.es/MoDEL/. In the results, for B-factor prediction, mENM achieves the best performance among the four ENM models. Additionally, with the weights of the multiscale Kirchhoff/Hessian matrices modified, the modified mGNM/mANM interestingly still performs much better than the corresponding traditional ENM and pfENM models. As to the dynamical cross-correlation map (DCCM) calculation, taking the data obtained from the MD trajectories as the standard, mENM performs the worst, while the results produced by the modified mENM and pfENM models are close to those from the MD trajectories, with the latter a little better than the former. Generally, the ANMs perform better than the corresponding GNMs, except for the mENM. Thus, pfANM and the modified mANM, especially the former, perform excellently in the dynamical cross-correlation calculation.
Compared with the GNMs (except for mGNM), the corresponding ANMs can capture quite a number of positive correlations for residue pairs separated by nearly the largest distances, which may be due to the consideration of anisotropy in the ANMs. Further, the modified mANM encouragingly displays the best performance in capturing the functional motional modes, followed by the pfANM and traditional ANM models, while mANM fails in all cases. This suggests that the consideration of long-range interactions is critical for ANM models to reproduce protein functional motions. Based on these analyses, the modified mENM is a promising method for capturing the multiple dynamical characteristics encoded in protein structures. This work helps strengthen the understanding of the elastic network model and provides a valuable guide for researchers utilizing the model to explore protein dynamics.
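The B-factor prediction discussed above rests on the standard GNM machinery: residue fluctuations are proportional to the diagonal of the pseudoinverse of the Kirchhoff (connectivity) matrix. A minimal sketch, on synthetic coordinates rather than the proteins of the study, and with the commonly used ~7 Å cutoff as an assumption:

```python
import numpy as np

def gnm_bfactors(coords, cutoff=7.0):
    """Relative B-factors from a basic Gaussian network model (GNM)."""
    n = len(coords)
    # pairwise C-alpha distances
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    # Kirchhoff matrix: -1 for contacts within the cutoff, degree on diagonal
    kirchhoff = -(d <= cutoff).astype(float)
    np.fill_diagonal(kirchhoff, 0.0)
    np.fill_diagonal(kirchhoff, -kirchhoff.sum(axis=1))
    # mean-square fluctuations ~ diagonal of the pseudoinverse (up to kT/gamma)
    gamma_inv = np.linalg.pinv(kirchhoff)
    return np.diag(gamma_inv)

# toy random-walk chain standing in for C-alpha coordinates (angstroms)
rng = np.random.default_rng(0)
coords = np.cumsum(rng.normal(scale=2.0, size=(20, 3)), axis=0)
b = gnm_bfactors(coords)
print(b.shape)  # one relative fluctuation value per residue
```

The multiscale and modified-mENM variants discussed in the abstract replace the single uniform spring constant with fitted, distance-dependent weights on this matrix; the sketch shows only the baseline model they modify.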

Keywords: elastic network model, ENM, multiscale ENM, molecular dynamics, parameter-free ENM, protein structure

Procedia PDF Downloads 113
665 A Conceptual Study for Investigating the Preliminary State of Energy at the Birth of Universe and Understanding Its Emergence From the State of Nothing

Authors: Mahmoud Reza Hosseini

Abstract:

The universe is in a continuous process of expansion, resulting in the reduction of its density and temperature. By extrapolating back from its current state, the universe at its early times is studied under what is known as the big bang theory. According to this theory, moments after creation the universe was an extremely hot and dense environment; its rapid expansion, however, led to a reduction in its temperature and density. This is evidenced by the cosmic microwave background and the large-scale structure of the universe. Extrapolating back further from this early state, however, reaches a singularity which cannot be explained by modern physics, and the big bang theory is no longer valid. In addition, one would expect a nonuniform energy distribution across the universe to result from a sudden expansion. However, highly accurate measurements reveal an equal temperature mapping across the universe, which contradicts the big bang principles. To resolve this issue, it is believed that cosmic inflation occurred at the very early stages of the birth of the universe. According to the cosmic inflation theory, the elements which formed the universe underwent a phase of exponential growth due to the existence of a large cosmological constant. The inflation phase allows the uniform distribution of energy, so that an equal maximum temperature could be achieved across the early universe. Also, the evidence of quantum fluctuations at this stage provides a means of studying the types of imperfections the universe would have begun with. Although well-established theories such as cosmic inflation and the big bang together provide a comprehensive picture of the early universe and how it evolved into its current state, they are unable to address the singularity paradox at the time of the universe's creation. Therefore, a practical model capable of describing how the universe was initiated is needed.
This research series aims at addressing the singularity issue by introducing a state of energy called a 'neutral state', possessing an energy level referred to as the 'base energy'. The governing principles of the base energy are discussed in detail in the second paper of the series, 'A Conceptual Study for Addressing the Singularity of the Emerging Universe'. To establish a complete picture, the origin of the base energy should be identified and studied. In this paper, the mechanism which led to the emergence of this neutral state and its corresponding base energy is proposed. In addition, the effect of the base energy on the space-time fabric is discussed. Finally, the possible role of the base energy in quantization and energy exchange is investigated. The concept proposed in this research series thus provides a road map for enhancing our understanding of the universe's creation from nothing and its evolution, and discusses the possibility of the base energy being one of the main building blocks of this universe.

Keywords: big bang, cosmic inflation, birth of universe, energy creation, universe evolution

Procedia PDF Downloads 39
664 Severe Infestation of Laspeyresia Koenigana Fab. and Alternaria Leaf Spot on Azadirachta Indica (Neem)

Authors: Shiwani Bhatnagar, K. K. Srivastava, Sangeeta Singh, Ameen Ullah Khan, Bundesh Kumar, Lokendra Singh Rathore

Abstract:

Since the dawn of civilization, medicinal plants have been part and parcel of human society in the fight against disease. Azadirachta indica (neem), a herbal plant, has been used in Indian traditional medicine for ages, and its products are acknowledged to solve agricultural, forestry, and public-health-related problems, owing to its beneficial medicinal properties. Each part of the neem tree is known for its medicinal properties. Bark and leaf extracts of neem have been used to control leprosy, respiratory disorders, and constipation, and also as a blood purifier and general health tonic. Neem is still regarded as a 'rural community dispensary' in India – a tree for solving medical problems. The use of neem as a pesticide for the management of insect pests of agricultural crops and forestry has been seen as a shift from synthetic pesticides to ecofriendly botanicals. Neem oil and seed extracts possess germicidal and antibacterial properties which, when sprayed on a plant, help protect it from foliage pests. Azadirachtin, the main active ingredient found in the neem tree, acts as an insect repellent and antifeedant. However, young plants are susceptible to many insect pests and foliar diseases. Recently, in the avenue plantation planted by the Arid Forest Research Institute, Jodhpur, around the premises of IIT Jodhpur, two-year-old neem plants were found to be severely infested with the tip borer Laspeyresia koenigana (family: Eucosmidae). The adult moth of L. koenigana lays eggs on the tender shoots, and the young larvae tunnel into the shoot and feed inside. A small pinhole can be seen at the entrance point, from where the larva enters the stem. The severely attacked apical shoots exhibit profuse gum exudation, resulting in the development of a callus structure. The internal feeding causes the stem to wilt and the leaves to dry up from the tips, resulting in growth retardation. Alternaria leaf spot and blight symptoms were also recorded on these neem plants.
For the management of the tip borer and Alternaria leaf spot, foliar sprays of monocrotophos @ 0.05%, Dithane M-45 @ 0.15%, and Powermin @ 2 ml/l were found efficient in managing the insect pest and foliar disease problems. No further incidence of pests or diseases was noticed.

Keywords: azadirachta indica, alternaria leaf spot, laspeyresia koenigana, management

Procedia PDF Downloads 464
663 Raman Spectroscopy Analysis of MnTiO₃-TiO₂ Eutectic

Authors: Adrian Niewiadomski, Barbara Surma, Katarzyna Kolodziejak, Dorota A. Pawlak

Abstract:

Oxide-oxide eutectics are attracting increasing interest from the scientific community because of their unique properties and numerous potential applications, some of the most interesting being metamaterials, glucose sensors, photoactive materials, thermoelectric materials, and photocatalysts. Their unique properties result from the fact that composite materials consist of two or more phases; as a result, these materials have both additive and product properties. Additive properties originate from the particular phases, while product properties originate from the interaction between the phases. The MnTiO3-TiO2 eutectic is one such material. TiO2 is a well-known semiconductor used as a photocatalyst; moreover, it may be used to produce solar cells, in gas-sensing devices, and in electrochemistry. MnTiO3 is a semiconductor and an antiferromagnet, and therefore has potential applications in integrated circuit devices, as a gas and humidity sensor, in non-linear optics, and as a visible-light-activated photocatalyst. The above facts indicate that the MnTiO3-TiO2 eutectic is an extremely promising material that should be studied. Although Raman spectroscopy is a powerful method for characterizing materials, Raman studies of eutectics are, to our knowledge, very limited, there are none of the MnTiO3-TiO2 eutectic, and the papers regarding this material are scarce. The MnTiO3-TiO2 eutectic, as well as TiO2 and MnTiO3 single crystals, were grown by the micro-pulling-down method at the Institute of Electronic Materials Technology in Warsaw, Poland. A nitrogen atmosphere was maintained during the whole crystal growth process. The as-grown samples of the MnTiO3-TiO2 eutectic, as well as the TiO2 and MnTiO3 single crystals, are black and opaque. The samples were cut perpendicular to the growth direction, and the cross sections were examined with scanning electron microscopy (SEM) and Raman spectroscopy.
The present studies showed that maintaining a nitrogen atmosphere during the crystal growth process may result in black TiO2 crystals. The SEM and Raman experiments showed that the studied eutectic consists of three distinct regions: two of these correspond to MnTiO3, while the third corresponds to a TiO2-xNx phase. The Raman studies indicated that the TiO2-xNx phase crystallizes in the rutile structure, and they show that Raman experiments can be successfully used to characterize eutectic materials. In summary, the MnTiO3-TiO2 eutectic was grown by the micro-pulling-down method, and SEM and micro-Raman experiments were used to establish the phase composition of the studied eutectic. The studies revealed that the TiO2 phase is doped with nitrogen and is therefore, in fact, a solid solution of TiO2-xNx composition. The remaining two phases exhibit Raman lines of both rutile TiO2 and MnTiO3, which points to some kind of coexistence of these phases in the studied eutectic.

Keywords: compound materials, eutectic growth and characterization, Raman spectroscopy, rutile TiO₂

Procedia PDF Downloads 186
662 From Modelled Design to Reality through Material and Machinery Lab and Field Tests: Porous Concrete Carparks at the Wanda Metropolitano Stadium in Madrid

Authors: Manuel de Pazos-Liano, Manuel Cifuentes-Antonio, Juan Fisac-Gozalo, Sara Perales-Momparler, Carlos Martinez-Montero

Abstract:

The first-ever game in the Wanda Metropolitano Stadium, the new home of Club Atletico de Madrid, was played on September 16, 2017, thanks to the work of a multidisciplinary team that made it possible to combine urban development with sustainability goals. The new football ground sits on 1.2 km² of land owned by the city of Madrid. Its construction dramatically increased the sealed area of the site (raising the runoff coefficient from 0.35 to 0.9), and the surrounding sewer network has no capacity for that extra flow. As an alternative to enlarging the existing 2.5 m diameter pipes, it was decided to detain runoff on site by means of an integrated and durable infrastructure that would neither inflate the construction cost nor burden the municipality's maintenance tasks. Instead of the more conventional option of building a large concrete detention tank, the decision was taken to use pervious pavement on the 3013 car parking spaces for sub-surface water storage, a solution aligned with the city water ordinance and the Madrid + Natural project. Making the idea a reality in only five months and during the summer season (which forced the porous concrete to be poured only overnight) was a challenge never faced before in Spain, one that required innovation on both the material and the machinery side.
The process consisted of: a) defining the characteristics required of the porous concrete (compressive strength of 15 N/mm2 and 20% voids); b) testing different porous concrete dosages at the construction company's laboratory; c) establishing the cross section so as to provide structural strength and sufficient water detention capacity (20 cm of porous concrete over 5 cm of 5/10 gravel, sitting on a 50 cm coarse 40/50 aggregate sub-base separated by a virgin-fiber polypropylene geotextile fabric); d) hydraulic computer modelling (using the Full Hydrograph Method based on the Wallingford Procedure) to estimate the decrease in design peak flows (an average of 69% at the three car parking lots); e) use of a variety of machinery for the application of the porous concrete to achieve both structural strength and a permeable surface (including an inverse rotating roller imported from the USA, and the so-called CMI, a sliding concrete paver used in the construction of motorways with rigid pavements); and f) full-scale pilots and final construction testing by an accredited laboratory (pavement compressive strength average value of 15 N/mm2 and 0.0032 m/s permeability). The continuous testing and innovating construction process explained in detail in this article allowed for growing performance over time, finally proving the CMI valid also for large porous car park applications. All this resulted in a success story that converts the Wanda Metropolitano Stadium into a great demonstration site that will help the application of the Spanish Royal Decree 638/2016 (the site also includes rainwater harvesting for grass irrigation).
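The detention capacity of the cross section in step c) can be approximated by summing layer thickness times void ratio. Only the 20% voids of the porous concrete is stated in the abstract; the void ratios for the gravel and the aggregate sub-base below are assumed illustrative values, so the result is a back-of-the-envelope check, not a design figure.

```python
# Approximate sub-surface storage of the permeable pavement cross section:
# layer name, thickness (m), void ratio (fraction of volume available to water)
layers = [
    ("porous concrete", 0.20, 0.20),   # 20% voids, as stated in the abstract
    ("5/10 gravel",     0.05, 0.30),   # assumed void ratio
    ("40/50 sub-base",  0.50, 0.35),   # assumed void ratio
]

# total water depth that can be stored per square metre of pavement
storage_m = sum(thickness * voids for _, thickness, voids in layers)
print(f"storage depth: {storage_m * 1000:.0f} mm of rainfall per m2")  # 230 mm
```

A storage depth of this order, spread over all 3013 parking spaces, is what allows the pavement to replace a conventional detention tank for the design storms modelled with the Full Hydrograph Method.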

Keywords: construction machinery, permeable carpark, porous concrete, SUDS, sustainable development

Procedia PDF Downloads 134
661 Trade in Value Added: The Case of the Central and Eastern European Countries

Authors: Łukasz Ambroziak

Abstract:

Although the impact of production fragmentation on trade flows has been examined many times since the 1990s, the research was not comprehensive because of the limitations of traditional trade statistics. In the early 2010s, complex databases containing world input-output tables (or indicators calculated on their basis) became available, increasing the possibilities for examining production sharing in the world. Trade statistics in value-added terms enable us to better estimate the trade changes resulting from internationalisation and globalisation, as well as the benefits countries derive from international trade. There are many research studies on this topic in the literature; unfortunately, the trade in value added of the Central and Eastern European countries (CEECs) has so far been insufficiently studied. Thus, the aim of the paper is to present changes in the value added trade of the CEECs (Bulgaria, the Czech Republic, Estonia, Hungary, Latvia, Lithuania, Poland, Romania, Slovakia and Slovenia) in the period 1995-2011. The concept 'trade in value added', or 'value added trade', is defined as the value added of a country which is directly and indirectly embodied in the final consumption of another country. The typical question would be: 'How much value added is created in a country due to final consumption in other countries?' The data will be downloaded from the World Input-Output Database (WIOD). The structure of this paper is as follows. First, theoretical and methodological aspects related to the application of input-output tables in trade analysis will be studied. Second, a brief survey of the empirical literature on this topic will be presented. Third, changes in the exports and imports in value added of the CEECs will be analysed. Special attention will be paid to the differences in bilateral trade balances using traditional trade statistics (in gross terms) on the one hand, and value added statistics on the other.
Next, in order to identify the factors influencing the value added exports and imports of the CEECs, a generalised gravity model based on panel data will be used. The dependent variables will be value added exports and imports. The independent variables will include the level of GDP of the trading partners, the level of GDP per capita of the trading partners, the differences in GDP per capita, the level of the inward FDI stock, the geographical distance, the existence (or not) of a common border, and membership (or not) in preferential trade agreements or in the EU. For comparison, an estimation will also be made based on exports and imports in gross terms. The initial research results show that the gravity model explains the determinants of trade in value added better than those of gross trade (the R2 is higher in the former). The independent variables had the same direction of impact on both value added exports/imports and gross exports/imports; only the values of the coefficients differ. The largest difference concerned geographical distance, which had a smaller impact on trade in value added than on gross trade.
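The core of the estimation described above is a log-linear gravity regression. A minimal sketch on synthetic data follows; the study itself uses panel data with more regressors (FDI stock, common border, EU membership, etc.), so the variable set and coefficients here are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200  # synthetic country-pair observations

# hypothetical regressors: partner GDPs and bilateral distance
gdp_i = rng.uniform(1, 100, n)
gdp_j = rng.uniform(1, 100, n)
dist = rng.uniform(100, 5000, n)

# generate "value added exports" from known coefficients plus noise:
# ln(X_ij) = b0 + b1*ln(GDP_i) + b2*ln(GDP_j) + b3*ln(dist_ij) + e
beta = np.array([1.0, 0.8, 0.9, -0.5])
X = np.column_stack([np.ones(n), np.log(gdp_i), np.log(gdp_j), np.log(dist)])
y = X @ beta + rng.normal(scale=0.05, size=n)

# OLS estimate of the gravity coefficients
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta_hat)  # estimates close to the coefficients used to generate y
```

The paper's comparison between value added and gross trade amounts to running this regression twice with different dependent variables and comparing the fitted coefficients (notably on distance) and the R² values.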

Keywords: central and eastern European countries, gravity model, input-output tables, trade in value added

Procedia PDF Downloads 232
660 The Influence of Operational Changes on Efficiency and Sustainability of Manufacturing Firms

Authors: Dimitrios Kafetzopoulos

Abstract:

Nowadays, companies are increasingly concerned with adopting their own strategies for greater efficiency and sustainability. Dynamic environments are fertile fields for developing operational changes. For this purpose, organizations need to implement an advanced management philosophy that fosters changes in companies' operations. Changes refer to new applications of knowledge, ideas, methods, and skills that can generate unique capabilities and leverage an organization's competitiveness. So, in order to survive and compete in global and niche markets, companies should incorporate the adoption of operational changes into their strategy with regard to both their products and their processes. Creating the appropriate culture for change in terms of products and processes helps companies gain a sustainable competitive advantage in the market. Thus, the purpose of this study is to investigate the role of both incremental and radical changes in the operations of a company, taking into consideration not only product changes but also process changes, and to measure the impact of these two types of changes on the business efficiency and sustainability of Greek manufacturing companies. The above discussion leads to the following hypotheses: H1: Radical operational changes have a positive impact on firm efficiency. H2: Incremental operational changes have a positive impact on firm efficiency. H3: Radical operational changes have a positive impact on firm sustainability. H4: Incremental operational changes have a positive impact on firm sustainability. In order to achieve the objectives of the present study, a research study was carried out in Greek manufacturing firms. A total of 380 valid questionnaires were received, and a seven-point Likert scale was used to measure all the questionnaire items of the constructs (radical changes, incremental changes, efficiency, and sustainability).
The constructs of radical and incremental operational changes, each treated as one variable, have been subdivided into product and process changes. Non-response bias, common method variance, multicollinearity, multivariate normal distribution, and outliers were checked. Moreover, the unidimensionality, reliability, and validity of the latent factors were assessed. Exploratory Factor Analysis and Confirmatory Factor Analysis were applied to check the factorial structure of the constructs and the factor loadings of the items. In order to test the research hypotheses, the SEM technique was applied (maximum likelihood method). The goodness of fit of the basic structural model indicates an acceptable fit of the proposed model. According to the present study's findings, radical operational changes and incremental operational changes significantly influence both the efficiency and the sustainability of Greek manufacturing firms. However, it is in the dimension of radical operational changes, meaning those in process and product, that the most significant contributors to firm efficiency are to be found, while their influence on sustainability is low, albeit statistically significant. On the contrary, incremental operational changes influence sustainability more than firms' efficiency. From the above, it is apparent that embodying the concept of change in a firm's product and process operational practices has direct and positive consequences for what it achieves from an efficiency and sustainability perspective.

Keywords: incremental operational changes, radical operational changes, efficiency, sustainability

Procedia PDF Downloads 127
659 Thermal Method Production of the Hydroxyapatite from Bone By-Products from Meat Industry

Authors: Agnieszka Sobczak-Kupiec, Dagmara Malina, Klaudia Pluta, Wioletta Florkiewicz, Bozena Tyliszczak

Abstract:

Introduction: Demand for phosphorus compounds grows continuously; thus, alternative sources of this element are being sought. One of these sources could be by-products from the meat industry, which contain significant quantities of phosphorus compounds. Hydroxyapatite, a natural component of animal and human bones, is a leading material applied in bone surgery and also in stomatology. It is a material which is biocompatible, bioactive, and osteoinductive. Methodology: Hydroxyapatite preparation: The raw material was deproteinized and defatted bone pulp called bone sludge, which was formed as waste in the deproteinization process of bones, in which a protein hydrolysate was the main product. Hydroxyapatite was obtained by calcination in a chamber kiln with electric heating, in an air atmosphere, in two stages. In the first stage, the material was calcined at 600°C for 3 hours. In the next stage, the unified material was calcined at three different temperatures (750°C, 850°C, and 950°C), holding the material at the maximum temperature for 3.0 hours. Bone sludge: Pork bones coming from the partition of meat were used as the raw material for the production of the protein hydrolysate. After disintegration, a mixture of bone pulp and water with a small amount of lactic acid was boiled at a temperature of 130-135°C and under a pressure of 4 bar. After 3-3.5 hours, the boiled-out bones were separated on a sieve, and the solution of protein-fat hydrolysate passed into a decanter, where the bone sludge was separated from it. Results of the study: The phase composition was analyzed by the roentgenographic method. Hydroxyapatite was the only crystalline phase observed in all the calcination products. The XRD investigation showed that the degree of crystallization of the hydroxyapatite increased with calcination temperature.
Conclusion: The research showed that the phosphorus content is around 12%, whereas the calcium content amounts to 28% on average. The conducted research on bone-waste calcination at temperatures of 750-950°C confirmed that thermal utilization of deproteinized bone waste is possible. The X-ray investigations confirmed that hydroxyapatite is the main component of the calcination products and showed that its degree of crystallization increased with calcination temperature. The contents of calcium and phosphorus increased distinctly with calcination temperature, whereas the content of acid-soluble phosphorus decreased. This could be connected with the higher degree of crystallization of the material obtained at higher temperatures and its stable structure. Acknowledgements: The authors would like to thank The National Centre for Research and Development (Grant no: LIDER//037/481/L-5/13/NCBR/2014) for providing financial support to this project.
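The reported average contents (28% Ca, 12% P) can be checked against the stoichiometry of hydroxyapatite, Ca10(PO4)6(OH)2, whose ideal Ca/P molar ratio is 10/6 ≈ 1.67. A quick conversion from weight percent to molar ratio:

```python
# Standard atomic masses (g/mol)
CA_MOLAR_MASS = 40.08
P_MOLAR_MASS = 30.97

# Average elemental contents reported above (wt%)
ca_wt, p_wt = 28.0, 12.0

# Convert each wt% to moles per 100 g of material, then take the ratio
ratio = (ca_wt / CA_MOLAR_MASS) / (p_wt / P_MOLAR_MASS)
print(f"Ca/P molar ratio: {ratio:.2f}")  # ~1.80, versus the ideal 1.67
```

A ratio slightly above 1.67 is plausible for bone-derived apatite, which commonly contains excess calcium phases such as CaO or carbonates alongside hydroxyapatite.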

Keywords: bone by-products, bone sludge, calcination, hydroxyapatite

Procedia PDF Downloads 276
658 Anti-Obesity Effects of Pteryxin in Peucedanum japonicum Thunb Leaves through Different Pathways of Adipogenesis In-Vitro

Authors: Ruwani N. Nugara, Masashi Inafuku, Kensaku Takara, Hironori Iwasaki, Hirosuke Oku

Abstract:

Pteryxin, from the partially purified hexane phase (HP) of Peucedanum japonicum Thunb (PJT), was identified as the active compound responsible for anti-obesity effects. Thus, in this study we investigated the mechanisms underlying this anti-obesity activity in-vitro. The HP was fractionated, and its effect on triglyceride (TG) content was evaluated in 3T3-L1 and HepG2 cells. Comprehensive spectroscopic analyses were used to identify the structure of the active compound. The dose-dependent effect of the active constituent on TG content, and the gene expressions related to adipogenesis, fatty acid catabolism, energy expenditure, lipolysis and lipogenesis (20 μg/mL), were examined in-vitro. Furthermore, a higher dosage of pteryxin (50 μg/mL) was tested against 20 μg/mL in 3T3-L1 adipocytes. The mRNA was sequenced on a SOLiD next-generation sequencer, and the data obtained were analyzed by Ingenuity Pathway Analysis (IPA). The active constituent was identified as pteryxin, a known compound in PJT; however, its biological activities against obesity had not been reported previously. Pteryxin dose-dependently suppressed TG content in both 3T3-L1 adipocytes and HepG2 hepatocytes (P < 0.05). Sterol regulatory element-binding protein-1c (SREBP-1c), fatty acid synthase (FASN), and acetyl-CoA carboxylase-1 (ACC1) were downregulated in pteryxin-treated adipocytes (by 18.0, 36.1 and 38.2%, respectively; P < 0.05) and hepatocytes (by 72.3, 62.9 and 38.8%, respectively; P < 0.05), indicating suppressive effects on fatty acid synthesis. Hormone-sensitive lipase (HSL), a lipid-catabolizing gene, was upregulated (by 15.1%; P < 0.05) in pteryxin-treated adipocytes, suggesting improved lipolysis. Concordantly, the adipocyte size marker gene paternally expressed gene 1/mesoderm-specific transcript (MEST) was downregulated (by 42.8%; P < 0.05), further accelerating lipolytic activity.
The upregulated trend of uncoupling protein 2 (UCP2; by 77.5%; P < 0.05) reflected the improved energy expenditure due to pteryxin. The 50 μg/mL dosage of pteryxin completely suppressed PPARγ, MEST, SREBP-1c, HSL, adiponectin, fatty acid binding protein 4 (FABP4), and UCPs in 3T3-L1 adipocytes. The IPA suggested that pteryxin at 20 μg/mL and 50 μg/mL suppresses obesity through two different pathways, with the WNT signaling pathway playing a key role at the higher dose in the preadipocyte stage. Pteryxin in PJT plays the key role in regulating the lipid metabolism-related gene network and improving energy production in vitro. Thus, the results suggest pteryxin as a new natural compound for use as an anti-obesity drug in the pharmaceutical industry.

Keywords: obesity, peucedanum japonicum thunb, pteryxin, food science

Procedia PDF Downloads 444
657 Redox-labeled Electrochemical Aptasensor Array for Single-cell Detection

Authors: Shuo Li, Yannick Coffinier, Chann Lagadec, Fabrizio Cleri, Katsuhiko Nishiguchi, Akira Fujiwara, Soo Hyeon Kim, Nicolas Clément

Abstract:

The need for single-cell detection and analysis techniques has increased in the past decades because of the heterogeneity of individual living cells, which increases the complexity of the pathogenesis of malignant tumors. In the search for early cancer detection and high-precision medicine and therapy, the technologies most used today for sensitive detection of target analytes and for monitoring the variation of these species fall mainly into two types. One is based on the identification of molecular differences at the single-cell level, such as flow cytometry, fluorescence-activated cell sorting, next-generation proteomics, and lipidomic studies; the other is based on capturing or detecting single tumor cells from fresh or fixed primary tumors and metastatic tissues, and rare circulating tumor cells (CTCs) from blood or bone marrow, for example the dielectrophoresis technique, microfluidic micropost-based chips, and electrochemical (EC) approaches. Compared to other methods, EC sensors have the merits of easy operation, high sensitivity, and portability. However, despite various demonstrations of low limits of detection (LOD), including aptamer sensors, arrayed EC sensors for detecting single cells have not been demonstrated. In this work, a new technique is introduced, based on a 20-nm-thick nanopillar array that supports the cells and keeps them at the ideal recognition distance for redox-labeled aptamers grafted on the surface. The key advantages of this technology are not only to suppress the false-positive signal arising from the pressure exerted by all (including non-target) cells pushing down on the aptamers, but also to stabilize the aptamer in its ideal hairpin configuration thanks to a confinement effect. With the first implementation of this technique, a LOD of 13 cells (with 5.4 μL of cell suspension) was estimated.
Subsequently, the nanosupported cell technology using redox-labeled aptasensors was pushed forward and fully integrated into a single-cell electrochemical aptasensor array. To reach this goal, the LOD was reduced by more than one order of magnitude by suppressing parasitic capacitive electrochemical signals, minimizing the sensor area, and localizing the cells. Statistical analysis at the single-cell level is demonstrated for the recognition of cancer cells. The future of this technology is discussed, and the potential for scaling to millions of electrodes, thus pushing integration further to the sub-cellular level, is highlighted. Despite several demonstrations of electrochemical devices with a LOD of 1 cell/mL, the implementation of single-cell bioelectrochemical sensor arrays has remained elusive due to their challenging implementation at large scale. Here, the introduced nanopillar array technology, combined with redox-labeled aptamers targeting the epithelial cell adhesion molecule (EpCAM), is perfectly suited for such implementation. Combining nanopillar arrays with microwells designed for single-cell trapping directly on the sensor surface, single target cells are successfully detected and analyzed. This first implementation of a single-cell electrochemical aptasensor array based on Brownian-fluctuating redox species opens new opportunities for large-scale implementation and statistical analysis of early cancer diagnosis and cancer therapy in clinical settings.

Keywords: bioelectrochemistry, aptasensors, single-cell, nanopillars

Procedia PDF Downloads 103
656 Topological Language for Classifying Linear Chord Diagrams via Intersection Graphs

Authors: Michela Quadrini

Abstract:

Chord diagrams occur throughout mathematics, from the study of RNA to knot theory. They are widely used in the theory of knots and links for studying finite type invariants, whereas in molecular biology one important motivation for studying chord diagrams is the problem of RNA structure prediction. An RNA molecule is a linear polymer, referred to as the backbone, that consists of four types of nucleotides. Each nucleotide is represented by a point, whereas each chord of the diagram stands for one Watson-Crick base-pair interaction between two nonconsecutive nucleotides. A chord diagram is an oriented circle with a set of n pairs of distinct points, considered up to orientation-preserving diffeomorphisms of the circle. A linear chord diagram (LCD) is a special kind of graph obtained by cutting the oriented circle of a chord diagram. It consists of a line segment, called its backbone, to which a number of chords with distinct endpoints are attached. There is a natural fattening on any linear chord diagram: the backbone lies on the real axis, while all the chords are in the upper half-plane. Each linear chord diagram has a natural genus, that of its associated surface. To each chord diagram and linear chord diagram it is possible to associate an intersection graph: a graph whose vertices correspond to the chords of the diagram, with chord intersections represented by edges between the corresponding vertices. This intersection graph carries a lot of information about the diagram. Our goal is to define an LCD equivalence class in terms of identity of intersection graphs, on which many chord diagram invariants depend. For studying these invariants, we introduce a new representation of linear chord diagrams based on a set of appropriate topological operators that permits modeling LCDs in terms of the relations among chords. This set is composed of three operators: crossing, nesting, and concatenation.
The crossing operator is able to generate the whole space of linear chord diagrams, and a multiple context-free grammar is defined that uniquely generates each LCD, starting from a linear chord diagram and adding a chord for each production of the grammar. In other words, it allows a unique algebraic term to be associated with each linear chord diagram, while the remaining operators allow the term to be rewritten through a set of appropriate rewriting rules. Such rules define an LCD equivalence class in terms of the identity of intersection graphs. Starting from a modelled RNA molecule and its linear chord diagram, some authors have proposed a topological classification and folding. Our LCD equivalence class could contribute to the RNA folding problem, leading to the definition of an algorithm that calculates the free energy of the molecule more accurately than existing ones. This LCD equivalence class could also be useful for obtaining a more accurate estimate of the link between the crossing number and the topological genus, and for studying the relations among other invariants.
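The intersection-graph construction described above is mechanical enough to sketch in code. The fragment below is an illustrative Python sketch (not from the paper; the chord encoding and function name are ours): chords of an LCD are given as pairs of endpoint positions along the backbone, and two chords (a, b) and (c, d) cross exactly when a < c < b < d or c < a < d < b.

```python
from itertools import combinations

def intersection_graph(chords):
    """Build the intersection graph of a linear chord diagram.

    chords: list of (start, end) endpoint positions along the backbone.
    Returns an adjacency map {chord_index: set of crossing chord indices}.
    """
    adj = {i: set() for i in range(len(chords))}
    for i, j in combinations(range(len(chords)), 2):
        a, b = sorted(chords[i])
        c, d = sorted(chords[j])
        if a < c < b < d or c < a < d < b:  # interleaved endpoints = crossing
            adj[i].add(j)
            adj[j].add(i)
    return adj

# Chords 0 and 1 cross; chord 2 is disjoint from both:
g = intersection_graph([(0, 2), (1, 3), (4, 7)])
```

Nesting and concatenation leave the intersection graph edge-free between the two chords involved, which is why only the crossing relation contributes edges here.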

Keywords: chord diagrams, linear chord diagram, equivalence class, topological language

Procedia PDF Downloads 197
655 A Methodology Based on Image Processing and Deep Learning for Automatic Characterization of Graphene Oxide

Authors: Rafael do Amaral Teodoro, Leandro Augusto da Silva

Abstract:

Originated from graphite, graphene is a two-dimensional (2D) material that promises to revolutionize technology in many different areas, such as energy, telecommunications, civil construction, aviation, textiles, and medicine. This is possible because its structure, formed by carbon bonds, provides desirable optical, thermal, and mechanical characteristics that are of interest to multiple areas of the market. Thus, several research and development centers are studying different manufacturing methods and material applications of graphene, which are often hampered by the scarcity of agile and accurate methodologies to characterize the material, that is, to determine its composition, shape, size, and the number of layers and crystals. To address this need, this study proposes a computational methodology that applies deep learning to identify graphene oxide crystals in order to characterize samples by crystal size. To achieve this, a fully convolutional neural network called U-net has been trained to segment SEM images of graphene oxide. The segmentation generated by the U-net is refined with a per-class standard-deviation technique, which allows crystals to be distinguished with different labels through an object delimitation algorithm. As a next step, the position, area, perimeter, and lateral measures of each detected crystal are extracted from the images. This information generates a database with the dimensions of the crystals that compose the samples. Finally, graphs are automatically created showing the frequency distributions of crystal area and perimeter. This methodological process resulted in a high capacity for segmentation of graphene oxide crystals, with accuracy and F-score equal to 95% and 94%, respectively, over the test set.
Such performance demonstrates a high generalization capacity of the method in crystal segmentation, since it holds under significant changes in image acquisition quality. The measurement of non-overlapping crystals presented an average error of 6% across the different measurement metrics, suggesting that the model provides high-accuracy measurements for non-overlapping segmentations. For overlapping crystals, however, a limitation of the model was identified. To overcome this limitation, it is important to ensure that the samples to be analyzed are properly prepared. This will minimize crystal overlap in the SEM image acquisition and guarantee a lower measurement error without greater data-handling effort. All in all, the method developed is a time saver of high measurement value, considering that it is capable of measuring hundreds of graphene oxide crystals in seconds, saving weeks of manual work.
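The measurement stage that follows segmentation can be illustrated independently of the U-net itself. The sketch below is hypothetical code, not the authors' implementation: it labels 4-connected components in a binary mask (as produced by thresholding a segmentation) and extracts per-crystal area and bounding-box measures, the kind of features the abstract describes collecting into a database.

```python
def crystal_stats(mask):
    """mask: 2D list of 0/1. Label 4-connected components (crystals) via
    flood fill and return area and bounding-box size for each, in scan order."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    stats = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                stack, cells = [(y, x)], []
                seen[y][x] = True
                while stack:  # iterative flood fill over 4-neighbours
                    cy, cx = stack.pop()
                    cells.append((cy, cx))
                    for ny, nx in ((cy + 1, cx), (cy - 1, cx),
                                   (cy, cx + 1), (cy, cx - 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                ys = [c[0] for c in cells]
                xs = [c[1] for c in cells]
                stats.append({"area": len(cells),
                              "height": max(ys) - min(ys) + 1,
                              "width": max(xs) - min(xs) + 1})
    return stats

# Two synthetic "crystals": a 2x3 patch and a 2x2 patch
mask = [[0] * 8 for _ in range(8)]
for y in range(1, 3):
    for x in range(1, 4):
        mask[y][x] = 1
for y in range(5, 7):
    for x in range(5, 7):
        mask[y][x] = 1
stats = crystal_stats(mask)
```

Note the abstract's limitation appears here too: overlapping crystals merge into one connected component, which is why sample preparation matters.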

Keywords: characterization, graphene oxide, nanomaterials, U-net, deep learning

Procedia PDF Downloads 154
654 Poly(Acrylamide-Co-Itaconic Acid) Nanocomposite Hydrogels and Its Use in the Removal of Lead in Aqueous Solution

Authors: Majid Farsadrouh Rashti, Alireza Mohammadinejad, Amir Shafiee Kisomi

Abstract:

Lead (Pb²⁺), a cation, is a prime constituent of the majority of industrial effluents, such as those from mining, smelting and coal combustion, Pb-based paints and Pb-containing pipes in water supply systems, paper and pulp refineries, printing, paints and pigments, explosives manufacturing, storage batteries, and the alloy and steel industries. The maximum permissible limit of lead in water used for drinking and domestic purposes is 0.01 mg/L, as advised by the Bureau of Indian Standards (BIS). This is the accepted 'safe' level of lead(II) ions in water, beyond which the water becomes unfit for human use and consumption and can cause health problems including kidney failure, neuronal disorders, and reproductive infertility. Superabsorbent hydrogels are loosely crosslinked hydrophilic polymers that, in contact with aqueous solution, can easily absorb water and swell to several times their initial volume without dissolving in the aqueous medium. Superabsorbents are a kind of hydrogel capable of swelling and absorbing a large amount of water in their three-dimensional networks. While the shapes of ordinary hydrogels do not change extensively during swelling, the tremendous swelling capacity of superabsorbents means that their shape changes broadly. Because of their superb response to changing environmental conditions, including temperature, pH, and solvent composition, superabsorbents have attracted interest in numerous industrial applications, for instance those exploiting their water-retention properties. Natural-based superabsorbent hydrogels have attracted much attention in medicine and pharmaceuticals, baby diapers, agriculture, and horticulture because of their non-toxicity, biocompatibility, and biodegradability.
Novel superabsorbent hydrogel nanocomposites were prepared by graft copolymerization of acrylamide and itaconic acid in the presence of nanoclay (laponite), using methylene bisacrylamide (MBA) as a crosslinking agent and potassium persulfate as an initiator. The structure of the superabsorbent hydrogel nanocomposites was characterized by FTIR spectroscopy, SEM, and TGA, along with the adsorption of metal ions on poly(AAm-co-IA). The equilibrium swelling values of the copolymer were determined by the gravimetric method. During the adsorption of metal ions on the polymer, the residual metal ion concentration in the solution and the solution pH were measured. The effect of the clay content of the hydrogel on its metal ion uptake behavior was studied. The NC hydrogels may be considered good candidates for environmental applications, retaining more water and removing heavy metals.

Keywords: adsorption, hydrogel, nanocomposite, super adsorbent

Procedia PDF Downloads 181
653 Hydroxyapatite Based Porous Scaffold for Tooth Tissue Engineering

Authors: Pakize Neslihan Taslı, Alev Cumbul, Gul Merve Yalcın, Fikrettin Sahin

Abstract:

A key experimental challenge in the regeneration of large oral and craniofacial defects is the neogenesis of osseous and ligamentous interfacial structures. Currently, oral regenerative medicine strategies are unpredictable for the repair of tooth-supporting tissues destroyed as a consequence of trauma, chronic infection, or surgical resection. A different approach, combining the gel-casting method with a hydroxyapatite (HA)-based scaffold and different cell lineages as a hybrid system, successively mimics the early stage of tooth development in vitro. HA is widely accepted as a bioactive material for guided bone and tooth regeneration. In this study, we report the preparation and characterization of a porous HA scaffold and the evaluation of its structural and chemical properties. HA is the main mineral component of teeth and is in harmony with their structural, biological, and mechanical characteristics. Here, this study shows the design and construction of HA scaffolds mimicking an immature tooth at the late bell stage, for transplantation of human adipose stem cells (hASCs), human bone marrow stem cells (hBMSCs), and gingival epithelial cells, toward the formation of human tooth dentin-pulp-enamel complexes in vitro. Scaffold characterization was performed by SEM, FTIR, and pore size and density measurements. The biological interplay of the dental tissues with each other was demonstrated by mRNA gene expression, histopathologic observations, and protein release profiles measured by the ELISA technique. The tooth-shaped constructs, with pore sizes ranging from 150 to 300 µm obtained by combining the right amounts of materials, provide an interconnected macro-porous structure. The newly formed tissue-like structures grow and integrate within the designed HA constructs, forming tooth cementum-like tissue, pulp, and bone structures. These findings are important, as they emphasize the potential biological effect of the hybrid scaffold system.
In conclusion, this in vitro study clearly demonstrates that 3D scaffolds shaped as an immature tooth at the late bell stage were essential to form enamel-dentin-pulp interfaces with an appropriate combination of cells and biodegradable material. The biomimetic architecture achieved here provides a promising platform for dental tissue engineering.

Keywords: tooth regeneration, tissue engineering, adipose stem cells, hydroxyapatite tooth engineering, porous scaffold

Procedia PDF Downloads 223
652 Describing Cognitive Decline in Alzheimer's Disease via a Picture Description Writing Task

Authors: Marielle Leijten, Catherine Meulemans, Sven De Maeyer, Luuk Van Waes

Abstract:

For the diagnosis of Alzheimer's disease (AD), a large variety of neuropsychological tests is available. In some of these tests, linguistic processing, both oral and written, is an important factor. Language disturbances might serve as a strong indicator of an underlying neurodegenerative disorder like AD. However, the current diagnostic instruments for language assessment mainly focus on product measures, such as text length or number of errors, ignoring the importance of the process that leads to written or spoken language production. In this study, our aim is to describe and test differences between cognitively healthy and cognitively impaired elderly on the basis of a selection of writing process variables (inter- and intrapersonal characteristics). These process variables mainly relate to pause times, because the number, length, and location of pauses have proven to be an important indicator of the cognitive complexity of a process. Method: Participants enrolled in our research were chosen on the basis of a number of basic criteria necessary for collecting reliable writing process data. Furthermore, we opted to match the thirteen cognitively impaired patients (8 MCI and 5 AD) with thirteen cognitively healthy elderly. At the start of the experiment, participants were each given a number of tests, such as the Mini-Mental State Examination (MMSE), the Geriatric Depression Scale (GDS), the forward and backward digit span, and the Edinburgh Handedness Inventory (EHI). A questionnaire was also used to collect socio-demographic information (age, gender, education) on the subjects, as well as more details on their level of computer literacy. The tests and questionnaire were followed by two typing tasks and two picture description tasks. For the typing tasks, participants had to copy (type) characters, words, and sentences from a screen, whereas the picture description tasks each consisted of an image they had to describe in a few sentences.
Both the typing and the picture description tasks were logged with Inputlog, a keystroke logging tool that allows us to log and timestamp keystroke activity in order to reconstruct and describe text production processes. The main rationale behind keystroke logging is that writing fluency and flow reveal traces of the underlying cognitive processes. This explains the analytical focus on pause characteristics (length, number, distribution, location, etc.) and revision characteristics (number, type, operation, embeddedness, location, etc.). As in speech, pause times are seen as indexical of cognitive effort. Results: Preliminary analysis has already shown some promising results concerning pause times before, within, and after words. For all variables, mixed-effects models were used that included participant as a random effect and MMSE score, GDS score, and word category (such as determiners and nouns) as fixed effects. For pause times before and after words, cognitively impaired patients paused longer than healthy elderly. These variables did not show an interaction effect between group (cognitively impaired or healthy elderly) and word category. However, pause times within words did show an interaction effect, which indicates that pause times within certain word categories differ significantly between patients and healthy elderly.
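The pause measures described above derive from inter-keystroke intervals in the logged event stream. A minimal illustrative sketch follows (not Inputlog's implementation; the event format and the use of the space key as the sole word boundary are simplifying assumptions) that classifies each interval as occurring before, within, or after a word:

```python
def classify_pauses(log):
    """log: list of (timestamp_ms, key) keystroke events in order.
    Classify each inter-keystroke interval relative to word boundaries,
    treating the space key as the boundary marker."""
    pauses = {"before": [], "within": [], "after": []}
    for (t0, k0), (t1, k1) in zip(log, log[1:]):
        interval = t1 - t0
        if k0 == " ":              # interval following a space = before next word
            pauses["before"].append(interval)
        elif k1 == " ":            # interval leading into a space = after a word
            pauses["after"].append(interval)
        else:                      # both keys inside a word = within-word pause
            pauses["within"].append(interval)
    return pauses

# Typing "the cat" with a long pause before the second word:
log = [(0, "t"), (150, "h"), (320, "e"), (500, " "),
       (1400, "c"), (1550, "a"), (1700, "t")]
p = classify_pauses(log)
```

Lists of intervals per category (and per word class, once tokens are tagged) are exactly the kind of response variable fed into the mixed-effects models described in the abstract.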

Keywords: Alzheimer's disease, keystroke logging, matching, writing process

Procedia PDF Downloads 359
651 Solar and Galactic Cosmic Ray Impacts on Ambient Dose Equivalent Considering a Flight Path Statistic Representative to World-Traffic

Authors: G. Hubert, S. Aubry

Abstract:

The Earth is constantly bombarded by cosmic rays of either galactic or solar origin. Thus, humans are exposed to elevated levels of galactic radiation at aircraft altitudes. The typical total ambient dose equivalent for a transatlantic flight is about 50 μSv during quiet solar activity. By contrast, estimates differ by an order of magnitude for the contribution induced by certain solar particle events. Indeed, during a Ground Level Enhancement (GLE) event, the Sun can emit particles of sufficient energy and intensity to raise radiation levels on the Earth's surface. Analyses of the characteristics of GLEs occurring since 1942 showed that, for the worst of them, the dose level is of the order of 1 mSv or more. The largest of these events was observed in February 1956, for which the ambient dose equivalent rate was of the order of 10 mSv/hr. The extra dose at aircraft altitudes for a flight during this event might have been about 20 mSv, i.e., comparable with the annual limit for aircrew. The most recent GLE occurred in September 2017, resulting from an X-class solar flare, and was measured on the surfaces of both the Earth and Mars, the latter using the Radiation Assessment Detector on the Mars Science Laboratory's Curiosity rover. Recently, Hubert et al. proposed a GLE model included in a particle transport platform (named ATMORAD) that describes the extensive-air-shower characteristics and allows the ambient dose equivalent to be assessed. In this approach, the GCR description is based on the force-field approximation model. The physical description of the solar cosmic rays (SCR) considers the primary differential rigidity spectrum and the distribution of primary particles at the top of the atmosphere. ATMORAD determines the spectral fluence rates of secondary particles induced by extensive showers, considering altitudes from ground level to 45 km. The ambient dose equivalent can then be determined using fluence-to-ambient-dose-equivalent conversion coefficients.
The objective of this paper is to analyze the GCR and SCR impacts on the ambient dose equivalent considering a large statistical sample of world flight paths. Flight trajectories are based on the Eurocontrol Demand Data Repository (DDR) and consider realistic flight plans, with and without regulations, or updated with radar data from the CFMU (Central Flow Management Unit). The final paper will present exhaustive analyses of solar impacts on ambient dose equivalent levels and will propose detailed analyses considering route and airplane characteristics (departure, arrival, continent, airplane type, etc.) and the phasing of the solar event. Preliminary results show an important impact of the flight path, particularly the latitude, which drives the cutoff rigidity variations. Moreover, dose values vary drastically during GLE events, on the one hand with the route path (latitude, longitude, altitude) and on the other hand with the phasing of the solar event. Considering the GLE that occurred on 23 February 1956, the average ambient dose equivalent evaluated for a Paris - New York flight is around 1.6 mSv, which is consistent with previous works. This point highlights the importance of monitoring these solar events and of developing semi-empirical and particle transport methods to obtain reliable calculations of dose levels.
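Accumulating the ambient dose equivalent along a flight path, as described above, amounts to integrating a position-dependent dose rate over the route. The toy sketch below illustrates the bookkeeping only: the latitude-only rate model is invented for illustration (a real calculation would query model outputs such as ATMORAD's, which also depend on altitude, longitude, and solar conditions), and trapezoidal accumulation over waypoints stands in for a finer route discretization.

```python
def route_dose(waypoints, dose_rate):
    """waypoints: list of (time_h, latitude_deg) along the flight path.
    dose_rate(lat): ambient dose equivalent rate in uSv/h (model-supplied;
    here a stand-in function). Trapezoidal accumulation over segments."""
    total = 0.0
    for (t0, lat0), (t1, lat1) in zip(waypoints, waypoints[1:]):
        total += 0.5 * (dose_rate(lat0) + dose_rate(lat1)) * (t1 - t0)
    return total

# Toy rate model: higher dose at high latitude (lower cutoff rigidity)
rate = lambda lat: 3.0 + 0.05 * abs(lat)

# Stylized northern transatlantic route: climb in latitude, then descend
dose_uSv = route_dose([(0, 49), (3.5, 60), (7, 41)], rate)
```

The latitude dependence in the stand-in model reflects the abstract's observation that cutoff rigidity, driven mainly by latitude, dominates route-to-route dose variation.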

Keywords: cosmic ray, human dose, solar flare, aviation

Procedia PDF Downloads 202
650 Nanomechanical Characterization of Healthy and Tumor Lung Tissues at Cell and Extracellular Matrix Level

Authors: Valeria Panzetta, Ida Musella, Sabato Fusco, Paolo Antonio Netti

Abstract:

The study of the biophysics of living cells has drawn attention to the pivotal role of the cytoskeleton in many cell functions, such as mechanics, adhesion, proliferation, migration, differentiation, and neoplastic transformation. In particular, during the complex process of malignant transformation and invasion, the cell cytoskeleton devolves from a rigid, organized structure to a more compliant state, which confers on cancer cells a great ability to migrate and adapt to the extracellular environment. In order to better understand the malignant transformation process from a mechanical point of view, it is necessary to evaluate the direct crosstalk between the cells and their surrounding extracellular matrix (ECM) in a context close to in vivo conditions. In this study, human biopsy tissues of lung adenocarcinoma were analyzed in order to define their mechanical phenotype at the cell and ECM level, using the particle tracking microrheology (PTM) technique. Polystyrene beads (500 nm) were introduced into the sample slice. The motion of the beads was obtained by tracking their displacements across cell cytoskeleton and ECM structures, and mean squared displacements (MSDs) were calculated from the bead trajectories. It has already been demonstrated that the amplitude of the MSD is inversely related to the mechanical properties of the intracellular and extracellular microenvironment. For this reason, the MSDs of particles introduced into the cytoplasm and ECM of healthy and tumor tissues were compared. PTM analyses showed that cancerous transformation compromises the mechanical integrity of cells and extracellular matrix. In particular, the MSD amplitudes in cells of adenocarcinoma were greater compared to cells of normal tissue. The increased motion is probably associated with a less structured cytoskeleton and consequently with an increase in cell deformability.
Furthermore, cancer transformation is accompanied by extracellular matrix stiffening, as confirmed by the decrease of the matrix MSDs in tumor tissue, a process that promotes tumor proliferation and invasiveness by activating typical oncogenic signaling pathways. In addition, a clear correlation between the MSDs of cells and tumor grade was found: MSDs increase when the tumor grade passes from 2 to 3, indicating that cells undergo a trans-differentiation process during tumor progression. ECM stiffening is not dependent on tumor grade, but tumor stage was found to be strictly correlated with both cell and ECM mechanical properties. In fact, a higher stage is assigned to tumors that have spread to regional lymph nodes, characterized by an up-regulation of different ECM proteins, such as collagen I fibers. These results indicate that PTM can be used to obtain nanomechanical characterization at different scale levels in an interpretative and diagnostic context.
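The MSD computation at the heart of PTM is simple to state in code. As an illustrative sketch (not the authors' analysis code; trajectory format is assumed), the time-averaged MSD of a 2D bead trajectory at each lag is the average squared displacement over all start times:

```python
def msd(track, max_lag):
    """track: list of (x, y) bead positions at uniform time steps.
    Returns the time-averaged mean squared displacement for lags 1..max_lag."""
    out = []
    for lag in range(1, max_lag + 1):
        disp = [(track[i + lag][0] - track[i][0]) ** 2 +
                (track[i + lag][1] - track[i][1]) ** 2
                for i in range(len(track) - lag)]
        out.append(sum(disp) / len(disp))  # average over all window starts
    return out

# A bead drifting one unit per step along x: MSD grows as lag**2
track = [(float(i), 0.0) for i in range(10)]
curve = msd(track, 3)
```

Comparing the amplitude of such curves for beads in cytoplasm versus ECM, in healthy versus tumor tissue, is precisely the comparison the abstract describes: stiffer microenvironments confine the beads and lower the MSD.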

Keywords: cytoskeleton, extracellular matrix, mechanical properties, particle tracking microrheology, tumor

Procedia PDF Downloads 269
649 Uterine Torsion: A Rare Differential Diagnosis for Acute Abdominal Pain in Pregnancy

Authors: Tin Yee Ling, Kavita Maravar, Ruzica Ardalic

Abstract:

Background: Uterine torsion (UT) in pregnancy, a rotation of more than 45 degrees along the longitudinal axis, is a rare occurrence, and the aetiology remains unclear. Case: A 34-year-old G2P1 woman with a history of one previous caesarean section presented at 36+2 weeks with sudden-onset lower abdominal pain, a syncopal episode, and a tender abdomen on examination. She was otherwise haemodynamically stable. Cardiotocography showed a pathological trace with initial prolonged bradycardia followed by subsequent tachycardia with reduced variability. An initial diagnosis of uterine dehiscence was made, given the history and clinical presentation. She underwent an emergency caesarean section, which revealed a 180-degree UT along the longitudinal axis, with the oedematous left round ligament lying transversely anterior to the uterus and a segment of large bowel inferior to the round ligament. Detorsion of the uterus was performed prior to delivery of the foetus, and the anterior uterine wall was intact with no signs of rupture. No anatomical uterine abnormalities were found other than stretched left ovarian and round ligaments, which were repaired. Delivery was otherwise uneventful, and she was discharged on day 2 postpartum. Discussion: UT is rare, with only a few hundred cases reported worldwide. Generally, the uterus is held in place by the uterine ligaments, which limit the mobility of the structure. The causes of UT are unknown, but risk factors such as uterine abnormalities, increased flexibility of the uterine ligaments in pregnancy, and foetal malposition have been identified. UT causes occlusion of the uterine vessels, which can lead to ischaemic injury of the placenta, causing premature separation of the placenta, preterm labour, and foetal morbidity and mortality if delivery is delayed.
Diagnosing UT clinically is difficult, as most women present with symptoms similar to placental abruption or uterine rupture (abdominal pain, vaginal bleeding, shock), and one-third are asymptomatic. Management of UT involves surgical detorsion of the uterus and delivery of the foetus via caesarean section. Extra vigilance should be taken to identify the anatomy of the torted uterus prior to hysterotomy. A few cases have been reported with hysterotomy on the posterior uterine wall for delivery of the foetus, as it may be difficult to identify and reverse a gravid UT when foetal well-being is at stake. Conclusion: UT should be considered a differential diagnosis of acute abdominal pain in pregnancy. It is crucial that the torsion is addressed immediately, as it is associated with maternal and foetal morbidity and mortality.

Keywords: uterine torsion, pregnancy complication, abdominal pain, torted uterus

Procedia PDF Downloads 149
648 Response Surface Methodology for the Optimization of Radioactive Wastewater Treatment with Chitosan-Argan Nutshell Beads

Authors: Fatima Zahra Falah, Touria El. Ghailassi, Samia Yousfi, Ahmed Moussaif, Hasna Hamdane, Mouna Latifa Bouamrani

Abstract:

The management and treatment of radioactive wastewater pose significant challenges to environmental safety and public health. This study presents an innovative approach to optimizing radioactive wastewater treatment using a novel biosorbent: chitosan-argan nutshell beads. By employing Response Surface Methodology (RSM), we aimed to determine the optimal conditions for maximum removal efficiency of radioactive contaminants. Chitosan, a biodegradable and non-toxic biopolymer, was combined with argan nutshell powder to create composite beads. The argan nutshell, a waste product from argan oil production, provides additional adsorption sites and mechanical stability to the biosorbent. The beads were characterized using Fourier Transform Infrared Spectroscopy (FTIR), Scanning Electron Microscopy (SEM), and X-ray Diffraction (XRD) to confirm their structure and composition. A three-factor, three-level Box-Behnken design was utilized to investigate the effects of pH (3-9), contact time (30-150 minutes), and adsorbent dosage (0.5-2.5 g/L) on the removal efficiency of radioactive isotopes, primarily focusing on cesium-137. Batch adsorption experiments were conducted using synthetic radioactive wastewater with known concentrations of these isotopes. The RSM analysis revealed that all three factors significantly influenced the adsorption process. A quadratic model was developed to describe the relationship between the factors and the removal efficiency. The model's adequacy was confirmed through analysis of variance (ANOVA) and various diagnostic plots. Optimal conditions for maximum removal efficiency were pH 6.8, a contact time of 120 minutes, and an adsorbent dosage of 0.8 g/L. Under these conditions, the experimental removal efficiency for cesium-137 was 94.7%, closely matching the model's predictions. Adsorption isotherms and kinetics were also investigated to elucidate the mechanism of the process. 
The Langmuir isotherm and pseudo-second-order kinetic model best described the adsorption behavior, indicating a monolayer adsorption process on a homogeneous surface. This study demonstrates the potential of chitosan-argan nutshell beads as an effective and sustainable biosorbent for radioactive wastewater treatment. The use of RSM allowed for the efficient optimization of the process parameters, potentially reducing the time and resources required for large-scale implementation. Future work will focus on testing the biosorbent's performance with real radioactive wastewater samples and investigating its regeneration and reusability for long-term applications.
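The quadratic response-surface fit described above can be sketched as follows. The design, "true" coefficients, and response values here are synthetic placeholders, not the study's data; the stationary-point calculation merely illustrates how optimal conditions such as pH 6.8 are located from the fitted model.

```python
import numpy as np

# Illustrative sketch of a Box-Behnken quadratic response-surface fit.
# Factor names follow the abstract (pH, contact time, dosage), but the
# coefficients and "measured" efficiencies are synthetic assumptions.
rng = np.random.default_rng(0)

# Coded 3-factor Box-Behnken design: 12 edge midpoints + 3 center points.
edges = []
for i, j in [(0, 1), (0, 2), (1, 2)]:
    for a in (-1.0, 1.0):
        for b in (-1.0, 1.0):
            point = [0.0, 0.0, 0.0]
            point[i], point[j] = a, b
            edges.append(point)
design = np.array(edges + [[0.0, 0.0, 0.0]] * 3)

def model_matrix(X):
    """Full quadratic model: 1, x1..x3, squares, pairwise interactions."""
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1 ** 2, x2 ** 2, x3 ** 2,
                            x1 * x2, x1 * x3, x2 * x3])

# Assumed "true" surface with a maximum inside the region, plus noise.
true_beta = np.array([92.0, 2.0, 3.0, -1.5, -4.0, -3.0, -2.0, 0.5, 0.0, -0.5])
y = model_matrix(design) @ true_beta + rng.normal(0.0, 0.3, len(design))

# Ordinary least squares estimate of the 10 model coefficients.
beta, *_ = np.linalg.lstsq(model_matrix(design), y, rcond=None)

# Stationary point of the fitted surface (candidate optimum, coded units):
# solve grad = 0, i.e. b + 2 Q x = 0 for the quadratic-form matrix Q.
Q = np.array([[beta[4], beta[7] / 2, beta[8] / 2],
              [beta[7] / 2, beta[5], beta[9] / 2],
              [beta[8] / 2, beta[9] / 2, beta[6]]])
x_opt = np.linalg.solve(-2 * Q, beta[1:4])
print(np.round(x_opt, 2))
```

Decoding the coded units back to natural units (e.g. pH = 6 + 3·x1 for the 3-9 range) would recover settings comparable in form to the reported optimum of pH 6.8, 120 minutes, and 0.8 g/L.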

Keywords: adsorption, argan nutshell, beads, chitosan, mechanism, optimization, radioactive wastewater, response surface methodology

Procedia PDF Downloads 17
647 Application of Multiwall Carbon Nanotubes with Anionic Surfactant to Cement Paste

Authors: Maciej Szelag

Abstract:

The discovery of carbon nanotubes (CNT) has led to a breakthrough in materials engineering. The CNT is characterized by a very large surface area, a very high Young's modulus (about 2 TPa), unmatched durability, high tensile strength (about 50 GPa), and high bending strength. Their diameter usually lies in the range from 1 to 100 nm, and their length from 10 nm to 10⁻² m. A relatively new approach is the application of the CNT in concrete technology. The biggest problem in the use of the CNT in cement composites is their uneven dispersion and low adhesion to the cement paste. Putting the nanotubes alone into the cement matrix produces no effect, because they tend to agglomerate due to their large surface area. Most often, the CNT is used as an aqueous suspension in the presence of a surfactant, sonicated beforehand. The paper presents the results of investigations of the basic physical properties (apparent density, shrinkage) and mechanical properties (compressive and tensile strength) of cement paste with the addition of multiwall carbon nanotubes (MWCNT). The studies were carried out on four series of specimens (made of two different Portland cements). Within each series, samples were made with three w/c (water/cement) ratios: 0.4, 0.5, and 0.6. Two series were an unmodified cement matrix. In the remaining two series, the MWCNT was added in an amount of 0.1% by cement weight. The MWCNT was used as an aqueous dispersion in the presence of a surfactant, SDS (sodium dodecyl sulfate, C₁₂H₂₅OSO₂ONa). The prepared aqueous solution was sonicated for 30 minutes. Then the MWCNT aqueous dispersion and cement were mixed using a mechanical stirrer. The parameters were tested after 28 days of maturation. Additionally, the change in these parameters was determined after the samples were exposed to 250°C for 4 hours (thermal shock).
Measurement of the apparent density indicated that cement paste with the MWCNT addition was about 30% lighter than the conventional cement matrix. This is because the use of the MWCNT water dispersion in the presence of a surfactant in the form of SDS resulted in the formation of air pores, which were trapped in the volume of the material. SDS, as an anionic surfactant, exhibits characteristics specific to blowing agents, i.e. gas-forming and foaming substances. Because of the increased porosity, the cement pastes with the MWCNT obtained lower compressive and tensile strengths compared to the cement paste without the additive. It has been observed, however, that the samples with the MWCNT showed the smallest decreases in compressive and tensile strength after exposure to the elevated temperature. The MWCNT (well dispersed in the cement matrix) can form bridges between hydrates at the nanoscale of the material's structure. Thus, this may result in increased cohesion of the cement material subjected to a thermal shock. The obtained material could be used for the production of an aerated concrete or, with lightweight aggregates, a lightweight concrete.

Keywords: cement paste, elevated temperature, mechanical parameters, multiwall carbon nanotubes, physical parameters, SDS

Procedia PDF Downloads 347
646 A Two-Step, Temperature-Staged, Direct Coal Liquefaction Process

Authors: Reyna Singh, David Lokhat, Milan Carsky

Abstract:

The world crude oil demand is projected to rise to 108.5 million bbl/d by the year 2035. With reserves estimated at 869 billion tonnes worldwide, coal is an abundant resource. This work was aimed at producing a high-value hydrocarbon liquid product from the Direct Coal Liquefaction (DCL) process at comparatively mild operating conditions. A temperature-staged hydrogenation approach was investigated. In a two-reactor lab-scale pilot plant facility, the objectives included maximising thermal dissolution of the coal in the presence of a hydrogen donor solvent in the first stage, and subsequently promoting hydrogen saturation and hydrodesulphurization (HDS) performance in the second. The feed slurry consisted of high-grade, pulverized bituminous coal on a moisture-free basis with a size fraction of < 100 μm, and Tetralin mixed in 2:1 and 3:1 solvent/coal ratios. Magnetite (Fe₃O₄) at 0.25 wt% of the dry coal feed was added for the catalysed runs. For both stages, hydrogen gas was used to maintain a system pressure of 100 barg. In the first stage, temperatures of 250°C and 300°C and reaction times of 30 and 60 minutes were investigated in an agitated batch reactor. The first-stage liquid product was pumped into the second-stage vertical reactor, which was designed to counter-currently contact the hydrogen-rich gas stream and incoming liquid flow in the fixed catalyst bed. Two commercial hydrotreating catalysts, Cobalt-Molybdenum (CoMo) and Nickel-Molybdenum (NiMo), were compared in terms of their conversion, selectivity, and HDS performance at temperatures 50°C higher than the respective first-stage tests. The catalysts were activated at 300°C with a hydrogen flowrate of approximately 10 ml/min prior to the testing. A gas-liquid separator at the outlet of the reactor ensured that the gas was exhausted to the online VARIOplus gas analyser. The liquid was collected and sampled for analysis using Gas Chromatography-Mass Spectrometry (GC-MS).
Internal standard quantification methods for the sulphur content, the BTX (benzene, toluene, and xylene) and alkene quality, and the alkane and polycyclic aromatic hydrocarbon (PAH) compounds in the liquid products were guided by ASTM standards of practice for hydrocarbon analysis. In the first stage, using a 2:1 solvent/coal ratio, increased coal-to-liquid conversion was favoured by the lower operating temperature of 250°C, a reaction time of 60 minutes, and a system catalysed by magnetite. Tetralin functioned effectively as the hydrogen donor solvent. A 3:1 ratio favoured increased concentrations of the long-chain alkanes undecane and dodecane, the unsaturated alkenes octene and nonene, and PAH compounds such as indene. The second-stage product distribution showed an increase in the BTX quality of the liquid product and in branched-chain alkanes, and a reduction in the sulphur concentration. In HDS performance and selectivity towards the production of long- and branched-chain alkanes, NiMo performed better than CoMo; CoMo was selective towards a higher concentration of cyclohexane. Over 16 days on stream each, NiMo had a higher activity than CoMo. The potential of the process to help cover the demand for low-sulphur crude diesel and solvents through the production of a high-value hydrocarbon liquid is thus demonstrated.

Keywords: catalyst, coal, liquefaction, temperature-staged

Procedia PDF Downloads 639
645 Cross-Sectoral Energy Demand Prediction for Germany with a 100% Renewable Energy Production in 2050

Authors: Ali Hashemifarzad, Jens Zum Hingst

Abstract:

The structure of the world's energy systems has changed significantly over the past years. One of the most important challenges of the 21st century in Germany (and worldwide) is the energy transition. This transition aims to comply with the recent international climate agreements from the United Nations Climate Change Conference (COP21) to ensure a sustainable energy supply with minimal use of fossil fuels. Germany aims for complete decarbonization of the energy sector by 2050, according to the federal climate protection plan. One of the stipulations of the Renewable Energy Sources Act 2017 for the expansion of energy production from renewable sources in Germany is that renewables cover at least 80% of the electricity requirement in 2050; for gross final energy consumption, the target is at least 60%. This means that by 2050, the energy supply system would have to be almost completely converted to renewable energy. An essential basis for the development of such a sustainable energy supply from 100% renewable energies is a prediction of the energy requirement in 2050. This study presents two scenarios for the final energy demand in Germany in 2050. In the first scenario, the targets for increased energy efficiency and demand reduction are set very ambitiously. To provide a basis for comparison, the second scenario gives results under less ambitious assumptions. For this purpose, the relevant framework conditions (following CUTEC 2016) were first examined, such as the predicted population development and economic growth, which have historically been significant drivers of rising energy demand. The potential for demand reduction and efficiency increase (on the demand side) was also investigated. In particular, current and future technological developments in energy-consuming sectors and possible options for energy substitution (namely the electrification rate in the transport sector and the building renovation rate) were included.
Here, in addition to the traditional electricity sector, heat and fuel-based consumption in sectors such as households, commerce, industry, and transport are taken into account, supporting the idea that, for a 100% supply from renewable energies, the areas currently based on (fossil) fuels must be almost completely electricity-based by 2050. The results show that the very ambitious scenario requires a final energy demand of 1,362 TWh/a, composed of 818 TWh/a of electricity, 229 TWh/a of ambient heat for electric heat pumps, and approx. 315 TWh/a of non-electric energy (raw materials for non-electrifiable processes). In the less ambitious scenario, in which the targets are not fully achieved by 2050, the final energy demand will need a higher electricity share of almost 1,138 TWh/a (out of a total of 1,682 TWh/a). It has also been estimated that 50% of the electricity generated must be stored to compensate for fluctuations in the daily and annual flows. Due to conversion and storage losses (about 50%), this would mean that the electricity requirement for the very ambitious scenario increases to 1,227 TWh/a.
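The scenario arithmetic above can be checked directly. The 50% storage share and the roughly 50% conversion/storage loss are taken from the abstract; the round-trip efficiency parameter below is simply those figures restated, not an additional data source.

```python
# Check of the ambitious-scenario figures quoted above (all in TWh/a).
electricity = 818    # direct final electricity demand
ambient_heat = 229   # ambient heat for electric heat pumps
non_electric = 315   # approx., raw materials for non-electrifiable processes

total_final = electricity + ambient_heat + non_electric
print(total_final)   # 1362, the quoted final energy demand

# Half of the electricity is assumed to pass through storage; with ~50%
# conversion/storage losses, each stored unit requires two generated units.
stored_share = 0.5
round_trip_efficiency = 0.5
generation = (electricity * (1 - stored_share)
              + electricity * stored_share / round_trip_efficiency)
print(generation)    # 1227.0, the quoted electricity requirement
```

This makes explicit why the requirement rises from 818 to 1,227 TWh/a: the 409 TWh/a routed through storage must be generated twice over at 50% round-trip efficiency.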

Keywords: energy demand, energy transition, German Energiewende, 100% renewable energy production

Procedia PDF Downloads 123
644 Analyzing Strategic Alliances of Museums: The Case of Girona (Spain)

Authors: Raquel Camprubí

Abstract:

Cultural tourism has been a relevant motivation for tourists around the world during the last decades. In this context, museums are the main attraction for cultural tourists who seek to connect with the history and culture of the visited place. From the point of view of an urban destination, museums and other cultural resources are essential to a strong tourist supply at the destination, in order to be capable of catching the attention and interest of cultural tourists. In particular, the museums' challenge is to be prepared to offer the best experience to their visitors without forgetting their mission, based mainly on the protection of their collections and other social goals. Thus, museums individually want to be competitive and well positioned to achieve their strategic goals. The life cycle of the destination and the level of maturity of its tourism product influence the need of tourism agents to cooperate and collaborate among themselves, in order to rejuvenate their product and become more competitive as a destination. Additionally, prior studies have considered different models of public-private partnership and the collaborative and cooperative relations developed among the agents of a tourism destination. However, there are no studies that pay special attention to museums and the strategic alliances they develop to obtain mutual benefits. Considering this background, the purpose of this study is to analyze to what extent the museums of a given urban destination have established strategic links and relations among themselves, in order to improve their competitive position at both the individual and destination level. In order to achieve the aim of this study, the city of Girona (Spain) and the museums located in this city are taken as a case study.
Data collection was conducted using in-depth interviews, in order to collect qualitative data on the nature, strength, and purpose of the relational ties established among the museums of the city and other relevant tourism agents. For data analysis, a Social Network Analysis (SNA) approach was taken using UCINET software. The position of the agents in the network and the structure of the network were analyzed, and qualitative data from the interviews were used to interpret the SNA results. Findings reveal the existence of strong ties among some of the museums of the city, particularly to create and promote joint products. Nevertheless, outsiders were detected who follow an individual strategy, without collaboration or cooperation with other museums or agents of the city. Results also show that some relational ties have an institutional origin, while others are the result of a long process of cooperation on common projects. The conclusions put in evidence that the collaboration and cooperation of museums has been positive in increasing the attractiveness of the museums and of the city as a cultural destination. Future research and managerial implications are also mentioned.
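As a sketch of the kind of position and structure measures the study computes in UCINET, the same metrics can be reproduced with the open-source networkx library; the museum names and ties below are invented for illustration, not Girona's actual network data.

```python
import networkx as nx

# Hypothetical collaboration ties among a destination's museums/agents.
ties = [
    ("History Museum", "Art Museum"),
    ("History Museum", "Cathedral Treasury"),
    ("Art Museum", "Cathedral Treasury"),
    ("History Museum", "Tourist Board"),
    ("Cinema Museum", "Tourist Board"),
]
G = nx.Graph(ties)
G.add_node("Archaeology Museum")  # an "outsider" with no collaborative ties

# Position of each agent in the network.
degree = nx.degree_centrality(G)        # how connected each agent is
between = nx.betweenness_centrality(G)  # brokerage between other agents

# Structure of the network as a whole.
density = nx.density(G)  # share of possible ties actually present

print(max(degree, key=degree.get))            # the best-connected agent
print([n for n, d in G.degree() if d == 0])   # isolates, i.e. "outsiders"
```

Centrality scores identify well-positioned agents, while isolates correspond to the "outsiders" the findings describe; qualitative interview data would then explain why each tie exists.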

Keywords: cultural tourism, competitiveness, museums, Social Network analysis

Procedia PDF Downloads 108
643 The Impacts of New Digital Technology Transformation on Singapore Healthcare Sector: Case Study of a Public Hospital in Singapore from a Management Accounting Perspective

Authors: Junqi Zou

Abstract:

As one of the world's most tech-ready countries, Singapore has initiated the Smart Nation plan to harness the full power and potential of digital technologies to transform the way people live and work, through more efficient government and business processes, to make the economy more productive. The key evolutions of digital technology transformation in healthcare, and the increasing deployment over the most recent decade of the Internet of Things (IoT), Big Data, AI/cognitive technologies, Robotic Process Automation (RPA), Electronic Health Record systems (EHR), Electronic Medical Record systems (EMR), and Warehouse Management Systems (WMS), have significantly stepped up the move towards an information-driven healthcare ecosystem. The advances in information technology not only bring benefits to patients but also act as a key force in changing management accounting in the healthcare sector. The aim of this study is to investigate the impacts of digital technology transformation on Singapore's healthcare sector from a management accounting perspective. Adopting a Balanced Scorecard (BSC) analysis approach, this paper conducted an exploratory case study of a newly launched Singapore public hospital, which has been recognized as among the most digitally advanced healthcare facilities in the Asia-Pacific region. Specifically, this study gains insights into how the new technology is changing healthcare organizations' management accounting from four perspectives under the Balanced Scorecard approach: 1) Financial Perspective, 2) Customer (Patient) Perspective, 3) Internal Processes Perspective, and 4) Learning and Growth Perspective.
Based on a thorough review of archival records from the government and the public, and interviews with the hospital's CIO, this study finds improvements from all four perspectives under the Balanced Scorecard framework as follows: 1) Learning and Growth Perspective: The Government (Ministry of Health) works with the hospital to open up multiple training pathways for health professionals that upgrade and develop new IT skills among the healthcare workforce to support the transformation of healthcare services. 2) Internal Process Perspective: The hospital achieved digital transformation through Project OneCare to integrate clinical, operational, and administrative information systems (e.g., EHR, EMR, WMS, EPIB, RTLS) that enable the seamless flow of data, and through the implementation of a JIT system that helps the hospital operate more effectively and efficiently. 3) Customer Perspective: The fully integrated EMR suite enhances the patient's experience by achieving the 5 Rights (Right Patient, Right Data, Right Device, Right Entry, and Right Time). 4) Financial Perspective: Cost savings are achieved from improved inventory management and effective supply chain management. The use of process automation also results in a reduction of manpower and logistics costs. To summarize, these improvements identified under the Balanced Scorecard framework confirm the success of utilizing integrated, advanced ICT to enhance a healthcare organization's customer service, productivity, efficiency, and cost savings. Moreover, the Big Data generated from this integrated EMR system can be particularly useful in aiding the management control system to optimize decision making and strategic planning. To conclude, the new digital technology transformation has extended the usefulness of management accounting to both financial and non-financial dimensions, taking it to new heights in the area of healthcare management.

Keywords: balanced scorecard, digital technology transformation, healthcare ecosystem, integrated information system

Procedia PDF Downloads 151
642 India's Geothermal Energy Landscape and Role of Geophysical Methods in Unravelling Untapped Reserves

Authors: Satya Narayan

Abstract:

India, a rapidly growing economy with a burgeoning population, grapples with the dual challenge of meeting rising energy demands and reducing its carbon footprint. Geothermal energy, an often overlooked and underutilized renewable source, holds immense potential for addressing this challenge. Geothermal resources offer a valuable, consistent, and sustainable energy source and may significantly contribute to meeting India's energy needs. This paper discusses the importance of geothermal exploration in India, emphasizing its role in achieving sustainable energy production while mitigating environmental impacts. It also delves into the methodology employed to assess geothermal resource feasibility, including geophysical surveys and borehole drilling. The results and discussion sections highlight promising geothermal sites across India, illuminating the nation's vast geothermal potential. Geophysical surveying detects potential geothermal reservoirs, characterizes subsurface structures, maps temperature gradients, monitors fluid flow, and estimates key reservoir parameters. Globally, geothermal energy falls into high- and low-enthalpy categories, with India mainly having low-enthalpy resources, especially in hot springs. The northwestern Himalayan region boasts high-temperature geothermal resources due to geological factors. Promising sites, like the Puga Valley, Chhumthang, and others, feature hot springs suitable for various applications. The Son-Narmada-Tapti lineament intersects regions rich in geological history, contributing to geothermal resources. Southern India, including the Godavari Valley, has thermal springs suitable for power generation. The Andaman-Nicobar region, linked to subduction and volcanic activity, holds high-temperature geothermal potential. Geophysical surveys, utilizing gravity, magnetic, seismic, magnetotelluric, and electrical resistivity techniques, offer vital information on subsurface conditions essential for detecting, evaluating, and exploiting geothermal resources.
Gravity and magnetic methods map the depth of the high-temperature mantle boundary and accurately determine the Curie depth. Electrical methods indicate the presence of subsurface fluids. Seismic surveys create detailed subsurface images, revealing faults and fractures and establishing possible connections to aquifers. Borehole drilling is crucial for assessing geothermal parameters at different depths. Detailed geochemical analysis and geophysical surveys in Dholera, Gujarat, reveal untapped geothermal potential in India, aligning with renewable energy goals. In conclusion, geophysical surveys and borehole drilling play a pivotal role in the economically viable selection and feasibility assessment of geothermal sites. With ongoing exploration and innovative technology, these surveys effectively minimize drilling risks, optimize borehole placement, aid in environmental impact evaluations, and facilitate remote resource exploration. Their cost-effectiveness informs decisions regarding geothermal resource location and extent, ultimately promoting sustainable energy and reducing India's reliance on conventional fossil fuels.
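As a rough illustration of why Curie-depth mapping matters: if the geothermal gradient is assumed linear down to the Curie isotherm of magnetite (about 580 °C), a shallow Curie depth implies a high gradient. The temperatures and depths below are generic textbook-style assumptions, not survey results from this paper.

```python
# Linear-gradient approximation linking Curie depth to geothermal gradient.
CURIE_TEMPERATURE_C = 580.0   # magnetite loses magnetization near 580 deg C
SURFACE_TEMPERATURE_C = 25.0  # assumed mean surface temperature

def gradient_from_curie_depth(depth_km: float) -> float:
    """Geothermal gradient (deg C/km) implied by a given Curie depth."""
    return (CURIE_TEMPERATURE_C - SURFACE_TEMPERATURE_C) / depth_km

# A shallow Curie depth flags a region as geothermally promising.
print(round(gradient_from_curie_depth(15.0), 1))  # 37.0 deg C/km
print(round(gradient_from_curie_depth(30.0), 1))  # 18.5 deg C/km
```

Halving the Curie depth doubles the implied gradient, which is why regions where magnetic data indicate a shallow Curie isotherm are prioritized for follow-up surveys and drilling.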

Keywords: geothermal resources, geophysical methods, exploration, exploitation

Procedia PDF Downloads 68
641 Online Postgraduate Students’ Perceptions and Experiences With Student to Student Interactions: A Case for Kamuzu University of Health Sciences in Malawi

Authors: Frazer McDonald Ng'oma

Abstract:

Online learning in Malawi has only emerged in recent years, driven by the need to increase access to higher education, the need to accommodate upgrading students who wish to study on a part-time basis while continuing their work, and the COVID-19 pandemic, which forced the closure of schools and led academic institutions to seek alternative modes of teaching and learning. Realizing that this mode of learning is becoming the norm, institutions of higher learning have started pioneering online postgraduate programs from which they can draw lessons before fully implementing online learning in undergraduate programs. Online learning pedagogy has not yet been fully grasped, and institutions are still experimenting with this mode of learning until guiding policies for online learning are created and its standards improved. This single-case descriptive qualitative research study sought to investigate online postgraduate students' perceptions of and experiences with student-to-student interaction in their programs. The results of the study are intended to inform institutions and educators on how to structure their programs to ensure that their students are fully satisfied. Twenty-five Master's students in 3 recently introduced online programs at the Kamuzu University of Health Sciences (KUHES) were engaged; 19 were interviewed and 6 responded to questionnaires. The findings were presented and categorized into themes and subthemes that emerged from the qualitative data, which was collected and analysed following Colaizzi's framework for data analysis. Findings revealed that student-to-student interactions occurred in the online programme during live sessions, on the class WhatsApp group, in discussion boards, and over email.
The majority of the students (n=18) felt the level of student interaction initiated by the institution was too much, referring to mandatory interaction activities such as commenting in discussion boards and attending live sessions. Some participants (n=7) were satisfied with the level of interaction and also pointed out that they would be fine with more program-initiated student-to-student interactions. These participants attributed their need for peer interaction to having been out of school for some time, citing that it is already difficult to return to a traditional on-campus school after some time, let alone an online class where there is no physical interaction with other students. In general, the majority of the participants (n=18) did not place much value on student-to-student interaction in online learning. The students suggested that intensive student-to-student interaction in postgraduate online studies need not be a high priority for the institution, and they further recommended that if a lecturer decides to incorporate student-to-student activities into a class, these should be optional.

Keywords: online learning, interactions, student interactions, post graduate students

Procedia PDF Downloads 64
640 Occurrence and Habitat Status of Osmoderma barnabita in Lithuania

Authors: D. Augutis, M. Balalaikins, D. Bastyte, R. Ferenca, A. Gintaras, R. Karpuska, G. Svitra, U. Valainis

Abstract:

The Osmoderma species complex (consisting of Osmoderma eremita, O. barnabita, O. lassallei and O. cristinae) comprises scarab beetles serving as indicator species in nature conservation. Osmoderma inhabits cavities containing a sufficient volume of wood mould, usually caused by brown rot in veteran deciduous trees. As species with high demands on habitat quality, they indicate the suitability of the habitat for a number of other specialized saproxylic species. Since the typical habitat needed by Osmoderma and other species associated with hollow veteran trees is rapidly declining, the species complex is protected under various legislation, such as the Bern Convention, the EU Habitats Directive, and the Red Lists of many European states. Natura 2000 sites are the main tool for the conservation of O. barnabita in Lithuania; currently, 17 Natura 2000 sites are designated for the species, where monitoring is implemented once every 3 years according to the approved methodologies. Despite these monitoring efforts, in the species reports provided to the EU according to Article 17 of the Habitats Directive, the overall assessment of O. barnabita at the national level is inadequate and future prospects are poor. Therefore, research on the distribution and habitat status of O. barnabita was launched at the national level in 2016, complemented by preparatory actions of the LIFE OSMODERMA project. The research was implemented in areas distributed equally across the whole of Lithuania, where O. barnabita had not previously been observed, or had not been observed in the last 10 years. 90 areas, comprising habitats of European importance (9070 Fennoscandian wooded pastures, 9180 Tilio-Acerion forests of slopes, screes, and ravines), woodland key habitats (B1 broad-leaved forest, K1 single giant tree), and old manor parks, were chosen for the research after a review of habitat data from the existing national databases.
The first part of the field inventory of the habitats was carried out in the autumn and winter seasons of 2016 and 2017, when the relative abundance of O. barnabita was estimated from larval faecal pellets in tree cavities or around the trees. The state of the habitats was evaluated according to the density of suitable and potential trees, the percentage of trees not overshadowed, and the amount of undergrowth. The second part of the field inventory was carried out in the summer with pheromone traps baited with (R)-(+)-γ-decalactone. Results of the research show not only the occurrence and habitat status of O. barnabita but also help to clarify the species' habitat requirements in Lithuania, defining habitat size, structure, and distribution. The research also compares habitat needs between regions in Lithuania, and inside and outside the Natura 2000 areas designated for the species.

Keywords: habitat status, insect conservation, Osmoderma barnabita, veteran trees

Procedia PDF Downloads 130
639 Framing the Dynamics and Functioning of Different Variants of Terrorist Organizations: A Business Model Perspective

Authors: Eisa Younes Alblooshi

Abstract:

Counterterrorism strategies, to be effective and efficient, require a sound understanding of the dynamics and interlinked organizational elements of the terrorist outfits being combated, so that their strong points can be guarded against and their vulnerable zones targeted for optimal results in a timely fashion by counterterrorism agencies. A unique model of these organizational imperatives was developed in this research by likening terrorist organizations to traditional commercial ones, with a view to understanding in detail the dynamics of interconnectivity and dependencies, and the related compulsions facing the leaderships of such outfits, which provide counterterrorism agencies with opportunities for forging better strategies. The work involved assessing the evolving organizational dynamics and imperatives of different types of terrorist organizations, enabling the researcher to construct a prototype model that defines the progression and linkages of the related organizational elements of such organizations. This required a detailed analysis of how the various elements are connected, with their sequencing identified, as any outfit positions itself with respect to its external environment and internal dynamics. A case study focusing on a transnational, radical religious, state-sponsored terrorist organization was conducted to validate the research findings and to further strengthen the specific counterterrorism strategies. Six variants of the business model of terrorist organizations were identified, categorized on the basis of their outreach, mission, and state-sponsorship status. The variants represent the vast majority of the range of terrorist organizations acting locally or globally.
The model shows the progression and dynamics of these organizations through various dimensions, including mission, leadership, outreach, and state-sponsorship status, which shape the organizational structure, degree of autonomy, preference divergence within the organization, recruitment core, propagation avenues, capacity to adapt, and, critically, the organization's life cycle. A major advantage of the model is the utility of mapping terrorist organizations according to their fit to the identified variants, allowing for flexibility and internal differences, and enabling researchers and counterterrorism agencies to observe a clear blueprint of the organization's footprint, along with highlighting the areas to be evaluated for focused target-zone selection and the timing of counterterrorism interventions. Special consideration is given to the dimension of financing, in the context of the latest developments regarding cryptocurrencies, hawala, and global anti-money-laundering initiatives. Specific counterterrorism strategies and intervention points have been identified for each of the model variants, with a view to the efficient and effective deployment of resources.

Keywords: terrorism, counterterrorism, model, strategy

Procedia PDF Downloads 150