Search results for: power structure
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 13154

1154 Collaboration versus Cooperation: Grassroots Activism in Divided Cities and Communication Networks

Authors: R. Barbour

Abstract:

Peace-building organisations act as a network of information for communities. Fieldwork highlighted that grassroots organisations and activists may cooperate with each other in their peace-building actions; however, they would not collaborate. Within two divided societies, Nicosia in Cyprus and Jerusalem in Israel, organisations and activists distinguish between activities that are more ‘co-operative’ than ‘collaborative’. This theme became apparent during informal conversations and semi-structured interviews with various members of the activist communities. It needs further exploration, as these distinctions could affect the efficiency of peace-building activities within divided societies. Civil societies within divided landscapes, both physical and social, play an important role in conflict resolution. How organisations and activists interact with each other can be very influential with regard to peace-building activities, and working together sets a positive example for divided communities. Cooperation may be considered a primary level of interaction between CSOs: at the beginning of a working relationship, organisations cooperate over basic agendas, parallel power structures and a shared focus, which lead towards the same objective. Over time, in some instances, owing to factors such as funding and greater trust and understanding within the relationship, processes progressed towards more collaborative ways of working. NGOs and activist groups are evidently highly independent and focus on their own agendas before coming together over shared issues. At present, there appears to be more collaboration among CSOs and activists in Nicosia than in Jerusalem. The aims and objectives of agendas also influence how organisations work together.
In recent years, Nicosia, and Cyprus in general, have perhaps shifted their focus from peace-building initiatives to environmental issues, which have become new-age reconciliation topics. Civil society does not automatically indicate like-minded organisations; however, solidarity within social groups can create ties that bring people and resources together. In unequal societies, such as those of Nicosia and Jerusalem, it is these ties cutting across groups that are essential for social cohesion. Societies are collections of social groups: individuals who have come together over common beliefs. These groups in turn shape the identities and determine the values and structures within societies. At many different levels and stages, social groups work together through cooperation and collaboration. These structures in turn can open up networks to less powerful or excluded groups, with the aim of producing social cohesion, which may contribute to social stability and economic welfare over an extended period.

Keywords: collaboration, cooperation, grassroots activism, networks of communication

Procedia PDF Downloads 141
1153 Hydration of Three-Piece K Peptide Fragments Studied by Means of Fourier Transform Infrared Spectroscopy

Authors: Marcin Stasiulewicz, Sebastian Filipkowski, Aneta Panuszko

Abstract:

Background: The hallmark of neurodegenerative diseases, including Alzheimer's and Parkinson's diseases, is the aggregation of abnormal forms of peptides and proteins. Water is essential to the functioning of biomolecules and is one of the key factors influencing protein folding and misfolding. However, hydration studies of proteins are complicated by the complexity of protein systems; the use of model compounds can facilitate the interpretation of results involving larger systems. Objectives: The goal of the research was to characterize the properties of the hydration water surrounding two three-residue K peptide fragments, INS (Isoleucine - Asparagine - Serine) and NSR (Asparagine - Serine - Arginine). Methods: Fourier-transform infrared spectra of aqueous solutions of the tripeptides were recorded on a Nicolet 8700 spectrometer (Thermo Electron Co.). Measurements were carried out at 25°C for varying solute molalities. To remove oscillation couplings from the water spectra and, consequently, obtain narrow O-D bands of semi-heavy water (HDO), the method of isotopic dilution of HDO in H₂O was used. The difference-spectra method allowed us to isolate the tripeptide-affected HDO spectrum. Results: The structural and energetic properties of water affected by the tripeptides were compared with those of pure water. The shift of the band gravity centers (related to the mean energy of water hydrogen bonds) towards lower values with respect to pure water suggests that the energy of hydrogen bonds between water molecules surrounding the tripeptides is higher than in pure water. A comparison of the mean oxygen-oxygen distances in tripeptide-affected water and pure water indicates that water-water hydrogen bonds are shorter in the presence of these tripeptides.
The analysis of differences in the oxygen-oxygen distance distributions between tripeptide-affected water and pure water indicates that around the tripeptides, the contribution of water molecules with the mean hydrogen-bond energy decreases while the contribution of strong hydrogen bonds increases. Conclusions: Hydrogen bonds between water molecules in the hydration sphere of the tripeptides were found to be shorter and stronger than in pure water. This means that in the presence of the tested tripeptides, the structure of water is strengthened compared with pure water. Moreover, it was shown that in the vicinity of the Asparagine - Serine - Arginine fragment, water forms stronger and shorter hydrogen bonds. Acknowledgments: This work was funded by the National Science Centre, Poland (grant 2017/26/D/NZ1/00497).
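The difference-spectra and gravity-centre analysis described above can be sketched numerically. The snippet below is a minimal illustration with synthetic Gaussian bands and an assumed affected-water fraction (none of the numbers are from the study): it removes a scaled bulk-HDO contribution from a solution spectrum, in the spirit of the difference-spectra method, and compares band gravity centres.

```python
import numpy as np

def gravity_center(wavenumber, absorbance):
    """Band centre of gravity on a uniform grid: sum(nu*A) / sum(A)."""
    return (wavenumber * absorbance).sum() / absorbance.sum()

# Synthetic O-D stretch bands (illustrative values only)
nu = np.linspace(2200.0, 2800.0, 1201)            # wavenumber axis, cm^-1
bulk = np.exp(-((nu - 2509.0) / 80.0) ** 2)       # bulk HDO band
solution = 0.9 * bulk + 0.1 * np.exp(-((nu - 2480.0) / 70.0) ** 2)

# Difference-spectra idea: subtract the bulk contribution to isolate the
# solute-affected HDO band (0.9/0.1 split is an assumed affected fraction).
affected = (solution - 0.9 * bulk) / 0.1

print(round(gravity_center(nu, bulk), 1))      # bulk band centre
print(round(gravity_center(nu, affected), 1))  # affected centre, shifted lower
```

A lower gravity centre of the affected band, as in the abstract, would then be read as stronger water-water hydrogen bonding in the hydration sphere.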

Keywords: amyloids, K-peptide, hydration, FTIR spectroscopy

Procedia PDF Downloads 163
1152 Hybrid Model: An Integration of Machine Learning with Traditional Scorecards

Authors: Golnush Masghati-Amoli, Paul Chin

Abstract:

Over recent years, with rapid increases in data availability and computing power, Machine Learning (ML) techniques have been called on in a range of industries for their strong predictive capability. However, the use of Machine Learning in commercial banking has been limited by a special challenge imposed by numerous regulations that require lenders to be able to explain their analytic models, not only to regulators but often to consumers. In other words, although Machine Learning techniques enable better prediction with a higher level of accuracy, they are adopted less frequently in commercial banking than in other industries, especially for scoring purposes. This is because Machine Learning techniques are often considered a black box and fail to provide information on why a certain risk score is given to a customer. To bridge this gap between the explainability and performance of Machine Learning techniques, a Hybrid Model was developed at Dun and Bradstreet that blends Machine Learning algorithms with traditional approaches such as scorecards. The Hybrid Model maximizes the efficiency of traditional scorecards by merging their practical benefits, such as explainability and the ability to input domain knowledge, with the deep insights of Machine Learning techniques, which can uncover patterns that scorecard approaches cannot. First, through the development of Machine Learning models, engineered features, latent variables, and feature interactions that demonstrate high information value in the prediction of customer risk are identified. Then, these features are employed to introduce the observed non-linear relationships between the explanatory and dependent variables into traditional scorecards.
Moreover, instead of directly computing the Weight of Evidence (WoE) from good and bad data points, the Hybrid Model tries to match the score distribution generated by a Machine Learning algorithm, which provides an estimate of the WoE for each bin. This capability helps to build powerful scorecards in sparse cases that cannot be handled with traditional approaches. The proposed Hybrid Model was tested on different portfolios where a significant gap is observed between the performance of traditional scorecards and Machine Learning models. The analysis shows that the Hybrid Model can improve the performance of traditional scorecards by introducing non-linear relationships between explanatory and target variables from Machine Learning models into traditional scorecards. It is also observed that in some scenarios the Hybrid Model can be almost as predictive as the Machine Learning techniques while remaining as transparent as traditional scorecards. It is therefore concluded that, with the Hybrid Model, Machine Learning algorithms can be used in the commercial banking industry without concern over the difficulty of explaining the models for regulatory purposes.
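The WoE construction referred to above can be illustrated with a short sketch. The counts and probabilities below are invented for illustration: the classic WoE per bin is the log ratio of the bin's share of goods to its share of bads, while the hybrid-style variant estimates a WoE-like quantity from per-bin mean ML probabilities, as one might when observed good/bad counts are sparse.

```python
import numpy as np

# Toy binned portfolio: counts of good and bad accounts per score bin
# (illustrative numbers, not from the paper).
goods = np.array([400, 300, 200, 100])
bads = np.array([10, 20, 40, 80])

def weight_of_evidence(goods, bads):
    """Classic WoE per bin: ln(share of goods / share of bads)."""
    dist_good = goods / goods.sum()
    dist_bad = bads / bads.sum()
    return np.log(dist_good / dist_bad)

woe = weight_of_evidence(goods, bads)
print(np.round(woe, 3))  # decreasing: riskier bins get lower WoE

# Hybrid-style estimate (sketch): use each bin's mean ML-predicted bad rate p
# to form log-odds-based WoE, usable even when a bin has few observed bads.
p = np.array([0.02, 0.06, 0.15, 0.40])  # assumed per-bin ML probabilities
overall = (p * (goods + bads)).sum() / (goods + bads).sum()
woe_ml = np.log((1 - p) / p) - np.log((1 - overall) / overall)
print(np.round(woe_ml, 3))
```

Both vectors rank the bins the same way; the second needs no per-bin bad counts, which is the sparse-data advantage the abstract describes.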

Keywords: machine learning algorithms, scorecard, commercial banking, consumer risk, feature engineering

Procedia PDF Downloads 119
1151 Questioning the Predominant Feminism in Ahalya, a Short Film by Sujoy Ghosh

Authors: Somya Sharma

Abstract:

Ahalya, the critically acclaimed short film, is known for demolishing the gender constructs of the age-old myth of Ahalya. This paper tries to crack the overt meaning of the short film by reading between the dialogues and deconstructing the idea of pseudo-feminism in Ahalya by Sujoy Ghosh. The film's subversion of the male character's role, making it seem submissive compared with the female character's, turns out to be only a surface-level reading of the text. It seems that Sujoy Ghosh has played not just with changing the paradigm but also with altering history in doing so. The age-old myth placing Ahalya among the five virgins (panchkanya) of Hindu mythology is explored in the paper. God's manoeuvre cannot be questioned, and the two male characters again shape the deeds and the life of the female character, Ahalya. It is important to note that even in the 21st century, progressive actors like Radhika Apte fail to acknowledge the politics of altering history in a way that is not progressive. On a first watch, the film blinds the viewer into falling for the female strength and ownership of her sexuality, reflected in the opening scene itself, where she opens the gate for the policeman Indra Sen (representing the god Indra, who seduced her), who is charmed by her white dress. White, in Hindu mythology, stands for mourning, and this can be read as a hint of the prophecy of what is about to come. Ahalya, bold, strong, and confident in this scene, seems to be in total ownership of her sexual identity. As the film progresses, Ahalya's control over her acts becomes even more dominant. In the myth of Ahalya, Gautama Maharishi, her husband, who wins her by Brahma's courtesy, curses her for her infidelity. She is turned into a stone because of the curse and is redeemed when Lord Rama's foot brushes the stone. In the film, it is with Ahalya's help that Goutam Sadhu turns Indra Sen into a stone doll.
Ahalya is seen as a seductress who bewitches Indra Sen, and because the latter falls for the trap laid by the husband-wife duo, he is turned into a doll. The paper attempts to read Ahalya as a stand-in wife who is yet again a pawn in the play of Goutama's revenge on Indra (who in the myth is able to escape any curse or punishment for the act). The paper therefore inverts the idea the film has so far signified and attempts to study the feminism this film appropriates. It is essential to break down the structure formed by such overtly transgressive films in order to provide a real outlook on how feminism is twisted and moulded according to a man’s wishes.

Keywords: deconstructing, Hindu mythology, Panchkanya, predominant feminism, seductress, stone doll

Procedia PDF Downloads 233
1150 Sludge Marvel (Densification): The Ultimate Solution For Doing More With Less Effort!

Authors: Raj Chavan

Abstract:

At present, the United States is home to more than 14,000 Water Resource Recovery Facilities (WRRFs), of which approximately 35% have implemented nutrient limits of some kind. These WRRFs contribute 10 to 15% of the total nutrient burden to surface rivers in the United States and account for approximately 1% of total power demand and 2% of total greenhouse gas (GHG) emissions. Several factors have driven the development of densification technologies towards more compact and energy-efficient nutrient removal processes. Existing facilities that require capacity expansion, or biomass densification for greater treatability within the same footprint, are being subjected to stricter nutrient removal requirements prior to surface water discharge. Densification of activated sludge as a method for nutrient removal and process intensification at WRRFs has garnered considerable attention in recent times. The biological processes take place within aerobic sludge granules, which form the basis of the technology. The possibility of generating granular sludge through continuous (or conventional) activated sludge (CAS) processes, or of densifying biomass by converting activated sludge flocs to denser biomass aggregates as an exceptionally efficient intensification technique, has generated considerable interest. This presentation aims to give attendees a foundational comprehension of densification by illustrating practical concerns and insights. The following subjects will be discussed: What are some potential techniques for producing and preserving densified granules? What processes are responsible for the densification of biological flocs? How do physical selectors contribute to the densification of biological flocs?
What viable strategies exist for the management of densified biological flocs, and which design parameters of physical selectors influence their retention? What operational solutions allow floc and granule customization to meet capacity and performance objectives? The answers to these pivotal questions will be drawn from existing full-scale treatment facilities, bench-scale and pilot-scale investigations, and published literature data. By the conclusion of the presentation, the audience will possess a fundamental comprehension of the densification concept and its significance in attaining effective effluent treatment. Case studies on the design and operation of densification processes will also be incorporated into the presentation.

Keywords: densification, intensification, nutrient removal, granular sludge

Procedia PDF Downloads 59
1149 Freshwater Source of Sapropel for Healthcare

Authors: Ilona Pavlovska, Aneka Klavina, Agris Auce, Ivars Vanadzins, Alise Silova, Laura Komarovska, Linda Paegle, Baiba Silamikele, Linda Dobkevica

Abstract:

Freshwater sapropel is a common material formed by complex biological transformations of Holocene sediments on the beds of lakes in Latvia and has the potential to be used as medical mud. Sapropel forms over a long period in shallow waters from slowly decomposing organic sediment and has different compositions depending on the location of the source, the surroundings, the water regime, etc. The official geological survey of Latvian lakes, from the Latvian lake database (ezeri.lv), was used in selecting the exploration area. The multifunctional effect of sapropel on the whole organism is explained by its complex chemical and biological structure. This unique organic substance, with its ability to retain heat for a long time, ensures deep tissue warming and has a positive effect on the treatment of various joint and skin diseases. Sapropel is a valuable resource with multiple areas of application. The current study comprised an investigation of sapropel sediments and a survey of the five sites selected according to the criteria. It also included sampling at different depths and initial treatment of the samples, evaluation of external signs, study of physical-chemical parameters, analysis of biochemical parameters, and evaluation of microbiological indicators. The main selection criteria were the depth of the sapropel deposits, the hydrological regime, the history of agriculture next to the lake, and the potential exposure to industrial waste. One hundred and five sapropel samples were obtained from five lakes (Audzelu, Dunakla, Ivusku, Zielu, and Mazars Kivdalova) during wintertime. The main goal of the study is to carry out detailed and systematic research on the medical properties of sapropel obtainable in Latvia, to promote its scientifically based use in balneology, to develop new medical procedures and services, and to promote the development of new exportable products.
Latvian freshwater sapropel could be used as a raw material for producing sapropel extract for use as a remedy. All of the above brings us to the main question for sapropel usage in medicine, balneology, and pharmacy: how to develop quality criteria for raw sapropel and its extracts. The research was co-financed by the project "Analysis of characteristics of medical sapropel and its usage for medical purposes and elaboration of industrial extraction methods" No. 1.1.1.1/16/A/165.

Keywords: balneology, extracts, freshwater sapropel, Latvian lakes, medical mud, sapropel

Procedia PDF Downloads 245
1148 Possibilities and Challenges for District Heating

Authors: Louise Ödlund, Danica Djuric Ilic

Abstract:

From a system perspective, there are several benefits of DH. Two examples are the possibility to utilize the excess heat from waste incineration and from biomass-based combined heat and power (CHP) production (i.e. the excess heat from electricity production). However, in a future sustainable society, the benefits of DH may be less obvious. Due to climate change and the increased energy efficiency of buildings, the demand for space heating is expected to decrease. As society develops towards a circular economy, a larger share of waste will be recycled as material, and the possibility for DH production through energy recovery by waste incineration will be reduced. Furthermore, the benefits of biomass-based CHP production will be less obvious, since marginal electricity production will no longer be linked to high greenhouse gas emissions as the share of renewable electricity capacity in the electricity system increases. The purpose of the study is (1) to provide an overview of possible developments in other sectors which may influence DH in the future and (2) to identify new business strategies which would enable DH to adapt to future conditions and remain competitive with alternative heat production. A system approach was applied in which DH is seen as part of an integrated system that also includes other sectors. The possible future development of other sectors and the possible business strategies for DH producers were identified through a systematic literature review. In order to remain competitive with alternative heat production in the future, DH producers need to develop new business strategies. While the demand for space heating is expected to decrease, the demand for space cooling will probably increase due to climate change, but also due to the better insulation of buildings in cases where home appliances act as heat sources.
This opens up the possibility of applying DH-driven absorption cooling, which would increase the annual capacity utilization of DH plants. The benefits of DH related to energy recovery from waste incineration will remain in the future, since there will always be a need to take care of materials and waste that cannot be recycled (e.g. waste containing organic toxins or bacteria, such as diapers and hospital waste). Furthermore, by operating centrally controlled heat pumps, CHP plants, and heat storage according to the variation in intermittent electricity production, DH companies may enable an increased share of intermittent electricity production in the national electricity grid. DH producers can also enable the development of local biofuel supply chains and reduce biofuel production costs by integrating biofuel and DH production in local DH systems.

Keywords: district heating, sustainable business strategies, sustainable development, system approach

Procedia PDF Downloads 70
1147 “Teacher, You’re on Mute!”: Teachers as Cultivators of Trans-Literacies

Authors: Efleda Preclaro Tolentino

Abstract:

Research indicates that an educator’s belief system is reflected in the way they structure the learning environment. Educators’ values and beliefs have the potential to positively impact school readiness through an understanding of children’s development and the creation of a stable, motivating environment. Based on the premise that the social environment influences the development of young children’s social skills, knowledge constructs, and shared values, this study examined verbal and nonverbal exchanges between early childhood teachers and their preschool students within the context of remote learning. Using qualitative methods of data collection, the study determined the nature of interactions between preschoolers and their teachers within a remote learning environment at a preschool in Southeast Asia that utilized the Mother Tongue-Based Multilingual Education (MTBMLE) approach. Through the lens of sociocultural theory, the study investigated preschoolers’ use of literacies to convey meaning and to interact within a remote learning environment. Using a Strengths Perspective, the study revealed the creativity and resourcefulness of preschoolers in expressing themselves through trans-literacies made possible by the online mode of learning within cultural and subcultural norms. The study likewise examined how social skills acquired by young children were transmitted (verbally or nonverbally) in their interactions with peers during Zoom meetings. By examining the dynamics of social exchanges between teachers and children, the findings underscore the importance of supporting preschool students as they apply acquired values and shared practices within a remote learning environment. The potential of distance learning in the early years will be explored, specifically in supporting young children’s language and literacy development. At the same time, the study examines the role of teachers as cultivators of trans-literacies.
The teachers’ skillful use of technology in facilitating young children’s learning, as well as in supporting interactions with families, will be examined. The findings of this study will explore the potential of distance learning in early childhood education to establish continuity in learning, support young children’s social and emotional transitions, and nurture trans-literacies that transcend prevailing definitions of learning contexts. The implications of teachers and parents working collaboratively to support student learning will be examined, and the importance of preparing teachers to be resourceful, adaptable, and innovative, so that learning takes place across a variety of modes and settings, will be discussed.

Keywords: transliteracy, preschoolers, remote learning, strengths perspective

Procedia PDF Downloads 73
1146 Influence of La0.1Sr0.9Co1-xFexO3-δ Catalysts on Oxygen Permeation Using Mixed Conductor

Authors: Y. Muto, S. Araki, H. Yamamoto

Abstract:

The separation of oxygen is a key technology for improving efficiency and reducing cost in processes such as the partial oxidation of methane and the capture of carbon dioxide. In particular, carbon dioxide at high concentration can be obtained by combustion using pure oxygen separated from air. However, the oxygen separation process occupies a large part of the energy consumption. It is therefore considered that membrane technologies enable separation at lower cost and lower energy consumption than conventional methods. In this study, the separation of oxygen using mixed-conductor membranes is examined. Oxygen permeation through the membrane occurs in the following three steps: first, oxygen molecules dissociate into oxygen ions on the feed side of the membrane; subsequently, the oxygen ions diffuse through the membrane; finally, the oxygen ions recombine to form oxygen molecules. It is therefore expected that the membrane thickness and material, as well as the catalysts for dissociation and recombination, affect the membrane performance. However, there is little literature on catalysts for dissociation and recombination. We confirmed the performance of La0.6Sr0.4Co1.0O3-δ (LSC) based catalysts, which are commonly used for dissociation and recombination. It is known that the adsorbed amount of oxygen increases with increasing Fe content doped into the B site of LSC. We prepared the catalysts La0.1Sr0.9Co0.9Fe0.1O3-δ (C9F1), La0.1Sr0.9Co0.5Fe0.5O3-δ (C5F5) and La0.1Sr0.9Co0.3Fe0.7O3-δ (C3F7). We also used a Pr2NiO4-type mixed conductor as the membrane material: (Pr0.9La0.1)2(Ni0.74Cu0.21Ga0.05)O4+δ (PLNCG) shows high oxygen permeability and stability against carbon dioxide. Oxygen permeation experiments were carried out using a homemade apparatus at 850-975 °C. The membrane was sealed with Pyrex glass at both ends of the outer dense alumina tubes.
To measure the oxygen permeation rate, air was fed to the feed side at 50 ml min-1, and helium, as the sweep and reference gas, was fed at 20 ml min-1. The flow rates of the sweep gas and of the gas permeated through the membrane were measured using a flow meter, and the gas concentrations were determined using a gas chromatograph. The permeance of oxygen was then determined from the flow rate and the concentration of the gas on the permeate side of the membrane. An increase in oxygen permeation was observed with increasing temperature. This is considered to be due to the catalytic activity increasing with temperature; another reason is the increased oxygen diffusivity in the bulk of the membrane. The oxygen permeation rate was improved by using an LSC or LSCF catalyst, and the oxygen permeation rate of the membrane with LSCF was higher than that of the membrane with LSC. Furthermore, among the LSCF catalysts, the oxygen permeation rate increased with increasing doped Fe content. This is considered to be caused by the increased amount of adsorbed oxygen.
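The permeance determination described above (sweep flow rate times measured O2 fraction, per membrane area and driving force) can be sketched as follows; all numerical values, and the normalization by an O2 partial-pressure difference, are illustrative assumptions rather than the study's data.

```python
# Sketch of the permeance calculation: the oxygen flux is obtained from the
# sweep-gas flow rate and the O2 mole fraction measured by gas chromatography.

def oxygen_permeance(sweep_flow_ml_min, o2_fraction, area_cm2, dp_o2_atm):
    """Permeance = O2 flux through the membrane / O2 partial-pressure difference."""
    o2_flow = sweep_flow_ml_min * o2_fraction  # ml(STP) min^-1 of permeated O2
    flux = o2_flow / area_cm2                  # ml(STP) min^-1 cm^-2
    return flux / dp_o2_atm                    # ml(STP) min^-1 cm^-2 atm^-1

# Example: 20 ml/min sweep, 1.5% O2 in the permeate, 1 cm^2 membrane area,
# 0.2 atm O2 partial-pressure difference (all hypothetical).
print(round(oxygen_permeance(20.0, 0.015, 1.0, 0.2), 2))
```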

Keywords: membrane separation, oxygen permeation, K2NiF4-type structure, mixed conductor

Procedia PDF Downloads 506
1145 Modified Polysaccharide as Emulsifier in Oil-in-Water Emulsions

Authors: Tatiana Marques Pessanha, Aurora Perez-Gramatges, Regina Sandra Veiga Nascimento

Abstract:

Emulsions are commonly used in applications involving oil/water dispersions, where handling of interfaces becomes a crucial aspect. Emulsion technology has greatly evolved over the last decades to suit the most diverse uses, ranging from cosmetic products and biomedical adjuvants to complex industrial fluids. The stability of these emulsions is influenced by factors such as the amount of oil, the droplet size, and the emulsifiers used. While commercial surfactants are typically used as emulsifiers to reduce interfacial tension, and therefore increase emulsion stability, these organic amphiphilic compounds are often toxic and expensive. A suitable alternative can be obtained by chemical modification of polysaccharides. Our group has been working on the modification of polysaccharides for use as additives in a variety of fluid formulations. In particular, we have obtained promising results using chitosan, a natural and biodegradable polymer that can be easily modified owing to the presence of amine groups in its chemical structure. In this way, it is possible to increase both the hydrophobic and the hydrophilic character, which renders a water-soluble, amphiphilic polymer that can behave as an emulsifier. The aim of this work was the synthesis of structurally modified chitosan derivatives to act as surfactants in stable oil-in-water emulsions. The synthesis of the chitosan derivatives occurred in two steps: the first was hydrophobic modification by insertion of long hydrocarbon chains, while the second consisted of cationization of the amino groups. All products were characterized by infrared spectroscopy (FTIR) and carbon-13 nuclear magnetic resonance (13C-NMR) to evaluate the degrees of cationization and hydrophobization. These modified polysaccharides were used to formulate oil-in-water (O:W) emulsions with different oil/water ratios (i.e. 25:75, 35:65, 60:40) using paraffinic mineral oil.
The formulations were characterized according to the type of emulsion, density, and rheology measurements, as well as emulsion stability at high temperatures. All emulsion formulations were stable for at least 30 days at room temperature (25°C), and in the case of the high-oil-content emulsion (60:40), the formulation was also stable at temperatures up to 100°C. Emulsion specific gravity was in the range of 0.87-0.90. The rheological study showed viscoelastic behaviour in all formulations at room temperature, which is in agreement with the high stability shown by the emulsions, since the polymer acts not only by reducing interfacial tension but also by forming an elastic membrane at the oil/water interface that guarantees its integrity. The results obtained in this work are strong evidence of the possibility of using chemically modified polysaccharides as environmentally friendly alternatives to commercial surfactants in the stabilization of oil-in-water formulations.

Keywords: emulsion, polymer, polysaccharide, stability, chemical modification

Procedia PDF Downloads 339
1144 Numerical Simulation of Seismic Process Accompanying the Formation of Shear-Type Fault Zone in Chuya-Kuray Depressions

Authors: Mikhail O. Eremin

Abstract:

Seismic activity around the world is clearly a threat to people's lives, as well as infrastructure and capital construction. It is the instability of the latter to powerful earthquakes that most often causes human casualties. Therefore, during construction it is necessary to take into account the risks of large-scale natural disasters. The task of assessing the risks of natural disasters is one of the most urgent at the present time. The final goal of any study of earthquakes is forecasting. This is especially important for seismically active regions of the planet where earthquakes occur frequently. Gorni Altai is one of such regions. In work, we developed the physical-mathematical model of stress-strain state evolution of loaded geomedium with the purpose of numerical simulation of seismic process accompanying the formation of Chuya-Kuray fault zone Gorni Altay, Russia. We build a structural model on the base of seismotectonic and paleoseismogeological investigations, as well as SRTM-data. Base of mathematical model is the system of equations of solid mechanics which includes the fundamental conservation laws and constitutive equations for elastic (Hooke's law) and inelastic deformation (modified model of Drucker-Prager-Nikolaevskii). An initial stress state of the model correspond to gravitational. Then we simulate an activation of a buried dextral strike-slip paleo-fault located in the basement of the model. We obtain the stages of formation and the structure of Chuya-Kuray fault zone. It is shown that results of numerical simulation are in good agreement with field observations in statistical sense. Simulated seismic process is strongly bound to the faults - lineaments with high degree of inelastic strain localization. Fault zone represents en-echelon system of dextral strike-slips according to the Riedel model. The system of surface lineaments is represented with R-, R'-shear bands, X- and Y-shears, T-fractures. 
The simulated seismic process obeys the Gutenberg-Richter and Omori laws; thus, the model describes the self-similar character of deformation and fracture of rocks and geomedia. We also modified the algorithm for detecting separate slip events in the model to account for the features of the dependence of strain rates on time.
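The Gutenberg-Richter scaling invoked above can be checked on a synthetic catalogue. The sketch below is illustrative only (not the authors' code): it draws magnitudes from the exponential distribution implied by log10 N(>=M) = a - b*M with an assumed b = 1.0, then recovers the b-value with the Aki-Utsu maximum-likelihood estimator.

```python
import math
import random

def gr_b_value(mags, m_min):
    """Aki-Utsu maximum-likelihood estimate of the Gutenberg-Richter
    b-value: b = log10(e) / (mean(M) - Mmin), for M >= Mmin."""
    sample = [m for m in mags if m >= m_min]
    mean_m = sum(sample) / len(sample)
    return math.log10(math.e) / (mean_m - m_min)

# A catalogue obeying the G-R law has exponentially distributed
# exceedances above the completeness magnitude (rate beta = b*ln(10)).
random.seed(42)
b_true = 1.0
beta = b_true * math.log(10.0)
catalogue = [3.0 + random.expovariate(beta) for _ in range(20000)]

b_est = gr_b_value(catalogue, m_min=3.0)
```

The same estimator applied to the simulated slip-event catalogue would verify whether the modelled seismicity reproduces a realistic b-value.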

Keywords: Drucker-Prager model, fault zone, numerical simulation, Riedel bands, seismic process, strike-slip fault

Procedia PDF Downloads 125
1143 Biosensor for Determination of Immunoglobulin A, E, G and M

Authors: Umut Kokbas, Mustafa Nisari

Abstract:

Immunoglobulins, also known as antibodies, are glycoprotein molecules produced by activated B cells that differentiate into plasma cells. Antibodies are critical molecules of the immune response, helping the immune system specifically recognize and destroy antigens such as bacteria, viruses, and toxins. Immunoglobulin classes differ in their biological properties, structures, targets, functions, and distributions. Five major classes of antibodies have been identified in mammals: IgA, IgD, IgE, IgG, and IgM. Evaluation of the immunoglobulin isotype can provide useful insight into the complex humoral immune response. Knowledge of immunoglobulin structure and classes is also important for the selection and preparation of antibodies for immunoassays and other detection applications. The immunoglobulin test measures the level of certain immunoglobulins in the blood. IgA, IgG, and IgM are usually measured together; in this way, they can provide doctors with important information, especially regarding immune deficiency diseases. Hypogammaglobulinemia (HGG) is one of the main groups of primary immunodeficiency disorders. HGG is caused by various defects in B cell lineage or function that result in low levels of immunoglobulins in the bloodstream. This affects the body's immune response, causing a wide range of clinical features, from asymptomatic disease to severe and recurrent infections, chronic inflammation, and autoimmunity. Transient hypogammaglobulinemia of infancy (THGI), IgM deficiency (IgMD), Bruton agammaglobulinemia, and selective IgA deficiency (SIgAD) are a few examples of HGG. Most patients can continue their normal lives by taking prophylactic antibiotics; however, patients with severe infections require intravenous immune serum globulin (IVIG) therapy. The IgE level may rise to fight off parasitic infections, and it can also be a sign that the body is overreacting to allergens.
Also, since the immune response can vary with different antigens, measuring specific antibody levels aids in the interpretation of the immune response after immunization or vaccination. Immune deficiencies usually occur in childhood. In immunology and allergy clinics, a method that, beyond the classical methods, is fast and reliable, and that, especially in childhood hypogammaglobulinemia, allows more convenient and uncomplicated sampling from children, will be more useful for the diagnosis and follow-up of these diseases. The antibodies were attached to the electrode surface via the poly(hydroxyethyl methacrylamide)-cysteine nanopolymer, and the anodic peak results obtained in the electrochemical study were used for evaluation. According to the data obtained, immunoglobulin determination can be made with a biosensor. However, in further studies, it will be useful to develop a medical diagnostic kit with biomedical engineering and to increase its sensitivity.

Keywords: biosensor, immunosensor, immunoglobulin, infection

Procedia PDF Downloads 77
1142 Corporate Female Entrepreneurship, Moving Boundaries

Authors: Morena Paulisic, Marli Gonan Bozac

Abstract:

Business organization and management are typically presented in theory as gender-neutral. Although in practice the female contribution to corporations is not questionable, gender diversity in the top management of corporations is, especially in emerging countries like Croatia. This paper brings insights into the obstacles and problems which should be overcome. Furthermore, it gives an introspective view of the most important promotion and motivation factors of powerful female CEOs in Croatia. The goal was to clarify the perceptions and performance of female CEOs that contributed to their success and to determine the mutual characteristics of women in corporate entrepreneurship with regard to motivation. For our study, we used a survey instrument developed for this research. The research methods used were: desk research, field research, the generalization method, the comparative method, and statistical methods (descriptive statistics and Pearson’s chi-square test). The results showed that even more women in corporations today are not likely to accept more engagement at work if it harms their families (31.9% in 2003 vs. 33.8% in 2013), although their main motivating factor is still an interesting job (95.8% in 2003; 100% in 2013). It is also significant that 78.8% of Croatian top managers (2013) think that women managers in Croatia are insufficiently spoken and written about, and that the reasons for this are that: (1) society underestimates their ability (37.9%); (2) women underestimate themselves (22.4%); (3) society still mainly focuses on male managers (20.7%); and (4) women managers avoid interviews and appearing on front pages (19%). The environment still “blocks” the natural course of advancement of women managers in organisations (and in entrepreneurship in general), and the main obstacle is that women must always or almost always be more capable than men in order to succeed (96.6%).
Based on the results of longitudinal survey research conducted in Croatia in 2003 (return rate 30.8%) and 2013 (return rate 29.2%), we expand the understanding of the determining indicators of corporate female entrepreneurship. In practice, the gender structure at the management level (executive management, management board and supervisory board) showed a positive trend over the years 2011-2014, but women remain significantly underrepresented in those positions. Findings from different sources have shown that diversity at the top of corporations correlates with better performance. In this paper, we contribute to research on gender in corporate entrepreneurship by offering the experiences of successful female CEOs and an explanation of why, in a socially responsible society, women with their characteristics can support needed changes and construct a different way forward for corporations. Based on the research results, we conclude that the underrepresentation of women in corporate entrepreneurship should be overcome in the future.
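As a sketch of the Pearson chi-square test named among the methods, the statistic for a 2x2 contingency table can be computed directly. The respondent counts below are hypothetical, chosen only to roughly match the reported 31.9% vs. 33.8% shares; they are not the study's data.

```python
def chi_square_2x2(table):
    """Pearson's chi-square statistic for a 2x2 table of observed
    counts given as [[a, b], [c, d]]."""
    row = [sum(r) for r in table]
    col = [sum(c) for c in zip(*table)]
    total = sum(row)
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row[i] * col[j] / total
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

# Hypothetical counts: 23 of 72 respondents (31.9%) in 2003 vs. 20 of 59
# (33.9%) in 2013 would not accept more engagement harming the family.
obs = [[23, 49],   # 2003: would not accept / would accept
       [20, 39]]   # 2013: would not accept / would accept
stat = chi_square_2x2(obs)
critical_5pct = 3.841  # chi-square critical value, 1 degree of freedom
significant = stat > critical_5pct
```

With proportions this close, the statistic stays far below the 5% critical value, i.e. the small shift between 2003 and 2013 would not by itself be statistically significant at these sample sizes.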

Keywords: Croatia, female entrepreneurship, glass ceiling, motivation

Procedia PDF Downloads 313
1141 The Principle of a Thought Formation: The Biological Base for a Thought

Authors: Ludmila Vucolova

Abstract:

Thought is a process that underlies consciousness and cognition, and understanding its origin and workings is a longstanding goal of many academic disciplines. By integrating over twenty novel ideas and hypotheses of this theoretical proposal, we can speculate that thought is an emergent property of coded neural events, translating the electro-chemical interactions of the body with its environment: the objects of sensory stimulation, X and Y. The latter is a self-generated feedback entity, resulting from the arbitrary pattern of motion of the body’s motor repertory (M). A culmination of these neural events gives rise to a thought: a state of identity between an observed object X and a symbol Y. It manifests as a “state of awareness” or “state of knowing” and forms our perception of the physical world. The values of the variables of a construct, X (object), S1 (sense for the perception of X), Y (object), S2 (sense for the perception of Y), and M (the motor repertory that produces Y), will specify the particular conscious percept at any given time. The proposed principle of interaction between the elements of a construct (X, Y, S1, S2, M) is universal and applies to all modes of communication (normal, deaf, blind, and deaf-blind people) and to various language systems (Chinese, Italian, English, etc.). The particular arrangement of modalities of each of the three modules, S1 (5 of 5), S2 (1 of 3), and M (3 of 3), defines a specific mode of communication. This multifaceted paradigm demonstrates a predetermined pattern of relationships between X, Y, and M that passes from generation to generation. The presented analysis of a cognitive experience encompasses the key elements of embodied cognition theories and accords with the scientific interpretation of cognition as the mental action or process of acquiring knowledge and understanding through thought, experience, and the senses; that is, cognition means thinking and awareness.
By assembling the novel ideas presented in twelve sections, we can reveal that within the invisible “chaos” there is an order, a structure with landmarks and principles of operation, and that mental processes (thoughts) are physical and have a biological basis. This innovative proposal explains the phenomenon of mental imagery, gives a first insight into the relationship between mental states and brain states, and supports the notion that mind and body are inseparably connected. The findings of this theoretical proposal are supported by current scientific data and are substantiated by the records of the evolution of language and human intelligence.

Keywords: agent, awareness, cognitive, element, experience, feedback, first person, imagery, language, mental, motor, object, sensory, symbol, thought

Procedia PDF Downloads 367
1140 Rain Gauges Network Optimization in Southern Peninsular Malaysia

Authors: Mohd Khairul Bazli Mohd Aziz, Fadhilah Yusof, Zulkifli Yusop, Zalina Mohd Daud, Mohammad Afif Kasno

Abstract:

Recently developed rainfall network design techniques have been discussed and compared by many researchers worldwide due to the demand for higher levels of accuracy from collected data. In many studies, rain-gauge networks are designed to provide good estimates of areal rainfall and inputs for flood modelling and prediction. In one study, even when lumped models were used for flood forecasting, a proper gauge network significantly improved the results. Therefore, the existing rainfall network in Johor must be optimized and redesigned in order to meet the required level of accuracy preset by rainfall data users. The well-known geostatistical variance-reduction method, combined with simulated annealing, was used as the optimization algorithm in this study to obtain the optimal number and locations of the rain gauges. The structure of a rain gauge network does not depend only on station density; station location also plays an important role in determining whether information is acquired accurately. The existing network of 84 rain gauges in Johor was optimized and redesigned using rainfall, humidity, solar radiation, temperature and wind speed data during the monsoon season (November-February) for the period 1975-2008. Three different semivariogram models, Spherical, Gaussian and Exponential, were used, and their performances were compared. A cross-validation technique was applied to compute the errors, and the results showed that the Exponential model is the best semivariogram. It was found that a network of 64 rain gauges satisfied the proposed method with the minimum estimated variance, so 20 of the existing gauges were removed and relocated. An existing network may contain redundant stations that make little or no contribution to the network's ability to provide quality data. Therefore, two different cases were considered in this study.
In the first case, the removed stations were optimally relocated to new locations to investigate their influence on the calculated estimated variance; the second case explored the possibility of relocating all 84 existing stations to new locations to determine the optimal positions. The relocations of the stations in both cases showed that the new optimal locations reduced the estimated variance, proving that location plays an important role in determining the optimal network.
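A minimal sketch of the two ingredients above: the three semivariogram models and a simulated-annealing selection of 64 of 84 candidate gauges. Everything here is illustrative (the coordinates, semivariogram parameters, annealing schedule, and the nearest-station semivariance used as a crude stand-in for the kriging estimation variance minimized in the study are all assumptions).

```python
import math
import random

def spherical(h, sill=1.0, rng=50.0):
    """Spherical semivariogram model."""
    if h >= rng:
        return sill
    r = h / rng
    return sill * (1.5 * r - 0.5 * r ** 3)

def gaussian(h, sill=1.0, rng=50.0):
    """Gaussian semivariogram model (practical range convention)."""
    return sill * (1.0 - math.exp(-3.0 * (h / rng) ** 2))

def exponential(h, sill=1.0, rng=50.0):
    """Exponential semivariogram model (practical range convention)."""
    return sill * (1.0 - math.exp(-3.0 * h / rng))

def coverage_cost(stations, targets, gamma=exponential):
    """Crude proxy for the network estimation variance: the mean
    semivariance between each target point and its nearest station."""
    return sum(gamma(min(math.dist(t, s) for s in stations))
               for t in targets) / len(targets)

def anneal_network(candidates, targets, k, steps=800, t0=0.5, seed=1):
    """Simulated annealing: keep k of the candidate gauges, trying to
    swap one kept station for one unused station at every step."""
    rnd = random.Random(seed)
    kept, spare = list(candidates[:k]), list(candidates[k:])
    cost = coverage_cost(kept, targets)
    best, best_cost = list(kept), cost
    for step in range(steps):
        temp = t0 * (1.0 - step / steps) + 1e-9
        i, j = rnd.randrange(len(kept)), rnd.randrange(len(spare))
        kept[i], spare[j] = spare[j], kept[i]
        new_cost = coverage_cost(kept, targets)
        if new_cost < cost or rnd.random() < math.exp((cost - new_cost) / temp):
            cost = new_cost
            if cost < best_cost:
                best, best_cost = list(kept), cost
        else:  # rejected: undo the swap
            kept[i], spare[j] = spare[j], kept[i]
    return best, best_cost

rnd = random.Random(0)
candidates = [(rnd.uniform(0, 100), rnd.uniform(0, 100)) for _ in range(84)]
targets = [(x, y) for x in range(10, 100, 20) for y in range(10, 100, 20)]
initial_cost = coverage_cost(candidates[:64], targets)
network, optimized_cost = anneal_network(candidates, targets, k=64)
```

In the actual workflow, the semivariogram model would first be chosen by cross-validation (the Exponential model in this study) and the objective would be the kriging variance over the study area.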

Keywords: geostatistics, simulated annealing, semivariogram, optimization

Procedia PDF Downloads 287
1139 Residual Analysis and Ground Motion Prediction Equation Ranking Metrics for Western Balkan Strong Motion Database

Authors: Manuela Villani, Anila Xhahysa, Christopher Brooks, Marco Pagani

Abstract:

The geological structure of the Western Balkans is strongly affected by the collision between the Adria microplate and the southwestern margin of Eurasia, resulting in a considerably active seismic region. The NATO-supported Harmonization of Seismic Hazard Maps in the Western Balkan Countries Project (BSHAP) (2007-2011, 2012-2015) enabled the preparation of new seismic hazard maps of the Western Balkans, but when inspecting the seismic hazard models later produced by these countries on a national scale, significant differences in design PGA values are observed at the borders, for instance, between North Albania and Montenegro, or South Albania and Greece. Considering that the catalogues were unified and the seismic sources were defined within the BSHAP framework, the differences evidently arise from the selection of Ground Motion Prediction Equations (GMPEs), which is generally the component with the highest impact on seismic hazard assessment. At the time of the project, only a modest database was available, namely 672 three-component records, whereas nowadays this strong motion database has grown considerably, up to 20,939 records, with Mw ranging from 3.7 to 7 and epicentral distances from 0.47 km to 490 km. Statistical analysis of the strong motion database showed a lack of recordings in the moderate-to-large magnitude and short-distance ranges; therefore, there is a need to re-evaluate the GMPEs in light of the recently updated database and the new generations of ground motion models (GMMs). In some cases, it was observed that some events were more extensively documented in one database than in another; for example, the 1979 Montenegro earthquake has a considerably larger number of records in the BSHAP analogue strong-motion database than in ESM23. Therefore, the strong motion flat-file provided by the BSHAP Project was merged with the ESM23 database for the polygon studied in this project.
After performing the preliminary residual analysis, the candidate GMPEs were identified. This process was carried out using the GMPE performance metrics available within the SMT in the OpenQuake Platform; the likelihood model and the Euclidean Distance-Based Ranking (EDR) were used. Finally, a GMPE logic tree was selected for this study and, following the selection of the candidate GMPEs, model weights were assigned using the average sample log-likelihood approach of Scherbaum.
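The average-sample-log-likelihood weighting can be sketched as follows. The residual sets below are synthetic and the three candidate models are hypothetical; the sketch only illustrates the mechanics of scoring normalized residuals under each GMPE's assumed standard normal distribution and converting the scores into logic-tree weights proportional to 2^(-LLH).

```python
import math
import random

def sample_llh(norm_residuals):
    """Average sample log-likelihood: the mean negative log2 density of
    the normalized residuals under the standard normal distribution."""
    n = len(norm_residuals)
    ll = 0.0
    for z in norm_residuals:
        pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
        ll += math.log2(pdf)
    return -ll / n

def llh_weights(residual_sets):
    """Data-driven weights proportional to 2**(-LLH), normalized to 1."""
    llhs = [sample_llh(r) for r in residual_sets]
    raw = [2.0 ** (-llh) for llh in llhs]
    total = sum(raw)
    return [w / total for w in raw]

# Synthetic normalized residuals for three hypothetical candidate GMPEs:
# a well-calibrated model, a biased one, and an overdispersed one.
rnd = random.Random(7)
good = [rnd.gauss(0.0, 1.0) for _ in range(500)]
biased = [rnd.gauss(0.8, 1.0) for _ in range(500)]
wide = [rnd.gauss(0.0, 1.8) for _ in range(500)]
weights = llh_weights([good, biased, wide])
```

The well-calibrated model receives the lowest LLH (fewest "bits of surprise" per record) and therefore the largest weight in the logic tree.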

Keywords: residual analysis, GMPE, Western Balkans, strong motion, OpenQuake

Procedia PDF Downloads 62
1138 Psychotherapeutic Narratives and the Importance of Truth

Authors: Spencer Jay Knafelc

Abstract:

Some mental health practitioners and theorists have suggested that we approach remedying psychological problems by centering and intervening upon patients’ narrations. Such theorists and their corresponding therapeutic approaches see persons as narrators of their lives, where the stories they tell constitute and reflect their sense-making of the world. Psychological problems, according to these approaches to therapy, are often the result of problematic narratives, and the solution is the construction of more salubrious narratives through therapy. There is trouble lurking within the history of these narrative approaches: these thinkers tend to denigrate the importance of truth, insisting that narratives are not to be thought of as aiming at truth, and thus that the truth of our self-narratives is not important. There are multiple motivations for this tendency to eschew truth’s importance within the tradition of narrative approaches to therapy. The most plausible and interesting motivation comes from the observation that, in general, all dominant approaches to therapy are equally effective, even though the theoretical commitments of each approach are quite different and often ostensibly incompatible (psychodynamic therapists see psychological problems as resulting from unconscious conflict and repressed desires; Cognitive-Behavioral approaches see them as resulting from distorted cognitions). This strongly suggests that there must be some cases in which therapeutic efficacy does not depend on truth and that insisting that patients’ therapeutic narratives be true in all instances is a mistake. Lewis’ solution is to suggest that narratives are metaphors. Lewis’ account appreciates that there are many ways to tell a story and that many different approaches to mental health treatment can be appropriate without committing us to any contradictions, providing an ostensibly coherent way to treat narratives as non-literal, seeing them instead as tools that can be more or less apt.
Here, it is argued that Lewis’ metaphor approach fails. Narratives do not have the right kind of structure to be metaphors. Still, another way to understand Lewis’ view might be that self-narratives, especially when articulated in the language of any specific approach, should not be taken literally. This is an idea at the core of the narrative theorists’ tendency to eschew the importance of the ordinary understanding of truth. This very tendency will be critiqued. The view defended in this paper more accurately captures the nature of self-narratives. The truth of one’s self-narrative is important. Not only do people care about having the right conception of their abilities, who they are, and the way the world is, but self-narratives are composed of beliefs, and the nature of belief is to aim at truth. This view also allows the recognition of the importance of developing accurate representations of oneself and reality for one’s psychological well-being. It is also argued that in many cases, truth factors in as a mechanism of change over the course of therapy. Therapeutic benefit can be achieved by coming to have a better understanding of the nature of oneself and the world. Finally, the view defended here allows for the recognition of the nature of the tension between values: truth and efficacy. It is better to recognize this tension and develop strategies to navigate it as opposed to insisting that it doesn’t exist.

Keywords: philosophy, narrative, psychotherapy, truth

Procedia PDF Downloads 87
1137 Application of Metarhizium anisopliae against Meloidogyne javanica in Soil Amended with Oak Debris

Authors: Mohammad Abdollahi

Abstract:

Tomato (Lycopersicon esculentum Mill.) is one of the most popular and widely grown vegetable crops, second in importance only to potato. Nematodes have been identified as one of the major pests affecting tomato production throughout the world, and the most destructive belong to the genus Meloidogyne. The most widespread and devastating species of this genus are M. incognita, M. javanica, and M. arenaria; these species can cause complete crop loss under adverse growing conditions. There are several potential methods for the management of root-knot nematodes. Although chemicals are widely used against phytonematodes, the hazardous effects of these compounds on non-target organisms and on the environment make it necessary to develop other control strategies. Nowadays, non-chemical measures are widely used to control plant-parasitic nematodes, and biocontrol of phytonematodes is an important environment-friendly measure of nematode management. Some soil-inhabiting fungi have biocontrol potential against phytonematodes and can be used in nematode management programs. The fungus Metarhizium anisopliae is originally an entomopathogenic bioagent, but its biocontrol potential against some phytonematodes has been reported earlier. Recently, the use of organic soil amendments, as well as the use of bioagents, has received special attention in sustainable agriculture. This research aimed to reduce pesticide use in the control of the root-knot nematode Meloidogyne javanica in tomato. The effects of M. anisopliae IMI 330189 and different levels of oak tree debris on M. javanica were determined, as was the combined effect of the fungus and the different rates of soil amendment. Pots were filled with a steam-pasteurized soil mixture, and six-leaf tomato seedlings were inoculated with 3000 second-stage larvae of M. javanica per kg of soil. After eight weeks, plant growth parameters and nematode reproduction factors were compared.
Based on the results of our experiment, the combination of M. anisopliae IMI 330189 and oak debris at rates of 100 and 150 g/kg soil caused more than a 90% reduction in the reproduction factor of the nematode (P ≤ 0.05). Compared to the control, the reduction in the number of galls was 76%, and the reduction in the nematode reproduction factor was 86%, showing the significance of the combined effect of both tested agents. Our results showed that plant debris can increase the biological activity of the tested bioagent. It was also shown that the oak debris, which potentially has antimicrobial activity, had no adverse effect on the antagonistic power of the applied bioagent.
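The reductions reported above rest on the standard nematode reproduction factor, RF = Pf/Pi (final population over initial inoculum). A small sketch of that arithmetic; only the 3000 larvae/kg inoculum comes from the abstract, and the final population counts are hypothetical values chosen to reproduce the reported 86% reduction.

```python
def reproduction_factor(final_pop, initial_inoculum):
    """Nematode reproduction factor RF = Pf / Pi."""
    return final_pop / initial_inoculum

def percent_reduction(treated, control):
    """Reduction of a treated value relative to the untreated control."""
    return 100.0 * (control - treated) / control

# Hypothetical counts per kg of soil (inoculum Pi = 3000 J2/kg as in the
# abstract; final populations Pf are illustrative only).
p_initial = 3000.0
rf_control = reproduction_factor(45000.0, p_initial)  # untreated pots
rf_treated = reproduction_factor(6300.0, p_initial)   # fungus + oak debris
reduction = percent_reduction(rf_treated, rf_control)
```

An RF above 1 means the nematode multiplied on the host; the treatment's effect is then expressed as the percentage drop in RF relative to the control.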

Keywords: biological control, nematode management, organic soil, Quercus branti, root knot nematode, soil amendment

Procedia PDF Downloads 155
1136 Air Breakdown Voltage Prediction in Post-arcing Conditions for Compact Circuit Breakers

Authors: Jing Nan

Abstract:

The air breakdown voltage in compact circuit breakers is a critical factor in the design and reliability of electrical distribution systems. This voltage determines the threshold at which the air insulation between conductors fails, or 'breaks down,' leading to an arc. The phenomenon is highly sensitive to the conditions within the breaker, such as the temperature and the distance between electrodes. Typically, air breakdown voltage models have been reliable for predicting failure under standard operational temperatures. However, in post-arcing conditions, where temperatures can soar above 2000 K, these models face challenges due to the complex physics of ionization and electron behaviour at such high-energy states. Building upon the foundational understanding that the breakdown mechanism is initiated by free electrons and propelled by electric fields, which lead to ionization and, potentially, to avalanche or streamer formation, we acknowledge the complexity introduced by high-temperature environments. Given the limitations of existing experimental data, a notable research gap exists in the accurate prediction of breakdown voltage at the elevated temperatures typically observed post-arcing, where temperatures exceed 2000 K. To bridge this knowledge gap, we present a method that integrates gap distance and high-temperature effects into the air breakdown voltage assessment. The proposed model is grounded in the physics of ionization, accounting for the dynamic behaviour of free electrons, which, under intense electric fields at elevated temperatures, lead to thermal ionization and can potentially reach the threshold for streamer formation given by Meek's criterion. Employing the Saha equation, our model calculates equilibrium electron densities, adapting to atmospheric pressure and the hot-temperature regions indicative of post-arc conditions.
Our model is rigorously validated against established experimental data, demonstrating substantial improvements in predicting air breakdown voltage in the high-temperature regime. This work significantly improves the predictive power for air breakdown voltage under conditions that closely mimic operational stressors in compact circuit breakers. Looking ahead, the proposed methods are poised for further exploration in alternative insulating media, like SF6, enhancing the model's utility for a broader range of insulation technologies and contributing to the future of high-temperature electrical insulation research.
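The Saha-equation step can be sketched numerically. The sketch below is a simplification, not the paper's actual model: it treats air as a single nitrogen-like species with an assumed ionization energy of ~14.5 eV and a statistical-weight ratio of 1, and solves for the equilibrium ionization fraction at atmospheric pressure.

```python
import math

K_B = 1.380649e-23      # Boltzmann constant, J/K
M_E = 9.1093837015e-31  # electron mass, kg
H = 6.62607015e-34      # Planck constant, J s
EV = 1.602176634e-19    # electron volt, J

def saha_rhs(T, E_ion_eV, g_ratio=1.0):
    """Right-hand side S(T) of the Saha equation n_e*n_i/n_0 = S(T)."""
    therm = (2.0 * math.pi * M_E * K_B * T / H ** 2) ** 1.5
    return 2.0 * g_ratio * therm * math.exp(-E_ion_eV * EV / (K_B * T))

def ionization_fraction(T, n_total, E_ion_eV):
    """Solve x**2 / (1 - x) = S(T)/n for the ionization fraction x,
    assuming n_e = n_i = x*n and n_0 = (1 - x)*n."""
    a = saha_rhs(T, E_ion_eV) / n_total
    # positive root of x**2 + a*x - a = 0
    return (-a + math.sqrt(a * a + 4.0 * a)) / 2.0

def n_ideal(T, p=101325.0):
    """Ideal-gas number density at pressure p and temperature T."""
    return p / (K_B * T)

x_2000 = ionization_fraction(2000.0, n_ideal(2000.0), 14.5)
x_10000 = ionization_fraction(10000.0, n_ideal(10000.0), 14.5)
```

Even this crude single-species treatment shows the steep temperature dependence that drives post-arc behaviour: the equilibrium electron density is utterly negligible at 2000 K but rises by many orders of magnitude in hotter regions.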

Keywords: air breakdown voltage, high-temperature insulation, compact circuit breakers, electrical discharge, Saha equation

Procedia PDF Downloads 65
1135 Principles for the Realistic Determination of the in-situ Concrete Compressive Strength under Consideration of Rearrangement Effects

Authors: Rabea Sefrin, Christian Glock, Juergen Schnell

Abstract:

The preservation of existing structures is of great economic interest because it contributes to higher sustainability and resource conservation. In existing buildings, in addition to repair and maintenance, modernization or reconstruction works often take place in the course of adjustments or changes in use. Since the structural framework and the associated load level are usually changed in the course of such structural measures, the stability of the structure must be verified in accordance with the currently valid regulations. The compressive strength of the existing structure's concrete, and the mechanical parameters derived from it, are of central importance for the recalculation and verification. However, the compressive strength of the existing concrete is usually set comparatively low and is thus underestimated. The reasons for this are the small sample sizes and the large scatter of the material properties of the drill cores used for the experimental determination of the design value of the compressive strength. Within a structural component, the load is usually transferred through the areas with higher stiffness and, consequently, higher compressive strength. Therefore, due to these rearrangement effects, strength variations within a component play only a subordinate role. This paper deals with the experimental and numerical determination of such rearrangement effects in order to calculate the concrete compressive strength of existing structures more realistically and economically. The influence of individual parameters, such as the specimen geometry (prism or cylinder) or the coefficient of variation of the concrete compressive strength, is analyzed in small-scale experimental tests. The coefficients of variation commonly encountered in practice are realized by dividing the test specimens into several layers consisting of different concretes, which are monolithically connected to each other.
From each combination, a sufficient number of test specimens is produced and tested to enable evaluation on a statistical basis. Based on the experimental tests, FE simulations are carried out to validate the test results. In a subsequent parameter study, a large number of combinations that had not yet been investigated experimentally is considered. Thus, the influence of individual parameters on the size and characteristics of the rearrangement effect is determined and described in more detail. Based on the parameter study and the experimental results, a calculation model for a more realistic determination of the in-situ concrete compressive strength is developed and presented. By considering rearrangement effects in concrete during recalculation, a higher number of existing structures can be maintained without structural measures. The preservation of existing structures is not only decisive from an economic, sustainability, and resource-saving point of view but also represents an added value for cultural and social aspects.
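The statistical evaluation of drill-core results mentioned above can be sketched as follows. The core strengths are hypothetical, and the simple 5%-fractile estimate f_ck = mean - 1.645*s is a large-sample normal approximation; assessment codes use stricter, sample-size-dependent factors.

```python
import math

def core_statistics(strengths):
    """Mean, sample standard deviation, and coefficient of variation
    of a set of drill-core compressive strengths."""
    n = len(strengths)
    mean = sum(strengths) / n
    var = sum((f - mean) ** 2 for f in strengths) / (n - 1)
    std = math.sqrt(var)
    return mean, std, std / mean

def characteristic_strength(strengths, k=1.645):
    """5%-fractile estimate f_ck = f_mean - k*s (large-sample normal
    assumption; illustrative only)."""
    mean, std, _ = core_statistics(strengths)
    return mean - k * std

# Illustrative drill-core results in N/mm^2 (hypothetical values).
cores = [28.1, 33.4, 25.7, 30.9, 36.2, 27.5, 31.8, 29.3]
mean, std, cov = core_statistics(cores)
f_ck = characteristic_strength(cores)
```

The sketch makes the paper's point visible: with few cores and a large scatter (high coefficient of variation), the characteristic value falls far below the mean, which is exactly the conservatism that accounting for rearrangement effects is meant to reduce.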

Keywords: existing structures, in-situ concrete compressive strength, rearrangement effects, recalculation

Procedia PDF Downloads 99
1134 Assessment of Pedestrian Comfort in a Portuguese City Using Computational Fluid Dynamics Modelling and Wind Tunnel

Authors: Bruno Vicente, Sandra Rafael, Vera Rodrigues, Sandra Sorte, Sara Silva, Ana Isabel Miranda, Carlos Borrego

Abstract:

Wind comfort for pedestrians is an important condition in urban areas. In Portugal, a country with 900 km of coastline, the wind is predominantly from the north-northwest, with an average speed of 2.3 m·s⁻¹ (at 2 m height). As a result, several city authorities have been requesting studies of pedestrian wind comfort for new urban areas and buildings, as well as measures to mitigate wind discomfort around existing structures. This work evaluates the efficiency of a set of measures to reduce the wind speed in an outdoor auditorium (open space) located in a coastal Portuguese urban area. These measures include the construction of barriers, placed upstream and downstream of the auditorium, and the planting of trees upstream of the auditorium. The auditorium is constructed in the form of a porch aligned with the north direction, which drives the wind flow into the auditorium, promoting channelling effects and increasing the wind speed, causing discomfort to the users of this structure. To perform the wind comfort assessment, two approaches were used: i) a set of experiments in a wind tunnel (physical approach), with a representative mock-up of the study area; and ii) application of the CFD (Computational Fluid Dynamics) model VADIS (numerical approach). Both approaches were used to simulate the baseline scenario and the scenarios incorporating the set of measures. The physical approach was conducted through a quantitative method, using a hot-wire anemometer, and through a qualitative analysis (visualizations), using laser technology and a fog machine. Both the numerical and physical approaches were performed for three different velocities (2, 4 and 6 m·s⁻¹) and two different directions (north-northwest and south), corresponding to the prevailing wind speeds and directions of the study area. The numerical results show an effective reduction (with a maximum value of 80%) of the wind speed inside the auditorium through the application of the proposed measures.
A wind speed reduction in the range of 20% to 40% was obtained around the audience area for winds from the north-northwest. For southerly winds, the wind speed in the audience zone was reduced by 60% to 80%. Despite this, for southerly winds, the design of the barriers generated additional hot spots (high wind speed), namely at the entrance to the auditorium; a change in the location of the entrance would minimize these effects. The results obtained in the wind tunnel compared well with the numerical data, also revealing the high efficiency of the proposed measures (for both wind directions).
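The reported percentages are simple ratios of mitigated to baseline speeds at each evaluation point. A small sketch of that bookkeeping; the point measurements below are hypothetical (only the 20-40% range for north-northwest winds comes from the abstract).

```python
def reduction_pct(u_before, u_after):
    """Wind-speed reduction in percent relative to the baseline scenario."""
    return 100.0 * (u_before - u_after) / u_before

# Hypothetical point speeds (m/s) in the audience area: baseline vs. the
# scenario with barriers and trees, for a 6 m/s north-northwest approach.
baseline = [4.8, 5.2, 4.1, 4.6]
mitigated = [3.4, 3.3, 2.9, 3.1]
reductions = [reduction_pct(b, a) for b, a in zip(baseline, mitigated)]
mean_reduction = sum(reductions) / len(reductions)
```

Comparing such point-wise reductions between the CFD fields and the hot-wire measurements is how the two approaches are cross-checked.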

Keywords: urban microclimate, pedestrian comfort, numerical modelling, wind tunnel experiments

Procedia PDF Downloads 212
1133 Response Analysis of a Steel Reinforced Concrete High-Rise Building during the 2011 Tohoku Earthquake

Authors: Naohiro Nakamura, Takuya Kinoshita, Hiroshi Fukuyama

Abstract:

The 2011 off the Pacific Coast of Tohoku Earthquake caused considerable damage to wide areas of eastern Japan, and a large number of earthquake observation records were obtained at various places. To design more earthquake-resistant buildings and improve earthquake disaster prevention, it is necessary to utilize these data to analyze and evaluate the behavior of a building during an earthquake. This paper presents an earthquake response simulation analysis (hereafter, a seismic response analysis) that was conducted using data recorded during the main earthquake (hereafter, the main shock) as well as the earthquakes before and after it. The data were obtained at a high-rise steel-reinforced concrete (SRC) building in the bay area of Tokyo. We first give an overview of the building, along with the characteristics of the earthquake motion and of the building during the main shock; the data indicate that there was a change in the natural period before and after the earthquake. Next, we present the results of our seismic response analysis. First, the analysis model and conditions are shown, and then the analysis results are compared with the observational records. Using the analysis results, we then study the effect of soil-structure interaction on the response of the building. By identifying the characteristics of the building during the earthquake (i.e., the first natural period and the first damping ratio) with the Auto-Regressive eXogenous (ARX) model, we compare the analysis results with the observational records so as to evaluate the accuracy of the response analysis. In this study, a lumped-mass system SR model was used to conduct the seismic response analysis, using the observational data as input waves. The main results of this study are as follows: 1) The observational records of the 3/11 main shock place it between a level 1 and a level 2 earthquake.
The result of the ground response analysis showed that the maximum shear strain in the ground was about 0.1% and that the possibility of liquefaction occurring was low. 2) During the 3/11 main shock, the observed wave showed that the eigenperiod of the building became longer; this behavior could be generally reproduced in the response analysis. This prolonged eigenperiod was due to the nonlinearity of the superstructure, and the effect of the nonlinearity of the ground seems to have been small. 3) As for the 4/11 aftershock, a continuous analysis in which the subject seismic wave was input after the 3/11 main shock was input was conducted. The analyzed values generally corresponded well with the observed values. This means that the effect of the nonlinearity of the main shock was retained by the building. It is important to consider this when conducting the response evaluation. 4) The first period and the damping ratio during a vibration were evaluated by an ARX model. Our results show that the response analysis model in this study is generally good at estimating a change in the response of the building during a vibration.
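The ARX-based identification of the first natural period and damping ratio described above can be sketched as a least-squares fit. The sketch below is illustrative only (the function name, the second-order model, and the signals are our assumptions, not the authors' implementation): it fits an ARX model to a response record y driven by an input record u and recovers the period and damping ratio from the discrete-time autoregressive poles.

```python
import numpy as np

def fit_arx2(y, u, dt):
    """Fit a 2nd-order ARX model y[k] = a1*y[k-1] + a2*y[k-2] + b*u[k-1]
    by least squares, then recover the natural period and damping ratio
    from the discrete-time AR poles via s = ln(z)/dt."""
    Y = y[2:]
    Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1]])
    a1, a2, b = np.linalg.lstsq(Phi, Y, rcond=None)[0]
    poles = np.roots([1.0, -a1, -a2])          # roots of z^2 - a1*z - a2
    s = np.log(poles.astype(complex)) / dt     # continuous-time poles
    wn = np.abs(s[0])                          # natural frequency [rad/s]
    zeta = -np.real(s[0]) / wn                 # damping ratio
    return 2 * np.pi / wn, zeta                # natural period [s], damping
```

Fitting such a model to successive time windows of the record is one way to track how the period and damping evolve before, during, and after the main shock.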

Keywords: ARX model, response analysis, SRC building, the 2011 off the Pacific Coast of Tohoku Earthquake

Procedia PDF Downloads 152
1132 Reclaiming the Sahara as a Bridge to Afro-Arab Solidarity

Authors: Radwa Saad

Abstract:

The Sahara is normatively treated as a barrier separating "two Africas": one to the north, with closer affinity to the Arab world, and one to the south, encompassing a diverse range of racial, ethnic, and religious groups and commonly referred to as "Sub-Saharan Africa". This dichotomy, however, was challenged by many anticolonial leaders and intellectuals seeking to advance counter-hegemonic narratives that treat the Sahara as a bridge facilitating a long history of exchange, collaboration, and fusion between civilizations on the continent. This paper reexamines the discourses governing the geographic distinction between North Africa and Sub-Saharan Africa. It argues that demarcating the African continent along the line of the Sahara is part and parcel of a Eurocentric spatial imaginary that has served to enshrine a racialized global hierarchy of power. By drawing on Edward Said's concept of "imagined geography" and Charles Mills's notion of "the racial contract", it demonstrates how spatial boundaries often coincide with racial epistemologies to reinforce certain geopolitical imaginaries while silencing others. It further draws on the works of two notable post-colonial figures, Gamal Abdel Nasser and Leopold Senghor, to explore alternative spatial imaginaries while highlighting some of the tensions embedded in advancing a trans-Saharan political project. First, it deconstructs some of the normative claims used to justify the distinction between North and "Sub-Saharan" Africa across political, literary, and disciplinary boundaries. Second, it draws parallels between Said's and Mills's work to demonstrate how geographical boundaries and demarcations have been constructed to create racialized subjects and reinforce a hierarchy of color that favors European standpoints and epistemologies. Third, it draws on Leopold Senghor's The Foundations of Africanité and Gamal Abdel Nasser's The Philosophy of the Egyptian Revolution to examine some of the competing strands of unity that emerged out of the Saharan discourse. In these texts, one can identify a number of convergences and divergences in how post-colonial African elites attempted to reclaim and rearticulate the function of the Sahara along different epistemic, political, and cultural premises. The paper concludes with reflections on some of the policy challenges that emerge from reinforcing the Saharan divide, particularly in the realm of peace and security.

Keywords: regional integration, politics of knowledge production, Arab-African relations, African solutions to African problems

Procedia PDF Downloads 73
1131 Creating a Renewable Energy Investment Portfolio in Turkey for 2018-2023: An Approach Using the Multi-Objective Linear Programming Method

Authors: Berker Bayazit, Gulgun Kayakutlu

Abstract:

The World Energy Outlook shows that energy markets will change substantially within the next few decades. First, the action plans adopted under COP21 and the goal of reducing CO₂ emissions are already shaping national policies. Second, rapid technological developments in the field of renewable energy will influence the medium- and long-term energy generation and consumption behavior of countries. Furthermore, the share of electricity in global energy consumption is expected to reach 40 percent by 2040. Electric vehicles, heat pumps, new electronic devices, and digital improvements will be the outstanding technologies, and these innovations will drive the market modifications. To meet the sharply increasing electricity demand caused by these technologies, countries have to make new investments in electricity production, transmission, and distribution. In particular, the electricity generation mix becomes vital both for preventing CO₂ emissions and for reducing power prices. The majority of research and development investments are made in the field of electricity generation; hence, diversity of primary sources and source planning for electricity generation are crucial for improving citizens' quality of life. Approaches that consider only CO₂ emissions and the total cost of generation are necessary but not sufficient to evaluate and construct the generation mix; employment and positive contributions to macroeconomic indicators are important factors that also have to be taken into consideration. This study aims to plan new investments in renewable energies (solar, wind, geothermal, biogas, and hydropower) between 2018 and 2023 under four different goals. A multi-objective programming model is therefore proposed to optimize the goals of minimizing CO₂ emissions, investment cost, and the electricity sales price while maximizing total employment and the positive contribution to the current account deficit. To avoid imposing user preferences among the goals, Dinkelbach's algorithm and Guzel's approach have been combined. The achievements are discussed in comparison with current policies. Our study shows that new policies such as huge capacity allotments may be debatable, although the obligation for local production is positive. Improvements in grid infrastructure and a redesign of support for biogas and geothermal can be recommended.
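For illustration, the core of such a multi-objective capacity-planning problem can be set up as a linear program in a few lines. The sketch below uses a simple weighted-sum scalarization rather than the Dinkelbach/Guzel combination employed in the study, and every coefficient (emission, cost, and job factors, the capacity caps, and the 10 GW demand target) is a hypothetical placeholder, not a value from the study:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical per-unit coefficients for [solar, wind, geothermal, biogas, hydro]
co2  = np.array([45, 11, 38, 18, 24])        # lifecycle tCO2/GWh (illustrative)
cost = np.array([1.0, 1.3, 2.8, 3.0, 1.5])   # investment, M$/MW (illustrative)
jobs = np.array([5.8, 3.1, 4.0, 7.0, 2.5])   # job-years/MW (to be maximized)
w = np.array([0.4, 0.4, 0.2])                # weights on CO2, cost, -jobs

c = w[0] * co2 + w[1] * cost - w[2] * jobs   # scalarized objective to minimize
A_ub = [[-1, -1, -1, -1, -1]]                # total new capacity >= 10 GW
b_ub = [-10000.0]                            # (expressed as -sum(x) <= -10000 MW)
bounds = [(0, 6000), (0, 6000), (0, 1500), (0, 1000), (0, 8000)]  # MW caps

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
```

Sweeping the weight vector `w` traces out different compromise portfolios; methods such as Dinkelbach's algorithm additionally handle ratio objectives without requiring the user to fix the weights in advance.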

Keywords: energy generation policies, multi-objective linear programming, portfolio planning, renewable energy

Procedia PDF Downloads 231
1130 Resonant Tunnelling Diode Output Characteristics Dependence on Structural Parameters: Simulations Based on Non-Equilibrium Green Functions

Authors: Saif Alomari

Abstract:

The paper aims to give physical and mathematical descriptions of how the structural parameters of a resonant tunnelling diode (RTD) affect its output characteristics: specifically, the peak voltage, the peak current, the peak-to-valley current ratio (PVCR), and the differences between the peak and valley voltages and currents, ΔV and ΔI. A simulation-based approach using the Non-Equilibrium Green Function (NEGF) formalism in the Silvaco ATLAS simulator is employed to conduct a series of designed experiments. These experiments show how the doping concentrations in the emitter and collector layers, their thicknesses, and the widths of the barriers and the quantum well influence the above-mentioned output characteristics. Each of these parameters was systematically varied while the others were held fixed in each set of experiments; factorial experiments are outside the scope of this work and will be investigated in the future. The physics involved in the operation of the device is thoroughly explained, and mathematical models based on curve fitting and the underlying physical principles are deduced. The models can be used to design devices with predictable output characteristics; such models were absent from the literature that the author scanned. Results show that the doping concentration in each region affects the peak voltage. Increasing the carrier concentration in the collector region shifts the peak to lower voltages, whereas increasing it in the emitter shifts the peak to higher voltages. In the collector's case, the shift is controlled either by the built-in potential resulting from the concentration gradient or by the conductivity enhancement in the collector; the shift to higher voltages is also related to the location of the Fermi level. The thicknesses of these layers likewise play a role in the location of the peak: increasing the thickness of each region shifts the peak to higher voltages up to a characteristic length, beyond which the peak becomes independent of the thickness. Finally, it is shown that the thickness of the barriers can be optimized for a particular well width to produce the highest PVCR or the highest ΔV and ΔI. The location of the peak voltage is important in optoelectronic applications of RTDs, where the operating point of the device is usually the peak-voltage point. Furthermore, the PVCR, ΔV, and ΔI are of great importance for building RTD-based oscillators, as they affect the frequency response and output power of the oscillator.

Keywords: peak to valley ratio, peak voltage shift, resonant tunneling diodes, structural parameters

Procedia PDF Downloads 128
1129 Nondestructive Inspection of Reagents inside a Highly Attenuating Cardboard Box Using an Injection-Seeded THz-Wave Parametric Generator

Authors: Shin Yoneda, Mikiya Kato, Kosuke Murate, Kodo Kawase

Abstract:

In recent years, there have been numerous attempts to smuggle narcotic drugs and chemicals by concealing them in international mail. Combatting this requires a non-destructive technique that can identify such illicit substances in mail. Terahertz (THz) waves can pass through a wide variety of materials, and many chemicals show specific frequency-dependent absorption, known as a spectral fingerprint, in the THz range. It is therefore reasonable to investigate non-destructive mail inspection techniques that use THz waves. In this work, we attempted to identify reagents under highly attenuating shielding materials using an injection-seeded THz-wave parametric generator (is-TPG). Our THz spectroscopic imaging system based on the is-TPG consisted of two nonlinear crystals, one for emission and one for detection of the THz waves. A microchip Nd:YAG laser and a continuous-wave tunable external-cavity diode laser were used as the pump and seed sources, respectively. The pump and seed beams were injected into the LiNbO₃ crystal under the noncollinear phase-matching condition in order to generate a high-power THz wave. The emitted THz wave was directed onto the sample, which was raster-scanned by an x-z stage while the frequency was tuned, yielding multispectral images. The transmitted THz wave was then focused onto the second crystal for detection and up-converted to a near-infrared detection beam through the nonlinear optical parametric effect, and the intensity of this detection beam was measured with an infrared pyroelectric detector. Initially, it was difficult to identify reagents inside a cardboard box because of high noise levels. In this work, we introduce improvements for noise reduction and image clarification: the intensity of the near-infrared detection beam is converted correctly to the intensity of the THz wave, and a Gaussian spatial filter is applied to obtain a clearer THz image. Through these improvements, we succeeded in identifying reagents hidden in a 42-mm-thick cardboard box filled with several obstacles, which together attenuate the signal by 56 dB at 1.3 THz. Using this system, THz spectroscopic imaging was possible for saccharides; it may also be applied to cases where illicit drugs are hidden in a box and multiple reagents are mixed together. Moreover, THz spectroscopic imaging through even thicker obstacles could be achieved by introducing a near-infrared detector with higher sensitivity.
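The Gaussian spatial filter mentioned for image clarification can be sketched as a separable convolution. This minimal, self-contained version (the function name, kernel truncation at 3σ, and edge padding are our assumptions, not details from the paper) smooths a 2-D THz intensity image:

```python
import numpy as np

def gaussian_filter2d(img, sigma):
    """Separable Gaussian smoothing of a 2-D image: build a normalized 1-D
    kernel truncated at 3*sigma, pad the image by edge replication, and
    convolve along rows and then columns."""
    r = int(np.ceil(3 * sigma))
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    pad = np.pad(img, r, mode="edge")
    tmp = np.apply_along_axis(lambda m: np.convolve(m, k, mode="valid"), 1, pad)
    out = np.apply_along_axis(lambda m: np.convolve(m, k, mode="valid"), 0, tmp)
    return out
```

Applied to each frequency slice of a multispectral THz image, such a filter suppresses pixel-level noise at the cost of some spatial resolution, with σ chosen relative to the beam spot size.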

Keywords: nondestructive inspection, principal component analysis, terahertz parametric source, THz spectroscopic imaging

Procedia PDF Downloads 163
1128 Simulated Technological, Energy, and GHG Comparison between a Conventional Diesel Bus and an E-bus: Feasibility of Promoting the E-bus Shift in Highland Cities

Authors: Riofrio Jonathan, Fernandez Guillermo

Abstract:

Renewable energy represented around 80% of the power generation mix in Ecuador during 2020, so current public policy is focused on taking advantage of this high share of renewable sources to carry out several electrification projects. These projects are part of the portfolio sent to the United Nations Framework Convention on Climate Change (UNFCCC) as a commitment to reduce greenhouse gas (GHG) emissions under the established nationally determined contribution (NDC). In this sense, the Ecuadorian Organic Energy Efficiency Law (LOEE), published in 2019, promotes e-mobility as one of the main milestones; in fact, it states that new vehicles for urban and interurban use must be E-buses from 2025. For this technological change to be implemented successfully at the national scale, it is important to conduct technical and geographical surveys to maintain the quality of service in both the electricity and transport sectors. This research therefore presents a technological and energy comparison between a conventional diesel bus and its equivalent E-bus. Both vehicles fulfill all the technical requirements to operate in the case-study city, Ambato, in the province of Tungurahua, Ecuador. In addition, the analysis includes the development of an energy estimation model for both technologies, applied specifically to a highland city such as Ambato: the altimetry of the most important bus routes in the city varies from 2557 to 3200 m.a.s.l. at the lowest and highest points, respectively. These operating conditions lend the paper its novelty. The technical specifications of the diesel buses are defined following the common features of buses registered in Ambato, while the specifications for the E-buses come from the most common units introduced in Latin America, since there is not yet enough evidence from similar cities. The results will be useful input for decision-makers, since the electricity demand forecast, energy savings, costs, and greenhouse gas emissions are computed. The GHG figures are particularly important because they feed the reporting required under the transparency framework of the Paris Agreement. Finally, the presented results correspond to stage I of the project "Analysis and Prospective of Electromobility in Ecuador and Energy Mix towards 2030", supported by the Deutsche Gesellschaft für Internationale Zusammenarbeit (GIZ).
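An energy estimation model of the kind described, accounting for route altimetry and the lower air density of a highland city, might be sketched with a simple longitudinal-dynamics energy balance. All parameter values below (vehicle mass, rolling resistance, drag area, efficiencies, regeneration factor) are illustrative assumptions, not those of the study:

```python
import numpy as np

def ebus_energy_kwh(dist_m, elev_m, v_ms, mass=18000.0, crr=0.008,
                    cda=6.0, rho=1.0, eta=0.85, regen=0.6, g=9.81):
    """Traction energy over a route profile: grade + rolling + aerodynamic
    resistance per segment, with motor efficiency on traction and partial
    recuperation on braking/downhill segments.
    dist_m, elev_m: cumulative distance and elevation samples; v_ms: speed.
    rho defaults to ~1.0 kg/m^3 for high-altitude air (vs ~1.225 at sea level)."""
    dd = np.diff(dist_m)
    grade = np.diff(elev_m) / dd          # small-angle approximation
    v = v_ms[:-1]
    f = mass * g * (grade + crr) + 0.5 * rho * cda * v**2  # force per segment [N]
    e_seg = f * dd                         # energy at the wheel [J]
    e = np.where(e_seg > 0, e_seg / eta, e_seg * regen)
    return e.sum() / 3.6e6                 # [kWh]
```

Running the same profile with a diesel drivetrain model (fuel energy, engine efficiency, emission factor) yields the comparative energy and GHG figures the abstract refers to.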

Keywords: high altitude cities, energy planning, NDC, e-buses, e-mobility

Procedia PDF Downloads 134
1127 Evaluation of Antarctic Bacteria as Potential Producers of Cellulolytic Enzymes of Industrial Interest

Authors: Claudio Lamilla, Andrés Santos, Vicente Llanquinao, Jocelyn Hermosilla, Leticia Barrientos

Abstract:

Industry in general is very interested in improving and optimizing processes in order to reduce the costs of obtaining raw materials and of production. An interesting and cost-effective alternative is the incorporation of bioactive metabolites into such processes, one example being enzymes, which efficiently catalyze a large number of reactions of industrial and biotechnological interest. In the search for new sources of these active metabolites, Antarctica is one of the least explored places on our planet, combining drastic cold, salinity, UVA-UVB radiation, and limited liquid water; these features have shaped all life in this very harsh environment, especially the bacteria that live in the different Antarctic ecosystems and have had to develop unique biochemical strategies to adapt to these conditions. In this work, the production of cellulolytic enzymes by seven bacterial strains isolated from marine sediments at different Antarctic sites was evaluated. Isolation of the strains was performed using serial dilutions in the culture medium at M115°C, and identification was performed using universal primers (27F and 1492R). Enzyme activity assays were performed on R2A medium with carboxymethyl cellulose (CMC) added as the substrate, and degradation of the substrate was revealed by adding Lugol's solution. The results show that four of the tested strains produce enzymes that degrade the CMC substrate. Molecular identification showed that these bacteria belong to the genera Streptomyces and Pseudoalteromonas, with the Streptomyces strain showing the highest activity. Only some bacteria in marine sediments have the ability to produce these enzymes, perhaps owing to their greater ability to degrade, at temperatures approaching zero degrees Celsius, the algae that are abundant in this environment and whose main structural component is cellulose. The discovery of new cold-adapted enzymes is of great industrial interest, especially for the paper, textile, detergent, biofuel, food, and agriculture industries. Cellulolytic enzymes represent 8% of worldwide industrial enzyme demand, and their demand is expected to increase in the coming years. In the paper and food industries they are required mainly in starch, protein, and juice extraction processes, as well as in the animal feed industry, where treating vegetables and grains helps improve the nutritional value of the feed. All of this clearly positions Antarctic microorganisms, and specifically their enzymes, as a potential contribution to industry and to novel biotechnological applications.

Keywords: antarctic, bacteria, biotechnological, cellulolytic enzymes

Procedia PDF Downloads 282
1126 Landslide Hazard Zonation Using Satellite Remote Sensing and GIS Technology

Authors: Ankit Tyagi, Reet Kamal Tiwari, Naveen James

Abstract:

Landslides are the major geo-environmental problem of the Himalaya, with its high ridges, steep slopes, deep valleys, and complex systems of streams. They are mainly triggered by rainfall and earthquakes and cause severe damage to life and property. In Uttarakhand, the Tehri reservoir rim area, situated in the Lesser Himalaya of the Garhwal hills, was selected for landslide hazard zonation (LHZ). The study utilized different types of data, including geological maps, topographic maps from the Survey of India, Landsat 8 imagery, and Cartosat DEM data. This paper presents the use of a weighted overlay method for LHZ using fourteen causative factors. The data layers generated and co-registered were slope, aspect, relative relief, soil cover, rainfall intensity, seismic ground shaking, seismic amplification at the surface, lithology, land use/land cover (LULC), normalized difference vegetation index (NDVI), topographic wetness index (TWI), stream power index (SPI), drainage buffer, and reservoir buffer. The seismic analysis uses peak horizontal acceleration (PHA) intensity and amplification factors in the evaluation of the landslide hazard index (LHI). Several digital image processing techniques, such as topographic correction, NDVI, and supervised classification, were widely used in the terrain factor extraction. Lithological features, LULC, drainage patterns, lineaments, and structural features were extracted using digital image processing; colour, tone, topography, and the stream drainage pattern in the imagery were used to analyse geological features. The slope, aspect, and relative relief maps were created from the Cartosat DEM data, which were also used for the detailed drainage analysis, including TWI, SPI, and the drainage and reservoir buffers. In the weighted overlay method, the comparative importance of the causative factors is obtained from experience. After multiplying each influence factor by the corresponding rating of a particular class, the result is reclassified, and the LHZ map is prepared. Further, based on the land-use map developed from the remote sensing images, a landslide vulnerability study for the study area is carried out and presented in this paper.
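The weighted overlay computation itself reduces to a weighted sum of rated raster layers followed by a reclassification of the resulting landslide hazard index into zones. A minimal sketch with three hypothetical layers on a tiny grid (the weights, class ratings, and zone breaks below are illustrative, not the study's values):

```python
import numpy as np

# Hypothetical class ratings (1-5) for three of the fourteen causative
# layers, already reclassified onto a common 2x2 grid
slope = np.array([[1, 3], [4, 5]])
rain  = np.array([[2, 2], [3, 4]])
lith  = np.array([[1, 4], [2, 5]])
weights = {"slope": 0.5, "rain": 0.3, "lith": 0.2}   # must sum to 1

# Landslide hazard index = weighted sum of the rated layers
lhi = (weights["slope"] * slope
       + weights["rain"] * rain
       + weights["lith"] * lith)

# Reclassify the index into zones: 0 = low (<2), 1 = moderate (2-3.5), 2 = high (>=3.5)
zones = np.digitize(lhi, bins=[2.0, 3.5])
```

In a GIS workflow the same arithmetic runs over full co-registered rasters, with the weights assigned from expert judgment as the abstract describes.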

Keywords: weighted overlay method, GIS, landslide hazard zonation, remote sensing

Procedia PDF Downloads 114
1125 Governance in the Age of Artificial Intelligence and E-Government

Authors: Mernoosh Abouzari, Shahrokh Sahraei

Abstract:

Electronic government is a way for governments to use new technology to provide people with convenient access to government information and services, to improve the quality of those services, and to offer broad opportunities to participate in democratic processes and institutions. It makes government services available around the clock, which increases people's satisfaction and their participation in political and economic activities. The expansion of e-government services, and their movement towards intelligent systems, has the potential to re-shape the relationship between the government and citizens and among the elements and components of government itself. Electronic government is the product of information and communication technology (ICT); implementing it at the government level brings tremendous changes in the efficiency and effectiveness of government systems and in the way services are provided, which in turn leads to widespread public satisfaction. E-government services have now become tangible at scale through artificial intelligence systems, whose recent advances represent a revolution in the use of machines to support predictive decision-making and data classification. With deep learning tools, artificial intelligence can mean a significant improvement in the delivery of services to citizens, can uplift the work of public service professionals, and can inspire a new generation of technocrats to enter government. This smart revolution may set aside some functions of government and change its components; concepts such as governance, policymaking, and democracy will be transformed by artificial intelligence technology, and the top-down position in governance may face serious changes. If governments delay in adopting artificial intelligence, the balance of power will shift: private companies pioneering in this field will monopolize it, the world order will come to depend on rich multinational companies, and algorithmic systems will, in effect, become the ruling systems of the world. It can be said that the current revolution in information technology and biotechnology has been started by engineers, large companies, and scientists who are rarely aware of the political implications of their decisions and who certainly do not represent anyone. Therefore, if liberalism, nationalism, or any other ideology wants to organize the world of 2050, it should not only make sense of artificial intelligence and complex data algorithms but also weave them into a new and meaningful narrative. The changes brought by artificial intelligence to the political and economic order will thus lead to a major change in how all countries deal with digital globalization. In this paper, while examining the role and performance of e-government, we discuss the efficiency and application of artificial intelligence in e-government and consider the resulting developments in the new world and in the concept of governance.

Keywords: electronic government, artificial intelligence, information and communication technology, system

Procedia PDF Downloads 80