Search results for: structure of the art field in Istanbul
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 14963


10823 Nonlinear Optics of Dirac Fermion Systems

Authors: Vipin Kumar, Girish S. Setlur

Abstract:

Graphene has been recognized as a promising 2D material with many new properties. However, pristine graphene is a zero-gap, linearly dispersing semiconductor, and the absence of a band gap hinders its direct application in graphene-based semiconducting devices. Massless charge carriers (quasi-particles) in graphene obey the relativistic Dirac equation, and these Dirac fermions show very unusual electronic, optical, and transport properties. Graphene is analogous to two-level atomic systems and to conventional semiconductors, so graphene-based systems may be expected to exhibit phenomena that are well known in both. Rabi oscillation is one such nonlinear optical phenomenon, well known in the context of two-level atomic systems and of conventional semiconductors: the periodic exchange of energy between the system of interest and the electromagnetic field. The present work describes the phenomenon of Rabi oscillations in graphene-based systems. Rabi oscillations have already been described theoretically and experimentally in the extensive literature on this topic; these studies rely on the rotating wave approximation (RWA), well known from studies of two-level systems. The RWA is valid only near conventional resonance (small detuning), i.e., when the frequency of the external field is nearly equal to the particle-hole excitation frequency. As a function of detuning, the Rabi frequency goes through a minimum close to conventional resonance. Far from conventional resonance, the RWA becomes much less useful, and some other technique is needed to describe the phenomenon of Rabi oscillation. In conventional systems there is no second minimum; the only minimum is at conventional resonance.
In graphene, however, we find anomalous Rabi oscillations far from conventional resonance, where the Rabi frequency goes through a minimum that is much smaller than the conventional Rabi frequency. This is known as the anomalous Rabi frequency and is unique to graphene systems. We have shown that it is attributable to the pseudo-spin degree of freedom in graphene systems. A new technique, an alternative to the RWA called the asymptotic RWA (ARWA), has been invoked by our group to discuss the phenomenon of Rabi oscillation. The experimentally accessible current density shows different types of threshold behaviour in the frequency domain close to the anomalous Rabi frequency, depending on the system chosen. For single-layer graphene, the exponent at threshold is equal to 1/2, while for bilayer graphene it is computed to be equal to 1. Bilayer graphene shows harmonic (anomalous) resonances absent in single-layer graphene. The effect of asymmetry and of trigonal warping (a weak direct inter-layer hopping in bilayer graphene) on these oscillations is also studied. Asymmetry has a remarkable effect only on the anomalous Rabi oscillations, whereas the Rabi frequency near conventional resonance is not significantly affected by the asymmetry parameter. In the presence of asymmetry, these graphene systems show Rabi-like oscillations (offset oscillations) even for vanishingly small applied field strengths (less than the gap parameter). The frequency of the offset oscillations may be identified with the asymmetry parameter.
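For orientation, the standard two-level RWA result referred to above can be written as (a textbook relation, not the graphene-specific calculation of this work):

```latex
\Omega_{\mathrm{RWA}} = \sqrt{\Delta^{2} + \omega_{R}^{2}}, \qquad \Delta = \omega - \omega_{0},
```

where ω_R is the bare Rabi frequency (proportional to the applied field amplitude), ω the drive frequency, and ω₀ the particle-hole excitation frequency. The minimum Ω_RWA = ω_R at Δ = 0 is the conventional-resonance minimum described above, while the second, much smaller minimum found far from resonance in graphene lies outside the validity of this formula.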

Keywords: graphene, bilayer graphene, Rabi oscillations, Dirac fermion systems

Procedia PDF Downloads 284
10822 The High Quality Colored Wind Chimes by Anodization on Aluminum Alloy

Authors: Chia-Chih Wei, Yun-Qi Li, Ssu-Ying Chen, Hsuan-Jung Chen, Hsi-Wen Yang, Chih-Yuan Chen, Chien-Chon Chen

Abstract:

In this paper, a high-quality anodization technique was used to make colored wind chimes with a nanotube-structured anodic film. The method controls the length-to-diameter ratio of an aluminum rod and the structure of the oxide film grown on its surface. In the experiments, hard anodization was used to grow an anodic film of controllable thickness on the aluminum alloy surface. Hard-anodized films offer high hardness, good electrical insulation, high-temperature resistance, good corrosion resistance, coloring options, and suitability for mass production, and can be applied in transportation, electronic products, biomedical fields, and the energy industry. The study also examines in detail the stages of hard anodizing of aluminum alloy surfaces, including pre-anodization, anodization, and post-anodization treatments. The anodization parameters included a mixed sulfuric acid and oxalic acid electrolyte, with the temperature, time, current density, and final voltage controlled to obtain the anodic film. The resulting anodic films were characterized for thickness, hardness, insulation, corrosion behavior, and microstructure, and the hard-anodization efficiency was calculated. In this way, different sound transmission speeds in the aluminum rod, and hence different tones, can be obtained. A further feature of this work is the use of anodic dyeing, laser-engraved patterning, and electrophoresis to produce colored aluminum wind chimes.
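As a rough illustration of how current density and time set the film thickness in such experiments, a Faraday's-law estimate can be sketched as follows (the density, valence, and current efficiency below are generic textbook values for anodic alumina, not parameters measured in this study):

```python
# Rough Faraday's-law estimate of anodic oxide film thickness.
# All constants are generic, illustrative values for anodic alumina.
F = 96485.0        # Faraday constant, C/mol
M_AL2O3 = 101.96   # molar mass of Al2O3, g/mol
Z = 6              # electrons transferred per formula unit of Al2O3
RHO = 3.2          # assumed density of anodic alumina, g/cm^3

def film_thickness_um(current_density_a_cm2, time_s, efficiency=0.6):
    """Estimated oxide thickness in micrometres for one anodization run,
    assuming a constant current density and a fixed current efficiency."""
    thickness_cm = (current_density_a_cm2 * time_s * M_AL2O3 * efficiency) / (Z * F * RHO)
    return thickness_cm * 1e4  # cm -> um

print(film_thickness_um(0.01, 3600))  # ~12 um for 10 mA/cm^2 over 1 h
```

The sketch shows why controlling current density and time (as in the experiments above) directly controls film thickness; temperature and electrolyte composition enter through the efficiency term, which must be fitted to the actual process.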

Keywords: anodization, colored, high quality, wind chime, nano-tube

Procedia PDF Downloads 228
10821 Plant Growth, Symbiotic Performance and Grain Yield of 63 Common Bean Genotypes Grown Under Field Conditions at Malkerns Eswatini

Authors: Rotondwa P. Gunununu, Mustapha Mohammed, Felix D. Dakora

Abstract:

Common bean is the most important high-protein grain legume grown in Southern Africa for human consumption and income generation. Although common bean can associate with rhizobia to fix N₂ for bacterial use and plant growth, it is reported to be a poor nitrogen fixer when compared with other legumes. N₂ fixation can vary with legume species, genotype, and rhizobial strain; therefore, screening legume germplasm can reveal rhizobia/genotype combinations with high N₂-fixing efficiency for use by farmers. This study assessed symbiotic performance and N₂ fixation in 63 common bean genotypes under field conditions at Malkerns Station in Eswatini, using the ¹⁵N natural abundance technique. The shoots of the common bean genotypes were sampled at the pod-filling stage, oven-dried (65 °C for 72 h), weighed, ground into a fine powder (0.50 mm sieve), and subjected to ¹⁵N/¹⁴N isotopic analysis using mass spectrometry. At maturity, plants from the inner rows were harvested for the determination of grain yield. The results revealed significantly higher nodulation (p ≤ 0.05) in genotypes MCA 98 and CIM-RM01-97-8 relative to the other genotypes. Shoot N concentration was highest in genotype MCA 98, followed by KAB 10 F2.8-84, with most genotypes showing shoot N concentrations below 2%. The percentage of N derived from atmospheric N₂ fixation (%Ndfa) differed markedly among genotypes, with CIM-RM01-92-3 and DAB 174 recording the highest values of 66.65% and 66.22%, respectively. There were also significant differences in grain yield, with CIM-RM02-79-1 producing the highest yield (3618.75 kg/ha). These results represent an important contribution to the profiling of the symbiotic functioning of common bean germplasm for improved N₂ fixation.
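The ¹⁵N natural abundance technique estimates %Ndfa from the δ¹⁵N of the legume shoot, the δ¹⁵N of a non-fixing reference plant, and the legume's B value. A minimal sketch of the standard calculation (the δ¹⁵N values below are made-up illustrative numbers, not data from this study):

```python
def percent_ndfa(delta15n_ref, delta15n_legume, b_value):
    """%N derived from atmospheric N2 fixation (15N natural abundance).

    delta15n_ref:    delta-15N of a non-fixing reference plant (per mil)
    delta15n_legume: delta-15N of the legume shoot (per mil)
    b_value:         delta-15N of the legume when fully dependent on fixation
    """
    return 100.0 * (delta15n_ref - delta15n_legume) / (delta15n_ref - b_value)

# Illustrative values only (not measurements from the study):
print(percent_ndfa(delta15n_ref=4.0, delta15n_legume=1.5, b_value=-1.5))
```

The closer the legume's δ¹⁵N falls to the B value rather than to the reference plant's δ¹⁵N, the larger the computed %Ndfa.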

Keywords: nitrogen fixation, %Ndfa, ¹⁵N natural abundance, grain yield

Procedia PDF Downloads 205
10820 Analyzing the Performance of Different Cost-Based Methods for the Corrective Maintenance of a System in Thermal Power Plants

Authors: Demet Ozgur-Unluakin, Busenur Turkali, S. Caglar Aksezer

Abstract:

Since the age of industrialization, maintenance has been a crucial element for all kinds of factories and plants. With today's rapidly developing technology, the system structure of such facilities has become more complicated, and even a small operational disruption may cause huge profit losses for the companies. Effective maintenance planning is crucial to reducing these costs, but it is at the same time a difficult task because of the complexity of the systems. The most important aspect of correct maintenance planning is to understand the structure of the system, not to ignore the dependencies among the components, and, as a result, to model the system correctly. In this way, it becomes clearer which component, when maintained, improves the system the most. Undoubtedly, proactive maintenance at a scheduled time reduces costs, since scheduled maintenance prevents large profit losses. But the necessity of corrective maintenance, which directly affects the state of the system and provides immediate intervention when the system fails, should not be ignored: when a fault occurs, if the problem is not solved immediately and the next proactive maintenance time is awaited instead, costs may increase. This study proposes various maintenance methods with different efficiency measures under a corrective maintenance strategy on a subsystem of a thermal power plant. To model the dependencies between the components, a dynamic Bayesian network approach is employed. The proposed maintenance methods aim to minimize the total maintenance cost over a planning horizon, as well as to find the most appropriate component to act on, i.e., the one that improves the system reliability the most. The performances of the methods are compared under the corrective maintenance strategy, and a sensitivity analysis is also applied under different cost values. Results show that all fault-effect methods perform better than the replacement-effect methods, and this conclusion also holds under different downtime cost values.
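The cost argument for immediate corrective intervention can be illustrated with a minimal Monte Carlo sketch: a single component with hypothetical failure and cost parameters (this is not the dynamic-Bayesian-network model of the paper, just a toy comparison of repairing at once versus waiting for the next scheduled slot):

```python
import random

random.seed(42)

# Hypothetical parameters, chosen only for illustration:
FAIL_PROB = 0.05      # per-period failure probability of the component
REPAIR_COST = 100.0   # cost of one corrective repair
DOWNTIME_COST = 40.0  # profit lost per period while the system is down
HORIZON = 100         # planning horizon in periods
SLOT = 10             # scheduled maintenance every SLOT periods

def simulate(wait_for_slot, n_runs=10_000):
    """Average total cost over the horizon for one repair policy."""
    total = 0.0
    for _ in range(n_runs):
        cost, down = 0.0, False
        for t in range(1, HORIZON + 1):
            if down:
                cost += DOWNTIME_COST            # lose profit while down
                if not wait_for_slot or t % SLOT == 0:
                    cost += REPAIR_COST          # repair now or at the slot
                    down = False
            elif random.random() < FAIL_PROB:
                down = True                      # fault occurs this period
        total += cost
    return total / n_runs

immediate = simulate(wait_for_slot=False)
deferred = simulate(wait_for_slot=True)
print(f"immediate repair: {immediate:.1f}, deferred to slot: {deferred:.1f}")
```

Under these assumptions the deferred policy accumulates downtime cost while waiting for the slot, reproducing the abstract's point that awaiting the proactive maintenance time after a fault increases costs.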

Keywords: dynamic Bayesian networks, maintenance, multi-component systems, reliability

Procedia PDF Downloads 110
10819 A Review of Soil Stabilization Techniques

Authors: Amin Chegenizadeh, Mahdi Keramatikerman

Abstract:

Soil stabilization is a crucial issue that helps to remove the risks associated with soil failure. As soil has applications in different industries such as construction, pavement, and railways, the means of stabilizing soil are varied. This paper focuses on techniques for stabilizing soils. It does so by gathering useful information on the state of the art in the field of soil stabilization, investigating both traditional and advanced methods. To survey the current knowledge, the existing literature is divided into categories addressing the different techniques.

Keywords: review, soil, stabilization, techniques

Procedia PDF Downloads 526
10818 Self-Sensing Concrete Nanocomposites for Smart Structures

Authors: A. D'Alessandro, F. Ubertini, A. L. Materazzi

Abstract:

In the field of civil engineering, Structural Health Monitoring is a topic of growing interest. Effective monitoring instruments permit the control of the working conditions of structures and infrastructures through the identification of behavioral anomalies due to incipient damage, especially in areas of high environmental hazard such as earthquakes. While traditional sensors can be applied only at a limited number of points, providing partial information for a structural diagnosis, novel transducers may allow diffuse sensing. Thanks to the new tools and materials provided by nanotechnology, new types of multifunctional sensors are appearing on the scientific scene. In particular, cement-matrix composite materials capable of diagnosing their own state of strain and stress can be obtained by the addition of specific conductive nanofillers. Because of the nature of the material they are made of, these new cementitious nano-modified transducers can be inserted within concrete elements, transforming the structures themselves into sets of widespread sensors. This paper presents the results of research on a new self-sensing nanocomposite and on the implementation of smart sensors for Structural Health Monitoring. The developed nanocomposite was obtained by inserting multi-walled carbon nanotubes within a cementitious matrix. The insertion of such conductive carbon nanofillers provides the base material with piezoresistive characteristics and a peculiar sensitivity to mechanical modifications. The self-sensing ability is achieved by correlating the variation of the external stress or strain with the variation of some electrical properties, such as the electrical resistance or conductivity. Through the measurement of such electrical characteristics, the performance and the working conditions of an element or a structure can be monitored.
Among conductive carbon nanofillers, carbon nanotubes appear particularly promising for the realization of self-sensing cement-matrix materials. Some issues related to nanofiller dispersion and to the influence of the amount of nano-inclusions in the cement matrix need to be carefully investigated, since the strain sensitivity of the resulting sensors is influenced by such factors. This work analyzes the dispersion of the carbon nanofillers, the physical properties of the fresh mixture, the electrical properties of the hardened composites, and the sensing properties of the realized sensors. The experimental campaign focuses specifically on their dynamic characterization and their applicability to the monitoring of full-scale elements. The results of the electromechanical tests with both slowly varying and dynamic loads show that the developed nanocomposite sensors can be effectively used for the health monitoring of structures.
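The strain-sensing principle described above, correlating resistance change with strain, reduces in its simplest linear form to the gauge-factor relation ΔR/R₀ = GF·ε. A minimal sketch (the gauge factor below is an assumed, illustrative value, not one measured in this work):

```python
def strain_from_resistance(r0, r, gauge_factor):
    """Infer axial strain from the fractional resistance change using the
    linear piezoresistive relation dR/R0 = GF * strain."""
    return (r - r0) / (r0 * gauge_factor)

# Illustrative values: gauge factors reported for CNT-cement sensors in the
# literature are often well above the ~2 typical of metallic strain gauges.
eps = strain_from_resistance(r0=1000.0, r=1004.0, gauge_factor=100.0)
print(f"inferred strain: {eps:.1e}")  # 0.4% resistance change at GF=100
```

A higher gauge factor means a smaller strain produces the same measurable resistance change, which is why the nanofiller dispersion and dosage affecting GF matter so much for sensor performance.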

Keywords: carbon nanotubes, self-sensing nanocomposites, smart cement-matrix sensors, structural health monitoring

Procedia PDF Downloads 216
10817 Carbonyl Iron Particles Modified with Pyrrole-Based Polymer and Electric and Magnetic Performance of Their Composites

Authors: Miroslav Mrlik, Marketa Ilcikova, Martin Cvek, Josef Osicka, Michal Sedlacik, Vladimir Pavlinek, Jaroslav Mosnacek

Abstract:

Magnetorheological elastomers (MREs) are a unique type of material consisting of two components, a magnetic filler and an elastomeric matrix. Their properties can be tailored upon application of an external magnetic field. The change of the viscoelastic properties (viscoelastic moduli, complex viscosity) is influenced by two crucial factors. The first is the magnetic performance of the particles, and the second is the off-state stiffness of the elastomeric matrix. The former depends strongly on the intended application; however, the general rule is that higher magnetic performance of the particles provides higher MR performance of the MRE. Since magnetic particles have poor stability against elevated temperature and acidic environments, several methods to overcome these drawbacks have been developed. In most cases, the preparation of core-shell structures has been employed as a suitable method for protecting the magnetic particles against thermal and chemical oxidation. However, if the shell material is not a single-layer substance but a polymer, the magnetic performance is significantly suppressed when the in situ polymerization technique is used, because it is very difficult to control the polymerization rate and the polymer shell becomes too thick. The second factor is the off-state stiffness of the elastomeric matrix. Since the MR effect is calculated as the elastic modulus upon application of the magnetic field relative to the elastic modulus in the absence of the external field, tuneability of the cross-linking reaction is also highly desired. Therefore, this study focuses on the controllable modification of magnetic particles using a novel monomeric system based on 2-(1H-pyrrol-1-yl)ethyl methacrylate. In this approach, short polymer chains of different chain lengths and low polydispersity index are prepared, and thus tailorable stability properties can be achieved.
Since relatively thin polymer chains are grafted onto the surface of the magnetic particles, their magnetic performance is affected only slightly. Furthermore, the cross-linking density is also affected, due to the presence of the short polymer chains. From the application point of view, such MREs can be utilized for magneto-resistors, piezoresistors, or pressure sensors, especially when a conducting shell is created on the magnetic particles. The selection of the pyrrole-based monomer is therefore crucial, as it allows a controllably thin layer of conducting polymer to be prepared. Finally, such composite particles, consisting of a magnetic core and a conducting shell and dispersed in an elastomeric matrix, can also find use in applications for shielding electromagnetic waves.
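The abstract defines the MR effect as the field-on modulus relative to the off-state modulus. A minimal numerical sketch (illustrative moduli, not measured values) of why a softer, less cross-linked matrix yields a larger relative effect for the same absolute field-induced stiffening:

```python
def mr_effect_percent(g_off, g_on):
    """Relative magnetorheological effect: field-induced change of the
    elastic modulus normalized by the off-state (zero-field) modulus."""
    return 100.0 * (g_on - g_off) / g_off

# Illustrative moduli (kPa): the same +30 kPa field-induced increase gives a
# much larger relative MR effect in a soft matrix than in a stiff one.
print(mr_effect_percent(g_off=50.0, g_on=80.0))    # soft matrix
print(mr_effect_percent(g_off=200.0, g_on=230.0))  # stiff matrix
```

This is the motivation stated above for tuning the cross-linking density via the grafted short chains: lowering the off-state stiffness directly amplifies the relative MR effect.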

Keywords: atom transfer radical polymerization, core-shell, particle modification, electromagnetic waves shielding

Procedia PDF Downloads 198
10816 Drought Detection and Water Stress Impact on Vegetation Cover Sustainability Using Radar Data

Authors: E. Farg, M. M. El-Sharkawy, M. S. Mostafa, S. M. Arafat

Abstract:

Mapping water stress provides important baseline data for sustainable agriculture. Recent developments in the Sentinel-1 mission allow the acquisition of high-resolution images with varied polarization capabilities. This study was conducted to detect and quantify vegetation water content from canopy backscatter, extracting spatial information to support drought mapping activities throughout the newly reclaimed sandy soils of the western Nile Delta, Egypt. The performance of radar imagery in agriculture depends strongly on the sensor's polarization capability. The dual-polarization mode of Sentinel-1 improves the ability to detect water stress, and the backscatter from structural components improves the identification and separation of vegetation types with various canopy structures from other features. The fieldwork data allowed the identification of water-stress zones based on land cover structure; those classes were used to produce a consistent water-stress map. The analysis techniques and results show the high capability of active sensor data in water-stress mapping and monitoring, especially when integrated with multi-spectral medium-resolution images. Areas cropped under subsoil drip irrigation also showed lower drought and water stress than those under center-pivot sprinkler irrigation, which is attributed to the high evaporation from the soil surface during the initial growth stages. Results show a strong relationship between vegetation indices, such as the Normalized Difference Vegetation Index (NDVI), and the observed radar backscatter. In addition, observational evidence showed that the radar backscatter is highly sensitive to vegetation water stress and has substantial potential for monitoring and detecting drought in vegetative cover.
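For reference, the NDVI mentioned above is computed from red and near-infrared surface reflectance. A minimal sketch (the reflectance values below are illustrative, not measurements from this study):

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

# Healthy, well-watered vegetation reflects strongly in the NIR band, so NDVI
# is high; a water-stressed canopy shows a lower value (illustrative numbers).
print(ndvi(nir=0.45, red=0.08))  # dense, unstressed canopy
print(ndvi(nir=0.25, red=0.15))  # stressed canopy
```

In practice these bands come from the multi-spectral imagery that the study integrates with the Sentinel-1 backscatter, which is what makes the NDVI-backscatter correlation observable.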

Keywords: canopy backscatter, drought, polarization, NDVI

Procedia PDF Downloads 133
10815 A Review on Agricultural Landscapes as a Habitat of Rodents

Authors: Nadeem Munawar, Tariq Mahmood, Paula Rivadeneira, Ali Akhter

Abstract:

In this paper, we review rodent species that are common inhabitants of agricultural landscapes, where they are an important prey source for a wide variety of avian, reptilian, and mammalian predators. Agricultural fields are surrounded by fallow land, which provides suitable sites for shelter and breeding for rodents, while shrubs, grasses, annual weeds, and forbs may provide supplementary food. The assemblage of rodent fauna in cropland habitats, including cropped fields, meadows, and adjacent field structures such as hedgerows, woodland, and field margins, fluctuates seasonally. Mature agricultural crops provide a good source of food and shelter for rodents, and these factors, along with favorable climatic conditions and seasons, facilitate the breeding activities of these rodent species. Changes in vegetation height and vegetative cover affect two important aspects of a rodent's life: food and shelter. In addition, during the non-crop period, vegetation can be important for building nests above or below ground, and it provides thermal protection for rodents from heat and cold. The review revealed that rodents form a very diverse group of mammals, ranging from tiny pygmy mice to big capybaras, from arboreal flying squirrels to subterranean mole rats, and from opportunistic omnivores (e.g., Norway rats) to specialist feeders (e.g., the North African fat sand rats, which feed on a single family of plants only). It is therefore no surprise that some species thrive well under the conditions found in agricultural fields. The review of the population dynamics of these rodent species indicates that they become agricultural pests, probably owing to the heterogeneous landscape and to the rapid rotation of vegetable crop cultivation. They also cause damage to various crops, directly and indirectly, through gnawing, spoilage, contamination, and hoarding activities; besides this behavior, they also have significant importance in agricultural habitats.
The burrowing activities of rodents alter the soil properties around their burrows, improving aeration and infiltration, increasing the water-holding capacity, and thus encouraging plant growth. These properties are beneficial for the soil because they affect the absorption of phosphorus, zinc, copper, and other nutrients, as well as the uptake of water; thus, rodents are known as indicator species in agricultural fields. Our review suggests that wide crop-field borders, particularly those contiguous to various cropland fields, should be understood as priority sites for nesting, feeding, and cover for rodent fauna. The goal of this review paper is to provide a comprehensive synthesis of the understanding of rodent habitat and biodiversity in agricultural landscapes.

Keywords: agricultural landscapes, food, indicator species, shelter

Procedia PDF Downloads 153
10814 Toxicity of Bisphenol-A: Effects on Health and Regulations

Authors: Tuğba Özdal, Neşe Şahin Yeşilçubuk

Abstract:

Bisphenol-A (BPA) is one of the highest-volume chemicals produced worldwide in the plastic industry. This compound is mostly used in producing the polycarbonate plastics that are often used for food and beverage storage, and BPA is also a component of the epoxy resins that are used to line food and beverage containers. Studies performed in this area indicate that BPA can leach from such products while they are in contact with food; therefore, human exposure to BPA is presumed. In this paper, the chemical structure of BPA, the factors affecting BPA migration into food and beverages, its effects on health, and recent regulations are reviewed.

Keywords: BPA, health, regulations, toxicity

Procedia PDF Downloads 321
10813 Eco-Literacy and Pedagogical Praxis in the Multidisciplinary University Greenhouse toward the Food Security Strengthening

Authors: Citlali Aguilera Lira, David Lynch Steinicke, Andrea León García

Abstract:

One of the challenges that higher education faces is finding how to approach sustainability in a way that includes students from all the different academic areas, and how to move sustainable development from the abstract to the operational field. This research draws on ecoliteracy and pedagogical praxis as tools for rebuilding teaching processes inside universities. The purpose is to determine and describe the factors involved in the learning process, particularly in the Greenhouse-School Siembra UV. In the Greenhouse-School Siembra UV of the University of Veracruz, vegetables, medicinal plants, and small cornfields are cultivated using eco-technologies such as hydroponics, wicking beds, and Hugelkultur, whose main purpose is to save space, labor, and natural resources, and which serve as agricultural production alternatives in urban and periurban zones. The sample comprised students from different academic areas who are actively involved in the greenhouse, as well as institutes of the University of Veracruz and governmental and non-governmental departments. The project takes a pedagogical praxis approach, addressing the needs of the university students' different professional profiles with the purpose of generating a pragmatic dialogue with sustainability. It also arises from the need to understand the factors that intervene in the students' praxis; in this way, the students are the fundamental unit in the sphere of sustainability. As a result, it is observed that the University of Veracruz students involved in the Greenhouse-School Siembra UV have enriched, to different degrees, their sense of urban and periurban agriculture because of their diverse academic approaches and the interaction between them. It is concluded that the eco-technologies act as fundamental tools for ecoliteracy in society, strengthening nutritional and food security from a sustainable development approach.

Keywords: farming eco-technologies, food security, multidisciplinary, pedagogical praxis

Procedia PDF Downloads 308
10812 Monocoque Systems: The Reuniting of Divergent Agencies for Wood Construction

Authors: Bruce Wrightsman

Abstract:

Construction and design are inextricably linked. Traditional building methodologies, including those using wood, comprise a series of material layers differentiated and separated from each other. This results in the separation of two agencies: the building envelope (skin) on one hand and the structure on the other. From a material-performance standpoint, however, this reliance on additional materials is not an efficient strategy for the building. The merits of traditional platform framing are well known, yet its enormous effectiveness within wood-framed construction has seldom led to serious questioning of what it means to build, and its downsides are less widely discussed. The first, and perhaps biggest, downside is waste. The second is that its reliance on wood assemblies forming walls, floors, and roofs, conventionally nailed together through simple plate surfaces, is structurally inefficient: it requires additional material in the form of plates, blocking, nailers, etc., for stability, which only adds to the material waste. In contrast, the history of wood construction in the airplane- and boat-manufacturing industries shows a significant transformation in the relationship of structure to skin. Boat construction evolved from the indigenous wood practices of birch bark canoes, to copper sheathing over wood to improve performance in the late 18th century, to the merged assemblies that drive the industry today. In 1911, the Swiss engineer Emile Ruchonnet designed the first wood monocoque structure for an airplane, called the Cigare. The wing and tail assemblies consisted of a thin, lightweight, often fabric skin stretched tightly over a wood frame. This stressed skin has since evolved into semi-monocoque construction, in which the skin merges with structural fins that take additional forces, providing even greater strength with less material.
The monocoque, which translates to 'single shell,' is a structural system that supports loads and transfers them through an external enclosure system. Monocoque systems have largely existed outside the domain of architecture; however, this uniting of divergent systems has been demonstrated to be lighter, using less material than traditional wood building practices. This paper examines the role monocoque systems have played in the history of wood construction through the lineage of the boat- and airplane-building industries, and their design potential for wood building systems in architecture, through a case-study examination of a unique wood construction approach. The innovative approach uses a wood monocoque system comprised of interlocking small wood members to create thin-shell assemblies for the walls, roof, and floor, increasing structural efficiency and wasting less than 2% of the wood. The goal of the analysis is to expand the work of practice and the academy in order to foster a deeper, more honest discourse regarding the limitations and impact of traditional wood framing.

Keywords: wood building systems, material histories, monocoque systems, construction waste

Procedia PDF Downloads 68
10811 A Proper Continuum-Based Reformulation of Current Problems in Finite Strain Plasticity

Authors: Ladislav Écsi, Roland Jančo

Abstract:

Contemporary multiplicative plasticity models assume that the body's intermediate configuration consists of an assembly of locally unloaded neighbourhoods of material particles that cannot be reassembled together to give the overall stress-free intermediate configuration since the neighbourhoods are not necessarily compatible with each other. As a result, the plastic deformation gradient, an inelastic component in the multiplicative split of the deformation gradient, cannot be integrated, and the material particle moves from the initial configuration to the intermediate configuration without a position vector and a plastic displacement field when plastic flow occurs. Such behaviour is incompatible with the continuum theory and the continuum physics of elastoplastic deformations, and the related material models can hardly be denoted as truly continuum-based. The paper presents a proper continuum-based reformulation of current problems in finite strain plasticity. It will be shown that the incompatible neighbourhoods in real material are modelled by the product of the plastic multiplier and the yield surface normal when the plastic flow is defined in the current configuration. The incompatible plastic factor can also model the neighbourhoods as the solution of the system of differential equations whose coefficient matrix is the above product when the plastic flow is defined in the intermediate configuration. The incompatible tensors replace the compatible spatial plastic velocity gradient in the former case or the compatible plastic deformation gradient in the latter case in the definition of the plastic flow rule. They act as local imperfections but have the same position vector as the compatible plastic velocity gradient or the compatible plastic deformation gradient in the definitions of the related plastic flow rules. 
The unstressed intermediate configuration, the unloaded configuration after the plastic flow, where the residual stresses have been removed, can always be calculated by integrating either the compatible plastic velocity gradient or the compatible plastic deformation gradient. However, the corresponding plastic displacement field becomes permanent with both elastic and plastic components. The residual strains and stresses originate from the difference between the compatible plastic/permanent displacement field gradient and the prescribed incompatible second-order tensor characterizing the plastic flow in the definition of the plastic flow rule, which becomes an assignment statement rather than an equilibrium equation. The above also means that the elastic and plastic factors in the multiplicative split of the deformation gradient are, in reality, gradients and that there is no problem with the continuum physics of elastoplastic deformations. The formulation is demonstrated in a numerical example using the regularized Mooney-Rivlin material model and modified equilibrium statements where the intermediate configuration is calculated, whose analysis results are compared with the identical material model using the current equilibrium statements. The advantages and disadvantages of each formulation, including their relationship with multiplicative plasticity, are also discussed.
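In the standard notation assumed throughout this discussion, the multiplicative split and an associative flow rule of the kind described above (plastic multiplier times yield-surface normal, with the flow defined in the current configuration) read:

```latex
\mathbf{F} = \mathbf{F}^{e}\,\mathbf{F}^{p}, \qquad
\mathbf{L}^{p} = \dot{\lambda}\,\frac{\partial f}{\partial \boldsymbol{\sigma}},
```

where F is the deformation gradient with elastic and plastic factors Fᵉ and Fᵖ, Lᵖ is the spatial plastic velocity gradient, λ̇ ≥ 0 the plastic multiplier, and f the yield function. The paper's argument concerns whether the tensor on the right-hand side of the flow rule is compatible (integrable to a plastic displacement field) or must instead be treated as a prescribed incompatible field acting as a local imperfection.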

Keywords: finite strain plasticity, continuum formulation, regularized Mooney-Rivlin material model, compatibility

Procedia PDF Downloads 109
10810 Tailoring and Characterization of Lithium Manganese Ferrite- Polypyrrole Nanocomposite (LixMnxFe₂O₄-PPY) to Evaluate Their Performance as an Energy Storage Device

Authors: Muhammad Waheed Mushtaq, Shahid bashir, Atta Ur Rehman

Abstract:

Over the past decade, the growing demand for energy and the increased utilization of supercapacitors have reflected advancements in energy-producing systems and energy storage devices. Metal oxides and ferrites have emerged as promising candidates for supercapacitors and batteries. In the current study, we synthesized lithium manganese nanoferrite, denoted as LixMnxFe₂O₄, using the hydrothermal technique. Subsequently, we treated it with sodium dodecyl benzene sulphonate (SDBS) surfactant to create nanocomposites of lithium manganese nanoferrite (LMFe) with polypyrrole (LixMnxFe₂O₄-PPY). Powder X-ray diffraction (XRD) confirmed the crystalline nature and spinel phase structure of the LMFe nanoparticles, which exhibited a single-phase crystal structure, indicating sample purity. To assess the surface topography, morphology, and grain size of both the synthesized LixMnxFe₂O₄ and LixMnxFe₂O₄-PPY, we used atomic force microscopy and scanning electron microscopy (SEM). The average particle size of the pure ferrite was 54 nm, while that of its nanocomposite was 71 nm. Energy dispersive X-ray (EDX) analysis confirmed the presence of all required elements, including Li, Mn, Fe, and O, in the appropriate proportions. Saturation magnetization (32.69 emu), remanence (Mr), and coercive force (Hc) were measured using a vibrating sample magnetometer (VSM). To assess the electrochemical performance of the material, we conducted cyclic voltammetry (CV) measurements for both pure LMFe and LMFe-PPY. The CV results for LMFe-PPY demonstrated that the specific capacitance decreased with increasing scan rate while the area of the current-voltage loop increased. These findings are promising for the development of supercapacitors and lithium-ion batteries (LIBs).
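The scan-rate dependence of specific capacitance mentioned above follows from the standard CV relation C = A/(2mνΔV), where A is the enclosed current-voltage loop area, m the electrode mass, ν the scan rate, and ΔV the potential window. A minimal sketch of that calculation (function and variable names are illustrative, not from the study):

```python
import numpy as np

def specific_capacitance(voltage, current, mass_g, scan_rate_v_per_s):
    """Estimate specific capacitance (F/g) from one CV cycle.

    voltage, current: arrays tracing the closed CV loop (V and A).
    The enclosed loop area equals the line integral of I dV over the cycle.
    """
    # Trapezoidal line integral of I dV around the loop
    area = abs(np.sum(0.5 * (current[1:] + current[:-1]) * np.diff(voltage)))
    delta_v = voltage.max() - voltage.min()  # potential window (V)
    return area / (2.0 * mass_g * scan_rate_v_per_s * delta_v)
```

For an ideal capacitor the recovered value is independent of scan rate; in real pseudocapacitive materials such as LMFe-PPY, the drop at high scan rates reflects limited ion diffusion.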

Keywords: lithium manganese ferrite, poly pyrrole, nanocomposites, cyclic voltammetry, cathode

Procedia PDF Downloads 49
10809 The Development and Provision of a Knowledge Management Ecosystem, Optimized for Genomics

Authors: Matthew I. Bellgard

Abstract:

The field of bioinformatics has made, and continues to make, substantial progress and contributions to life science research and development. However, this paper contends that a systems approach is required to integrate bioinformatics activities for any project in a defined manner. The application of critical control points in this bioinformatics systems approach may be useful for identifying and evaluating points in a pathway where the risk of a specified activity can be reduced, monitored, and quality enhanced.

Keywords: bioinformatics, food security, personalized medicine, systems approach

Procedia PDF Downloads 409
10808 Adapting Tools for Text Monitoring and for Scenario Analysis Related to the Field of Social Disasters

Authors: Svetlana Cojocaru, Mircea Petic, Inga Titchiev

Abstract:

Humanity is confronted ever more often with different social disasters, which in turn can generate new accidents and catastrophes. To mitigate their consequences, it is important to obtain the earliest possible signals about events that are occurring or may occur, and to prepare the corresponding scenarios that could be applied. Our research is focused on solving two problems in this domain: identifying signals that an accident has occurred or may occur, and mitigating some consequences of disasters. To solve the first problem, methods of selecting and processing texts from the Internet were developed; information in Romanian is of special interest to us. Obtaining the mentioned tools requires several steps, divided into a preparatory stage and a processing stage. Throughout the first stage, we manually collected over 724 news articles, amounting to more than 150 thousand words, and classified them into 10 categories of social disasters. Using this information, a controlled vocabulary of more than 300 keywords was elaborated, which will help in the classification and identification of texts related to the field of social disasters. To solve the second problem, the formalism of Petri nets was used; we deal with the problem of evacuating inhabitants in useful time. Analysis methods such as the reachability or coverability tree and the invariants technique are used to determine dynamic properties of the modeled systems. To perform a case study of the properties of the evacuation system extended with time, the analysis modules of PIPE, such as Generalized Stochastic Petri Net (GSPN) Analysis, Simulation, State Space Analysis, and Invariant Analysis, were used. These modules helped us obtain the average number of persons situated in the rooms and other quantitative properties and characteristics related to the system's dynamics.
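The controlled-vocabulary classification step described above can be sketched as a simple keyword-matching filter. The category names and Romanian keywords below are illustrative placeholders, not the project's actual 300-keyword vocabulary:

```python
# Hypothetical miniature vocabulary; the real project uses 10 categories
# and over 300 curated Romanian keywords.
VOCABULARY = {
    "fire": {"incendiu", "fum", "ars"},      # fire, smoke, burned
    "flood": {"inundatie", "viitura"},       # flood, flash flood
}

def classify(text):
    """Return the disaster categories whose keywords appear in the text."""
    words = set(text.lower().split())
    return sorted(cat for cat, keys in VOCABULARY.items() if words & keys)
```

A production version would add lemmatization and diacritic normalization for Romanian, but the matching principle is the same.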

Keywords: lexicon of disasters, modelling, Petri nets, text annotation, social disasters

Procedia PDF Downloads 190
10807 Interfacial Reactions between Aromatic Polyamide Fibers and Epoxy Matrix

Authors: Khodzhaberdi Allaberdiev

Abstract:

In order to understand the interactions at the interface between polyamide fibers and an epoxy matrix in fiber-reinforced composites, the industrial aramid fibers Armos, SVM, and Terlon were investigated using individual epoxy matrix components. The epoxies were diglycidyl ether of bisphenol A (DGEBA) and tri- and diglycidyl derivatives of m- and p-amino-, m- and p-oxy-, and o-, m-, and p-carboxybenzoic acids; the model compounds were the curing agent aniline and N-di(oxyethylphenoxy)aniline, a compound that depicts the structure of the primary addition reaction of the amine to the epoxy resin. The chemical structure of the surface of untreated and treated polyamide fibers was analyzed using Fourier transform infrared spectroscopy (FTIR). The fibers were impregnated with epoxy matrix components and N-di(oxyethylphenoxy)aniline by heating at 150˚C for 6 h. The optimum fiber loading is 65%. The result of the thermal treatment is the formation of covalent bonds, derived from a combination of homopolymerization and crosslinking mechanisms, in the interfacial region between the epoxy resin and the fiber surface. The reactivity of epoxy resins at the interface in microcomposites (MC) also depends on the processing aids applied to the fiber surface and on absorbed moisture. The influence of these factors is evidenced by the epoxy group conversion values in DGEBA-impregnated Terlon: 5.20%, 4.65%, and 14.10% for the industrial, dried (in vacuum), and purified samples, respectively. The same tendency is observed for the SVM and Armos fibers. The changes in the surface composition of these MC were monitored by X-ray photoelectron spectroscopy (XPS). In the case of the purified fibers, the functional groups of the fibers act as both a catalyst and a curing agent for the epoxy resin. It is found that the epoxy group conversion in reinforced formulations depends on the nature of the aromatic polyamide and decreases in the order Armos > SVM > Terlon. This difference is due to the structural characteristics of the fibers.
The interfacial interactions between polyglycidyl esters of substituted benzoic acids and polyamide fibers in the MC were also examined. It is found that these interactions are likewise influenced by the structure and isomerism of the epoxides. The IR spectrum of aniline-impregnated fibers showed that the polyamide fibers do not react appreciably with aniline. FTIR results for fibers treated with N-di(oxyethylphenoxy)aniline revealed dramatic changes in the IR characteristics of the OH groups of the amino alcohol. These observations indicate hydrogen bonding and covalent interactions between the amino alcohol and the functional groups of the fibers. This result is also confirmed by the appearance of an exothermic peak on the differential scanning calorimetry (DSC) curve of the MC. Finally, a theoretical evaluation of non-covalent interactions between individual epoxy matrix components and fibers was performed using benzanilide as a model of Terlon and its derivative containing the benzimidazole moiety as a model of SVM and Armos. Quantum-topological analysis also demonstrated the existence of hydrogen bonds between the amide group of the models and the epoxy matrix components. All the results indicate that not only covalent but also non-covalent interactions exist at the interface between the polyamide fibers and the epoxy matrix during the preparation of MC.

Keywords: epoxies, interface, modeling, polyamide fibers

Procedia PDF Downloads 256
10806 Chemical Life Cycle Alternative Assessment as a Green Chemical Substitution Framework: A Feasibility Study

Authors: Sami Ayad, Mengshan Lee

Abstract:

The Sustainable Development Goals (SDGs) were designed to be the best possible blueprint to achieve peace, prosperity, and overall, a better and more sustainable future for the Earth and all its people, and such a blueprint is needed more than ever. The SDGs face many hurdles that will prevent them from becoming a reality, one of such hurdles, arguably, is the chemical pollution and unintended chemical impacts generated through the production of various goods and resources that we consume. Chemical Alternatives Assessment has proven to be a viable solution for chemical pollution management in terms of filtering out hazardous chemicals for a greener alternative. However, the current substitution practice lacks crucial quantitative datasets (exposures and life cycle impacts) to ensure no unintended trade-offs occur in the substitution process. A Chemical Life Cycle Alternative Assessment (CLiCAA) framework is proposed as a reliable and replicable alternative to Life Cycle Based Alternative Assessment (LCAA) as it integrates chemical molecular structure analysis and Chemical Life Cycle Collaborative (CLiCC) web-based tool to fill in data gaps that the former frameworks suffer from. The CLiCAA framework consists of a four filtering layers, the first two being mandatory, with the final two being optional assessment and data extrapolation steps. Each layer includes relevant impact categories of each chemical, ranging from human to environmental impacts, that will be assessed and aggregated into unique scores for overall comparable results, with little to no data. A feasibility study will demonstrate the efficiency and accuracy of CLiCAA whilst bridging both cancer potency and exposure limit data, hoping to provide the necessary categorical impact information for every firm possible, especially those disadvantaged in terms of research and resource management.
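The aggregation of per-category impacts into a single comparable score, as described above, can be sketched as a weighted sum. The abstract does not specify CLiCAA's actual weighting scheme, so equal weights are assumed here purely for illustration:

```python
def aggregate_score(impacts, weights=None):
    """Aggregate per-category impact scores (normalized 0-1, higher = worse)
    into one comparable score.

    impacts: dict mapping category name -> normalized score.
    weights: optional dict of category weights; equal weighting is assumed
    when omitted (an illustrative assumption, not CLiCAA's published scheme).
    """
    categories = list(impacts)
    if weights is None:
        weights = {c: 1.0 / len(categories) for c in categories}
    return sum(impacts[c] * weights[c] for c in categories)
```

Comparing two candidate chemicals then reduces to comparing their aggregate scores, with lower meaning a preferable substitute under the chosen weights.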

Keywords: chemical alternative assessment, LCA, LCAA, CLiCC, CLiCAA, chemical substitution framework, cancer potency data, chemical molecular structure analysis

Procedia PDF Downloads 76
10805 Comparison of an Anthropomorphic PRESAGE® Dosimeter and Radiochromic Film with a Commercial Radiation Treatment Planning System for Breast IMRT: A Feasibility Study

Authors: Khalid Iqbal

Abstract:

This work presents a comparison of an anthropomorphic PRESAGE® dosimeter and radiochromic film measurements with a commercial treatment planning system to determine the feasibility of PRESAGE® for 3D dosimetry in breast IMRT. An anthropomorphic PRESAGE® phantom was created in the shape of a breast phantom. A five-field IMRT plan was generated with a commercially available treatment planning system and delivered to the PRESAGE® phantom. The anthropomorphic PRESAGE® was scanned with the Duke midsized optical CT scanner (DMOS-RPC), and the optical density distribution was converted to dose. Comparisons were performed between the dose distributions calculated with the Pinnacle3 treatment planning system, PRESAGE®, and EBT2 film measurements. DVHs, gamma maps, and line profiles were used to evaluate the agreement. Gamma map comparisons showed that Pinnacle3 agreed with PRESAGE®, as greater than 95% of comparison points for the PTV passed a ±3%/±3 mm criterion when the outer 8 mm of phantom data were excluded. Edge artifacts were observed in the optical CT reconstruction, from the surface to approximately 8 mm depth. These artifacts resulted in dose differences between Pinnacle3 and PRESAGE® of up to 5% between the surface and a depth of 8 mm, and the differences decreased with increasing depth in the phantom. Line profile comparisons between all three independent measurements yielded a maximum difference of 2% within the central 80% of the field width. For the breast IMRT plan studied, the Pinnacle3 calculations agreed with PRESAGE® measurements to within the ±3%/±3 mm gamma criterion. This work demonstrates the feasibility of fashioning PRESAGE® into an anthropomorphic shape and establishes the accuracy of Pinnacle3 for breast IMRT. Furthermore, these data establish the groundwork for future investigations into 3D dosimetry with more complex anthropomorphic phantoms.
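The ±3%/±3 mm gamma criterion used above can be illustrated with a deliberately simplified 1D sketch. A real gamma analysis is 3D and uses fine interpolation; the function below is only a conceptual illustration of the metric, and all names are ours, not the study's:

```python
import numpy as np

def gamma_pass_rate(ref_dose, eval_dose, positions, dose_tol=0.03, dist_tol_mm=3.0):
    """Simplified 1D global gamma analysis (3%/3 mm by default).

    For each reference point, gamma is the minimum over all evaluated points of
    sqrt((dose diff / dose tolerance)^2 + (distance / distance tolerance)^2);
    a point passes when gamma <= 1. Dose tolerance is global, relative to the
    reference maximum. Returns the pass rate in percent.
    """
    dmax = ref_dose.max()
    passed = 0
    for i, d_ref in enumerate(ref_dose):
        dd = (eval_dose - d_ref) / (dose_tol * dmax)      # normalized dose diff
        dx = (positions - positions[i]) / dist_tol_mm     # normalized distance
        gamma = np.sqrt(dd ** 2 + dx ** 2).min()
        passed += gamma <= 1.0
    return 100.0 * passed / len(ref_dose)
```

With this metric, the ">95% of PTV points passing" result means the gamma value was at most 1 for over 95% of reference points once the artifact-affected outer 8 mm were excluded.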

Keywords: 3D dosimetry, PRESAGE®, IMRT, QA, EBT2 GAFCHROMIC film

Procedia PDF Downloads 394
10804 Evaluating the Dosimetric Performance for 3D Treatment Planning System for Wedged and Off-Axis Fields

Authors: Nashaat A. Deiab, Aida Radwan, Mohamed S. Yahiya, Mohamed Elnagdy, Rasha Moustafa

Abstract:

This study evaluates the dosimetric performance of our institution's 3D treatment planning system for wedged and off-axis 6 MV photon beams, guided by the recommended QA tests documented in AAPM TG-53, the NCS Report 15 test packages, IAEA TRS-430, and ESTRO Booklet No. 7. The study was performed on an Elekta Precise linear accelerator designed for a clinical range of 4, 6, and 15 MV photon beams, with asymmetric jaws and a fully integrated multileaf collimator that enables high conformance to the target with sharp field edges. Ten tests were applied to a solid water-equivalent phantom along with a 2D array dose detection system. The doses calculated with the 3D treatment planning system PrecisePLAN were compared with measured doses to verify that the dose calculations are accurate for simple situations such as square and elongated fields, different SSDs, beam modifiers (e.g., wedges, blocks, and MLC-shaped fields), and asymmetric collimator settings. The QA results showed dosimetric accuracy of the TPS within the specified tolerance limits, with the following exceptions: for the large elongated wedged field, the errors on and outside the central axis were 0.2% and 0.5%, respectively, and for the off-planned and off-axis elongated fields, the errors outside the central axis of the beam were 0.2% and 1.1%, respectively. The investigated dosimetric results yielded differences within the accepted tolerance level as recommended. Differences between dose values predicted by the TPS and measured values at the same point result from limitations of the dose calculation, uncertainties in the measurement procedure, or fluctuations in the output of the accelerator.

Keywords: quality assurance, dose calculation, wedged fields, off-axis fields, 3D treatment planning system, photon beam

Procedia PDF Downloads 428
10803 Hardware Implementation on Field Programmable Gate Array of Two-Stage Algorithm for Rough Set Reduct Generation

Authors: Tomasz Grzes, Maciej Kopczynski, Jaroslaw Stepaniuk

Abstract:

The rough sets theory developed by Prof. Z. Pawlak is one of the tools that can be used in intelligent systems for data analysis and processing. Banking, medicine, image recognition, and security are among the possible fields of application. In all these fields, the amount of collected data is increasing quickly, and with this increase, computation speed becomes the critical factor. Data reduction is one solution to this problem. Removing redundancy in rough sets can be achieved with the reduct. Many algorithms for generating the reduct have been developed, but most of them are software implementations only and therefore have many limitations. A microprocessor uses a fixed word length and consumes considerable time on both fetching and processing instructions and data; consequently, software-based implementations are relatively slow. Hardware systems do not have these limitations and can process data faster than software. A reduct is a subset of the condition attributes that provides the discernibility of the objects. For a given decision table there can be more than one reduct. The core is the set of all indispensable condition attributes: none of its elements can be removed without affecting the classification power of all condition attributes, and every reduct contains all the attributes from the core. In this paper, a hardware implementation of a two-stage greedy algorithm for finding one reduct is presented. The decision table is used as input. The output of the algorithm is a superreduct, which is a reduct with some additional removable attributes. The first stage of the algorithm calculates the core using the discernibility matrix. The second stage generates the superreduct by enriching the core with the most common attributes, i.e., attributes that are more frequent in the decision table.
The algorithm described above has two disadvantages: (i) it generates a superreduct instead of a reduct, and (ii) the additional first stage may be unnecessary if the core is empty. However, for systems focused on fast computation of the reduct, the first disadvantage is not a key problem. The core calculation can be achieved with a combinational logic block and thus adds relatively little time to the whole process. The algorithm presented in this paper was implemented in a Field Programmable Gate Array (FPGA) as a digital device consisting of blocks that process the data in a single step. The core is calculated by comparators connected to a block called a 'singleton detector', which detects whether the input word contains only a single 'one'. The number of occurrences of each attribute is calculated in a combinational block made up of a cascade of adders. The superreduct generation process is iterative and thus needs a sequential circuit to control the calculations. For research purposes, the algorithm was also implemented in the C language and run on a PC, and the execution times of the reduct calculation in hardware and software were compared. The results show an increase in the speed of data processing.
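The two-stage algorithm can be sketched in software as follows. This is our reading of the description, not the authors' FPGA design: stage one extracts the core as the attributes appearing as singletons in the discernibility matrix; stage two greedily adds the attribute most frequent among the still-undiscerned object pairs (one plausible interpretation of "most common attributes") until every pair is covered:

```python
from itertools import combinations

def discernibility_pairs(table):
    """For each pair of objects with different decisions, the set of condition
    attributes (by index) on which they differ. table: list of (conditions, decision)."""
    pairs = []
    for (x_cond, x_dec), (y_cond, y_dec) in combinations(table, 2):
        if x_dec != y_dec:
            pairs.append({a for a in range(len(x_cond)) if x_cond[a] != y_cond[a]})
    return pairs

def two_stage_superreduct(table):
    pairs = discernibility_pairs(table)
    # Stage 1: the core = attributes appearing as singleton matrix entries
    result = {next(iter(p)) for p in pairs if len(p) == 1}
    # Stage 2: greedily enrich the core until every discernibility entry is covered
    while any(not (p & result) for p in pairs):
        uncovered = [p for p in pairs if not (p & result)]
        freq = {}
        for p in uncovered:
            for a in p:
                freq[a] = freq.get(a, 0) + 1
        result.add(max(freq, key=freq.get))
    return result
```

The greedy choice is what makes the output a superreduct rather than a minimal reduct: a chosen attribute may later turn out to be removable.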

Keywords: data reduction, digital systems design, field programmable gate array (FPGA), reduct, rough set

Procedia PDF Downloads 207
10802 Machine Learning in Patent Law: How Genetic Breeding Algorithms Challenge Modern Patent Law Regimes

Authors: Stefan Papastefanou

Abstract:

Artificial intelligence (AI) is an interdisciplinary field of computer science with the aim of creating intelligent machine behavior. Early approaches to AI were configured to operate in very constrained environments where the behavior of the AI system was determined in advance by formal rules. Knowledge was represented as a set of rules that allowed the AI system to determine the results for specific problems: a structure of if-else rules that could be traversed to find a solution to a particular problem or question. However, such rule-based systems typically have not been able to generalize beyond the knowledge provided. All over the world, and especially in IT-heavy jurisdictions such as the United States, the European Union, Singapore, and China, machine learning has developed into an immense asset, and its applications are becoming more and more significant. It has to be examined how the products of machine learning models can and should be protected by IP law, and for the purposes of this paper by patent law specifically, since it is the IP law regime closest to technical inventions and computing methods in technical applications. Genetic breeding models are currently less popular than recurrent neural networks and deep learning, but this approach can be more easily described by reference to the evolution of natural organisms, and with increasing computational power, the genetic breeding method, as a subset of evolutionary algorithm models, is expected to regain popularity. The research method focuses on the patentability (according to the world's most significant patent law regimes, such as those of China, Singapore, the European Union, and the United States) of AI inventions and machine learning. Questions of the technical nature of the problem to be solved, the inventive step as such, and the question of the state of the art and the associated obviousness of the solution arise in current patenting processes.
Most importantly, and the key focus of this paper, is the problem of patenting inventions that are themselves developed through machine learning. Under the current legal situation in most patent law regimes, the inventor of a patent application must be a natural person or a group of persons. In order to be considered an 'inventor', a person must actually have developed part of the inventive concept. The mere application of machine learning or an AI algorithm to a particular problem should not be construed as the algorithm contributing to part of the inventive concept. However, when machine learning or the AI algorithm has contributed to part of the inventive concept, there is currently a lack of clarity regarding the ownership of artificially created inventions. Since not only all European patent law regimes but also the Chinese and Singaporean patent law approaches use identical terms, this paper ultimately offers a comparative analysis of the most relevant patent law regimes.

Keywords: algorithms, inventor, genetic breeding models, machine learning, patentability

Procedia PDF Downloads 100
10801 Evidence of Natural Selection Footprints among Some African Chicken Breeds and Village Ecotypes

Authors: Ahmed Elbeltagy, Francesca Bertolini, Damarius Fleming, Angelica Van Goor, Chris Ashwell, Carl Schmidt, Donald Kugonza, Susan Lamont, Max Rothschild

Abstract:

Natural selection is likely the major factor shaping genomic variation in African indigenous rural chickens, driving the development of genetic footprints in their genomes. To investigate this selection footprint hypothesis, a total of 292 birds were randomly sampled from three indigenous ecotypes from East Africa (Uganda, Rwanda) and North Africa (Egypt), two registered Egyptian breeds (Fayoumi and Dandarawi), and the synthetic Kuroiler breed. Samples were genotyped using the Affymetrix 600K Axiom® Array. A total of 526,652 SNPs were utilized in the downstream analysis after quality control measures. Intra-population runs of homozygosity (ROH) shared by more than 50% of the individuals of an ecotype, or more than 75% of those of a breed, were studied. To identify inter-population differentiation due to genetic structure, FST was calculated for North vs. East African populations, in addition to pairwise population combinations, over overlapping windows (500 kb with an overlap of 250 kb). A total of 28,563 ROH were detected and classified into three length categories. ROH- and FST-detected sweeps were identified on several autosomes. Several genes in these regions are likely related to adaptation to local environmental stresses, including high altitude, disease resistance, poor nutrition, and oxidative and heat stresses, and were linked to gene ontology (GO) terms related to immune response, oxygen consumption and heme binding, carbohydrate metabolism, oxidation-reduction, and behavior. The results indicate a possible effect of natural selection forces in shaping genomic structure for adaptation to local environmental stresses.
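The windowing scheme used for the FST scan (500 kb windows stepping every 250 kb, so consecutive windows overlap by half) can be sketched as follows; the function name and truncation behavior at the chromosome end are our assumptions:

```python
def sliding_windows(chrom_length, size=500_000, step=250_000):
    """Overlapping genomic windows for a window-based FST scan.

    Returns (start, end) pairs, 0-based and end-exclusive. With the defaults,
    each 500 kb window overlaps its neighbor by 250 kb; the final window is
    truncated at the chromosome end.
    """
    windows = []
    start = 0
    while start < chrom_length:
        windows.append((start, min(start + size, chrom_length)))
        start += step
    return windows
```

Per-window FST would then be computed from the SNPs falling inside each interval, and windows in the extreme upper tail flagged as candidate sweeps.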

Keywords: African Chicken, runs of homozygosity, FST, selection footprints

Procedia PDF Downloads 304
10800 The Cleavage of DNA by the Anti-Tumor Drug Bleomycin at the Transcription Start Sites of Human Genes Using Genome-Wide Techniques

Authors: Vincent Murray

Abstract:

The glycopeptide bleomycin is used in the treatment of testicular cancer, Hodgkin's lymphoma, and squamous cell carcinoma. Bleomycin damages and cleaves DNA in human cells, and this is considered to be the main mode of action for bleomycin's anti-tumor activity. In particular, double-strand breaks are thought to be the main mechanism for the cellular toxicity of bleomycin. Using Illumina next-generation DNA sequencing techniques, the genome-wide sequence specificity of bleomycin-induced double-strand breaks was determined in human cells. The degree of bleomycin cleavage was also assessed at the transcription start sites (TSSs) of actively transcribed genes and compared with non-transcribed genes. It was observed that bleomycin preferentially cleaved at the TSSs of actively transcribed human genes. There was a correlation between the degree of this enhanced cleavage at TSSs and the level of transcriptional activity. Bleomycin cleavage is also affected by chromatin structure and at TSSs, the peaks of bleomycin cleavage were approximately 200 bp apart. This indicated that bleomycin was able to detect phased nucleosomes at the TSSs of actively transcribed human genes. The genome-wide cleavage pattern of the bleomycin analogues 6′-deoxy-BLM Z and zorbamycin was also investigated in human cells. As found for bleomycin, these bleomycin analogues also preferentially cleaved at the TSSs of actively transcribed human genes. The cytotoxicity (IC₅₀ values) of these bleomycin analogues was determined. It was found that the degree of enhanced cleavage at TSSs was inversely correlated with the IC₅₀ values of the bleomycin analogues. This suggested that the level of cleavage at the TSSs of actively transcribed human genes was important for the cytotoxicity of bleomycin and analogues. Hence this study provided a deeper understanding of the cellular processes involved in the cancer chemotherapeutic activity of bleomycin.

Keywords: anti-tumour activity, bleomycin analogues, chromatin structure, genome-wide study, Illumina DNA sequencing

Procedia PDF Downloads 107
10799 Functional Surfaces and Edges for Cutting and Forming Tools Created Using Directed Energy Deposition

Authors: Michal Brazda, Miroslav Urbanek, Martina Koukolikova

Abstract:

This work focuses on the development of functional surfaces and edges for cutting and forming tools created through Directed Energy Deposition (DED) technology. In the context of the growing challenges of modern engineering, additive technologies, especially DED, present an innovative approach to manufacturing tools for forming and cutting. One of the key features of DED is its ability to precisely and efficiently deposit fully dense metals from powder feedstock, enabling the creation of complex geometries and optimized designs. It is gradually becoming an increasingly attractive choice for tool production due to its ability to achieve high precision while simultaneously minimizing waste and material costs. Tools created using DED technology gain significant durability through the utilization of high-performance materials such as nickel alloys and tool steels. For high-temperature applications, the Nimonic 80A alloy is applied, while for cold applications, M2 tool steel is used. The addition of ceramic materials, such as tungsten carbide, can significantly increase the tool's resistance. The introduction of functionally graded materials is a significant contribution, opening up new possibilities for gradual changes in the mechanical properties of the tool and optimizing its performance in different sections according to specific requirements. This work provides an overview of individual applications and their utilization in industry. Microstructural analyses have been conducted, providing detailed insights into the structure of individual components, alongside examinations of the mechanical properties and tool life. These analyses offer a deeper understanding of the efficiency and reliability of the created tools, which is a key element for successful development in the field of cutting and forming tools.
Producing functional surfaces and edges using DED technology can result in financial savings, as the entire tool does not have to be manufactured from expensive special alloys. The tool can be made from common steel, onto which a functional surface of special materials is applied. The technology also allows tools to be repaired after wear, eliminating the need to produce a new part, contributing to overall cost savings, and reducing the environmental footprint. Overall, the combination of DED technology, functionally graded materials, and verified technologies sets a new standard for the innovative and efficient development of cutting and forming tools in the modern industrial environment.

Keywords: additive manufacturing, directed energy deposition, DED, laser, cutting tools, forming tools, steel, nickel alloy

Procedia PDF Downloads 34
10798 The Changes in Motivations and the Use of Translation Strategies in Crowdsourced Translation: A Case Study on Global Voices’ Chinese Translation Project

Authors: Ya-Mei Chen

Abstract:

Online crowdsourced translation, an innovative translation practice brought about by Web 2.0 technologies and the democratization of information, has become increasingly popular in the Internet era. Carried out by grassroots Internet users, crowdsourced translation exhibits fundamentally different features from its offline traditional counterpart, such as voluntary participation and parallel collaboration. To better understand this participatory and collaborative nature, this paper will use the online Chinese translation project of Global Voices as a case study to investigate the following issues: (1) the changes in volunteer translators' and reviewers' motivations for participation, (2) translators' and reviewers' use of translation strategies, and (3) the correlations of translators' and reviewers' motivations and strategies with the organizational mission, the translation style guide, the translator-reviewer interaction, the mediation of the translation platform, and the various types of capital within the translation field. With the aim of systematically exploring the above three issues, this paper will collect both quantitative and qualitative data and then draw upon Engestrom's activity theory and Bourdieu's field theory as a theoretical framework to analyze the data in question. An online anonymous questionnaire will be conducted to obtain the quantitative data. The questionnaire will contain questions related to volunteer translators' and reviewers' backgrounds, participation motivations, translation strategies, and mutual relations, as well as the operation of the translation platform. The qualitative data will come from (1) a comparative study of English news texts published on Global Voices and their Chinese translations, (2) an analysis of the online discussion forum associated with Global Voices' Chinese translation project, and (3) information about the project's translation mission and guidelines.
It is hoped that this research, through a detailed sociological analysis of a cause-driven crowdsourced translation project, can enable translation researchers and practitioners to adequately meet the translation challenges appearing in the digital age.

Keywords: crowdsourced translation, global voices, motivation, translation strategies

Procedia PDF Downloads 362
10797 Influence of Harmonics on Medium Voltage Distribution System: A Case Study for Residential Area

Authors: O. Arikan, C. Kocatepe, G. Ucar, Y. Hacialiefendioglu

Abstract:

In this paper, the influence of harmonics on the medium voltage distribution system of Bogazici Electricity Distribution Inc. (BEDAS), located in Istanbul, Turkey, is investigated. A ring network consisting of residential loads is considered in this study. Real system parameters and measurement results are used for the simulations. Probable operating conditions of the system are also analyzed for 50%, 75%, and 100% loading of the transformers with similar harmonic content. The results of the study exhibit the influence of nonlinear loads on the voltage total harmonic distortion (THDV), power factor, and technical losses of the medium voltage distribution system.
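The THDV figure of merit used in the analysis follows the standard definition: the RMS of all harmonic components relative to the fundamental. A minimal sketch (names are illustrative):

```python
import math

def thd_percent(harmonic_rms, fundamental_rms):
    """Total harmonic distortion (%) from RMS magnitudes of harmonics 2..n,
    relative to the fundamental: THD = sqrt(sum(Vh^2)) / V1 * 100."""
    return 100.0 * math.sqrt(sum(v * v for v in harmonic_rms)) / fundamental_rms
```

For example, a bus voltage with 3% fifth-harmonic and 4% seventh-harmonic content has a THDV of 5%; in a loaded network, such distortion contributes directly to the additional technical losses the study quantifies.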

Keywords: distribution system, harmonic, technical losses, power factor, total harmonic distortion, residential load, medium voltage

Procedia PDF Downloads 560
10796 An Assessment of Housing Affordability and Safety Measures in the Varied Residential Areas of Lagos: A Case Study of the Amuwo-Odofin Local Government Area in Lagos State

Authors: Jubril Olatunbosun Akinde

Abstract:

Unplanned population growth is mostly attributed to a lack of infrastructural facilities and poor economic conditions in rural settlements, together with the incidence of rural-urban migration, which has resulted in a severe housing deficiency in urban centres and consequent pressure on housing delivery in the cities. Affordable housing encompasses not only cost but also the environmental factors that make living acceptable and comfortable, including good access routes, ventilation, sanitation and access to other basic human needs such as water and safety. The research assessed housing affordability and safety measures in the varied residential areas of Lagos by examining the demographic and socioeconomic attributes of residents, the existing residential safety measures and the residential quality in terms of safety; it then examined the relationship between housing affordability and safety in the varied residential areas. The research adopted the Bartlett, Kotrlik and Higgins (2001) t-test method to determine the sample size, which specifies different populations at different levels of significance (α). Primary data were sourced from a field survey in which respondents were selected by simple random sampling, giving every member of the population an equal chance of being selected; the sample size for the field survey was two hundred (200) respondents, and the data were subjected to the necessary testing. The research concludes that housing safety and security are the responsibility of every resident and that landlords and landladies possess a better sense of security in their neighbourhood than renters; residents therefore need to be aware of their responsibility for ensuring the safety of lives and property.
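The abstract cites the Bartlett, Kotrlik and Higgins (2001) sample-size procedure but does not reproduce a formula. A common starting point discussed in that literature is Cochran's formula for estimating a proportion; the sketch below uses illustrative parameter values (95% confidence, conservative p = 0.5, 7% margin of error), chosen only because they yield a figure close to the 200 respondents reported, and is not the authors' actual derivation.

```python
def cochran_sample_size(z=1.96, p=0.5, margin=0.07):
    """Cochran's sample-size formula for estimating a proportion.

    z      -- z-score for the desired confidence level (1.96 ≈ 95%)
    p      -- estimated population proportion (0.5 is most conservative)
    margin -- acceptable margin of error
    """
    return (z * z * p * (1 - p)) / (margin * margin)

print(round(cochran_sample_size()))  # 196, close to the 200 respondents surveyed
```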

Keywords: housing, housing affordability, housing security, residential, residential quality

Procedia PDF Downloads 96
10795 Videoconference Technology: An Attractive Vehicle for Challenging and Changing Tutors' Practice in an Open and Distance Learning Environment

Authors: Ramorola Mmankoko Ziphorah

Abstract:

Videoconference technology represents a recent experiment in technology integration into teaching and learning in South Africa. Increasingly, videoconference technology is used as a substitute for traditional face-to-face approaches to teaching and learning, helping tutors to reshape and change their teaching practices. Interestingly, though, some studies point out that in the Open and Distance Learning context videoconference technology is commonly used by tutors for knowledge dissemination rather than for the actual teaching of course content. Though videoconference technology has become one of the dominant technologies available among Open and Distance Learning institutions, it is not clear that it has been used effectively to bridge the learning distance in time, geography, and economy. While tutors are prepared theoretically, in most tutor preparation programs, for the use of videoconference technology, there are still no practical guidelines on how they should go about integrating this technology into their course teaching. Therefore, there is an urgent need to focus on tutor development, specifically on tutors' capacities and skills to use videoconference technology. The assumption is that if tutors become competent in the use of videoconference technology for course teaching, then its use in the Open and Distance Learning environment will become more commonplace. This is the imperative of the 4th Industrial Revolution (4IR) for education generally. Against the current vacuum in the practice of using videoconference technology for course teaching, the current study proposes a qualitative phenomenological approach to investigate the efficacy of videoconferencing as an approach to student learning.
Using interview and observation data from ten participants in an Open and Distance Learning institution, the author discusses how dialogue and structure interacted to provide the participating tutors with a rich set of opportunities to deliver course content. The findings of this study highlight various challenges experienced by tutors when using videoconference technology. The study suggests tutor development programs focused on tutors' capacities and skills and on how to integrate this technology with various teaching strategies in order to enhance student learning. The author argues that it is not merely the existence of the structure, namely the videoconference technology, that provides the opportunity for effective teaching, but rather the interactions, namely the dialogue among tutors and learners, that make videoconference technology an attractive vehicle for challenging and changing tutors' practice.

Keywords: open distance learning, transactional distance, tutor, videoconference

Procedia PDF Downloads 118
10794 Coupled Exciton-Surface Plasmon Polariton Enhanced Photoresponse of Two-Dimensional Hydrogenated Honeycomb Silicon Boride

Authors: Farzaneh Shayeganfar, Ali Ramazani

Abstract:

Excitons (strongly interacting electron-hole pairs) and hot carriers created by surface plasmon polaritons have been demonstrated to enhance the photoresponse of nanoscale optoelectronic devices. Herein, we employ a quantum framework to consider the effects of coupled excitons and hot carriers on the photovoltaic energy distribution, scattering processes, polarizability and light emission of a 2D semiconductor. We use density functional theory (DFT) to computationally design a semi-functionalized 2D honeycomb silicon boride (SiB) monolayer with H atoms, suitable for photovoltaics. The dynamical stability and the electronic and optical properties of the SiB and semi-hydrogenated SiB structures were investigated utilizing the Tran-Blaha modified Becke-Johnson (TB-mBJ) potential. The calculated phonon dispersion shows that, while an unhydrogenated SiB monolayer is dynamically unstable, surface semi-hydrogenation stabilizes the structure and leads to a transition from metallic to semiconducting conductivity with a direct band gap of about 1.57 eV, appropriate for photovoltaic applications. The optical conductivity of this H-SiB structure, determined using the random phase approximation (RPA), shows that light absorption should begin at the boundary of the visible range of light. Additionally, owing to hydrogenation, the reflectivity spectrum declines sharply with respect to that of the unhydrogenated structure in the IR and visible ranges. The band gap remains direct, increasing from 0.9 to 1.8 eV as the strain is varied from -6% (compressive) to +6% (tensile). Compressive and tensile strains lead, respectively, to red and blue shifts of the optical conductivity threshold around the visible range of light. Overall, this study suggests that H-SiB monolayers are suitable two-dimensional solar cell materials.
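The band gaps quoted above translate directly into absorption-edge wavelengths via the standard photon-energy relation λ = hc/E. A quick sketch (gap values taken from the abstract; hc ≈ 1239.84 eV·nm is the usual conversion constant) shows why the unstrained gap of 1.57 eV puts the absorption onset right at the visible/near-IR boundary:

```python
HC_EV_NM = 1239.84  # Planck constant × speed of light, in eV·nm

def absorption_edge_nm(gap_ev):
    """Wavelength (nm) of a photon matching a direct band gap (eV): λ = hc/E."""
    return HC_EV_NM / gap_ev

for label, gap in [("unstrained H-SiB", 1.57),   # edge ≈ 790 nm (visible/near-IR boundary)
                   ("-6% compressive", 0.9),     # edge ≈ 1378 nm (red shift into the IR)
                   ("+6% tensile", 1.8)]:        # edge ≈ 689 nm (blue shift)
    print(f"{label}: gap {gap} eV -> absorption edge {absorption_edge_nm(gap):.0f} nm")
```

This is consistent with the abstract's observation that compressive strain red-shifts and tensile strain blue-shifts the optical conductivity threshold.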

Keywords: surface plasmon, hot carrier, strain engineering, valley polariton

Procedia PDF Downloads 98