Search results for: Signature Verification
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 750


210 Molecular Alterations Shed Light on Alteration of Methionine Metabolism in Gastric Intestinal Metaplasia: Insight for Treatment Approach

Authors: Nigatu Tadesse, Ying Liu, Juan Li, Hong Ming Liu

Abstract:

Gastric carcinogenesis is a lengthy process of histopathological transition from normal mucosa to atrophic gastritis (AG), gastric intestinal metaplasia (GIM), and dysplasia, toward gastric cancer (GC). The GIM stage is identified as a precancerous lesion that resists H. pylori eradication and recurs after endoscopic surgical resection therapies. GIM is divided into two morphologically distinct phenotypes: the complete type bears an intestinal-type morphology, whereas the incomplete type has a colonic-type morphology. The incomplete type is considered the greatest risk factor for the development of GC. Studies indicate that expression of the caudal type homeobox 2 (CDX2) gene is responsible for the development of complete GIM, while its progressive downregulation from incomplete metaplasia toward advanced GC has been identified as the risk for GIM progression and neoplastic transformation. Downregulation of the CDX2 gene promotes cell growth and proliferation in gastric and colon cancers and has been implicated in chemotherapy inefficacy. CDX2 is downregulated through promoter-region hypermethylation, and the methylation frequency correlates positively with the dietary history of patients, suggesting a role for dietary methyl-carbon donor sources such as methionine. However, the metabolism of exogenous methionine remains unclear. Targeting exogenous methionine metabolism has become a promising approach to limit tumor cell growth, proliferation, and progression and to improve treatment outcomes.
This review article discusses molecular alterations that could shed light on the potential roles of exogenous methionine metabolism, such as gut microbiota alteration as a source of methionine to host cells, metabolic pathway signaling via PI3K/AKT/mTORC1-c-MYC to rewire exogenous methionine, and the signature of increased gene methylation index, cell growth, and proliferation in GIM. It offers insights into a new treatment avenue via targeting methionine metabolism and highlights the need for future integrated studies of molecular alterations and metabolomics to uncover altered methionine metabolism and to characterize CDX2 methylation in gastric intestinal metaplasia for potential therapeutic exploitation.

Keywords: altered methionine metabolism, intestinal metaplasia, CDX2 gene, gastric cancer

209 Verification of Satellite and Observation Measurements to Build Solar Energy Projects in North Africa

Authors: Samy A. Khalil, U. Ali Rahoma

Abstract:

Satellite data have been routinely utilized alongside ground measurements of solar radiation to estimate solar energy; however, the temporal coverage of satellite data has some limits. A reanalysis, also known as a "retrospective analysis" of the atmosphere's parameters, is produced by fusing the output of NWP (Numerical Weather Prediction) models with observation data from a variety of sources, including ground, satellite, ship, and aircraft observations. The result is a comprehensive record of the parameters affecting weather and climate. The effectiveness of the ERA-5 reanalysis dataset for North Africa was evaluated against high-quality surface measurements using statistical analysis. The distribution of global solar radiation (GSR) was estimated over five chosen areas in North Africa for the ten-year period from 2011 to 2020. To investigate seasonal change in dataset performance, a seasonal statistical analysis was conducted, which showed considerable variation in errors throughout the year. Altering the temporal resolution of the data used for comparison alters the performance of the dataset: monthly mean values indicate better performance, but temporal detail is degraded. Solar resource assessment and power estimation are discussed using the ERA-5 solar radiation data. The average values of the mean bias error (MBE), root mean square error (RMSE), and mean absolute error (MAE) of the reanalysis solar radiation data vary from 0.079 to 0.222, 0.055 to 0.178, and 0.0145 to 0.198, respectively, over the study period. The correlation coefficient (R²) varies from 0.93 to 0.99 over the same period. The objective of this research is to provide a reliable representation of global solar radiation to aid the use of solar energy in all sectors.
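The validation statistics reported above (MBE, RMSE, MAE, R²) can be reproduced with a few lines of code. The sketch below uses invented daily GSR values purely for illustration; the study's actual ERA-5 and ground datasets are not reproduced here.

```python
import numpy as np

def validation_stats(observed, estimated):
    """Compute MBE, RMSE, MAE and R^2 between ground observations
    and reanalysis estimates (the metrics used to validate ERA-5 GSR)."""
    obs = np.asarray(observed, dtype=float)
    est = np.asarray(estimated, dtype=float)
    diff = est - obs
    mbe = diff.mean()                       # mean bias error
    rmse = np.sqrt((diff ** 2).mean())      # root mean square error
    mae = np.abs(diff).mean()               # mean absolute error
    ss_res = (diff ** 2).sum()
    ss_tot = ((obs - obs.mean()) ** 2).sum()
    r2 = 1.0 - ss_res / ss_tot              # coefficient of determination
    return mbe, rmse, mae, r2

# Synthetic example: daily GSR in kWh/m^2 (illustrative values only)
obs = [5.1, 6.3, 7.0, 6.8, 5.9]
est = [5.0, 6.5, 7.2, 6.6, 6.0]
mbe, rmse, mae, r2 = validation_stats(obs, est)
```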

Keywords: solar energy, ERA-5 analysis data, global solar radiation, North Africa

208 Threshold Sand Detection Limits for Acoustic Monitors in Multiphase Flow

Authors: Vinod Ponnagandla, Brenton McLaury, Siamack Shirazi

Abstract:

Sand production can lead to deposition of particles or to erosion. Low production rates resulting in deposition can partially clog systems and cause under-deposit corrosion. Commercially available nonintrusive acoustic sand detectors are attractive as they claim to detect sand production. Acoustic sand detectors are used during oil and gas production; however, operators often do not know the threshold detection limits of these devices. It is imperative to know the detection limits to appropriately plan for cleaning of separation equipment or to examine the risk of erosion. These monitors are based on detecting the acoustic signature of sand as the particles impact the pipe walls. The objective of this work is to determine threshold detection limits for commercially available acoustic sand monitors. The minimum threshold sand concentration that can be detected in a pipe is determined as a function of flowing gas and liquid velocities. A large-scale flow loop with a 4-inch test section is utilized. Commercially available sand monitors (ClampOn and Roxar) are evaluated for different flow regimes, sand sizes, and pipe orientations (vertical and horizontal). The manufacturers recommend that the monitors be placed on a bend to maximize the number of particle impacts, so results are shown for monitors placed at the 45- and 90-degree positions of a bend. Acoustic sand monitors that clamp to the outside of the pipe are passive and listen for solid-particle impact noise. The threshold sand rate is calculated by eliminating the background noise created by the flow of gas and liquid in the pipe for the various flow regimes generated in the horizontal and vertical test sections. The average sand sizes examined are 150 and 300 microns. For stratified and bubbly flows, the threshold sand rates are much higher than for the other flow regimes investigated, such as slug and annular flow. However, the background noise generated by the slug flow regime is very high and causes high uncertainty in the detection limits. The threshold sand rates for annular flow and dry gas conditions are the lowest because of the high gas velocities. The effect of monitor placement around elbows in vertical and horizontal pipes is also examined for the 150-micron sand. The results show that the threshold sand rates detected in the vertical orientation are generally lower for all flow regimes investigated.
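The threshold-determination idea described above, subtracting the flow-only background noise from the acoustic level recorded with sand present, can be sketched as follows. All numbers (levels, sand rates, the 3 dB margin) are hypothetical placeholders, not values from the monitors tested.

```python
def threshold_sand_rate(background_db, signal_db, sand_rates, margin_db=3.0):
    """Lowest tested sand rate whose acoustic level exceeds the
    flow-only background by at least margin_db; None if none does.
    All decibel values and rates here are hypothetical placeholders."""
    background = sum(background_db) / len(background_db)
    for rate, level in sorted(zip(sand_rates, signal_db)):
        if level - background >= margin_db:
            return rate
    return None  # no tested rate was detectable above background

# Hypothetical readings for one flow regime
background = [60.2, 59.8, 60.0]   # dB, gas/liquid flow only
rates = [0.1, 0.5, 1.0]           # injected sand rate (arbitrary units)
levels = [61.0, 64.5, 70.2]       # dB with sand injected at each rate
threshold = threshold_sand_rate(background, levels, rates)
```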

Keywords: acoustic monitor, sand, multiphase flow, threshold

207 Controlled Doping of Graphene Monolayer

Authors: Vedanki Khandenwal, Pawan Srivastava, Kartick Tarafder, Subhasis Ghosh

Abstract:

We present here the experimental realization of controlled doping of graphene monolayers through charge transfer, by trapping selected organic molecules between the graphene layer and the underlying substrate. This charge transfer between graphene and the trapped molecule leads to controlled n-type or p-type doping in monolayer graphene (MLG), depending on whether the trapped molecule acts as an electron donor or an electron acceptor. Doping controllability has been validated by shifts in the corresponding Raman peak positions and in the Dirac points. In the transfer characteristics of field-effect transistors, a significant shift of the Dirac point towards the positive or negative gate voltage region provides the signature of p-type or n-type doping of graphene, respectively, as a result of the charge transfer between graphene and the organic molecules trapped beneath it. To facilitate the charge-transfer interaction, it is crucial for the trapped molecules to be situated in close proximity to the graphene surface, as demonstrated by findings from Raman and infrared spectroscopies. However, the mechanism responsible for this charge-transfer interaction has remained unclear at the microscopic level. It is generally accepted that the dipole moment of adsorbed molecules plays a crucial role in determining the charge-transfer interaction between molecules and graphene. However, our findings clearly illustrate that the doping effect depends primarily on the reactivity of the constituent atoms of the adsorbed molecules rather than just their dipole moment. This has been illustrated by trapping various molecules at the graphene-substrate interface. Dopant molecules such as acetone (containing highly reactive oxygen atoms) promote adsorption across the entire graphene surface. In contrast, molecules with less reactive atoms, such as acetonitrile, tend to adsorb at the edges due to the presence of reactive dangling bonds. In the case of low-dipole-moment molecules like toluene, there is no substantial adsorption anywhere on the graphene surface. The observation of (i) the emergence of the Raman D peak exclusively at the edges for trapped molecules without reactive atoms, and throughout the entire basal plane for those with reactive atoms, and (ii) variations in the density of molecules (with and without reactive atoms) attached to graphene with their respective dipole moments provides compelling evidence to support our claim. Additionally, these observations were supported by first-principles density functional calculations.

Keywords: graphene, doping, charge transfer, liquid phase exfoliation

206 Creativity and Expressive Interpretation of Musical Drama in Children with Special Needs (Down Syndrome) in Special Schools Yayasan Pendidikan Anak Cacat, Medan, North Sumatera

Authors: Junita Batubara

Abstract:

Children with special needs, especially those with mental, physical, or social/emotional disabilities, are marginalized. Many people still view them as troublesome, inconvenient, having learning difficulties, unproductive, and burdensome to society. This study investigates how musical drama can develop the ability to control the coordination of mental functions, how it can help children work together, how it can help maintain children's emotional and physical health, and how it can improve children's creativity. The objectives of the research are: to know whether musical drama can control the coordination of children's mental functions; to know whether it can improve children's communication and expression abilities; to know whether it can help children work with the people around them; to find out whether it can develop children's emotional and physical health; and to find out whether it can improve children's creativity. The study employed a qualitative research approach. Data were collected by listening and in-depth observation through public hearings with selected key informants, who were teachers, principals, parents, and children. The data obtained from each public hearing were then processed through data reduction, data display, and conclusion drawing/verification. Furthermore, the model obtained was implemented in a musical performance, whose benefits are that musical drama can improve language skills; develop memory and the storage of information; develop communication skills and self-expression; help children work together; support emotional and physical health; and enhance creativity.

Keywords: children Down syndrome, music, drama script, performance

205 Real-Time Monitoring of Drinking Water Quality Using Advanced Devices

Authors: Amani Abdallah, Isam Shahrour

Abstract:

The quality of drinking water is a major public health concern. Quality control is generally performed in the laboratory, which requires a long time. This type of control is not adapted to accidental pollution from sudden events, which can have serious consequences for population health. It is therefore of major interest to develop real-time innovative solutions for the detection of accidental contamination in drinking water systems. This paper presents research conducted within the SunRise Demonstrator for 'Smart and Sustainable Cities', with a particular focus on supervision of water quality. This work aims at (i) implementing a smart water system in a large water network (the campus of the University Lille 1), including innovative equipment for real-time detection of abnormal events such as those related to the contamination of drinking water, and (ii) developing a numerical model of contamination diffusion in the water distribution system. The first step included verification of the water quality sensors and their effectiveness on a network prototype of 50 m length. This part included evaluation of the efficiency of these sensors in detecting both bacterial and chemical contamination events in drinking water distribution systems. An on-line optical sensor integrated with a laboratory-scale distribution system (LDS) was shown to respond rapidly to changes in refractive index induced by injected loads of chemical (cadmium, mercury) and biological (Escherichia coli) contamination. All injected substances were detected by the sensor; the magnitude of the response depends on the type of contaminant introduced and is proportional to the injected substance concentration.
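A minimal sketch of the kind of real-time anomaly flagging described above, assuming a simple rolling-baseline z-score on refractive-index readings; the window size and threshold are illustrative and are not the settings used in the SunRise demonstrator.

```python
from collections import deque

class ContaminationDetector:
    """Flag abrupt refractive-index shifts relative to a rolling baseline.
    Window size and z-score threshold are illustrative assumptions."""

    def __init__(self, window=20, threshold=3.0):
        self.readings = deque(maxlen=window)
        self.threshold = threshold

    def update(self, value):
        """Feed one new reading; return True if it deviates strongly
        from the recent baseline (possible contamination event)."""
        if len(self.readings) >= 5:
            mean = sum(self.readings) / len(self.readings)
            var = sum((x - mean) ** 2 for x in self.readings) / len(self.readings)
            std = var ** 0.5 or 1e-12   # guard against a perfectly flat baseline
            alarm = abs(value - mean) / std > self.threshold
        else:
            alarm = False               # not enough history yet
        self.readings.append(value)
        return alarm
```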

Keywords: distribution system, drinking water, refraction index, sensor, real-time

204 Modal Composition and Tectonic Provenance of the Sandstones of Ecca Group, Karoo Supergroup in the Eastern Cape Province, South Africa

Authors: Christopher Baiyegunhi, Kuiwu Liu, Oswald Gwavava

Abstract:

The petrography of the sandstones of the Ecca Group, Karoo Supergroup in the Eastern Cape Province of South Africa, has been investigated with respect to composition, provenance, and the influence of weathering conditions. Petrographic studies based on quantitative analysis of the detrital minerals revealed that the sandstones are composed mostly of quartz, feldspar, and lithic fragments of metamorphic and sedimentary rocks. The sandstones have an average framework composition of 24.3% quartz, 19.3% feldspar, and 26.1% rock fragments, and 81.33% of the quartz grains are monocrystalline. These sandstones are generally very fine to fine grained, moderately to well sorted, and subangular to subrounded in shape. In addition, they are compositionally immature and can be classified as feldspathic wacke and lithic wacke. The absence of major petrographically distinctive compositional variations in the sandstones perhaps indicates homogeneity of their source. It is therefore inferred that the transportation distance from the source area was quite short and that the main mechanism of transportation was river systems feeding the basin. The QFL ternary diagrams reveal dissected and transitional arc provenances, pointing to an active margin and uplifted basement preserving the signature of a recycled provenance. This indicates that the sandstones were derived from a magmatic arc provenance. Since the magmatic provenance includes transitional arc and dissected arc, it also shows that the source area of the Ecca sediments included secondary sedimentary and metasedimentary rocks from a marginal belt that developed as a result of rifting. The weathering diagrams and semi-quantitative weathering index indicate that the Ecca sandstones are mostly from a plutonic source area, with climatic conditions ranging from arid to humid. The compositional immaturity of the sandstones is suggested to be due to weathering or recycling and to low relief or short transport from the source area. The detrital modal compositions of these sandstones are related to back-arc to island arc and continental margin arc settings. The origin and deposition of the Ecca sandstones are due to low-to-moderate weathering, recycling of pre-existing rocks, and erosion and transportation of debris from the orogeny of the Cape Fold Belt.
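The average framework mode quoted above (24.3% quartz, 19.3% feldspar, 26.1% rock fragments) sums to less than 100% because matrix is excluded; for plotting on a QFL ternary diagram the three components are recalculated to 100%. A minimal sketch of that recalculation:

```python
def qfl_coordinates(quartz, feldspar, lithics):
    """Recalculate framework percentages to 100% for a QFL ternary plot.
    Inputs are modal percentages of the whole rock; matrix is excluded."""
    total = quartz + feldspar + lithics
    if total <= 0:
        raise ValueError("framework total must be positive")
    return tuple(round(100.0 * x / total, 1) for x in (quartz, feldspar, lithics))

# Average Ecca framework mode from the abstract: Q=24.3, F=19.3, L=26.1
q, f, l = qfl_coordinates(24.3, 19.3, 26.1)
```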

Keywords: petrography, tectonic setting, provenance, Ecca Group, Karoo Basin

203 Nonlinear Aerodynamic Parameter Estimation of a Supersonic Air to Air Missile by Using Artificial Neural Networks

Authors: Tugba Bayoglu

Abstract:

Aerodynamic parameter estimation is crucial in the missile design phase, since an accurate, high-fidelity aerodynamic model is required for designing a high-performance and robust control system, developing high-fidelity flight simulations, and verifying computational and wind tunnel test results. However, there are few missile aerodynamic parameter identification studies in the literature, for three main reasons: (1) most air-to-air missiles cannot fly at constant speed, (2) missile flight test numbers and flight durations are much smaller than those of fixed-wing aircraft, and (3) the variation of missile aerodynamic parameters with Mach number is higher than that of fixed-wing aircraft. In addition to these challenges, identification of aerodynamic parameters at high wind angles by classical estimation techniques brings another difficulty to the estimation process. The reason is that most estimation techniques require employing polynomials or splines to model the behavior of the aerodynamics. However, for missiles with a large variation of aerodynamic parameters with respect to flight variables, the order of the proposed model increases, which brings computational burden and complexity. Therefore, this study aims to solve the nonlinear aerodynamic parameter identification problem for a supersonic air-to-air missile by using artificial neural networks. The proposed method will be tested using simulated data generated with a six-degree-of-freedom missile model involving a nonlinear aerodynamic database. The data will be corrupted by adding noise to the measurement model. Then, using the flight variables and measurements, the parameters will be estimated. Finally, the prediction accuracy will be investigated.
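The estimation approach described above can be illustrated with a toy version: a small feed-forward network fitted by gradient descent to synthetic aerodynamic data. The functional form of the coefficient, the network size, and the training settings are all invented for illustration; they are not the paper's six-DOF model or aerodynamic database.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "flight data": a force coefficient that varies nonlinearly
# with angle of attack (rad) and Mach number, plus measurement noise.
alpha = rng.uniform(-0.3, 0.3, 500)
mach = rng.uniform(1.5, 4.0, 500)
cz = 3.5 * alpha + 0.8 * alpha * np.abs(alpha) - 0.15 * (mach - 2.5) * alpha
cz = cz + rng.normal(0.0, 0.005, cz.shape)

# Standardize inputs so the tanh units are well conditioned.
X = np.column_stack([alpha, mach])
X = (X - X.mean(axis=0)) / X.std(axis=0)
y = cz.reshape(-1, 1)

# One hidden layer of tanh units, trained by full-batch gradient descent.
W1 = rng.normal(0.0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

loss0 = np.mean((forward(X)[1] - y) ** 2)   # error before training
lr = 0.05
for _ in range(4000):
    h, pred = forward(X)
    err = 2.0 * (pred - y) / len(y)         # dMSE/dpred
    gW2 = h.T @ err;  gb2 = err.sum(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)      # backprop through tanh
    gW1 = X.T @ dh;   gb1 = dh.sum(axis=0)
    W2 -= lr * gW2;  b2 -= lr * gb2
    W1 -= lr * gW1;  b1 -= lr * gb1

loss = np.mean((forward(X)[1] - y) ** 2)    # error after training
```

The same fit-then-predict loop, with flight variables as inputs and measured coefficients as targets, is the core of the identification scheme the abstract proposes.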

Keywords: air to air missile, artificial neural networks, open loop simulation, parameter identification

202 Trinary Affinity—Mathematic Verification and Application (1): Construction of Formulas for the Composite and Prime Numbers

Authors: Liang Ming Zhong, Yu Zhong, Wen Zhong, Fei Fei Yin

Abstract:

Trinary affinity is a description of existence: every object exists as it is known and spoken of, in a system of 2 differences (denoted dif₁, dif₂) and 1 similarity (Sim), equivalently expressed as dif₁ / Sim / dif₂ and kn / 0 / tkn (kn = the known, tkn = the 'to be known', 0 = the zero point of knowing). They are mathematically verified and illustrated in this paper by the arrangement of all integers into 3 columns, where each number exists as a difference in relation to another number as another difference, the 2 difs arbitrated by a third number as the Sim, resulting in a trinary affinity or trinity of 3 numbers, of which one is the known, another the 'to be known', and the third the zero (0) from which both the kn and tkn are measured and specified. Consequently, any number is horizontally specified as 3n, '3n – 1', or '3n + 1', and vertically as 'Cn + c', so that any number occurs at the intersection of its X and Y axes and is represented by its X and Y coordinates, as any point on Earth's surface is by its latitude and longitude. Technically, i) primes are viewed and treated as progenitors, and composites as descending from them, forming families of composites, each capable of being measured and specified from its own zero, called in this paper the realistic zero (denoted 0r, as contrasted to the mathematic zero, 0m), which corresponds to the constant c and the nature of which separates the composite and prime numbers; and ii) any number is considered as having a magnitude as well as a position, so that a number is verified as a prime first by referring to its descriptive formula and then by making sure that no composite number can possibly occur at its position, by dividing it by factors provided by the composite number formulas. The paper consists of 3 parts: 1) a brief explanation of the trinary affinity of things, 2) the 8 formulas that represent ALL the primes, and 3) families of composite numbers, each represented by a formula. A composite number family is described as 3n + f₁·f₂. Since there are infinitely many composite number families, to verify the primality of a large probable prime we have to divide it by several or many factors f₁ drawn from a range of composite number formulas, a procedure that is as laborious as it is the surest way to verify a large number's primality. (So, it is possible to substitute planned division for trial division.)
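The three-column classification and the division-based primality check described above can be sketched in code. The trial-division routine below is a standard stand-in for the paper's formula-driven 'planned division', shown only to illustrate the residue classes: every prime greater than 3 necessarily falls in a '3n – 1' or '3n + 1' column, since the '3n' column contains only multiples of 3.

```python
def residue_class(n):
    """Column of n in the three-column arrangement: '3n', '3n-1' or '3n+1'."""
    return {0: "3n", 1: "3n+1", 2: "3n-1"}[n % 3]

def is_prime(n):
    """Primality by division with candidate factors; ordinary trial
    division standing in for the paper's planned division."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    if n % 3 == 0:
        return n == 3
    f = 5
    while f * f <= n:           # remaining candidates have the form 6k +/- 1
        if n % f == 0 or n % (f + 2) == 0:
            return False
        f += 6
    return True

primes = [p for p in range(2, 30) if is_prime(p)]
```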

Keywords: trinary affinity, difference, similarity, realistic zero

201 Nondecoupling Signatures of Supersymmetry and an Lμ-Lτ Gauge Boson at Belle-II

Authors: Heerak Banerjee, Sourov Roy

Abstract:

Supersymmetry, one of the most celebrated frameworks for explaining experimental observations where the standard model (SM) falls short, is reeling from the lack of experimental vindication. At the same time, the idea of additional gauge symmetry, in particular gauged Lμ-Lτ symmetric models, has also generated significant interest. Such models have been extensively proposed to explain the tantalizing discrepancy between the predicted and measured values of the muon anomalous magnetic moment, alongside several other issues plaguing the SM. While very little parameter space within these models remains unconstrained, this work finds that the γ + Missing Energy (ME) signal at the Belle-II detector will be a smoking gun for supersymmetry (SUSY) in the presence of a gauged U(1)Lμ-Lτ symmetry. A remarkable consequence of breaking the enhanced symmetry appearing in the limit of degenerate (s)leptons is the nondecoupling of the radiative contribution of heavy charged sleptons to the γ-Z′ kinetic mixing. The signal process, e⁺e⁻ → γZ′ → γ + ME, is an outcome of this ubiquitous feature. Taking into account the severe constraints on gauged Lμ-Lτ models from several low-energy observables, it is shown that any significant excess in all but the highest photon energy bin would be an undeniable signature of such heavy scalar fields in SUSY coupling to the additional gauge boson Z′. The number of signal events depends crucially on the logarithm of the ratio of stau to smuon mass in the presence of SUSY. In addition, the number is inversely proportional to the e⁺e⁻ collision energy, making a low-energy, high-luminosity collider like Belle-II an ideal testing ground for this channel. This process can probe large swathes of the hitherto free slepton mass ratio vs. additional gauge coupling (gₓ) parameter space. More importantly, it can explore the narrow slice of Z′ mass (MZ′) vs. gₓ parameter space still allowed in gauged U(1)Lμ-Lτ models for superheavy sparticles. The finding that the signal significance is independent of the individual slepton masses is an exciting prospect indeed. Further, the prospect that signatures of even superheavy SUSY particles that may have escaped detection at the LHC could show up at the Belle-II detector is an invigorating revelation.

Keywords: additional gauge symmetry, electron-positron collider, kinetic mixing, nondecoupling radiative effect, supersymmetry

200 Improving Electrical Safety through Enhanced Work Permits

Authors: Nuwan Karunarathna, Hemali Seneviratne

Abstract:

Distribution utilities inherently present electrical hazards to their workers, in addition to the general public, especially due to bare overhead lines spread over a large geographical area. Therefore, certain procedures, such as de-energization, verification of de-energization, isolation, lock-out tag-out, and earthing, are carried out to ensure safe working conditions when conducting maintenance work on de-energized overhead lines. However, measures must be taken to coordinate the above procedures and to ensure their successful and accurate execution. Issuing 'Work Permits' is one such measure, used by the Distribution Utility considered in this paper. Unfortunately, the Work Permit method adopted by the Distribution Utility concerned here has not succeeded in creating the expected safe working conditions, as evidenced by four (4) fatalities of workers due to electrocution in the Distribution Utility from 2016 to 2018. Therefore, this paper attempts to identify deficiencies in the Work Permit method and related contributing factors through careful analysis of the four (4) fatalities and workplace practices, in order to rectify the shortcomings and prevent future incidents. The analysis shows that the present level of coordination between the 'Authorized Person' who issues the work permit and the 'Competent Person' who performs the actual work is grossly inadequate to achieve the intended safe working conditions. The paper identifies the need for active participation of a 'Control Person' who oversees the whole operation from a bird's-eye perspective and recommends further measures, derived from the analysis of the fatalities, to address the identified lapses in the current work permit system.

Keywords: authorized person, competent person, control person, de-energization, distribution utility, isolation, lock-out tag-out, overhead lines, work permit

199 Space Weather and Earthquakes: A Case Study of Solar Flare X9.3 Class on September 6, 2017

Authors: Viktor Novikov, Yuri Ruzhin

Abstract:

The studies completed to date on the relation between the Earth's seismicity and solar processes provide fuzzy and contradictory results. To test the idea that solar flares can trigger earthquakes, we analyzed a powerful surge of solar flare activity early in September 2017, during the approach to the minimum of the 24th solar cycle, which was accompanied by significant disturbances of space weather. On September 6, 2017, the sunspot group AR2673 generated a large solar flare of class X9.3, the strongest flare of the past twelve years. Its explosion produced a coronal mass ejection partially directed towards the Earth. We carried out a statistical analysis of the USGS and EMSC earthquake catalogs to determine the effect of solar flares on global seismic activity. New evidence of earthquake triggering due to the Sun-Earth interaction is demonstrated by a simple comparison of the behavior of the Earth's seismicity before and after the strong solar flare. The global number of earthquakes with magnitudes of 2.5 to 5.5 within 11 days after the solar flare increased by 30 to 100%. The possibility of electric/electromagnetic triggering of earthquakes by space weather disturbances is supported by the results of field and laboratory studies in which earthquakes (both natural and laboratory) were initiated by injection of electric current into the Earth's crust. For the specific case of artificial electric earthquake triggering, the current density at the depth of earthquake sources is comparable with estimates of the density of telluric currents induced by variations of space weather conditions due to solar flares. Acknowledgment: The work was supported by RFBR grant No. 18-05-00255.
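The before/after comparison described above amounts to counting catalog events in equal windows around the flare time. The sketch below uses an invented event list purely for illustration; in the study the events come from the USGS and EMSC catalogs.

```python
from datetime import datetime, timedelta

def flare_seismicity_change(event_times, flare_time, window_days=11):
    """Count events in equal windows before and after a flare and
    return (n_before, n_after, percent change in the count)."""
    w = timedelta(days=window_days)
    before = sum(1 for t in event_times if flare_time - w <= t < flare_time)
    after = sum(1 for t in event_times if flare_time < t <= flare_time + w)
    change = 100.0 * (after - before) / before if before else float("nan")
    return before, after, change

# Invented event list around the X9.3 flare of September 6, 2017
flare = datetime(2017, 9, 6, 12, 0)
events = ([flare - timedelta(days=d) for d in (1, 2, 3, 4)]
          + [flare + timedelta(days=d) for d in (1, 2, 3, 4, 5, 6)])
before, after, change = flare_seismicity_change(events, flare)
```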

Keywords: solar flare, earthquake activity, earthquake triggering, solar-terrestrial relations

198 Analysis of Surface Hardness, Surface Roughness and near Surface Microstructure of AISI 4140 Steel Worked with Turn-Assisted Deep Cold Rolling Process

Authors: P. R. Prabhu, S. M. Kulkarni, S. S. Sharma, K. Jagannath, Achutha Kini U.

Abstract:

In the present study, response surface methodology has been used to optimize the turn-assisted deep cold rolling process of AISI 4140 steel. A regression model is developed to predict surface hardness and surface roughness using response surface methodology and a central composite design. In the development of the predictive model, the deep cold rolling force, ball diameter, initial roughness of the workpiece, and number of tool passes are considered as model variables. The rolling force and ball diameter are the significant factors for surface hardness, while the ball diameter and number of tool passes are found to be significant for surface roughness. The predicted surface hardness and surface roughness values and the subsequent verification experiments under the optimal operating conditions confirmed the validity of the predictive model. The absolute average errors between the experimental and predicted values at the optimal combination of parameter settings are 0.16% for surface hardness and 1.58% for surface roughness. Using the optimal processing parameters, the hardness is improved from 225 to 306 HV, an increase in the near-surface hardness of about 36%, and the surface roughness is improved from 4.84 µm to 0.252 µm, a decrease of about 95%. The depth of compression is found to be more than 300 µm from the microstructure analysis, in correlation with the results obtained from the microhardness measurements. A Taylor Hobson Talysurf tester, a micro Vickers hardness tester, optical microscopy, and an X-ray diffractometer are used to characterize the modified surface layer.
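The regression step of the response-surface methodology described above can be sketched as an ordinary least-squares fit of a full second-order model. The data below are an invented quadratic in coded units (as in a central composite design), not the paper's measured hardness or roughness values.

```python
import numpy as np

def design_matrix(X):
    """Full second-order response-surface terms: intercept, linear,
    pure quadratic, and two-factor interaction columns."""
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]                         # linear
    cols += [X[:, i] ** 2 for i in range(k)]                    # quadratic
    cols += [X[:, i] * X[:, j]
             for i in range(k) for j in range(i + 1, k)]        # interactions
    return np.column_stack(cols)

def fit_rsm(X, y):
    """Least-squares estimate of the second-order model coefficients."""
    beta, *_ = np.linalg.lstsq(design_matrix(X), np.asarray(y, float),
                               rcond=None)
    return beta

def predict(beta, X):
    return design_matrix(X) @ beta

# Invented response over a 3x3 grid of two coded factors
pts = np.array([[f, d] for f in (-1.0, 0.0, 1.0) for d in (-1.0, 0.0, 1.0)])
resp = (250 + 20 * pts[:, 0] + 10 * pts[:, 1]
        + 5 * pts[:, 0] ** 2 - 3 * pts[:, 1] ** 2
        + 2 * pts[:, 0] * pts[:, 1])
beta = fit_rsm(pts, resp)
```

Once fitted, the model can be evaluated over the factor space to locate the optimal parameter combination, which is then confirmed by verification experiments as in the abstract.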

Keywords: hardness, response surface methodology, microstructure, central composite design, deep cold rolling, surface roughness

197 Advanced Biosensor Characterization of Phage-Mediated Lysis in Real-Time and under Native Conditions

Authors: Radka Obořilová, Hana Šimečková, Matěj Pastucha, Jan Přibyl, Petr Skládal, Ivana Mašlaňová, Zdeněk Farka

Abstract:

Due to the spread of antimicrobial resistance, alternative approaches to combat superinfections are being sought, both in the field of lysing agents and in methods for studying bacterial lysis. A suitable alternative to antibiotics is phage therapy and enzybiotics, for which it is also necessary to study the mechanism of action. Biosensor-based techniques allow rapid detection of pathogens in real time, verification of sensitivity to commonly used antimicrobial agents, and selection of suitable lysis agents. The detection of lysis takes place on the surface of the biosensor with immobilized bacteria, which has the potential to be used to study biofilms. An example of such a biosensor is surface plasmon resonance (SPR), which records the kinetics of bacterial lysis based on a change in the resonance angle. The bacteria are immobilized on the surface of the SPR chip, and the action of the phage is monitored as mass loss after the typical lytic-cycle delay. Atomic force microscopy (AFM) is a technique for imaging samples on a surface. In contrast to electron microscopy, it has the advantage of real-time imaging under the native conditions of the nutrient medium. In our case, Staphylococcus aureus was lysed using the enzyme lysostaphin and phage P68 from the family Podoviridae at 37 °C. In addition to visualization, AFM was used to study changes in mechanical properties during lysis, which showed a reduction of Young's modulus (E) after disruption of the bacterial wall; changes in E reflect the stiffness of the bacterium. These advanced methods provide deeper insight into bacterial lysis and can help in the fight against bacterial diseases.

Keywords: biosensors, atomic force microscopy, surface plasmon resonance, bacterial lysis, staphylococcus aureus, phage P68

Procedia PDF Downloads 108
196 Identification of the Putative Interactome of Escherichia coli Glutaredoxin 2 by Affinity Chromatography

Authors: Eleni Poulou-Sidiropoulou, Charalampos N. Bompas, Martina Samiotaki, Alexios Vlamis-Gardikas

Abstract:

The glutaredoxin (Grx) and thioredoxin (Trx) systems keep the intracellular environment reduced in almost all organisms. In Escherichia coli (E. coli), the Grx system relies on NADPH to reduce glutathione reductase (GR), which in turn reduces oxidized glutathione (GSSG) to glutathione (GSH); GSH then reduces the cytosolic Grxs, the electron donors for different intracellular substrates. In the Trx system, GR and GSH are replaced by Trx reductase (TrxR). Three of the Grxs of E. coli (Grx1, 2, 3) are reduced by GSH, while Grx4 is likely reduced by TrxR. Trx1 and Grx1 from E. coli may reduce ribonucleotide reductase Ia to ensure a constant supply of deoxyribonucleotides for the synthesis of DNA. The role of the other three Grxs is relatively unknown, especially for Grx2, which may amount to up to 1% of total cellular protein in the stationary phase of growth. The protein is known as a potent antioxidant, but no specific functions have been attributed to it. Herein, affinity chromatography of cellular extracts on immobilized Grx2, followed by MS analysis of the resulting eluates, was employed to identify protein ligands that could provide insights into the biological role of Grx2. Ionic, strong non-covalent, and covalent (disulfide) interactions with relevant proteins were detected. As a means of verification, the identified ligands were subjected to in silico docking with monothiol Grx2. In other experiments, protein extracts from E. coli cells lacking the gene for Grx2 (grxB) were compared to those of the wild type. Taken together, the two approaches suggest that Grx2 is involved in protein synthesis, nucleotide metabolism, DNA damage repair, stress responses, and various metabolic processes. Grx2 appears to be a versatile protein that may participate in a wide range of biological pathways beyond its known general antioxidant function.

Keywords: Escherichia coli, glutaredoxin 2, interactome, thiol-disulfide oxidoreductase

Procedia PDF Downloads 19
195 Foreign Language Faculty Mentorship in Vietnam: An Interpretive Qualitative Study

Authors: Hung Tran

Abstract:

This interpretive qualitative study employed three theoretical lenses, Bronfenbrenner's (1979) Ecological System of Human Development, Vygotsky's (1978) Sociocultural Theory of Development, and Knowles's (1970) Adult Learning Theory, as the theoretical framework, in connection with the constructivist research paradigm, to investigate the positive and negative aspects of the extant English as a Foreign Language (EFL) faculty mentoring programs at four higher education institutions (HEIs) in the Mekong River Delta (MRD) of Vietnam. Four apprentice faculty members (mentees), four experienced faculty members (mentors), and two associate deans (administrators) from these HEIs each participated in two tape-recorded individual interviews conducted in Vietnamese. The twenty interviews were transcribed verbatim and translated into English with verification. The initial analysis of the data reveals that the mentoring program, which is mandated by Vietnam's Ministry of Education and Training, has been implemented differently at these HEIs due to a lack of officially documented mentoring guidance. Other general themes emerging from the data include the essentials of the mentoring program, approaches to the mentoring practice, the mentee-mentor relationship, and lifelong learning beyond the mentoring program. Practically, this study offers stakeholders in the mentoring cycle a description of the benefits and best practices of tertiary EFL mentorship, and a suggested mentoring program that is metaphorically depicted as "a lifebuoy" for its current and potential administrators and mentors, to help their mentees survive the first years of teaching. Theoretically, this study contributes to the world's growing knowledge of post-secondary mentorship by enriching the modest literature on Asian tertiary EFL mentorship.

Keywords: faculty mentorship, mentees, mentors, administrator, the MRD, Vietnam

Procedia PDF Downloads 102
194 Establishment of a Test Bed for Integrated Map of Underground Space and Verification of GPR Exploration Equipment

Authors: Jisong Ryu, Woosik Lee, Yonggu Jang

Abstract:

The paper discusses the process of establishing a reliable test bed for verifying the usability of ground-penetrating radar (GPR) exploration equipment based on an integrated underground spatial map in Korea. The aim of this study is to construct a test bed consisting of metal and non-metal pipelines in order to verify the performance of GPR equipment and improve the accuracy of the integrated underground spatial map. The test bed for metal and non-metal pipe detection tests was designed and built at the SOC Demonstration Research Center (Yeoncheon) of the Korea Institute of Civil Engineering and Building Technology, with metal and non-metal pipelines buried to depths of up to 5 m. The test bed was designed for both vehicle-mounted and cart-mounted GPR equipment. Data were collected by constructing the test bed and conducting the metal and non-metal pipe detection tests, and the reliability of the GPR detection results was analyzed by comparing them with basic drawings such as the integrated underground spatial map. The study contributes to improving GPR equipment performance evaluation and the accuracy of the integrated underground spatial map, which is essential for urban planning and construction. It addressed the question of how to verify the usability of GPR exploration equipment based on an integrated underground spatial map and how to improve its performance, and found that the test bed is reliable for verifying the performance of GPR exploration equipment and for accurately detecting metal and non-metal pipelines using the integrated map. The study concludes that establishing such a test bed is essential. The proposed Korean-style test bed can be used for the evaluation of GPR equipment performance and can support the construction of a national non-metal pipeline exploration equipment performance evaluation center in Korea.
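The burial depth a GPR survey reports is recovered from the two-way travel time of the reflected pulse and the wave velocity in the soil; a minimal sketch of this standard relationship (the permittivity value below is an assumed example, not a measurement from the test bed):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def gpr_depth(two_way_time_s, rel_permittivity):
    """Depth of a reflector: d = v * t / 2, where the wave velocity in a
    low-loss soil is v = c / sqrt(eps_r)."""
    v = C / rel_permittivity ** 0.5
    return v * two_way_time_s / 2.0

# Example: 100 ns two-way travel time in moist soil (eps_r ~ 9 assumed)
print(round(gpr_depth(100e-9, 9.0), 1))  # about 5.0 m, the test bed's maximum burial depth
```

The permittivity of the soil is the dominant unknown in practice, which is one reason a test bed with pipes at surveyed depths is needed to calibrate and verify the equipment.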

Keywords: Korea-style GPR testbed, GPR, metal pipe detecting, non-metal pipe detecting

Procedia PDF Downloads 69
193 Service Interactions Coordination Using a Declarative Approach: Focuses on Deontic Rule from Semantics of Business Vocabulary and Rules Models

Authors: Nurulhuda A. Manaf, Nor Najihah Zainal Abidin, Nur Amalina Jamaludin

Abstract:

Coordinating service interactions is a vital part of developing distributed applications that are built as networks of autonomous participants (e.g., software components, web services, online resources) and involve collaboration among a diverse number of participant services hosted by different providers. The complexity of coordinating service interactions shows how important suitable techniques and approaches are for designing and coordinating the interactions between participant services, so that the overall goal of the collaboration is achieved. The objective of this research is to develop the capability of steering a complex service interaction towards a desired outcome. Therefore, an efficient technique for modelling, generating, and verifying the coordination of service interactions is developed. The developed model describes service interactions using the service choreography approach, focusing on a declarative style and advocating an Object Management Group (OMG) standard, the Semantics of Business Vocabulary and Rules (SBVR). This model, namely the SBVR model for service choreographies, focuses on declarative deontic rules expressing both obligation and prohibition, which are particularly useful for coordinating service interactions. The generated SBVR model is then formalised and transformed into an Alloy model, which is verified with the Alloy Analyzer. The transformation of SBVR into Alloy makes it possible to automatically generate the corresponding coordination of service interactions (service choreography), hence producing an immediate instance of execution that satisfies the constraints of the specification, and to verify whether a specific request can be realised in the generated choreography.
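The deontic distinction the model relies on can be illustrated with a toy trace checker: an obligation demands that some interaction eventually follows its trigger, while a prohibition forbids an interaction until a condition has occurred. This is only an illustration of the idea, not the SBVR or Alloy notation used in the paper, and all message names are hypothetical:

```python
def obligation_met(trace, trigger, required):
    """Obligation: every occurrence of `trigger` must be followed, later in
    the trace, by `required`."""
    for i, msg in enumerate(trace):
        if msg == trigger and required not in trace[i + 1:]:
            return False
    return True

def prohibition_respected(trace, forbidden, until):
    """Prohibition: `forbidden` must not occur before `until` has occurred."""
    for msg in trace:
        if msg == until:
            return True   # condition reached; prohibition lifted
        if msg == forbidden:
            return False  # forbidden interaction happened too early
    return True

trace = ["request", "payment", "delivery", "response"]
print(obligation_met(trace, "request", "response"))         # True
print(prohibition_respected(trace, "delivery", "payment"))  # True
```

A model checker such as Alloy generalises this from checking one concrete trace to searching all traces permitted by the specification.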

Keywords: service choreography, service coordination, behavioural modelling, complex interactions, declarative specification, verification, model transformation, semantics of business vocabulary and rules, SBVR

Procedia PDF Downloads 118
192 FE Modelling of Structural Effects of Alkali-Silica Reaction in Reinforced Concrete Beams

Authors: Mehdi Habibagahi, Shami Nejadi, Ata Aminfar

Abstract:

A significant degradation factor that impacts the durability of concrete structures is the alkali-silica reaction (ASR). Engineers are frequently faced with the challenge of conducting a thorough safety assessment of concrete structures that have been affected by ASR, which has a major influence on structural capacity. In most cases, the reduction in compressive strength, tensile strength, and modulus of elasticity is expressed as a function of free expansion and crack widths; predicting the effect of ASR on flexural strength is also relevant. In this paper, a nonlinear three-dimensional (3D) finite element model is proposed to describe the flexural strength degradation induced by ASR. Initial strains, initial stresses, initial cracks, and deterioration of material characteristics were all considered as ASR factors in this model. The effects of ASR on structural performance were evaluated by focusing on initial flexural stiffness, the force-deformation curve, and load-carrying capacity. The degradation of the concrete's mechanical properties was correlated with ASR growth using material test data obtained at the Tech Lab, UTS, and implemented in the FEM for various expansions. The finite element study provided a better understanding of the ASR-affected RC beam's failure mechanism and its capacity reduction as a function of ASR expansion. Furthermore, the decrease in the residual mechanical properties due to ASR is reviewed and used as input data for the FEM model. Finally, the analysis techniques and a comparison between the analysis and experimental results are discussed. Verification is also provided through analyses of reinforced concrete beams whose behavior is governed by either flexural or shear mechanisms.

Keywords: alkali-silica reaction, analysis, assessment, finite element, nonlinear analysis, reinforced concrete

Procedia PDF Downloads 141
191 The Influence of Market Attractiveness and Core Competence on Value Creation Strategy and Competitive Advantage and Its Implication on Business Performance

Authors: Firsan Nova

Abstract:

The average Indonesian watches 5.5 hours of TV a day. With a population of 242 million people and a free-to-air (FTA) TV penetration rate of 56%, that equates to 745 million hours of television watched each day. With such potential, it is no wonder that many companies are now attempting to enter the pay-TV market. Research firm Media Partner Asia has forecast in its study that the number of Indonesian pay-television subscribers will climb from 2.4 million in 2012 to 8.7 million by 2020, with penetration scaling up from 7 percent to 21 percent. Key drivers of market growth, the study says, include macro trends built around higher disposable income and a rising middle class, with leading players continuing to invest significantly in sales, distribution, and content; new entrants, in the meantime, will boost overall prospects. This study aims to examine and analyze the effect of market attractiveness and core competence on value creation and competitive advantage, and their impact on business performance, in the pay-TV industry in Indonesia. The study uses a strategic management science approach with the census method, in which all members of the population form the sample. A verification method is used to examine the relationships between variables. The unit of analysis in this research is all Indonesian pay-TV business units, totaling 19 business units; the unit of observation is the directors and managers of each business unit. Hypothesis testing is performed using Partial Least Squares (PLS). The study concludes that market attractiveness affects business performance through value creation and competitive advantage. Appropriate value creation comes from the company's ability to optimize its core competence and exploit market attractiveness; value creation affects competitive advantage, which can be determined by the company's ability to create value for customers; and competitive advantage in turn has an impact on business performance.

Keywords: market attractiveness, core competence, value creation, competitive advantage, business performance

Procedia PDF Downloads 321
190 Finite Element Modeling and Nonlinear Analysis for Seismic Assessment of Off-Diagonal Steel Braced RC Frame

Authors: Keyvan Ramin

Abstract:

The geometric nonlinearity of the Off-Diagonal Bracing System (ODBS) can complement and extend the material nonlinearity of reinforced concrete. In the initial phase, finite element models were built for a flexural frame, an x-braced frame, and an ODBS-braced frame, and the different models were then investigated through various analyses. The models were verified against the experimental results for the flexural and x-braced frames. Analytical assessments were performed using three-dimensional finite element modeling. Nonlinear static analysis was used to obtain the performance level and seismic behavior, and the response modification factors were then calculated from each model's pushover curve. In the next phase, the cracks observed in the finite element models, especially in the RC members of all three systems, were evaluated; the finite element assessment of the cracks generated in the ODBS-braced frame was performed for various time steps. Nonlinear dynamic time-history analyses were carried out on models with different numbers of stories for three earthquake accelerograms: El Centro, Naghan, and Tabas. The dynamic analysis was performed after scaling the accelerograms, for the flexural frame, the x-braced frame, and the ODBS-braced frame in turn. A base point on the RC frame was used to investigate the relative displacement under each record. Hysteresis curves were assessed in the course of the study, and the equivalent viscous damping for the ODBS system was estimated according to the references. The results in each section show that the ODBS system has acceptable seismic behavior, and the conclusions converge when the ODBS system is used in a reinforced concrete frame.
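Response modification factors derived from pushover curves are conventionally decomposed into a ductility-dependent part and an overstrength part; in the standard notation (a textbook definition given as background, not a formula quoted from the paper):

```latex
R = R_\mu \,\Omega, \qquad R_\mu = \frac{V_e}{V_y}, \qquad \Omega = \frac{V_y}{V_d},
```

where $V_e$ is the elastic strength demand, $V_y$ the idealized yield strength read off the pushover curve, and $V_d$ the design base shear.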

Keywords: FEM, seismic behaviour, pushover analysis, geometric nonlinearity, time history analysis, equivalent viscous damping, passive control, crack investigation, hysteresis curve

Procedia PDF Downloads 356
189 Angiogenic and Immunomodulatory Properties and Phenotype of Mesenchymal Stromal Cells Can Be Regulated by Cytokine Treatment

Authors: Ekaterina Zubkova, Irina Beloglazova, Iurii Stafeev, Konsyantin Dergilev, Yelena Parfyonova, Mikhail Menshikov

Abstract:

Mesenchymal stromal cells from adipose tissue (MSC) are currently widely used in regenerative medicine to restore the function of damaged tissues, but this is significantly hampered by their heterogeneity. One modern approach to overcoming this obstacle is the polarization of cell subpopulations into a specific phenotype under the influence of cytokines and other factors that activate receptors and signal transduction. We polarized MSC with factors affecting inflammatory signaling and the functional properties of the cells, followed by verification of their expression profile and of their ability to affect the polarization of macrophages. RT-PCR evaluation showed that cells treated with LPS, interleukin-17, or tumor necrosis factor α (TNF-α) primarily express pro-inflammatory factors and cytokines, while cells treated with polyinosinic:polycytidylic acid or interleukin-4 (IL-4) express anti-inflammatory factors and some pro-inflammatory factors. MSC polarized with pro-inflammatory cytokines showed a more robust pro-angiogenic effect in a fibrin gel bead 3D angiogenesis assay. Further, we evaluated possible paracrine effects of MSC on the polarization of intact macrophages. Polarization efficiency was assessed by the expression of the M1/M2 phenotype markers CD80 and CD206. We showed that conditioned media from MSC preincubated in the presence of IL-4 cause an increase in CD206 expression similar to that observed in M2 macrophages, and that conditioned media from MSC polarized in the presence of LPS or TNF-α increase the expression of the CD80 antigen in macrophages, similar to that observed in M1 macrophages. In the other cases, no pronounced paracrine effect of MSC on the polarization of macrophages was detected. Thus, our study showed that polarization of MSC along the pro-inflammatory or anti-inflammatory pathway yields cell subpopulations that have a multidirectional modulating effect on the polarization of macrophages. (RFBR grants 20-015-00405 and 18-015-00398.)

Keywords: angiogenesis, cytokines, mesenchymal, polarization, inflammation

Procedia PDF Downloads 135
188 Dynamic Test for Stability of Columns in Sway Mode

Authors: Elia Efraim, Boris Blostotsky

Abstract:

Testing of columns in sway mode is performed in order to determine the maximal allowable load, limited by plastic deformations of the columns or their end connections, and the critical load, limited by column stability. The motivation to determine an accurate value of the critical force comes from its uses: the critical load is the maximal allowable load for a given column configuration and can serve as a criterion of perfection; it is used in calculations prescribed by standards for the design of structural elements under the combined action of compression and bending; and it is used for verification of theoretical stability analyses at various end conditions of columns. In the present work, a new non-destructive method for determining the critical buckling load of columns in sway mode is proposed. The method allows measurements to be performed during tests under loads that exceed the column's critical load without loss of stability. The possibility of such loading is achieved by the structure of the loading system: a frame with a rigid girder, in which one column is the tested column and the other is an additional two-hinged strut. Loading of the frame is carried out by a flexible traction element attached to the girder. By choosing the parameters of the traction element and the additional strut, the load applied to the tested column can reach values that exceed the critical load. The lateral stiffness of the system and the critical load of the column are obtained by the dynamic method. The experiment planning and the comparison between the experimental and theoretical values were based on the developed dependency of the lateral stiffness of the system on the vertical load, taking into account the semi-rigid connections of the column's ends. Agreement between the obtained results was established. The method can be used for testing real full-size columns in industrial conditions.
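The dynamic method exploits the fact that the lateral stiffness of a sway frame decreases approximately linearly with the vertical load, so the critical load can be found by extrapolation rather than by loading to failure; schematically (a standard structural dynamics relation, stated here as an assumed basis of the method rather than the paper's exact derivation):

```latex
k(P) \approx k_0\left(1 - \frac{P}{P_{cr}}\right), \qquad \omega^2(P) = \frac{k(P)}{m},
```

so measured values of $\omega^2$ plotted against $P$ fall on a nearly straight line whose intercept with the load axis estimates $P_{cr}$, without the specimen ever having to lose stability.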

Keywords: buckling, columns, dynamic method, end-fixity factor, sway mode

Procedia PDF Downloads 329
187 Using the Weakest Precondition to Achieve Self-Stabilization in Critical Networks

Authors: Antonio Pizzarello, Oris Friesen

Abstract:

Networks such as the electric power grid must demonstrate exemplary performance and integrity. Integrity depends on the quality of both the system design model and the deployed software; the integrity of the deployed software is key, for both the original version and the many versions that arise through maintenance activity. Current software engineering technology and practice do not produce adequate integrity. Distributed systems utilize networks where each node is an independent computer system. The connections between nodes are realized via a network that is normally redundantly connected, to guarantee the presence of a path between two nodes in case some branch fails. Furthermore, the software at each node may fail. Self-stabilizing protocols are usually present that recognize failure in the network and perform a repair action that brings the node back to a correct state. These protocols, first introduced by E. W. Dijkstra, are currently present in almost all Ethernets. Superstabilizing protocols, capable of reacting to a change in the network topology due to the removal or addition of a branch, are less common but are theoretically defined and available. This paper describes how to use the Software Integrity Assessment (SIA) methodology to analyze self-stabilizing software. SIA is based on the UNITY formalism for parallel and distributed programming, which allows the analysis of code for verifying the progress property p leads-to q, describing the progress of all computations that start in a state satisfying p to a state satisfying q via the execution of one or more system modules. As opposed to demonstrably inadequate test-and-evaluation methods, SIA allows the analysis and verification of any network self-stabilizing software, as well as any other software designed to recover from failure without external intervention by maintenance personnel.
The model to be analyzed is obtained by automatic translation of the system code to a transition system that is based on the use of the weakest precondition.
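Dijkstra's original K-state protocol illustrates the self-stabilization property such an analysis must verify: from an arbitrary (corrupt) state, a ring of machines provably converges to a legitimate state with exactly one privileged machine. A minimal simulation of this standard textbook construction (not the paper's SIA tooling) follows:

```python
import random

def privileges(states, K):
    """Machine 0 is privileged iff its state equals that of the last machine;
    machine i > 0 is privileged iff its state differs from machine i-1's."""
    n = len(states)
    priv = [0] if states[0] == states[n - 1] else []
    priv += [i for i in range(1, n) if states[i] != states[i - 1]]
    return priv

def step(states, K):
    """Central daemon: fire one arbitrarily chosen privileged machine."""
    i = random.choice(privileges(states, K))
    if i == 0:
        states[0] = (states[0] + 1) % K  # machine 0 advances its counter
    else:
        states[i] = states[i - 1]        # other machines copy their neighbour

random.seed(1)
n, K = 5, 6                                        # K >= n guarantees convergence
states = [random.randrange(K) for _ in range(n)]   # arbitrary corrupt state
while len(privileges(states, K)) > 1:              # converge to the legitimate state
    step(states, K)
print(len(privileges(states, K)))  # exactly one privilege remains
```

The progress property being verified is precisely of the form p leads-to q: from any state (p), the system reaches a state with a single privilege (q), and every subsequent step preserves it.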

Keywords: network, power grid, self-stabilization, software integrity assessment, UNITY, weakest precondition

Procedia PDF Downloads 197
186 Dynamic Test for Sway-Mode Buckling of Columns

Authors: Boris Blostotsky, Elia Efraim

Abstract:

Testing of columns in sway mode is performed in order to determine the maximal allowable load limited by plastic deformations or their end connections and a critical load limited by columns stability. Motivation to determine accurate value of critical force is caused by its using as follow: - critical load is maximal allowable load for given column configuration and can be used as criterion of perfection; - it is used in calculation prescribed by standards for design of structural elements under combined action of compression and bending; - it is used for verification of theoretical analysis of stability at various end conditions of columns. In the present work a new non-destructive method for determination of columns critical buckling load in sway mode is proposed. The method allows performing measurements during the tests under loads that exceeds the columns critical load without losing its stability. The possibility of such loading is achieved by structure of the loading system. The system is performed as frame with rigid girder, one of the columns is the tested column and the other is additional two-hinged strut. Loading of the frame is carried out by the flexible traction element attached to the girder. The load applied on the tested column can achieve a values that exceed the critical load by choice of parameters of the traction element and the additional strut. The system lateral stiffness and the column critical load are obtained by the dynamic method. The experiment planning and the comparison between the experimental and theoretical values were performed based on the developed dependency of lateral stiffness of the system on vertical load, taking into account a semi-rigid connections of the column's ends. The agreement between the obtained results was established. The method can be used for testing of real full-size columns in industrial conditions.

Keywords: buckling, columns, dynamic method, semi-rigid connections, sway mode

Procedia PDF Downloads 291
185 Multi-Biometric Personal Identification System Based on Hybrid Intelligence Method

Authors: Laheeb M. Ibrahim, Ibrahim A. Salih

Abstract:

Biometrics is a technology that has been widely used in many official and commercial identification applications. Increased concerns about security in recent years (especially during the last decades) have resulted in more attention being given to biometric-based verification techniques. Here, a novel fusion of palmprint and dental traits is suggested. These traits are authentication modalities that have been employed in a range of biometric applications and can identify a person both antemortem (AM) and postmortem (PM). Besides improving accuracy, the fusion of biometrics has several advantages, such as deterring spoofing activities and reducing enrolment failure. In this paper, a unimodal biometric system was first built for each of the palmprint and dental traits, applying for classification an artificial neural network and a hybrid technique that combines swarm intelligence with a neural network; an attempt was then made to combine the palmprint and dental biometrics. Principally, the fusion of palmprint and dental biometrics and its potential application as a biometric identifier have been explored. To address this issue, investigations were carried out into the relative performance of several statistical data-fusion techniques for integrating the information in both unimodal and multimodal biometrics, and the results of the multimodal approach were compared with those of each of the two single-trait authentication approaches. This paper studies the feature and decision fusion levels in multimodal biometrics. To determine the genuine acceptance rate (GAR), parallel decision fusion with the AND, OR, and majority-voting rules was used. With backpropagation used for classification, the GAR came out at 92%, 99%, and 97%, respectively, for the three rules, while with the hybrid technique used for classification it reached 95%, 99%, and 98%, respectively. To determine the accuracy of the multibiometric system, feature-level fusion was also used, with the same classification methods as before; the results were 98% and 99%, respectively, while different methods used to determine the GAR at the feature level came out with 98%.
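The parallel decision-fusion rules named in the abstract (AND, OR, majority voting) combine the accept/reject decisions of the individual matchers; a minimal sketch with hypothetical matcher outputs (three decisions shown so the majority rule is meaningful):

```python
def fuse_and(decisions):
    """Accept only if every matcher accepts (lowest false-accept rate)."""
    return all(decisions)

def fuse_or(decisions):
    """Accept if any matcher accepts (lowest false-reject rate)."""
    return any(decisions)

def fuse_majority(decisions):
    """Accept if more than half of the matchers accept."""
    return sum(decisions) > len(decisions) / 2

# Hypothetical accept/reject decisions from three parallel matchers
decisions = [True, True, False]
print(fuse_and(decisions), fuse_or(decisions), fuse_majority(decisions))
# AND rejects; OR and majority accept
```

The three rules trade false acceptances against false rejections, which is why the abstract reports a separate GAR for each.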

Keywords: back propagation neural network (BP ANN), multibiometric system, parallel system decision fusion, particle swarm optimization (PSO)

Procedia PDF Downloads 510
184 Multi-Sensor Image Fusion for Visible and Infrared Thermal Images

Authors: Amit Kumar Happy

Abstract:

This paper is motivated by the importance of multi-sensor image fusion, with a specific focus on infrared (IR) and visible image (VI) fusion for various applications, including military reconnaissance. Image fusion can be defined as the process of combining two or more source images into a single composite image with extended information content that improves visual perception or feature extraction. The source images can come from different modalities, such as a visible camera and an IR thermal imager: while visible images are captured from reflected radiation in the visible spectrum, thermal images are formed from thermal (infrared) radiation that may be reflected or self-emitted. A digital color camera captures the visible source image, and a thermal infrared camera acquires the thermal source image. In this paper, image fusion algorithms based on the multi-scale transform (MST) and a region-based selection rule with consistency verification are proposed and presented. This research includes the implementation of the proposed image fusion algorithm in MATLAB, along with a comparative analysis to decide the optimum number of MST levels and the coefficient fusion rule. The results are presented, and several commonly used evaluation metrics are applied to assess the validity of the suggested method. Experiments show that the proposed approach is capable of producing good fusion results. While deploying our image fusion approaches, we observed several challenges with popular image fusion methods: although their high computational cost and complex processing steps provide accurate fused results, they also make them hard to deploy in systems and applications that require real-time operation, high flexibility, and low computational capacity. The methods presented in this paper therefore offer good results with minimal time complexity.
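A coefficient selection rule with consistency verification can be sketched independently of the transform: choose the source whose coefficient has the larger magnitude, then smooth the resulting decision map with a 3x3 majority filter so isolated mis-selections are overruled by their neighbourhood. This is a generic illustration of the idea, not the exact rule or MATLAB implementation from the paper:

```python
def fuse_choose_max(a, b):
    """Fuse two equal-size 2D coefficient maps: choose-max rule followed by a
    3x3 majority-vote consistency check on the decision map."""
    rows, cols = len(a), len(a[0])
    # 1 means "take the coefficient from a", 0 means "take it from b"
    decision = [[1 if abs(a[r][c]) >= abs(b[r][c]) else 0 for c in range(cols)]
                for r in range(rows)]

    def majority(r, c):
        # Consistency verification: vote over the 3x3 neighbourhood
        votes = [decision[rr][cc]
                 for rr in range(max(0, r - 1), min(rows, r + 2))
                 for cc in range(max(0, c - 1), min(cols, c + 2))]
        return 1 if sum(votes) * 2 > len(votes) else 0

    verified = [[majority(r, c) for c in range(cols)] for r in range(rows)]
    return [[a[r][c] if verified[r][c] else b[r][c] for c in range(cols)]
            for r in range(rows)]

# A lone dissenting pixel in the middle is overruled by its 8 neighbours
a = [[9, 9, 9], [9, 0, 9], [9, 9, 9]]
b = [[1, 1, 1], [1, 5, 1], [1, 1, 1]]
print(fuse_choose_max(a, b))  # every coefficient taken from a, centre included
```

In an MST-based pipeline the same rule is applied per decomposition level before the inverse transform reconstructs the fused image.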

Keywords: image fusion, IR thermal imager, multi-sensor, multi-scale transform

Procedia PDF Downloads 84
183 Shaping and Improving the Human Resource Management in Small and Medium Enterprises in Poland

Authors: Małgorzata Smolarek

Abstract:

One of the barriers to the development of small and medium-sized enterprises (SMEs) is the difficulty of managing human resources. The first part of the article defines the specifics of staff management in small and medium enterprises. The practical part presents the results of the authors' own studies diagnosing the state of human resource management in small and medium-sized enterprises in Poland, taking into account its impact on the functioning of SMEs in a variable environment. This part presents the findings of empirical studies, which enabled verification of the hypotheses and formulation of conclusions. The findings presented in this paper were obtained during the implementation of the project entitled 'Tendencies and challenges in strategic managing SME in Silesian Voivodeship.' The aim of the studies was to diagnose the state of strategic management and of human resource management, taking into account their impact on the functioning of small and medium enterprises operating in the Silesian Voivodeship in Poland, and to indicate areas for improving the diagnosed model. One of the specific objectives was to diagnose the state of the process of strategic human resource management and to identify its fundamental problems; in this area, the main hypothesis was formulated: the enterprises analysed do not have comprehensive strategies for human resource management. The survey was conducted by questionnaire. Main research results: human resource management in SMEs is characterized by simplicity of procedures and a lack of sophisticated tools, and its specifics depend on the size of the company. The process of human resource management in an SME has to be adjusted to the structure of the organisation and follow from its objectives, so that the organisation can fully implement its strategic plans and achieve success and competitive advantage on the market. A guarantee of success is an accurately developed human resource management policy based on prior analyses of the existing procedures and of the human resources possessed.

Keywords: human resources management, human resources policy, personnel strategy, small and medium enterprises

Procedia PDF Downloads 217
182 Electronic Structure Studies of Mn Doped La₀.₈Bi₀.₂FeO₃ Multiferroic Thin Film Using Near-Edge X-Ray Absorption Fine Structure

Authors: Ghazala Anjum, Farooq Hussain Bhat, Ravi Kumar

Abstract:

Multiferroic materials are vital for new application and memory devices, not only because of the presence of multiple types of domains but also as a result of cross correlation between coexisting forms of magnetic and electrical orders. In spite of wide studies done on multiferroic bulk ceramic materials their realization in thin film form is yet limited due to some crucial problems. During the last few years, special attention has been devoted to synthesis of thin films like of BiFeO₃. As they allow direct integration of the material into the device technology. Therefore owing to the process of exploration of new multiferroic thin films, preparation, and characterization of La₀.₈Bi₀.₂Fe₀.₇Mn₀.₃O₃ (LBFMO3) thin film on LaAlO₃ (LAO) substrate with LaNiO₃ (LNO) being the buffer layer has been done. The fact that all the electrical and magnetic properties are closely related to the electronic structure makes it inevitable to study the electronic structure of system under study. Without the knowledge of this, one may never be sure about the mechanism responsible for different properties exhibited by the thin film. Literature review reveals that studies on change in atomic and the hybridization state in multiferroic samples are still insufficient except few. The technique of x-ray absorption (XAS) has made great strides towards the goal of providing such information. It turns out to be a unique signature to a given material. In this milieu, it is time honoured to have the electronic structure study of the elements present in the LBFMO₃ multiferroic thin film on LAO substrate with buffer layer of LNO synthesized by RF sputtering technique. We report the electronic structure studies of well characterized LBFMO3 multiferroic thin film on LAO substrate with LNO as buffer layer using near-edge X-ray absorption fine structure (NEXAFS). Present exploration has been performed to find out the valence state and crystal field symmetry of ions present in the system. 
NEXAFS data at the O K-edge reveal a slight shift in peak position along with growth in the intensity of the low-energy feature. The Mn L₃,₂-edge spectra indicate the presence of a Mn³⁺/Mn⁴⁺ network, apart from a very small contribution from Mn²⁺ ions in the system, which substantiates the magnetic properties exhibited by the thin film. The Fe L₃,₂-edge spectra, together with the spectrum of a reference compound, reveal that the Fe ions are present in the +3 state. The electronic structure and valence states are found to be in accordance with the magnetic properties exhibited by the LBFMO/LNO/LAO thin film.

Keywords: magnetic, multiferroic, NEXAFS, x-ray absorption fine structure, XMCD, x-ray magnetic circular dichroism

Procedia PDF Downloads 129
181 Brain Connectome of Glia, Axons, and Neurons: Cognitive Model of Analogy

Authors: Ozgu Hafizoglu

Abstract:

An analogy is an essential tool of human cognition that connects diffuse and diverse systems through physical, behavioral, and principal relations that are essential to learning, discovery, and innovation. The Cognitive Model of Analogy (CMA) leads and creates patterns of pathways to transfer information within and between domains in science, just as happens in the brain. The connectome of the brain shows how the brain operates with mental leaps between domains and mental hops within domains, and how the analogical reasoning mechanism operates. This paper demonstrates the CMA as an evolutionary approach to science, technology, and life. The model puts forward the challenges of deep uncertainty about the future, emphasizing the need for flexibility of the system so that the reasoning methodology can adapt to changing conditions in the new era, especially post-pandemic. In this paper, we reveal how to draw an analogy to scientific research to discover new systems that expose the fractal schema of analogical reasoning within and between systems, just as within and between brain regions. The problem-solving process is divided into distinct phases: stimulus, encoding, mapping, inference, and response. Based on brain research to date, the system is shown to be relevant to brain activation in each of these phases, with an emphasis on better visualizing the brain's mechanism in a macro context (brain and spinal cord) and a micro context (glia and neurons), relative to the matching conditions of analogical reasoning and relational information; the encoding, mapping, inference, and response processes; and the verification of perceptual responses in four-term analogical reasoning. Finally, we relate these terminologies to mental leaps, mental maps, mental hops, and mental loops to make the mental model of the CMA clear.
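The five phases listed above can be illustrated with a minimal toy sketch of four-term analogical reasoning (A : B :: C : ?). This is a hypothetical illustration, not an implementation from the paper: the relation table, function names, and example words are all invented for demonstration.

```python
# Toy sketch of the five problem-solving phases for a four-term
# analogy (A : B :: C : ?). The relation table and all examples
# here are hypothetical, not part of the CMA paper.

RELATIONS = {
    ("hand", "finger"): "has_part",
    ("foot", "toe"): "has_part",
    ("bird", "fly"): "can",
    ("fish", "swim"): "can",
}

def encode(pair):
    """Encoding phase: map a stimulus pair to its relation, if known."""
    return RELATIONS.get(pair)

def solve_analogy(a, b, c, candidates):
    """Pipeline: stimulus -> encoding -> mapping -> inference -> response."""
    relation = encode((a, b))           # encode the source-domain pair
    if relation is None:
        return None                     # no relation recovered from stimulus
    for d in candidates:                # mapping: project onto target pairs
        if encode((c, d)) == relation:  # inference: does the relation match?
            return d                    # response: the completing term
    return None

print(solve_analogy("hand", "finger", "foot", ["leg", "toe", "arm"]))
# -> toe
```

In this sketch, a "mental hop" corresponds to matching a relation within one domain's table, while a "mental leap" would require mapping the same relation across differently labeled domains.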

Keywords: analogy, analogical reasoning, brain connectome, cognitive model, neurons and glia, mental leaps, mental hops, mental loops

Procedia PDF Downloads 144