Search results for: Computer Simulation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7027

667 Influence of Confinement on Phase Behavior in Unconventional Gas Condensate Reservoirs

Authors: Szymon Kuczynski

Abstract:

Poland is characterized by the presence of numerous sedimentary basins and hydrocarbon provinces. Since 2006, exploration for hydrocarbons in Poland has become increasingly focused on new unconventional targets, particularly the shale gas potential of the Upper Ordovician and Lower Silurian in the Baltic-Podlasie-Lublin Basin. The first forecast, prepared by the US Energy Information Administration in 2011, indicated 5.3 Tcm of natural gas. In 2012, the Polish Geological Institute presented its own forecast, which estimated maximum reserves at 1.92 Tcm. The difference between the estimates was caused by problems with calculating the initial amount of adsorbed, as well as free, gas trapped in shale rocks (GIIP - Gas Initially In Place). This value depends on sorption capacity, gas saturation, and the mutual interactions between gas, water, and rock. Determining the reservoir type in the initial exploration phase provides essential knowledge that affects decisions related to production. Studying the impact of porosity on the phase envelope shift eliminates errors and improves production profitability. The confinement phenomenon affects flow characteristics, fluid properties, and phase equilibrium. The thermodynamic behavior of confined fluids in porous media is a basic consideration for industrial applications such as hydrocarbon production. In particular, knowledge of the phase equilibrium and the critical properties of the confined fluid is essential for the design and optimization of such processes. In pores with a small diameter (nanopores), which occur in shale formations, the interaction of the pore wall with the fluid particles becomes significant: the nanopore size is similar to the diameter of the fluid particles, and the region where particles flow without interacting with the pore wall is almost equal in extent to the region where this interaction occurs. Molecular simulation studies have shown an effect of confinement on the pseudo-critical properties.
Therefore, at the nanoscale, the critical pressure and temperature and the flow characteristics of hydrocarbons are strongly influenced by the interaction of the fluid particles with the pore wall. It can be concluded that the size of individual pores becomes crucial at the nanoscale, because the above-described effect becomes possible there. Nanoporosity makes it difficult to predict the flow of reservoir fluid. Research is being conducted to explain the mechanisms of fluid flow in nanopores and of gas extraction from porous media by desorption.
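The magnitude of this confinement shift can be illustrated with a pore-size correlation from the molecular-simulation literature (not from this abstract; the Zarragoicoechea-Kuz form, with coefficients as commonly quoted, applied to methane with assumed Lennard-Jones parameters):

```python
# Illustrative sketch: estimated shift of the pseudo-critical temperature of a
# fluid confined in a cylindrical nanopore. Coefficients and fluid parameters
# are assumed literature values, not data from the study above.
def confined_critical_temperature(t_c_bulk_k, sigma_nm, pore_radius_nm):
    """Relative shift (Tc_bulk - Tc_pore)/Tc_bulk = 0.9409*x - 0.2415*x^2,
    where x = sigma_LJ / r_pore."""
    x = sigma_nm / pore_radius_nm
    rel_shift = 0.9409 * x - 0.2415 * x ** 2
    return t_c_bulk_k * (1.0 - rel_shift)

# Methane: Tc ~ 190.6 K, Lennard-Jones sigma ~ 0.373 nm (assumed values)
for r_nm in (2.0, 5.0, 50.0):  # pore radius
    print(r_nm, round(confined_critical_temperature(190.6, 0.373, r_nm), 1))
```

The shift is negligible for large pores and grows rapidly once the pore radius approaches a few molecular diameters, which is the regime relevant to shale matrices.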

Keywords: adsorption, capillary condensation, phase envelope, nanopores, unconventional natural gas

Procedia PDF Downloads 337
666 Predicting Polyethylene Processing Properties Based on Reaction Conditions via a Coupled Kinetic, Stochastic and Rheological Modelling Approach

Authors: Kristina Pflug, Markus Busch

Abstract:

Being able to predict polymer properties and processing behavior based on the applied operating reaction conditions is one of the key challenges in modern polymer reaction engineering. Especially for cost-intensive processes with high safety requirements, such as the high-pressure polymerization of low-density polyethylene (LDPE), the need for simulation-based process optimization and product design is high. A multi-scale modelling approach was set up and validated via a series of high-pressure mini-plant autoclave reactor experiments. The approach starts with the numerical modelling of the complex reaction network of the LDPE polymerization, taking into consideration the actual reaction conditions. While this gives average product properties, the complex polymeric microstructure, including random short- and long-chain branching, is calculated via a hybrid Monte Carlo approach. Finally, the processing behavior of LDPE, i.e., its melt flow behavior, is determined as a function of the previously determined polymeric microstructure using the branch-on-branch algorithm for randomly branched polymer systems. All three steps of the multi-scale modelling approach can be independently validated against analytical data. A triple-detector GPC containing an IR, a viscometry, and a multi-angle light scattering detector is applied. It serves to determine molecular weight distributions as well as chain-length-dependent short- and long-chain branching frequencies. 13C-NMR measurements give average branching frequencies, and rheological measurements in shear and extension serve to characterize the polymeric flow behavior. The agreement between experimental and modelled results was found to be excellent, especially considering that the applied multi-scale modelling approach does not involve fitting parameters to the data. This validates the suggested approach and proves its universality at the same time.
In the next step, the modelling approach can be applied to other reactor types, such as tubular reactors, or to the industrial scale. Moreover, sensitivity analysis for systematically varied process conditions is easily feasible. The developed multi-scale modelling approach thus offers the opportunity to predict and design LDPE processing behavior based simply on process conditions such as feed streams and inlet temperatures and pressures.
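The stochastic step of such an approach can be sketched schematically (purely illustrative rates, not the authors' kinetic scheme): chains grow monomer by monomer, and at each step a branching or termination event occurs with a small probability, producing an ensemble of chain lengths and branch counts whose distributions would feed the rheological model.

```python
# Toy Monte Carlo chain-growth sketch with made-up event probabilities.
import random
import statistics

def grow_chain(p_branch=1e-3, p_term=5e-4, rng=random):
    """Grow one chain; each added monomer may trigger a branch or termination."""
    length, branches = 0, 0
    while True:
        length += 1
        r = rng.random()
        if r < p_term:
            return length, branches          # chain terminates
        if r < p_term + p_branch:
            branches += 1                     # a branch point is formed

random.seed(1)
chains = [grow_chain() for _ in range(2000)]
lengths = [c[0] for c in chains]
branch_counts = [c[1] for c in chains]
# expected mean length ~ 1/p_term, mean branches ~ p_branch/p_term
print(round(statistics.mean(lengths)), round(statistics.mean(branch_counts), 1))
```

In a real implementation, the event probabilities would come from the kinetic model at the local reaction conditions rather than being constants.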

Keywords: low-density polyethylene, multi-scale modelling, polymer properties, reaction engineering, rheology

Procedia PDF Downloads 124
665 A First-Principles Investigation of Magnesium-Hydrogen System: From Bulk to Nano

Authors: Paramita Banerjee, K. R. S. Chandrakumar, G. P. Das

Abstract:

Bulk MgH2 has drawn much attention for the purpose of hydrogen storage because of its high hydrogen storage capacity (~7.7 wt %), as well as its low cost and abundant availability. However, its practical usage has been hindered by its high hydrogen desorption enthalpy (~0.8 eV/H2 molecule), which results in an undesirably high desorption temperature of 300 °C at 1 bar H2 pressure. To surmount the limitations of bulk MgH2 for hydrogen storage, a detailed first-principles density functional theory (DFT) based study on the structure and stability of neutral (Mgm) and positively charged (Mgm+) Mg nanoclusters of different sizes (m = 2, 4, 8 and 12), as well as their interaction with molecular hydrogen (H2), is reported here. It has been found that, due to the absence of d-electrons in the Mg atoms, hydrogen remains in molecular form even after its interaction with the neutral and charged Mg nanoclusters. Interestingly, the H2 molecules do not enter the interstitial positions of the nanoclusters. Rather, they remain on the surface, decorating these nanoclusters and forming new structures with a gravimetric density higher than 15 wt %. Our observation is that the inclusion of Grimme’s DFT-D3 dispersion correction in this weakly interacting system has a significant effect on the binding of the H2 molecules to these nanoclusters. The dispersion-corrected interaction energy (IE) values (0.1-0.14 eV/H2 molecule) fall in the energy window that is ideal for hydrogen storage. These IE values are further verified using high-level coupled-cluster calculations with non-iterative triples corrections, i.e., CCSD(T), which is considered a highly accurate quantum chemical method, thereby confirming the accuracy of our dispersion-corrected DFT calculations. The significance of the polarization and dispersion energy in the binding of the H2 molecules is confirmed by performing energy decomposition analysis (EDA).
A total of 16, 24, 32 and 36 H2 molecules can be attached to the neutral and charged nanoclusters of size m = 2, 4, 8 and 12, respectively. Ab initio molecular dynamics (AIMD) simulation shows that the outermost H2 molecules are desorbed at a rather low temperature, viz. 150 K (-123 °C), which is expected. However, complete dehydrogenation of these nanoclusters occurs at around 100 °C. Most importantly, the host nanoclusters remain stable up to ~500 K (227 °C). All these results on the adsorption and desorption of molecular hydrogen on neutral and charged Mg nanocluster systems point towards the possibility of reducing the dehydrogenation temperature of bulk MgH2 by designing new Mg-based nanomaterials that adsorb molecular hydrogen via this weak Mg-H2 interaction rather than the strong Mg-H bond. Notwithstanding the fact that, in practical applications, these interactions will be further complicated by the effect of substrates as well as interactions with other clusters, the present study has implications for our fundamental understanding of this problem.
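The reported cluster compositions can be checked against the quoted gravimetric density with simple mass bookkeeping (a back-of-the-envelope sketch using standard molar masses, not part of the study itself):

```python
# Gravimetric hydrogen density implied by Mg_m clusters decorated with n H2.
M_MG, M_H2 = 24.305, 2.016  # molar masses in g/mol (standard table values)

def h2_weight_percent(m_mg_atoms, n_h2):
    mass_h2 = n_h2 * M_H2
    return 100.0 * mass_h2 / (m_mg_atoms * M_MG + mass_h2)

# (m, n) pairs reported in the abstract: 16, 24, 32, 36 H2 for m = 2, 4, 8, 12
for m, n in [(2, 16), (4, 24), (8, 32), (12, 36)]:
    print(m, n, round(h2_weight_percent(m, n), 1))
```

All four compositions come out well above the 15 wt % figure quoted in the abstract, with the smallest cluster giving the highest value.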

Keywords: density functional theory, DFT, hydrogen storage, molecular dynamics, molecular hydrogen adsorption, nanoclusters, physisorption

Procedia PDF Downloads 415
664 A Novel Methodology for Browser Forensics to Retrieve Searched Keywords from Windows 10 Physical Memory Dump

Authors: Dija Sulekha

Abstract:

Nowadays, a good percentage of reported cybercrimes involve the use of the Internet, directly or indirectly, in committing the crime. Usually, web browsers leave traces of browsing activities on the host computer’s hard disk, which can be used by investigators to identify the Internet-based activities of the suspect. But criminals involved in organized crime disable the browser's file generation feature to hide the evidence while carrying out illegal activities through the Internet. In such cases, even though browser files are not generated in the storage media of the system, traces of recent and ongoing activities are generated in the physical memory of the system. As a result, the analysis of a physical memory dump collected from the suspect's machine retrieves a great deal of forensically crucial information related to the browsing history of the suspect. This information enables cyber forensic investigators to concentrate on a few highly relevant artefacts while doing the offline forensic analysis of the storage media. This paper addresses the reconstruction of web browsing activities by conducting live forensics to identify searched terms, downloaded files, visited sites, email headers, email IDs, etc., from physical memory dumps collected from Windows 10 systems. Well-known entry points are available for retrieving all the above artefacts except searched terms. The paper describes a novel methodology to retrieve the searched terms from Windows 10 physical memory. The searched terms retrieved in this way can be used for advanced file and keyword searches in the storage media files reconstructed from the file system recovery in offline forensics.
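As a rough illustration of the string-carving style of analysis this implies (a hypothetical sketch, not the paper's methodology), raw dump bytes can be scanned for the URL query parameters that search engines embed in request strings; real memory forensics would also handle UTF-16 strings, process carving, and so on:

```python
# Hypothetical sketch: carve "q=<term>" query parameters out of raw dump bytes.
import re

QUERY_RE = re.compile(rb'[?&]q=([A-Za-z0-9+%._-]{3,80})')

def extract_search_terms(dump_bytes):
    terms = set()
    for match in QUERY_RE.finditer(dump_bytes):
        # '+' and '%20' both encode spaces in URL query strings
        term = match.group(1).replace(b'+', b' ').replace(b'%20', b' ')
        terms.add(term.decode('ascii', errors='replace'))
    return sorted(terms)

sample = b'\x00junk GET /search?q=volatility+memory&hl=en \xffmore?q=browser%20forensics&x=1'
print(extract_search_terms(sample))
```

A production tool would pair this with known per-browser entry points, which is precisely where the paper's contribution for searched terms lies.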

Keywords: browser forensics, digital forensics, live Forensics, physical memory forensics

Procedia PDF Downloads 116
663 Software-Defined Networking: A New Approach to Fifth Generation Networks: Security Issues and Challenges Ahead

Authors: Behrooz Daneshmand

Abstract:

Software Defined Networking (SDN) is designed to meet the future needs of 5G mobile networks. The SDN architecture offers a new solution that involves separating the control plane from the data plane, which are usually coupled together. Network functions traditionally performed on specific hardware can now be abstracted and virtualized on any device, and a centralized software-based administration approach, based on a central controller, facilitates the development of modern applications and services. These design principles pave the way for a more flexible, faster, and more dynamic network under software control, compared with a conventional network. We believe SDN provides new research opportunities for security, and it can significantly affect network security research in many different ways. Accordingly, the SDN architecture enables networks to actively monitor traffic and analyze threats in order to facilitate security policy modification and security service insertion. The separation of the data and control planes, however, opens security challenges, such as man-in-the-middle attacks (MITM), denial-of-service (DoS) attacks, and saturation attacks. In this paper, we analyze security threats to each layer of SDN: the application layer, the southbound and northbound interfaces, the controller layer, and the data layer. From a security point of view, the components that make up the SDN architecture have a number of vulnerabilities, which may be exploited by attackers to perform malicious activities and hence affect the network and its services. Software-defined network attacks are unfortunately a reality these days. In a nutshell, this paper highlights architectural weaknesses and develops attack vectors at each layer, which leads to conclusions about further progress in identifying the consequences of attacks and proposing mitigation strategies.

Keywords: software-defined networking, security, SDN, 5G/IMT-2020

Procedia PDF Downloads 99
662 Optimizing Super Resolution Generative Adversarial Networks for Resource-Efficient Single-Image Super-Resolution via Knowledge Distillation and Weight Pruning

Authors: Hussain Sajid, Jung-Hun Shin, Kum-Won Cho

Abstract:

Image super-resolution is one of the most common computer vision problems, with many important applications. Generative adversarial networks (GANs) have driven remarkable advances in single-image super-resolution (SR) by recovering photo-realistic images. However, the high memory requirements of GAN-based SR models (mainly the generators) lead to performance degradation and increased energy consumption, making them difficult to deploy on resource-constrained devices. To address this problem, in this paper we introduce an optimized and highly efficient architecture for the SR-GAN (generator) model, obtained by utilizing model compression techniques such as knowledge distillation and pruning, which work together to reduce the storage requirements of the model while also improving its performance. Our method begins by distilling the knowledge from a large pre-trained model into a lightweight model using different loss functions. Then, iterative weight pruning is applied to the distilled model to remove less significant weights based on their magnitude, resulting in a sparser network. Knowledge distillation reduces the model size by 40%; pruning then reduces it further by 18%. To accelerate the learning process, we employ the Horovod framework for distributed training on a cluster of 2 nodes, each with 8 GPUs, resulting in improved training performance and faster convergence. Experimental results on various benchmarks demonstrate that the proposed compressed model significantly outperforms state-of-the-art methods in terms of peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and image quality for x4 super-resolution tasks.
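The pruning step can be sketched as plain magnitude-based pruning (an illustrative re-implementation, not the authors' code; a flat list of floats stands in for a weight tensor):

```python
# Magnitude-based weight pruning sketch: zero the fraction of weights with
# the smallest absolute values, yielding a sparser network.
import random

def magnitude_prune(weights, sparsity):
    """Return a copy of `weights` with the `sparsity` fraction of
    smallest-magnitude entries set to zero."""
    k = int(sparsity * len(weights))
    if k == 0:
        return list(weights)
    threshold = sorted(abs(w) for w in weights)[k - 1]  # k-th smallest |w|
    return [0.0 if abs(w) <= threshold else w for w in weights]

random.seed(0)
w = [random.gauss(0.0, 1.0) for _ in range(4096)]  # stand-in weight tensor
w_pruned = magnitude_prune(w, 0.18)  # 18% sparsity, as in the abstract
print(round(sum(1 for x in w_pruned if x == 0.0) / len(w_pruned), 2))
```

In the iterative scheme described above, a pruning pass like this would alternate with fine-tuning so that the remaining weights can compensate for those removed.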

Keywords: single-image super-resolution, generative adversarial networks, knowledge distillation, pruning

Procedia PDF Downloads 96
661 Analysis of Correlation Between Manufacturing Parameters and Mechanical Strength Followed by Uncertainty Propagation of Geometric Defects in Lattice Structures

Authors: Chetra Mang, Ahmadali Tahmasebimoradi, Xavier Lorang

Abstract:

Lattice structures are widely used in various applications, especially aeronautic, aerospace, and medical applications, because of their high-performance properties. Thanks to advances in additive manufacturing technology, lattice structures can be manufactured by different methods, such as laser beam melting. However, the presence of geometric defects in the lattice structures is inevitable due to the manufacturing process, and these defects may have a strong impact on the mechanical strength of the structures. This work analyzes the correlation between the manufacturing parameters and the mechanical strengths of the lattice structures. To do so, two types of lattice structures, body-centered cubic with z-struts (BCCZ) structures made of Inconel718 and body-centered cubic (BCC) structures made of Scalmalloy, are manufactured on a laser beam melting machine using a Taguchi design of experiments. Each structure is placed on the substrate with a specific position and orientation relative to the roller direction of the deposited metal powder; the position and orientation are considered the manufacturing parameters. The geometric defects of each beam in the lattice are characterized and used to build the geometric model in order to perform simulations. The mechanical strengths are then defined by the homogeneous response, namely Young's modulus and yield strength, and their distribution is observed as a function of the manufacturing parameters. The mechanical response of the BCCZ structure is stretch-dominated, i.e., the mechanical strengths depend directly on the strengths of the vertical beams. As the geometric defects of the vertical beams change only slightly with their position/orientation on the manufacturing substrate, the mechanical strengths are less dispersed, and the manufacturing parameters have little influence on the mechanical strengths of the BCCZ structure.
The mechanical response of the BCC structure is bending-dominated. The geometric defects of the inclined beams are highly dispersed within a structure and also vary with their position/orientation on the manufacturing substrate. For different positions/orientations on the substrate, the mechanical responses are highly dispersed as well. This shows that the mechanical strengths are directly impacted by the manufacturing parameters. In addition, this work studies the propagation of the uncertainty of the geometric defects to the mechanical strength of the BCC lattice structure made of Scalmalloy. To do so, we observe the distribution of the mechanical strengths of the lattice according to the distribution of the geometric defects. A probability density law is determined based on a statistical hypothesis corresponding to the geometric defects of the inclined beams. Samples of inclined beams are then randomly drawn from the density law to build lattice structure samples, which are used in simulations to characterize the mechanical strengths. The results reveal that the distribution of the mechanical strengths of structures with the same manufacturing parameters is less dispersed than that of structures with different manufacturing parameters. Nevertheless, the dispersion of the mechanical strengths among structures with the same manufacturing parameters is not negligible.
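The uncertainty-propagation loop can be sketched schematically (a toy surrogate model with made-up numbers, not the authors' finite-element simulation): strut diameters are drawn from a fitted density law, a surrogate stiffness is evaluated per virtual lattice, and the dispersion of the response is observed.

```python
# Toy Monte Carlo uncertainty propagation for a lattice stiffness surrogate.
import random
import statistics

def surrogate_modulus(diameters, nominal_d=1.0, e_nominal=70.0):
    # bending-dominated surrogate: stiffness scales like d^4 (second moment
    # of area); numbers are illustrative, in GPa-like units
    return e_nominal * statistics.mean((d / nominal_d) ** 4 for d in diameters)

random.seed(42)
moduli = []
for _ in range(1000):  # 1000 virtual lattice samples
    # fitted density law for strut diameter, assumed here to be N(1.0, 0.05)
    struts = [random.gauss(1.0, 0.05) for _ in range(32)]
    moduli.append(surrogate_modulus(struts))
print(round(statistics.mean(moduli), 1), round(statistics.stdev(moduli), 1))
```

The spread of `moduli` plays the role of the dispersion of mechanical strengths discussed in the abstract; in the paper, each sample would instead be a full simulation of a defect-laden lattice.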

Keywords: geometric defects, lattice structure, mechanical strength, uncertainty propagation

Procedia PDF Downloads 123
660 TutorBot+: Automatic Programming Assistant with Positive Feedback based on LLMs

Authors: Claudia Martínez-Araneda, Mariella Gutiérrez, Pedro Gómez, Diego Maldonado, Alejandra Segura, Christian Vidal-Castro

Abstract:

The purpose of this document is to showcase preliminary work in developing an EduChatbot-type tool, and in measuring the effects of its use, aimed at providing effective feedback to students in programming courses. This bot, hereinafter referred to as tutorBot+, was built on top of chatGPT and is tasked with assisting and delivering timely positive feedback to students in the field of computer science at the Universidad Católica de Concepción. The proposed working method consists of four stages: (1) immersion in the domain of Large Language Models (LLMs), (2) development of the tutorBot+ prototype and its integration, (3) experiment design, and (4) intervention. The first stage involves a literature review on the use of artificial intelligence in education and the evaluation of intelligent tutors, as well as research on types of feedback for learning and on the domain of chatGPT. The second stage encompasses the development of tutorBot+, and the final stage involves a quasi-experimental study with students from the Programming and Database labs, where the learning outcome involves the development of computational thinking skills, enabling the use of the tool and the measurement of its effects. The preliminary results of this work are promising: a functional chatbot prototype has been developed, in both conversational and non-conversational versions, integrated into an open-source online judge and programming contest platform. The possibility of generating a custom model, based on a pre-trained one and tailored to the domain of programming, is also being explored. This includes the integration of the created tool and the design of the experiment to measure its utility.

Keywords: assessment, chatGPT, learning strategies, LLMs, timely feedback

Procedia PDF Downloads 68
659 Greenhouse Controlled with Graphical Plotting in Matlab

Authors: Bruno R. A. Oliveira, Italo V. V. Braga, Jonas P. Reges, Luiz P. O. Santos, Sidney C. Duarte, Emilson R. R. Melo, Auzuir R. Alexandria

Abstract:

This project aims to build a controlled greenhouse: a structure in which a given range of temperatures (°C), produced by the radiation emitted by an incandescent lamp, can be maintained as previously defined, characterizing a kind of on-off control. Its differential is the plotting of temperature-versus-time graphs in MATLAB via serial communication, which makes it possible to connect the greenhouse to a computer and monitor its parameters. Control is performed by a PIC16F877A microcontroller, which converts analog signals to digital, performs serial communication through the MAX232 IC, and drives the signal transistors; the language used to program the PIC is Basic. There is also a cooling system consisting of two 12 V fans mounted on the side walls, one for ventilation and the other for exhaust. The temperature inside is measured with an LM35DZ sensor. Another mechanism used in the greenhouse consists of a reed switch and a magnet, whose function is to detect the door position: a signal is sent to a buzzer when the door is open. In addition, LEDs help identify the current operating mode of the greenhouse. To facilitate human-machine communication, an LCD display shows the real-time temperature and other information. The average operating range of the design without any major problems, taking into account the limitations of the construction material and of the electrical current conduction, is approximately 65 to 70 °C. The project is efficient under these conditions, that is, when one wishes to obtain information about a given material to be tested at temperatures that are not very high. The implementation of the greenhouse automation facilitates temperature control and provides a structure that offers a suitable environment for a wide variety of applications.
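The on-off control logic described above can be sketched as follows (an illustrative reconstruction; the setpoint and hysteresis band are assumed values, not taken from the project, and a small band is added so the lamp does not chatter around the setpoint):

```python
# On-off (bang-bang) lamp control with hysteresis around the setpoint.
def lamp_command(temp_c, setpoint_c=67.5, band_c=2.5, lamp_on=False):
    """Return the new lamp state given the current temperature reading."""
    if temp_c <= setpoint_c - band_c:
        return True   # too cold: switch the incandescent lamp on
    if temp_c >= setpoint_c + band_c:
        return False  # too hot: switch it off
    return lamp_on    # inside the band: keep the previous state

state = False
for t in (60.0, 66.0, 71.0, 68.0):  # simulated LM35 readings in °C
    state = lamp_command(t, lamp_on=state)
    print(t, state)
```

On the PIC itself the same logic would run in Basic against the digitized LM35 reading, with the output pin driving the lamp's transistor.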

Keywords: greenhouse, microcontroller, temperature, control, MATLAB

Procedia PDF Downloads 402
658 Automatic Near-Infrared Image Colorization Using Synthetic Images

Authors: Yoganathan Karthik, Guhanathan Poravi

Abstract:

Colorizing near-infrared (NIR) images poses unique challenges due to the absence of color information and the nuances in light absorption. In this paper, we present an approach to NIR image colorization utilizing a synthetic dataset generated from visible light images. Our method addresses two major challenges encountered in NIR image colorization: accurately colorizing objects with color variations and avoiding over/under saturation in dimly lit scenes. To tackle these challenges, we propose a Generative Adversarial Network (GAN)-based framework that learns to map NIR images to their corresponding colorized versions. The synthetic dataset ensures diverse color representations, enabling the model to effectively handle objects with varying hues and shades. Furthermore, the GAN architecture facilitates the generation of realistic colorizations while preserving the integrity of dimly lit scenes, thus mitigating issues related to over/under saturation. Experimental results on benchmark NIR image datasets demonstrate the efficacy of our approach in producing high-quality colorizations with improved color accuracy and naturalness. Quantitative evaluations and comparative studies validate the superiority of our method over existing techniques, showcasing its robustness and generalization capability across diverse NIR image scenarios. Our research not only contributes to advancing NIR image colorization but also underscores the importance of synthetic datasets and GANs in addressing domain-specific challenges in image processing tasks. The proposed framework holds promise for various applications in remote sensing, medical imaging, and surveillance where accurate color representation of NIR imagery is crucial for analysis and interpretation.

Keywords: computer vision, near-infrared images, automatic image colorization, generative adversarial networks, synthetic data

Procedia PDF Downloads 43
657 The Development of Traffic Devices Using Natural Rubber in Thailand

Authors: Weeradej Cheewapattananuwong, Keeree Srivichian, Godchamon Somchai, Wasin Phusanong, Nontawat Yoddamnern

Abstract:

Natural rubber for traffic devices in Thailand has been developed and researched for several years. Compared with rubber specified by Dry Rubber Content (DRC), the quality of Ribbed Smoked Sheet (RSS) is better; however, the cost of the admixtures, especially CaCO₃ and sulphur, is higher than the cost of the RSS itself. In this research, flexible guideposts and Rubber Fender Barriers (RFB) are considered. For the flexible guideposts, both RSS and DRC60% are used, but for the RFB only RSS is used, owing to the controlled performance tests. The objective of the flexible guideposts and the RFB is to decrease the number of accidents, fatalities, and serious injuries. The function of both devices is to protect road users and vehicles by absorbing the impact forces from vehicles so as to reduce serious road accidents; this leads to mitigation methods that reduce the injuries of motorists from severe to moderate. The goal is to find the best practice for traffic devices using natural rubber under engineering concepts. In addition, material properties such as tensile strength and durability are measured, and the modulus of elasticity and related properties are calculated. In the laboratory, crash simulations, finite element analysis of the materials, LRFD, and concrete technology methods are taken into account. After these calculations, trial compositions of the materials are mixed and tested in the laboratory; the tensile, compressive, and weathering (durability) tests are conducted in accordance with ASTM standards. Furthermore, a cycle-repetition test of the flexible guideposts will also be carried out. The final step is to fabricate all the materials and run a real test section in the field. The RFB test comprises 13 crash tests: 7 pickup truck tests and 6 motorcycle tests.
Full-scale vehicular crash testing is taking place for the first time in Thailand, applying trial-and-error methods; for example, the road crash test under the NCHRP TL-3 standard (100 kph) was changed to MASH 2016, owing to the fact that MASH 2016 is more demanding than NCHRP in terms of the speed, types, and weight of the vehicles and the angle of the crash. In the MASH procedure, Test Level 6 (TL-6), composed of a 2,270 kg pickup truck, a speed of 100 kph, and a crash angle of 25 degrees, is selected. The final real crash test will be performed, and the whole system will be evaluated again in Korea. The researchers hope that the number of road accidents will decrease and that Thailand will leave the top ten ranking for road accidents in the world.

Keywords: LRFD, load and resistance factor design; ASTM, American Society for Testing and Materials; NCHRP, National Cooperative Highway Research Program; MASH, Manual for Assessing Safety Hardware

Procedia PDF Downloads 128
656 A Gamification Teaching Method for Software Measurement Process

Authors: Lennon Furtado, Sandro Oliveira

Abstract:

The importance of an effective measurement program lies in the ability to control and predict what can be measured. A measurement program thus provides a basis for decision-making in support of the interests of an organization. However, an effective measurement program can only be applied with a team of software engineers well trained in measurement. The literature indicates that few computer science courses include the teaching of the software measurement process in their programs, and even these generally present only basic theoretical concepts of the process, with little or no measurement in practice, which results in a lack of student motivation to learn the measurement process. In this context, according to some experts in software process improvement, one of the most widely used approaches to maintaining motivation and commitment to a software process improvement program is gamification. Therefore, this paper presents a proposal for teaching the measurement process through gamification, which seeks to improve student motivation and performance in assimilating tasks related to software measurement by incorporating game elements into the practice of the measurement process, making it more attractive for learning. To validate the proposal, a comparison will be made between two distinct groups of 20 students from a Software Quality class: a control group, consisting of students who will not use the gamification proposal to learn the software measurement process, and an experiment group, consisting of students who will. This paper will then analyze the objective and subjective results for each group.
As the objective result, the student grades achieved at the end of the course will be analyzed; as the subjective result, a post-course questionnaire with each student's opinion of the teaching method will be analyzed. Finally, this paper aims to confirm or refute the following hypothesis: the gamification proposal for teaching the software measurement process appropriately motivates students, giving them the competence necessary for the practical application of the measurement process.

Keywords: education, gamification, software measurement process, software engineering

Procedia PDF Downloads 314
655 Cessna Citation X Business Aircraft Stability Analysis Using Linear Fractional Representation LFRs Model

Authors: Yamina Boughari, Ruxandra Mihaela Botez, Florian Theel, Georges Ghazi

Abstract:

Clearance of the flight control laws of a civil aircraft is a long and expensive process in the aerospace industry. Thousands of flight combinations in terms of speeds, altitudes, gross weights, centers of gravity and angles of attack have to be investigated and proved safe. Nonetheless, with this method a worst-case flight condition can easily be missed, and missing it could lead to a critical situation. Analyzing the full model exhaustively is impossible because of the infinite number of cases contained within its flight envelope, which would require more time and therefore more design cost. In industry, therefore, the technique of meshing the flight envelope is commonly used: for each point of the flight envelope, simulation of the associated model checks whether the specifications are satisfied. In order to perform fast, comprehensive and effective analysis, models with varying parameters were developed by incorporating variations, or uncertainties, into the nominal models; these are known as Linear Fractional Representation (LFR) models, and they describe the aircraft dynamics while taking uncertainties over the flight envelope into account. In this paper, the LFR models are built from several flight conditions expressed in terms of speeds and altitudes, which serve as the varying parameters. Such methods have gained great interest among aeronautical companies, which see a promising future for them in modeling and particularly in the design and certification of control laws. This paper focuses on the open-loop stability analysis of the Cessna Citation X. The data are provided by a Level D Research Aircraft Flight Simulator, corresponding to the highest level of flight dynamics certification; this simulator was developed by CAE Inc. based on the research requirements of the LARCASE laboratory.
These data were used to develop a linear model of the airplane in its longitudinal and lateral motions, and then to create the LFR models for 12 center-of-gravity/weight conditions, and thus the whole flight envelope, using a user-friendly graphical user interface developed during this study. The LFR models are then analyzed using an interval analysis method based on a Lyapunov function, as well as a 'stability and robustness analysis' toolbox. The results are presented as graphs, which makes them easy to read and exploit. The weakness of the method lies in its relatively long computation time: about four hours for the entire flight envelope.
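The grid-point stability check described above can be sketched in a few lines. The state matrices below are hypothetical stand-ins (the real Citation X models come from the Level D simulator data), and a simple eigenvalue test stands in for the paper's Lyapunov/interval machinery:

```python
import numpy as np

def is_stable(A):
    """A linear model x' = A x is asymptotically stable iff every
    eigenvalue of A has a strictly negative real part (Hurwitz test)."""
    return bool(np.all(np.linalg.eigvals(A).real < 0))

def longitudinal_A(speed, altitude):
    """Hypothetical parameter-dependent short-period model; the real
    matrices would come from flight-test identification."""
    q = 0.5 * 1.225 * np.exp(-altitude / 8500.0) * speed**2  # dynamic pressure
    return np.array([[-0.02 * q / 1e3, 1.0],
                     [-0.5 - 0.01 * q / 1e3, -0.8]])

# Sweep a coarse speed/altitude mesh of the flight envelope.
grid = [(v, h) for v in (120.0, 180.0, 240.0) for h in (0.0, 5000.0, 10000.0)]
results = {(v, h): is_stable(longitudinal_A(v, h)) for v, h in grid}
print(all(results.values()))
```

In the real analysis each grid point carries an uncertain (LFR) model rather than a single matrix, which is why interval/Lyapunov methods are needed instead of a plain eigenvalue check.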

Keywords: flight control clearance, LFR, stability analysis, robustness analysis

Procedia PDF Downloads 352
654 A Compact Extended Laser Diode Cavity Centered at 780 nm for Use in High-Resolution Laser Spectroscopy

Authors: J. Alvarez, J. Pimienta, R. Sarmiento

Abstract:

Diode lasers operating in free-running mode exhibit frequency shifts and line broadening caused by external factors such as temperature, current or mechanical vibrations, which makes them unsuitable for applications such as spectroscopy, metrology and atom cooling. Several configurations can reduce the spectral width of a laser; one of the most effective is to extend the optical resonator of the laser diode and apply optical feedback, either with a partially reflective mirror or with a diffraction grating. The latter configuration not only reduces the spectral width of the laser line but also allows the working wavelength to be coarsely tuned over a wide range, typically ~10 nm, by slightly varying the angle of the diffraction grating. Two arrangements are commonly used for this purpose: the Littrow configuration and the Littman-Metcalf configuration. In this paper, we present the design, construction and characterization of a compact extended laser cavity in the Littrow configuration. The cavity was machined from an aluminum block using computer numerical control (CNC) and has a mass of only 380 g. The design was tested on laser diodes of different wavelengths (650 nm, 780 nm and 795 nm) but can be equally effective at other wavelengths. This report details the results obtained with the extended cavity operating at 780 nm, with an output power of around 35 mW and a linewidth of less than 1 MHz. The cavity was used to observe the spectrum of the corresponding rubidium D2 line. By modulating the current and using phase-detection techniques, a dispersion signal with an excellent signal-to-noise ratio was generated, which allowed the laser to be stabilized to a transition of the hyperfine structure of rubidium with a proportional-integral (PI) controller circuit built from precision operational amplifiers.
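The lock described above closes a feedback loop around the dispersion signal. A minimal discrete-time sketch of such a proportional-integral lock follows; the plant, gains and drift are purely illustrative, not the values of the cavity described here:

```python
# Toy PI lock: the "detuning" is the laser's offset from the atomic line,
# pushed each step by a constant drift and corrected by the PI output.
Kp, Ki, dt = 0.5, 0.2, 1.0
detuning, integral, drift = 2.0, 0.0, 0.3   # arbitrary start, constant drift

for _ in range(200):
    error = -detuning                 # error signal from the dispersion curve
    integral += error * dt            # integral term cancels the steady drift
    correction = Kp * error + Ki * integral
    detuning += correction + drift    # actuator moves the laser frequency

print(abs(detuning) < 1e-3)           # locked: residual detuning ~ 0
```

The integral term is what removes the steady-state offset that a purely proportional loop would leave in the presence of constant drift.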

Keywords: Littrow, Littman-Metcalf, line width, laser stabilization, hyperfine structure

Procedia PDF Downloads 227
653 Logical-Probabilistic Modeling of the Reliability of Complex Systems

Authors: Sergo Tsiramua, Sulkhan Sulkhanishvili, Elisabed Asabashvili, Lazare Kvirtia

Abstract:

The paper presents logical-probabilistic methods, models and algorithms for the reliability assessment of complex systems, on the basis of which a web application for structural analysis and reliability assessment was created. It is important to design systems based on structural analysis and on the study and evaluation of efficiency indicators. One important efficiency criterion is the reliability of the system, which depends on the components of its structure. Quantifying the reliability of large-scale systems is a computationally complex process, and it is advisable to perform it with the help of a computer. Logical-probabilistic modeling is an effective means of describing the structure of a complex system and quantitatively evaluating its reliability, and it forms the basis of our application. The reliability assessment process comprises the following stages, which are reflected in the application: 1) construction of a graphical scheme of the structural reliability of the system; 2) transformation of the graphical scheme into a logical representation and modeling of the shortest paths of successful functioning of the system; 3) description of the system operability condition by a logical function in disjunctive normal form (DNF); 4) transformation of the DNF into orthogonal disjunctive normal form (ODNF) using an orthogonalization algorithm; 5) replacement of logical elements with probabilistic elements in the ODNF, yielding a reliability estimation polynomial and a quantitative reliability value; 6) calculation of the "weights" of the elements of the system. Using these logical-probabilistic methods, models and algorithms, special software was created that produces a quantitative assessment of the reliability of systems with complex structure. As a result, structural analysis of systems, research, and the design of systems with optimal structure are carried out.
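Stage 5 above, obtaining a reliability polynomial once the success paths are known, can be illustrated compactly. The sketch below uses inclusion-exclusion over the minimal path sets, which yields the same polynomial as orthogonalizing the DNF; the example system and component reliabilities are invented:

```python
from itertools import combinations

def system_reliability(min_paths, p):
    """Reliability polynomial from minimal path sets via inclusion-exclusion
    (equivalent in result to orthogonalizing the DNF of the success paths).
    min_paths: list of sets of component ids; p: dict id -> reliability."""
    total = 0.0
    for k in range(1, len(min_paths) + 1):
        for combo in combinations(min_paths, k):
            union = set().union(*combo)       # components in this path union
            prob = 1.0
            for c in union:
                prob *= p[c]                  # independent components
            total += (-1) ** (k + 1) * prob   # inclusion-exclusion sign
    return total

# Example: components 1 and 2 in parallel, in series with component 3.
paths = [{1, 3}, {2, 3}]
p = {1: 0.9, 2: 0.9, 3: 0.9}
print(round(system_reliability(paths, p), 3))   # 0.891
```

The hand check agrees: (1 - 0.1 x 0.1) x 0.9 = 0.891. For large systems inclusion-exclusion explodes combinatorially, which is exactly why the paper relies on an orthogonalization algorithm instead.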

Keywords: complex systems, logical-probabilistic methods, orthogonalization algorithm, reliability of systems, “weights” of elements

Procedia PDF Downloads 66
652 Challenges of Translation Knowledge for Pediatric Rehabilitation Technology

Authors: Patrice L. Weiss, Barbara Mazer, Tal Krasovsky, Naomi Gefen

Abstract:

Knowledge translation (KT) involves the process of applying the most promising research findings to practical settings, ensuring that new technological discoveries enhance healthcare accessibility, effectiveness, and accountability. This perspective paper aims to discuss and provide examples of how the KT process can be implemented during a time of rapid advancement in rehabilitation technologies, which have the potential to greatly influence pediatric healthcare. The analysis is grounded in a comprehensive systematic review of literature, where key studies from the past 34 years were carefully interpreted by four expert researchers in scientific and clinical fields. This review revealed both theoretical and practical insights into the factors that either facilitate or impede the successful implementation of new rehabilitation technologies. By utilizing the Knowledge-to-Action cycle, which encompasses the knowledge creation funnel and the action cycle, we demonstrated its application in integrating advanced technologies into clinical practice and guiding healthcare policy adjustments. We highlighted three successful technology applications: powered mobility, head support systems, and telerehabilitation. Moreover, we investigated emerging technologies, such as brain-computer interfaces and robotic assistive devices, which face challenges related to cost, durability, and usability. Recommendations include prioritizing early and ongoing design collaborations, transitioning from research to practical implementation, and determining the optimal timing for clinical adoption of new technologies. In conclusion, this paper informs, justifies, and strengthens the knowledge translation process, ensuring it remains relevant, rigorous, and significantly contributes to pediatric rehabilitation and other clinical fields.

Keywords: knowledge translation, rehabilitation technology, pediatrics, barriers, facilitators, stakeholders

Procedia PDF Downloads 20
651 Prioritization Assessment of Housing Development Risk Factors: A Fuzzy Hierarchical Process-Based Approach

Authors: Yusuf Garba Baba

Abstract:

The construction industry and its housing subsector are fraught with risks that can negatively impact the achievement of project objectives. The success or otherwise of most construction projects depends to a large extent on how well these risks are managed. The subsector's recent paradigm shift toward formal risk management, in contrast to the rules of thumb used hitherto, means that risks must not only be identified but also properly assessed and responded to in a systematic manner. The study focused on identifying the risks associated with housing development projects and on a prioritization assessment of the identified risks in order to provide a basis for informed decisions. The study used a three-step identification framework: a review of the literature on similar projects, expert consultation, and a questionnaire-based survey to identify potential risk factors. The Delphi survey method was employed to carry out the relative prioritization assessment of the risk factors using computer-based Analytic Hierarchy Process (AHP) software. The results show that 19 out of the 50 risks significantly impact housing development projects. The study concludes that although a significant number of risk factors have been identified as relevant to housing construction projects, the economic risk group, and in particular 'changes in demand for houses', is prioritized by most developers as posing a threat to the achievement of their housing development objectives. Unless these risks are carefully managed, their effects will continue to impede success in these projects. The study recommends the adoption of the combined multi-technique identification framework and AHP prioritization assessment methodology as a suitable model for the assessment of risks in housing development projects.
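As an illustration of the AHP prioritization step, the sketch below computes a priority vector by the common geometric-mean approximation and checks Saaty's consistency ratio; the pairwise comparison matrix is hypothetical, not the study's data:

```python
import numpy as np

RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}  # Saaty's random indices

def ahp_weights(A):
    """Priority vector via the geometric-mean approximation, plus the
    consistency ratio CR = CI / RI (CR < 0.10 is conventionally acceptable)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    w = np.prod(A, axis=1) ** (1.0 / n)   # geometric mean of each row
    w /= w.sum()                          # normalize to a priority vector
    lam = float(np.mean((A @ w) / w))     # estimate of lambda_max
    ci = (lam - n) / (n - 1)              # consistency index
    return w, ci / RI[n]

# Hypothetical 3x3 pairwise comparison of risk groups (perfectly consistent).
A = [[1, 2, 4],
     [1/2, 1, 2],
     [1/4, 1/2, 1]]
w, cr = ahp_weights(A)
print(np.round(w, 3))   # [0.571 0.286 0.143]
```

For a perfectly consistent matrix the consistency ratio is zero; real expert judgments produce CR > 0, and CR above 0.10 usually triggers a revision of the comparisons.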

Keywords: risk management, risk identification, risk analysis, analytic hierarchical process

Procedia PDF Downloads 118
650 Quality Analysis of Vegetables Through Image Processing

Authors: Abdul Khalique Baloch, Ali Okatan

Abstract:

Quality analysis of food and vegetables from images is a hot topic nowadays, with researchers improving on previous findings through different techniques and methods. In this research we reviewed the literature, identified gaps in it, proposed an improved approach, designed the algorithm, developed software to measure quality from images, and compared the results with previous work. The application uses an open-source dataset and the Python language with the TensorFlow Lite framework. This research focuses on sorting fruit and vegetables from images: after processing the images, the application sorts and grades the produce, which can produce fewer errors than manual, human-based sorting. Digital picture datasets were created and the collected images were arranged by class. The classification accuracy of the system was about 94%. Since fruit and vegetables play a central role in day-to-day life, their quality is essential in evaluating agricultural produce, and customers always want to buy good-quality fruit and vegetables. Many customers suffer from unhealthy fruit and vegetables because suppliers follow no proper quality measurement process. The software we developed measures the quality of fruit and vegetables from images and indicates whether the produce is fresh or rotten. The algorithms reviewed in this work include digital image processing, ResNet, VGG16, CNN and transfer-learning-based feature extraction for grading.
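The ~94% figure above is a classification accuracy. As a minimal illustration of how such a figure is computed from a model's predictions, the sketch below builds a confusion matrix and accuracy for a hypothetical fresh/rotten grading run (the data are made up, not the study's):

```python
from collections import Counter

def confusion_and_accuracy(y_true, y_pred, labels):
    """Per-class confusion counts and overall accuracy from paired
    true/predicted label lists."""
    counts = Counter(zip(y_true, y_pred))
    matrix = [[counts[(t, p)] for p in labels] for t in labels]  # rows = true
    correct = sum(counts[(l, l)] for l in labels)
    return matrix, correct / len(y_true)

labels = ["fresh", "rotten"]
y_true = ["fresh"] * 10 + ["rotten"] * 10
y_pred = ["fresh"] * 9 + ["rotten"] + ["rotten"] * 9 + ["fresh"]
matrix, acc = confusion_and_accuracy(y_true, y_pred, labels)
print(matrix, acc)   # [[9, 1], [1, 9]] 0.9
```

Reporting the full confusion matrix alongside accuracy matters for grading: confusing rotten produce for fresh is costlier than the reverse, and accuracy alone hides that asymmetry.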

Keywords: deep learning, computer vision, image processing, rotten fruit detection, fruits quality criteria, vegetables quality criteria

Procedia PDF Downloads 70
649 Linguistics and Islamic Studies in Historical Perspective: The Case of Interdisciplinary Communication

Authors: Olga Bernikova, Oleg Redkin

Abstract:

Islamic studies and the Arabic language have been inseparable since the advent of Islam and the formation of the Classical language. The present paper demonstrates the correlation between linguistics and religion in historical perspective, with regard to the peculiarities of Arabic that distinguish it from the other prophetic languages. In historical perspective, Arabic has been and remains the vehicle of Islamic rhetoric, being a prophetic language. No other language in the world has preserved its stability for more than fourteen centuries, and Islam is considered one of the most important factors securing this stability. The analysis and study of the text of the Qurʾān are of special importance for those who study Islamic civilization, its role in the destiny of mankind, and its values and virtues. Without understanding the polyphony of this sacred text and the indivisible unity of its form and content, it is impossible to understand social developments in either the present or the past. Since the first years of Islam the Qurʾān has been at the center of attention of Muslim scholars: theologians, historians, philologists, jurists and mathematicians. Only quite recently has it become an object of analysis for specialists in computer technologies. In Arabic and Islamic studies, medieval texts, i.e. textual documents, are considered the main source of information. Hence the analysis of the multiplicity of various texts and the finding of interconnections between them help to set scattered fragments of the riddle into a common and eloquent picture of the past, reflecting the state of society at particular stages of its development. The text of the Qurʾān, like any other phenomenon, is a multifaceted object that should be studied from different points of view.
As a result, this complex study will yield a three-dimensional image rather than a flat picture.

Keywords: Arabic, Islamic studies, linguistics, religion

Procedia PDF Downloads 223
648 Miniaturization of Germanium Photo-Detectors by Using Micro-Disk Resonator

Authors: Haifeng Zhou, Tsungyang Liow, Xiaoguang Tu, Eujin Lim, Chao Li, Junfeng Song, Xianshu Luo, Ying Huang, Lianxi Jia, Lianwee Luo, Kim Dowon, Qing Fang, Mingbin Yu, Guoqiang Lo

Abstract:

Several germanium photodetectors (PDs) built on silicon micro-disks were fabricated on standard silicon photonics multi-project wafers (MPW) and demonstrated to exhibit very low dark current, satisfactory operation bandwidth and moderate responsivity. Among them, a vertical p-i-n Ge PD based on a 2.0 µm-radius micro-disk has a dark current as low as 35 nA, compared with a current of 1 µA for a conventional PD with an area of 100 µm². The operation bandwidth is around 15 GHz at a reverse bias of 1 V, and the responsivity is about 0.6 A/W. The microdisk is a striking planar structure in integrated optics for enhancing light-matter interaction and constructing various photonic devices. Disk geometries strongly and circularly confine light into an ultra-small volume in the form of whispering-gallery modes. A laser may benefit from a microdisk in which a single mode overlaps the gain material both spatially and spectrally. Compared with microrings, the microdisk removes the inner boundary, enabling even better compactness, which also makes it well suited to scenarios where electrical connections are needed. For example, an ultra-low-power (≈ fJ) athermal Si modulator has been demonstrated at a bit rate of 25 Gbit/s by confining both photons and electrically driven carriers into a microscale volume. In this work, we study Si-based PDs with Ge selectively grown on a microdisk with a radius of a few microns. The unique feature of using a microdisk for a Ge photodetector is that mode selection is unimportant. For lasers and other passive optical components, the microdisk must be designed very carefully to excite only the fundamental mode, since a microdisk usually supports many higher-order modes in the radial direction. For detector applications, however, this is not an issue, because local light absorption is mode-insensitive: the power carried by all modes is expected to be converted into photocurrent.
Another benefit of using a microdisk is that the internal power circulation avoids the need for a reflector. A complete simulation model taking all of the involved materials into account was established to study the promise of microdisk structures for photodetectors using the finite-difference time-domain (FDTD) method. Based on the current preliminary data, directions for further improving the device performance are also discussed.
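As a toy illustration of the FDTD method mentioned above, the sketch below runs a bare-bones 1D Yee update loop in normalized units; the paper's model is 3D and material-aware, so this only shows the structure of the algorithm:

```python
import numpy as np

nx, nt, S = 400, 300, 0.5          # grid cells, time steps, Courant number
ez = np.zeros(nx)                  # electric field on integer grid points
hy = np.zeros(nx - 1)              # magnetic field on the staggered grid

for n in range(nt):
    hy += S * (ez[1:] - ez[:-1])               # H update (half time step)
    ez[1:-1] += S * (hy[1:] - hy[:-1])         # E update (interior points)
    ez[50] += np.exp(-((n - 30) / 10.0) ** 2)  # soft Gaussian source

print(np.isfinite(ez).all(), ez.max() > 0)     # stable, nonzero field
```

The leapfrog staggering of E and H in both space and time is what makes the scheme second-order accurate, and the Courant number S <= 1 (here 0.5) is the 1D stability condition; a real device simulation adds material permittivities, absorbing boundaries and two more dimensions.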

Keywords: integrated optical devices, silicon photonics, micro-resonator, photodetectors

Procedia PDF Downloads 407
647 Experiences of Homophobia, Machismo and Misogyny in Tourist Destinations: A Netnography in a Facebook Community of LGBT Backpackers

Authors: Renan De Caldas Honorato, Ana Augusta Ferreira De Freitas

Abstract:

Homosexuality is still criminalized in a large number of countries; in some of them, being gay or lesbian can even be punished by death. Added to this context, the social discrimination faced by the LGBT population, including homophobia, machismo and misogyny, imposes numerous restrictions throughout their lives. Having to confront these challenges at moments that should be pleasant, such as on a trip or on vacation, is distressing, to say the least. Given the intensifying use of social network sites (SNSs) to search for information, including in tourism, this work aims to analyze how tourists share experiences of confronting and perceiving homophobia, machismo and misogyny, and the restrictions suffered in tourist destinations. The field of study is a community of LGBT backpackers on Facebook. Netnography was the core method adopted: a qualitative approach was conducted, and 463 publications posted from January to December 2020 were assessed through computer-mediated discourse analysis (CMDA). The results suggest that these publications serve to identify potential exposure to offensive behavior while traveling. Individuals affirm that laws, favorable or not, toward the LGBT public are not the only factor that defines a place as safe or unsafe for gay travelers: a country's social reality can differ considerably from its laws, and this gap is the main concern of these publications. The perception of others about the chosen destination matters more than knowing one's rights and the legal status of each country, and it also lessens uncertainty, even though travelers are never totally confident when choosing a destination. In certain circumstances, sexual orientation also needs to be shielded from the judgment of hosts and residents. The systemic treatment of homophobic behavior and the construction of a more inclusive society are urgent.

Keywords: homophobia, hospitality, machismo, misogyny

Procedia PDF Downloads 188
646 Quantifying Stakeholders’ Values of Technical and Vocational Education and Training Provision in Nigeria

Authors: Lidimma Benjamin, Nimmyel Gwakzing, Wuyep Nanyi

Abstract:

Technical and Vocational Education and Training (TVET) has many stakeholders, each with their own values and interests, and its quality depends on how well it aligns with them. This study focuses on the diversity of values and interests within and across stakeholder groups by quantifying the value that stakeholders attach to several quality attributes of TVET, and by finding out to what extent TVET stakeholders differ in their values. The five stakeholder groups are parents, students, teachers, policy makers and workplace training supervisors. The nine attributes are employer appreciation of students, graduation rate, computer skills obtained by students, mentoring hours in workplace learning under the Students Industrial Work Experience Scheme (SIWES), challenge, structure, students' appreciation of teachers, schooling hours, and attention to civic education. A total of 346 respondents drawn from the five stakeholder groups were repeatedly asked to rank sets of four programs, each with specific values on the nine quality indicators. Conjoint analysis was used to obtain the values that the stakeholders assigned to the nine attributes when evaluating the quality of TVET programs, with rank-ordered logistic regression as the statistical tool for modeling the respondents' rankings. The similarities and diversity in the values and interests of the different stakeholders will be of use to both the Nigerian government and TVET colleges in improving the overall quality of education and the match between vocational programs and their stakeholders. Conjoint analysis requires the simultaneous evaluation and combination of information on program attributes; such an approach models the decision environment by confronting a respondent with choices that are close to real-life choices, and is therefore more realistic than traditional survey methods.
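As a simplified illustration of the conjoint idea, the sketch below recovers attribute part-worths from synthetic profile scores by least squares; the study itself uses rank-ordered logistic regression on rankings, and the attributes and utilities here are invented for the demonstration:

```python
import numpy as np

# Toy metric-conjoint sketch: two binary attributes per program profile,
# e.g. "computer skills taught" and "SIWES mentoring offered".
profiles = np.array([[0, 0],
                     [0, 1],
                     [1, 0],
                     [1, 1]], dtype=float)
X = np.hstack([np.ones((4, 1)), profiles])     # intercept + attribute dummies
true_partworths = np.array([1.0, 0.8, 0.3])    # assumed utilities, for the demo
scores = X @ true_partworths                   # noiseless profile ratings

# Least squares recovers the part-worths exactly in this noiseless case.
est, *_ = np.linalg.lstsq(X, scores, rcond=None)
print(np.round(est, 3))    # approximately [1.0, 0.8, 0.3]
```

Rank-ordered logit replaces the metric scores with orderings of the profiles, which is the more realistic choice when respondents can rank but not rate; the estimated part-worths play the same interpretive role.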

Keywords: TVET, vignette study, conjoint analysis, quality perception, educational stakeholders

Procedia PDF Downloads 80
645 Evaluating Daylight Performance in an Office Environment in Malaysia, Using Venetian Blind System: Case Study

Authors: Fatemeh Deldarabdolmaleki, Mohamad Fakri Zaky Bin Ja'afar

Abstract:

A daylit space, together with a view, results in a pleasant and productive environment for office employees. A daylit space is one that uses daylight as the basic source of illumination to fulfill users' visual demands while minimizing electric energy consumption. Malaysian weather is hot and humid throughout the year because of the country's location in the equatorial belt. However, because most commercial buildings in Malaysia are air-conditioned, large glass windows are normally installed in order to keep the physical and visual relation between inside and outside. As a result of the climate and this trend, an ordinary office suffers large heat gains, glare and occupant discomfort. Balancing occupant comfort and energy conservation in a tropical climate is a real challenge. This study evaluates a venetian blind system using per-pixel analysis tools based on cut-out metrics suggested in the literature. The workplace area of a private office room was selected as the case study. An eight-day measurement experiment was conducted to investigate the effect of different venetian blind angles on an office area under daylight conditions in Serdang, Malaysia. The goal was to explore the daylight comfort of a commercially available venetian blind system, its daylight sufficiency and excess (8:00 a.m. to 5:00 p.m.), and glare. Recently developed software for analyzing high-dynamic-range images (HDRI, captured with a CCD camera), such as the Radiance-based Evalglare and hdrscope, helps investigate luminance-based metrics. The main factors are illuminance and luminance levels, mean and maximum luminance, daylight glare probability (DGP) and the luminance ratio of the selected mask regions. The findings show that in most cases the morning session needs artificial lighting in order to achieve daylight comfort. However, in some conditions (e.g.
10° and 40° slat angles) the workplane illuminance level in the second half of the day exceeds the maximum of 2,000 lx. Generally, a rising trend is observed in mean window luminance, and the most unpleasant cases occur after 2 p.m.; by the luminance criteria, the uncomfortable conditions occur in the afternoon session. Surprisingly, even with no blinds, extreme window/task luminance ratios are uncommon, and no DGP value higher than 0.35 was observed in this experiment.
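The DGP metric used above is commonly computed with the Wienold-Christoffersen formula; the sketch below implements that published form with the constants as commonly cited, not as taken from this study:

```python
import math

def daylight_glare_probability(ev, sources):
    """Simplified DGP after Wienold & Christoffersen:
    ev = vertical eye illuminance [lx]; sources = list of
    (luminance [cd/m^2], solid angle [sr], Guth position index) tuples."""
    glare_sum = sum(L**2 * omega / (ev**1.87 * P**2) for L, omega, P in sources)
    return 5.87e-5 * ev + 9.18e-2 * math.log10(1.0 + glare_sum) + 0.16

# With no distinct glare source the log term vanishes:
print(round(daylight_glare_probability(1000.0, []), 4))   # 0.2187
```

In practice tools such as Evalglare extract the source luminances, solid angles and position indices from the HDR image itself; DGP below roughly 0.35 is conventionally read as imperceptible glare, which is consistent with the comfortable readings reported above.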

Keywords: daylighting, energy simulation, office environment, Venetian blind

Procedia PDF Downloads 256
644 Factors Influencing Fertility Preferences and Contraceptive Use among Reproductive Aged Married Women in Eastern Ethiopia

Authors: Heroda Gebru, Berhanu Seyoum, Melake Damena, Gezahegn Tesfaye

Abstract:

Background: Ethiopia has a population policy aimed at reducing fertility and increasing contraceptive prevalence. Objective: To assess the fertility preferences and contraceptive use of married women living in the Dire Dawa administrative city. Methods: A cross-sectional study with a sample of 421 married women of reproductive age was performed. Data were collected using a structured questionnaire during a house-to-house survey and a semi-structured questionnaire during in-depth interviews, then processed and analyzed using SPSS version 16. Univariate, bivariate and multivariate analyses were employed. Results: All 421 married women of reproductive age were interviewed, a response rate of 100 percent. More than half (58.2%) of the respondents desired more children, while 41.8% wanted no more; 52.5% of the respondents were using contraception at the time of the survey. Fertility preference and contraceptive use were significantly associated with the respondent's age, history of child death, number of living children, religion and age at first birth. Conclusions: Women in the younger age groups, those with no history of child death and those with fewer surviving children were more likely to desire additional children, while women who were older at first birth and Protestant women were more likely to use contraception. Strong information and education on contraception should be provided for the younger age groups; advocacy at the level of religious leaders is important; comprehensive family planning counselling and education should be available to the community, husbands and religious leaders; and efforts to increase contraceptive use should focus on practical aspects.

Keywords: fertility preference, contraceptive use, univariate analysis, family planning

Procedia PDF Downloads 379
643 Stochastic Pi Calculus in Financial Markets: An Alternate Approach to High Frequency Trading

Authors: Jerome Joshi

Abstract:

The paper presents the modelling of financial markets using the stochastic pi calculus. The stochastic pi calculus is mainly used for biological applications; however, its features also recommend it for financial markets, most prominently high-frequency trading. A trading system can be broadly divided into the exchange, market makers or intermediary traders, and fundamental traders: the exchange is where the trade is executed, and the two types of traders act as market participants in it. High-frequency trading, with its complex networks and numerous market participants (intermediary and fundamental traders), is difficult to model. Participants exploit complex trading algorithms and high execution speeds to carry out large volumes of trades. To earn a profit from each trade, a trader must be at the top of the order book quite frequently, executing or processing multiple trades simultaneously. This requires highly automated systems as well as the right sentiment to outperform other traders. However, always being at the top of the book is not best for the trader either, since it was the cause of the 'hot-potato effect', which in turn demands a better and more efficient model. Such a model should be flexible and have diverse applications, so a model already proven in a similar field characterized by such difficulty should be chosen. It should also be flexible in simulation, so that it can be extended and adapted for future research, and equipped with tools that suit the field of finance. The stochastic pi calculus seems an ideal fit for financial applications, owing to its track record in biology.
It is an extension of the original pi calculus and, provided its application is suitably extended, offers a solution and an alternative to the previously flawed algorithms. This model focuses on the 'hot-potato effect', the problem that led to the 'Flash Crash'. The model consists of small subsystems that can be integrated to form a large system, and it is designed so that the behavior of 'noise traders' is treated as a random process, or noise, in the system. To gain a better understanding of the problem, the modelling takes a broad view that includes the trader, the system and the market participants. The paper goes on to explain trading in exchanges, the types of traders, high-frequency trading, the 'Flash Crash', the 'hot-potato effect', the evaluation of orders and time delays in further detail. Future work should focus on calibrating the modules so that they interact correctly with one another. With its application extended, this model would provide a basis for further research in finance and computing.
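As a hedged illustration of the stochastic machinery underlying such models, the sketch below runs a Gillespie-style simulation of a toy order book: a stochastic-process caricature of the kind of channels a stochastic pi calculus model would compose, not the paper's actual model:

```python
import random

def simulate_order_book(t_end, arrival_rate, exec_rate, seed=1):
    """Gillespie-style simulation: limit orders arrive at a constant rate,
    and each resting order is independently executed at rate exec_rate.
    Returns the book depth at time t_end."""
    rng = random.Random(seed)
    t, n = 0.0, 0
    while True:
        total = arrival_rate + exec_rate * n        # total event rate
        t += rng.expovariate(total)                 # time to next event
        if t >= t_end:
            return n
        if rng.random() < arrival_rate / total:
            n += 1          # new limit order joins the book
        else:
            n -= 1          # a resting order is executed

n = simulate_order_book(100.0, arrival_rate=5.0, exec_rate=0.5)
print(n >= 0)   # depth fluctuates around arrival_rate / exec_rate = 10
```

In a stochastic pi calculus model each trader would be a process communicating over rate-annotated channels, and the same race between exponentially distributed events drives the semantics; this sketch strips that down to two competing "reactions".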

Keywords: concurrent computing, high frequency trading, financial markets, stochastic pi calculus

Procedia PDF Downloads 77
642 Challenge in Teaching Physics during the Pandemic: Another Way of Teaching and Learning

Authors: Edson Pierre, Gustavo de Jesus Lopez Nunez

Abstract:

The objective of this work is to analyze how physics can be taught remotely, through platforms and software, to attract the attention of second-year high school students at Colégio Cívico Militar Professor Carmelita Souza Dias, and to point out how remote teaching can be a teaching-learning strategy during a period of social distancing. Teaching physics has long been a challenge for teachers and students, with common sense holding that the subject is very difficult to teach and learn. The challenge grew in 2020 and 2021 with the impact of the new coronavirus (SARS-CoV-2) pandemic and its variants, which affected the entire world. With these changes a new teaching modality emerged: remote teaching. It brought new challenges, one of which was providing experimental experiences at a distance, especially in physics teaching, where learning difficulties persist and students often cannot relate the theory observed in class to the reality that surrounds them. Physics teaching in schools faces difficulties that make the profession ever less attractive to young people. The study of physics is nevertheless very important, as it puts students in front of concrete, real situations that physical principles can explain, helping them understand nature and nurturing a taste for science.
Platforms and software such as PhET Interactive Simulations from the University of Colorado Boulder, a virtual laboratory with numerous simulations of scientific experiments, improve understanding of the content in a practical way, facilitating learning and the absorption of content. As a simple, practical and free simulation tool, it attracts students' attention and leads them to acquire greater knowledge of the subject studied; a quiz can likewise bring a healthy competitiveness to students, generating knowledge and interest in the topics covered. The present study takes the theory of social representations as its theoretical reference, examining the content and process of construction of the representations of the teachers, the subjects of our investigation, about the evaluation of teaching and learning processes, through a qualitative methodology. The results show that remote teaching was indeed a very important strategy for teaching and learning physics in the second year of high school: it provided greater interaction between teacher and student. The teacher also plays a fundamental role, since technology is increasingly present in the educational environment and the teacher remains the main protagonist of this process.

Keywords: physics teaching, technologies, remote learning, pandemic

Procedia PDF Downloads 66
641 Corpus-Based Model of Key Concepts Selection for the Master English Language Course "Government Relations"

Authors: Elena Pozdnyakova

Abstract:

“Government Relations” is a field of knowledge presently taught at the majority of universities around the globe. English, as the default language, can serve as the language of teaching, since the issues discussed are both global and national in character. However, in this field the key concepts and their word representations in English often do not coincide with those in other languages. International master’s degree students abroad, as well as students taught the course in English at their national universities, face difficulties connected with correctly conceptualizing GR terminology in the British and American academic traditions. The study was carried out during the elaboration of the GR English language course (pilot research: 2013-2015) at Moscow State Institute of International Relations (University), Russian Federation. Within this period, English language instructors designed and elaborated a three-semester GR course. Methodologically, the course design was based on an elaboration model with a special focus on the conceptual elaboration sequence and the theoretical elaboration sequence. The course designers faced difficulties in concept selection and in the theoretical elaboration sequence. To improve the results and eliminate the problems with concept selection, a new, corpus-based approach was worked out. The computer-based tool WordSmith 6.0 was used to build a model of key concept selection. The corpus of GR English texts consisted of 1 million words (the study corpus). The approach was based on measuring effect size, i.e., the percent difference between the frequency of a word in the study corpus and its frequency in a reference corpus. The results obtained showed a significant improvement in the process of concept selection. The corpus-based model also facilitated the theoretical elaboration of teaching materials.
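The effect-size measure described above, the percent difference of a word's normalized frequency in the study corpus relative to a reference corpus, can be sketched in a few lines of Python. The toy corpora and word counts below are invented for illustration and are not taken from the actual GR or reference corpora:

```python
from collections import Counter

def normalized_freq(word, corpus_counts, corpus_size):
    """Frequency of a word per million tokens of the corpus."""
    return corpus_counts[word] / corpus_size * 1_000_000

def effect_size(word, study_counts, study_size, ref_counts, ref_size):
    """Percent difference (%DIFF) of a word's normalized frequency
    in the study corpus compared to the reference corpus."""
    f_study = normalized_freq(word, study_counts, study_size)
    f_ref = normalized_freq(word, ref_counts, ref_size)
    if f_ref == 0:
        return float("inf")  # word is absent from the reference corpus
    return (f_study - f_ref) / f_ref * 100

# Toy counts standing in for the 1-million-word GR study corpus
# and a general-English reference corpus (hypothetical values).
study = Counter({"lobbying": 120, "stakeholder": 80, "the": 5000})
ref = Counter({"lobbying": 10, "stakeholder": 15, "the": 60000})
study_size, ref_size = sum(study.values()), sum(ref.values())

# Candidate key concepts, ranked by effect size: domain terms rise
# to the top, high-frequency function words fall to the bottom.
keywords = sorted(
    study,
    key=lambda w: effect_size(w, study, study_size, ref, ref_size),
    reverse=True,
)
```

In this toy run, "lobbying" and "stakeholder" outrank "the", whose normalized frequency barely differs between the two corpora; ranking by %DIFF is what lets domain-specific concepts surface automatically.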

Keywords: corpus-based study, English as the default language, key concepts, measuring effect size, model of key concept selection

Procedia PDF Downloads 306
640 Development of Interactional Competence: Listener Responses of Long-Term Stay Abroad Chinese L1 Speakers in Australian Universities

Authors: Wei Gao

Abstract:

The current study investigates changes in the listener responses of Chinese L1 speakers on long-term stays abroad at Australian universities and how the long-term stay abroad shaped their design of L2 recipient actions in social conversation. There is a limited amount of empirical work on the acquisition of L2 English listener responses, particularly regarding the influence of a long-term stay abroad in English-speaking countries, and little is known about whether the development of L2 listener responses and the improvement of interactional competence are affected by prolonged residency in the target L2 country. Forty-eight participants were recruited and completed a designed speaking task through computer-mediated communication. Results showed that Chinese L1 speakers on a long-term stay abroad demonstrated an English-like pattern of listener responses in communication. The long-term stay abroad experience had a significant impact on the production and organization of L2 English listener responses in social conversation: long-term stay abroad speakers were more active and productive in their listenership than their non-stay-abroad counterparts in both the frequency and the placement of listener responses. However, acquisition proved only partial for response tokens, such as backchannels and reactive expressions, and for the employment of resumptive openers. This study shows that L2 English listener responses can be acquired during a long-term stay abroad in English-speaking countries, although the production of collaborative finishes was only partially acquired. The most prominent finding was that the Chinese L1 speakers shifted their overall listener response pattern from L1 Chinese to L2 English. The study reveals specific interactional changes in the acquisition of English L2 listener responses and generates pedagogical implications for cross-cultural communication and L2 pragmatics acquisition during a long-term stay abroad.

Keywords: listener responses, stay abroad, interactional competence, L2 pragmatics acquisition

Procedia PDF Downloads 84
639 Design of Photonic Crystal with Defect Layer to Eliminate Interface Corrugations for Obtaining Unidirectional and Bidirectional Beam Splitting under Normal Incidence

Authors: Evrim Colak, Andriy E. Serebryannikov, Pavel V. Usik, Ekmel Ozbay

Abstract:

Working with a dielectric photonic crystal (PC) structure that includes no surface corrugations, unidirectional transmission and dual-beam splitting are observed under normal incidence as a result of the strong diffractions caused by an embedded defect layer. The defect layer has twice the period of the regular PC segments that sandwich it. Although the PC has an even number of rows, the structural symmetry is broken by the asymmetric placement of the defect layer with respect to the symmetry axis of the regular PC. The simulations verify that the efficient splitting and the occurrence of strong diffractions are related to the dispersion properties of the Floquet-Bloch modes of the photonic crystal. Unidirectional and bidirectional splitting, which are associated with asymmetric transmission, arise from the dominant contribution of the first positive and first negative diffraction orders. The effect of the depth of the defect layer is examined by placing a single defect layer in varying rows while preserving the asymmetry of the PC. Even for a deeply buried defect layer, asymmetric transmission persists even when the zeroth order is not coupled; this transmission is due to evanescent waves that reach the deeply embedded defect layer and couple to higher-order modes. In an additional selected configuration, whichever surface is illuminated, i.e., in both the upper- and lower-surface illumination cases, the incident beam is split into two beams of equal intensity at the output surface. That is, although the structure is asymmetric, symmetric bidirectional transmission with equal transmission values is demonstrated, and the structure mimics the behavior of symmetric structures.
Finally, simulation studies including the examination of a coupled-cavity defect for two different permittivity values (close to those of GaAs or Si, and of alumina) reveal unidirectional splitting over a wider band of operation than the bandwidth obtained with a single embedded defect layer. Since the dielectric materials utilized are low-loss and weakly dispersive over a wide frequency range including microwave and optical frequencies, the studied structures should be scalable to those ranges.
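Why a defect layer with twice the period of the regular PC opens up the first positive and negative diffraction orders can be illustrated with the scalar grating equation at normal incidence, sin(θ_m) = mλ/Λ. This is a simplified sketch only (the full behavior is governed by the Floquet-Bloch dispersion of the PC), and the period and wavelength values below are arbitrary assumptions chosen for illustration:

```python
import math

def propagating_orders(wavelength, period):
    """Diffraction orders m that propagate at normal incidence, from
    the grating equation sin(theta_m) = m * wavelength / period.
    Returns (m, deflection angle in degrees) for each allowed order."""
    m_max = int(period / wavelength)
    orders = []
    for m in range(-m_max, m_max + 1):
        s = m * wavelength / period
        if abs(s) <= 1:
            orders.append((m, math.degrees(math.asin(s))))
    return orders

a = 1.0            # regular PC period (arbitrary units, an assumption)
wavelength = 1.5   # operating wavelength chosen between a and 2a

# Regular lattice (period a): only the zeroth order propagates.
print(propagating_orders(wavelength, a))
# Defect layer (period 2a): the m = -1 and m = +1 orders open up,
# giving the two deflected beams responsible for the splitting.
print(propagating_orders(wavelength, 2 * a))
```

With these values, doubling the period admits the m = ±1 orders at roughly ±49°, while the regular lattice supports only the undiffracted m = 0 beam, which is consistent with the dominant role of the first positive and negative orders described above.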

Keywords: asymmetric transmission, beam deflection, blazing, bi-directional splitting, defect layer, dual beam splitting, Floquet-Bloch modes, isofrequency contours, line defect, oblique incidence, photonic crystal, unidirectionality

Procedia PDF Downloads 184
638 The Influence of Steel Connection on Fire Resistance of Composite Steel-Framed Buildings

Authors: Mohammed Kadhim, Zhaohui Huang

Abstract:

Steel connections can play an important role in enhancing the robustness of structures under fire conditions, so it is important to examine their influence on the fire resistance of composite steel-framed buildings. In this paper, both the behavior of steel connections and their influence on a composite steel frame are analyzed using the non-linear finite element software VULCAN at ambient and elevated temperatures. The chosen frame is subjected to the ISO834 fire. A comparison between end-plate connections, a pinned connection, and a rigid connection has been carried out. By applying different compartment fires, several cases are studied to show the behavior of the steel connection when the fire acts on certain beams. In addition, different plate thicknesses and different applied loads have been analyzed to examine the behavior of the chosen steel connection under the ISO834 fire. The analytical results show that beams with extended end-plate connections are stronger and perform better in terms of axial forces than beams with flush end-plate connections, and that the extended end-plate connection reaches the highest limiting temperatures. The performance of the end-plate connections is very close to that of the rigid connection and very far from that of the pinned connections. Furthermore, plate thickness has little effect on the influence of the steel connection on fire resistance. In conclusion, the behavior of composite steel-framed buildings depends strongly on the steel connections because of their high impact under fire conditions, and the extended end plate is recommended for design purposes because of its superior properties compared to the flush end-plate connection.
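The ISO834 standard fire to which the frame is subjected is defined by the temperature-time curve T(t) = 20 + 345·log₁₀(8t + 1), with t in minutes and T in °C. A minimal sketch of the gas temperatures this curve prescribes over a typical exposure:

```python
import math

def iso834_temperature(t_minutes, ambient=20.0):
    """Gas temperature (deg C) of the ISO834 standard fire curve
    after t_minutes of exposure, starting from ambient temperature."""
    return ambient + 345.0 * math.log10(8.0 * t_minutes + 1.0)

# Gas temperature at a few exposure times (minutes): the curve rises
# steeply at first, then flattens logarithmically.
for t in (0, 30, 60, 90):
    print(f"{t:3d} min: {iso834_temperature(t):7.1f} degC")
```

At 30 minutes the curve is already near 842 °C and at 60 minutes near 945 °C, which is why the limiting temperatures of the connections compared above translate directly into minutes of fire resistance.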

Keywords: composite steel-framed buildings, connection behavior, end-plate connections, finite element modeling, fire resistance

Procedia PDF Downloads 160