Search results for: earth size
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6621

5721 Momentum in the Stock Exchange of Thailand

Authors: Mussa Hussaini, Supasith Chonglerttham

Abstract:

Stocks are usually classified according to characteristics distinctive enough that the performance of each category can be differentiated from another. Such classifications in financial markets arise either from financial innovation or from the discovery of a premium in a group of stocks with similar features. One of the major classifications in stock markets is the momentum strategy, under which stocks are sorted by their past performance into past winners and past losers. Momentum in a stock market refers to the idea that stocks will keep moving in the same direction: stocks with rising prices (past winners) will continue to rise, and stocks with falling prices (past losers) will continue to fall. The performance of this classification has been well documented in numerous studies across different countries, which suggest that past winners tend to outperform past losers in the future. However, academic research in this direction has been limited in countries such as Thailand and, to the best of our knowledge, there has been no such study in Thailand after the financial crisis of 1997. The significance of this study stems from the fact that Thailand is an open market that has encouraged foreign investment as a means to enhance employment, promote economic development, and transfer technology, and its main equity market, the Stock Exchange of Thailand, is a crucial channel for foreign investment inflow into the country. The equity market size in Thailand increased from $1.72 billion in 1984 to $133.66 billion in 1993, an increase of over 77 times within a decade. The main contribution of this paper is evidence for the size category in the context of the equity market in Thailand, since almost all previous studies have focused solely on large stocks or indices.
This paper extends the scope beyond large stocks and indices by including small and tiny stocks as well. Further, since there is a distinct absence of detailed academic research on momentum strategy in the Stock Exchange of Thailand after the crisis, this paper also extends the existing literature. This research is also of significance for researchers who would like to compare the performance of this strategy across countries and markets. We examined the performance of the momentum strategy in the Stock Exchange of Thailand from 2010 to 2014, with portfolio returns calculated on a monthly basis. Our results confirm that there is a positive momentum profit in large-size stocks and a negative momentum profit in small-size stocks during the period 2010 to 2014. Furthermore, the equal-weighted average of the momentum profits of the small and large size categories does not provide any indication of an overall momentum profit.
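
As a hedged illustration of the mechanics only (not the paper's actual code, data, or portfolio rules), the winner/loser construction can be sketched in Python on synthetic monthly returns; the formation window, one-month holding period, and 30% cutoff below are assumptions for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical monthly returns for 20 stocks over 60 months (rows = months).
returns = rng.normal(0.01, 0.05, size=(60, 20))

def momentum_profit(returns, formation=6, holding=1, top_frac=0.3):
    """Equal-weighted winner-minus-loser profit, averaged over all holding months."""
    n_months, n_stocks = returns.shape
    k = max(1, int(n_stocks * top_frac))
    profits = []
    for t in range(formation, n_months - holding + 1):
        # Cumulative return over the formation period ranks the stocks.
        past = (1 + returns[t - formation:t]).prod(axis=0) - 1
        order = np.argsort(past)
        losers, winners = order[:k], order[-k:]
        hold = returns[t:t + holding].mean(axis=0)
        profits.append(hold[winners].mean() - hold[losers].mean())
    return float(np.mean(profits))

print(round(momentum_profit(returns), 4))
```

With i.i.d. synthetic returns the profit hovers near zero; on real data, a significantly positive value would indicate momentum, as the paper reports for large stocks.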

Keywords: momentum strategy, past loser, past winner, stock exchange of Thailand

Procedia PDF Downloads 305
5720 Sentinel-2 Based Burn Area Severity Assessment Tool in Google Earth Engine

Authors: D. Madhushanka, Y. Liu, H. C. Fernando

Abstract:

Fires are one of the foremost agents of land-surface disturbance in diverse ecosystems, causing soil erosion, land-cover changes, and atmospheric effects that affect people's lives and properties. Fire severity is generally quantified with the Normalized Burn Ratio (NBR) index. Conventionally this is done manually by comparing a pre-fire and a post-fire image: the bitemporal difference of the preprocessed satellite images yields the dNBR, and the area is classified as unburnt (dNBR < 0.1) or burnt (dNBR >= 0.1). Furthermore, Wildfire Severity Assessment (WSA) classifies burnt and unburnt areas using the classification levels proposed by USGS, which comprise seven classes. This procedure generates a burn severity report for an area chosen manually by the user. This study was carried out with the objective of producing an automated tool for the above-mentioned process, namely the World Wildfire Severity Assessment Tool (WWSAT). It is implemented in Google Earth Engine (GEE), a free cloud-computing platform for satellite data processing with several data catalogs at different resolutions (notably Landsat, Sentinel-2, and MODIS) and planetary-scale analysis capabilities. Sentinel-2 MSI was chosen to support regular burnt-area severity mapping with a medium-spatial-resolution sensor (10-20 m). The tool uses machine learning classification techniques to identify burnt areas using NBR and to classify their severity over the user-selected extent and period automatically. Cloud coverage is one of the biggest concerns in fire severity mapping; in WWSAT we therefore present a fully automatic GEE workflow that aggregates cloud-free Sentinel-2 images for both pre-fire and post-fire compositing. The parallel processing capabilities and preloaded geospatial datasets of GEE facilitated the production of this tool, which provides a Graphical User Interface (GUI) to make it user-friendly.
The advantage of this tool is the ability to obtain burn-area severity over large extents and extended temporal periods. Two case studies demonstrate its performance. The Blue Mountains National Park forest affected by the Australian fire season of 2019-2020 is used to describe the WWSAT workflow. More than 7809 km2 of burnt area was detected at this site using Sentinel-2 data, with an error below 6.5% compared with the area mapped in the field. Furthermore, 86.77% of the detected area was recognized as fully burnt out, comprising high severity (17.29%), moderate-high severity (19.63%), moderate-low severity (22.35%), and low severity (27.51%). The Arapaho and Roosevelt National Forests, Colorado, USA, affected by the Cameron Peak fire in 2020, were chosen for the second case study. Around 983 km2 was found to have burnt, comprising high severity (2.73%), moderate-high severity (1.57%), moderate-low severity (1.18%), and low severity (5.45%). These areas can also be verified by visual inspection of the cloud-free images generated by WWSAT. The tool is cost-effective for calculating burnt area, since the satellite images are free and the cost of field surveys is avoided.
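
The NBR/dNBR arithmetic and the seven-class severity table described above can be sketched as follows; the band values are toy numbers, and the thresholds are the commonly quoted USGS dNBR breakpoints rather than values taken from the paper:

```python
import numpy as np

def nbr(nir, swir):
    """Normalized Burn Ratio from NIR and SWIR reflectance arrays."""
    nir, swir = np.asarray(nir, float), np.asarray(swir, float)
    return (nir - swir) / (nir + swir)

def classify_dnbr(dnbr):
    """Map dNBR = pre-fire NBR minus post-fire NBR onto the seven USGS classes."""
    thresholds = [-0.25, -0.10, 0.10, 0.27, 0.44, 0.66]
    labels = ["enhanced regrowth (high)", "enhanced regrowth (low)", "unburnt",
              "low severity", "moderate-low severity",
              "moderate-high severity", "high severity"]
    idx = np.digitize(dnbr, thresholds)
    return np.asarray(labels, object)[idx]

pre  = nbr([0.45, 0.40], [0.18, 0.15])   # pre-fire composite (toy pixels)
post = nbr([0.20, 0.38], [0.30, 0.14])   # post-fire composite (toy pixels)
dnbr = pre - post
print(classify_dnbr(dnbr))
```

The first toy pixel drops sharply in NBR and lands in "moderate-high severity"; the second barely changes and stays "unburnt", mirroring the dNBR >= 0.1 burnt/unburnt split used by the tool.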

Keywords: burnt area, burnt severity, fires, google earth engine (GEE), sentinel-2

Procedia PDF Downloads 218
5719 Inventive Synthesis and Characterization of a Cesium Molybdate Compound: CsBi(MoO4)2

Authors: Gülşah Çelik Gül, Figen Kurtuluş

Abstract:

Cesium molybdates with the general formula CsMIII(MoO4)2, where MIII = Bi, Dy, Pr, or Er, exhibit rich polymorphism and crystallize in a layered structure. These properties have motivated intensive studies of cesium molybdates. CsBi(MoO4)2 was synthesized by a microwave method using cesium sulphate, bismuth oxide, and molybdenum(VI) oxide in the appropriate molar ratio. Characterization was performed by X-ray diffraction (XRD), Fourier transform infrared (FTIR) spectroscopy, scanning electron microscopy/energy dispersive spectroscopy (SEM/EDS), and thermogravimetric/differential thermal analysis (TG/DTA).

Keywords: cesium bismuth dimolybdate, microwave synthesis, powder x-ray diffraction, rare earth dimolybdates

Procedia PDF Downloads 507
5718 Corporate Governance and Audit Report Lag: The Case of Tunisian Listed Companies

Authors: Lajmi Azhaar, Yab Mdallelah

Abstract:

This study examines the Tunisian market, in which recent events, notably financial scandals, provide an appropriate framework for studying the impact of corporate governance on audit report lag; moreover, very little research has examined this relationship in this context. The objective of this work is therefore to understand the factors influencing audit report lag, drawing primarily on agency theory (Jensen and Meckling, 1976), which suggests that characteristics of the board of directors (independence, diligence, and size) and of the audit committee (size, independence, diligence, and expertise) affect the report lag. Our research thus provides empirical evidence on the impact of governance attributes on audit report lag. Using a sample of forty-seven (47) Tunisian companies listed on the Tunis Stock Exchange (BVMT) during the period from 2014 to 2019, and based on the GMM method for dynamic panels, multivariate analysis shows that most corporate governance attributes have a significant effect on audit report lag. Specifically, audit committee diligence and audit committee expertise have a significant positive effect on audit report lag, while board diligence has a significant negative effect. However, this study finds no evidence that audit committee independence or the size, independence, and diligence of the board of directors are associated with audit report lag. The results also show significant effects for some control variables. Finally, this study contributes by applying the GMM dynamic panel method in an emerging context that previous studies have left largely unexplored.

Keywords: governance mechanisms, audit committee, board of directors, audit report lag

Procedia PDF Downloads 153
5717 Segmentation of Gray Scale Images of Dropwise Condensation on Textured Surfaces

Authors: Helene Martin, Solmaz Boroomandi Barati, Jean-Charles Pinoli, Stephane Valette, Yann Gavet

Abstract:

In the present work, we developed an image processing algorithm to measure water droplet characteristics during dropwise condensation on pillared surfaces. The main difficulty in this process is the similarity in shape and size between the water droplets and the pillars. The developed method divides droplets into four main groups based on their size and applies a corresponding algorithm to segment each group. These algorithms generate binary images of droplets based on both their geometrical and intensity properties. Information on droplet evolution over time, including mean radius and droplet number per unit area, is then extracted from the binary images. The developed image processing algorithm is verified against manual detection and applied to two different sets of images corresponding to two kinds of pillared surfaces.

Keywords: dropwise condensation, textured surface, image processing, watershed

Procedia PDF Downloads 211
5716 A Numerical Investigation of Lamb Wave Damage Diagnosis for Composite Delamination Using Instantaneous Phase

Authors: Haode Huo, Jingjing He, Rui Kang, Xuefei Guan

Abstract:

This paper presents a study of Lamb wave damage diagnosis of composite delamination using instantaneous phase data. Numerical experiments are performed using the finite element method, with delamination damages of different sizes modeled in the finite element package ABAQUS. Lamb wave excitation and response data are obtained using a pitch-catch configuration. Empirical mode decomposition is employed to extract the intrinsic mode functions (IMFs), and the Hilbert-Huang transform is applied to each of the resulting IMFs to obtain the instantaneous phase information. Baseline data for healthy plates are generated using the same procedure. The size of the delamination is then correlated with the instantaneous phase change for damage diagnosis. It is observed that the unwrapped instantaneous phase shows a consistent behavior with increasing delamination size.
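
For intuition only (a minimal sketch, not the paper's ABAQUS/EMD pipeline), the unwrapped instantaneous phase of a narrowband signal can be computed from its analytic signal; here a plain FFT-based Hilbert transform stands in for the Hilbert-Huang step applied to each IMF, and the 50 Hz tone is an arbitrary stand-in for a Lamb wave response:

```python
import numpy as np

def instantaneous_phase(x):
    """Unwrapped instantaneous phase via the analytic signal (FFT-based Hilbert)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)          # spectral mask: keep DC, double positive freqs
    h[0] = 1
    if n % 2 == 0:
        h[n // 2] = 1
        h[1:n // 2] = 2
    else:
        h[1:(n + 1) // 2] = 2
    analytic = np.fft.ifft(X * h)
    return np.unwrap(np.angle(analytic))

t = np.linspace(0, 1, 1000, endpoint=False)
phase = instantaneous_phase(np.sin(2 * np.pi * 50.0 * t))
# The slope of the unwrapped phase recovers the signal frequency.
f_est = (phase[-1] - phase[0]) / (2 * np.pi * (t[-1] - t[0]))
print(round(f_est, 1))
```

In the diagnosis setting it is the phase difference between a damaged response and the healthy baseline, rather than the frequency, that is correlated with delamination size.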

Keywords: delamination, lamb wave, finite element method, EMD, instantaneous phase

Procedia PDF Downloads 310
5715 Evolution of Predator-prey Body-size Ratio: Spatial Dimensions of Foraging Space

Authors: Xin Chen

Abstract:

It has been widely observed that marine food webs have significantly larger predator-prey body-size ratios than their terrestrial counterparts. A number of hypotheses have been proposed to account for this difference on the basis of primary productivity, trophic structure, biophysics, bioenergetics, habitat features, and energy efficiency. In this study, an alternative explanation is suggested based on the difference in the spatial dimensions of foraging arenas: terrestrial animals primarily forage in two-dimensional arenas, while marine animals mostly forage in three-dimensional arenas. Using two-dimensional and three-dimensional random walk simulations, it is shown that marine predators foraging in three dimensions would normally have a greater foraging efficiency than terrestrial predators foraging in two dimensions. Marine prey dispersed in three dimensions usually forms larger swarms or aggregations than terrestrial prey dispersed in two dimensions, which again favours greater predator foraging efficiency in marine animals. As an analytical tool, a Lotka-Volterra-based adaptive dynamical model is developed with the predator-prey ratio embedded as an adaptive variable. The model predicts that high predator foraging efficiency and a high prey conversion rate will dynamically lead to the evolution of a greater predator-prey ratio. Therefore, marine food webs with three-dimensional foraging space, which generally have higher predator foraging efficiency, will evolve a greater predator-prey ratio than terrestrial food webs.
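
A toy version of the dimensional comparison can be sketched as a random walk in a periodic box; the step size, detection radius, densities, and walk length below are arbitrary choices for illustration, and the sketch only counts encounters rather than reproducing the paper's analysis:

```python
import numpy as np

rng = np.random.default_rng(42)

def encounters(dim, n_prey=200, steps=2000, box=10.0, radius=0.5):
    """Count distinct prey encountered by a random-walking predator
    in a dim-dimensional periodic box."""
    prey = rng.uniform(0, box, size=(n_prey, dim))
    pos = np.full(dim, box / 2)
    caught = np.zeros(n_prey, bool)
    for _ in range(steps):
        pos = (pos + rng.normal(0, 0.2, dim)) % box
        # Minimum-image distance under periodic boundaries.
        d = np.linalg.norm((prey - pos + box / 2) % box - box / 2, axis=1)
        caught |= d < radius
    return int(caught.sum())

print("2D encounters:", encounters(2), " 3D encounters:", encounters(3))
```

A single run says little on its own; the paper's argument rests on averaging such simulations and on the analytical Lotka-Volterra model, not on one realization.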

Keywords: predator-prey, body size, lotka-volterra, random walk, foraging efficiency

Procedia PDF Downloads 66
5714 Design and Development of Permanent Magnet Quadrupoles for Low Energy High Intensity Proton Accelerator

Authors: Vikas Teotia, Sanjay Malhotra, Elina Mishra, Prashant Kumar, R. R. Singh, Priti Ukarde, P. P. Marathe, Y. S. Mayya

Abstract:

Bhabha Atomic Research Centre, Trombay is developing a Low Energy High Intensity Proton Accelerator (LEHIPA) as a pre-injector for a 1 GeV proton accelerator for an accelerator-driven sub-critical reactor system (ADSS). LEHIPA consists of an RFQ (Radio Frequency Quadrupole) and a DTL (Drift Tube Linac) as its major accelerating structures. The DTL is an RF resonator operating in the TM010 mode and provides the longitudinal E-field for acceleration of charged particles. The RF design of the drift tubes (DTs) of the DTL was carried out to maximize the shunt impedance, which demands that the diameter of the DTs be as small as possible. The width of a DT is, however, determined by the particle β and a trade-off between the transit time factor and the effective accelerating voltage in the DT gap. The array of drift tubes inside the DTL shields the accelerated particles from the decelerating RF phase and provides transverse focusing to the charged particles, which otherwise tend to diverge due to Coulombic repulsion and the transverse E-field at the entry of the DTs. The magnetic lenses housed inside the DTs control the transverse emittance of the beam. Quadrupole magnets are preferred over solenoid magnets because of the relatively high focusing strength of the former. The small volume available inside the DTs for housing magnetic quadrupoles has motivated the use of permanent magnet quadrupoles (PMQs) rather than electromagnetic quadrupoles (EMQs). This provides another advantage: Joule heating is avoided, which would otherwise have added a thermal load in a continuous-cycle accelerator. The beam dynamics require the uniformity of the integral magnetic gradient to be better than ±0.5%, with a nominal value of 2.05 tesla. The paper describes the magnetic design of the PMQ using Sm2Co17 rare-earth permanent magnets and discusses the results of five pre-series prototype fabrications, the qualification of these prototype permanent magnet quadrupoles, and a full-scale DT developed with embedded PMQs.
The paper discusses the magnetic pole design for optimizing the uniformity of the integral Gdl and the values of the higher-order multipoles. A novel but simple method of tuning the integral Gdl is also discussed.
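
The transit-time trade-off mentioned above follows the standard gap expression T = sin(πg/βλ)/(πg/βλ): widening the gap g lowers T and hence the effective voltage E0·T·L. The sketch below assumes a 352.21 MHz RF frequency and an illustrative β, neither of which is taken from the paper:

```python
import math

def transit_time_factor(gap, beta, freq=352.21e6):
    """Transit-time factor T = sin(x)/x with x = pi*g/(beta*lambda),
    for gap length g (m), particle beta, and RF frequency (Hz)."""
    lam = 299792458.0 / freq          # RF free-space wavelength in metres
    x = math.pi * gap / (beta * lam)
    return math.sin(x) / x

# T falls as the gap widens (illustrative beta for a few-MeV proton).
for g in (0.005, 0.010, 0.020):
    print(g, round(transit_time_factor(g, beta=0.08), 3))
```

This is why the DT width cannot simply be shrunk to maximize shunt impedance: the gap geometry and β jointly fix how much of the peak gap voltage the particle actually sees.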

Keywords: DTL, focusing, PMQ, proton, rare earth magnets

Procedia PDF Downloads 459
5713 Data-Centric Anomaly Detection with Diffusion Models

Authors: Sheldon Liu, Gordon Wang, Lei Liu, Xuefeng Liu

Abstract:

Anomaly detection, also referred to as one-class classification, plays a crucial role in identifying product images that deviate from the expected distribution. This study introduces Data-centric Anomaly Detection with Diffusion Models (DCADDM), presenting a systematic strategy for data collection and further diversifying the data with image generation via diffusion models. The algorithm addresses data collection challenges in real-world scenarios and points toward data augmentation through the integration of generative AI capabilities. The paper explores the generation of normal images using diffusion models. The experiments demonstrate that with 30% of the original number of normal images, modeling in an unsupervised setting with state-of-the-art approaches can achieve equivalent performance. With the addition of images generated via diffusion models (equivalent to 10% of the original dataset size), the proposed algorithm achieves better or equivalent anomaly localization performance.

Keywords: diffusion models, anomaly detection, data-centric, generative AI

Procedia PDF Downloads 72
5712 Optimization of Bioremediation Process to Remove Hexavalent Chromium from Tannery Effluent

Authors: Satish Babu Rajulapati

Abstract:

The removal of toxic heavy metal contaminants from wastewater streams and industrial effluents is one of the most important environmental issues faced worldwide. In the present study, three bacterial cultures tolerating high concentrations of chromium were isolated from soil and wastewater samples collected from the tanneries located in Warangal, Telangana state. The bacterial species were identified as Bacillus sp., Staphylococcus sp., and Pseudomonas sp. Preliminary studies were carried out with the three bacterial species at various operating parameters such as pH and temperature. The results indicate that Pseudomonas sp. is the most efficient in the uptake of Cr(VI). A detailed investigation of Pseudomonas sp. was therefore carried out to determine the efficiency of Cr(VI) removal. The various parameters influencing the biosorption of Cr(VI), namely pH, temperature, initial chromium concentration, inoculum size, and incubation time, were studied. Response Surface Methodology (RSM) was applied to optimize the removal of Cr(VI). Maximum Cr(VI) removal was found to be 85.72% at pH 7, temperature 35 °C, initial concentration 67 mg/l, inoculum size 9% (v/v), and time 60 h.
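
The core of RSM is fitting a second-order polynomial to designed experiments and locating its optimum. The numpy sketch below illustrates this with two factors and entirely invented removal data (the real study optimized five factors with dedicated DoE software):

```python
import numpy as np

# Invented Cr(VI) removal (%) versus pH and temperature (degrees C).
pH   = np.array([5, 5, 7, 7, 9, 9, 7, 6, 8])
temp = np.array([30, 40, 30, 40, 30, 40, 35, 35, 35])
rem  = np.array([60, 62, 80, 78, 55, 52, 85, 76, 74])

# Quadratic response surface: b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
X = np.column_stack([np.ones_like(pH), pH, temp, pH**2, temp**2, pH * temp])
beta, *_ = np.linalg.lstsq(X, rem, rcond=None)

def predict(p, t):
    return beta @ np.array([1.0, p, t, p**2, t**2, p * t])

# Grid search over the experimental region for the predicted optimum.
grid = [(p, t) for p in np.linspace(5, 9, 41) for t in np.linspace(30, 40, 21)]
best = max(grid, key=lambda pt: predict(*pt))
print(best, round(float(predict(*best)), 1))
```

With these toy data the fitted surface peaks near neutral pH, qualitatively matching the reported optimum of pH 7 and 35 °C.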

Keywords: Staphylococcus sp., chromium, RSM, optimization, Cr(VI)

Procedia PDF Downloads 305
5711 Design of a High Performance T/R Switch for 2.4 GHz RF Wireless Transceiver in 0.13 µm CMOS Technology

Authors: Mohammad Arif Sobhan Bhuiyan, Mamun Bin Ibne Reaz

Abstract:

The rapid advancement of CMOS technology in recent years has allowed scientists to fabricate wireless transceivers fully on-chip, resulting in smaller and lower-cost wireless communication devices with acceptable performance characteristics. Moreover, the performance of a wireless transceiver depends heavily on the performance of its first block, the T/R switch. This article proposes the design of a high-performance T/R switch for 2.4 GHz RF wireless transceivers in 0.13 µm CMOS technology. The switch exhibits 1-dB insertion loss and 37.2-dB isolation in transmit mode, and 1.4-dB insertion loss and 25.6-dB isolation in receive mode. The switch has a power handling capacity (P1dB) of 30.9 dBm. Besides, by avoiding bulky inductors and capacitors, the size of the switch is drastically reduced: it occupies only 0.00296 mm2, the lowest ever reported in this frequency band. The simplicity and low chip area of the circuit will therefore trim down the cost of fabrication as well as that of the whole transceiver.

Keywords: CMOS, ISM band, SPDT, t/r switch, transceiver

Procedia PDF Downloads 429
5710 Performance of the SOFA and APACHE II Scoring Systems in Predicting the Mortality of ICU Cases

Authors: Yu-Chuan Huang

Abstract:

Introduction: Unplanned transfers to intensive care units carry a higher mortality rate and a longer length of stay, which prevents intensive care unit beds from being used effectively. This affects the immediate medical treatment of critically ill patients, resulting in a drop in the quality of medical care. Purpose: The purpose of this study was to use the SOFA and APACHE II scores to analyze the mortality rate of cases transferred from the ED to the ICU, so that appropriate care can be provided as early as possible according to the score. Methods: This study used a descriptive experimental design. The sample size was estimated at 220 to reach a power of 0.8 for detecting a medium effect size of 0.30 at a 0.05 significance level, using G*Power; allowing for estimated follow-up loss, the required sample size was 242 participants. Data were obtained from the medical records of SOFA and APACHE II scores for cases transferred from the ED to the ICU in 2016. Results: 233 participants met the study criteria, and the medical records showed 33 deaths. Age and sex versus qSOFA and SOFA, and sex versus APACHE II, showed p > 0.05. Age versus APACHE II in the ED and ICU showed r = 0.150 and 0.268 (p < 0.001**). The scores versus mortality risk showed: ED qSOFA r = 0.235 (p < 0.001**), exp(B) = 1.685 (p = 0.007); ICU SOFA r = 0.78 (p < 0.001**), exp(B) = 1.205 (p < 0.001); APACHE II in the ED and ICU r = 0.253 and 0.286 (p < 0.001**), exp(B) = 1.041 and 1.073 (p = 0.017 and 0.001). For SOFA, a cutoff score above 15 points was identified as a predictor of a 95% mortality risk. Conclusions: The SOFA and APACHE II scores were calculated from initial laboratory data in the Emergency Department and during the first 24 hours of ICU admission. In conclusion, both scores are significantly associated with mortality and strongly predict it.
With these early predictors of morbidity and mortality, we can provide patients with a detailed assessment and proper care according to the predicted score, thereby reducing mortality and length of stay.
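
The exp(B) values reported above are per-point odds ratios from logistic regression. As a hedged sketch of how such a coefficient maps a score to a predicted probability (the intercept below is an arbitrary placeholder, not fitted from the study data):

```python
import math

def mortality_prob(score, intercept=-6.0, odds_ratio=1.205):
    """Logistic mortality probability from an ICU SOFA score.
    exp(B) = 1.205 is the per-point odds ratio quoted in the abstract;
    the intercept is a placeholder that would be fitted from the data."""
    return 1.0 / (1.0 + math.exp(-(intercept + math.log(odds_ratio) * score)))

# Predicted risk rises monotonically with the score.
for s in (5, 10, 15, 20):
    print(s, round(mortality_prob(s), 3))
```

With a fitted intercept, the curve would be calibrated so that the reported cutoff (SOFA above 15) corresponds to the observed high-risk group.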

Keywords: SOFA, APACHE II, mortality, ICU

Procedia PDF Downloads 138
5709 Heating Behavior of Ni-Embedded Thermoplastic Polyurethane Adhesive Film by Induction Heating

Authors: DuckHwan Bae, YongSung Kwon, Min Young Shon, SanTaek Oh, GuNi Kim

Abstract:

The heating behavior under induction heating of thermoplastic polyurethane (TPU) adhesive embedded with nanometer- and micrometer-sized nickel particles is examined in the present study. The effects of particle size, particle content, and TPU film thickness on the heating behavior were examined, and the correlation between the heating behavior and the magnetic properties of the nickel particles was also studied. The results show that heat generation increased with increasing nickel content and film thickness. In terms of particle size, however, the heat generation of the nickel-embedded TPU films followed the order 70 nm > 1 µm > 20 µm > 70 µm; this can be explained by the increasing ratio of eddy-current heating to hysteresis heating as particle size increases.

Keywords: induction heating, thermoplastic polyurethane, nickel, composite, hysteresis loss, eddy current loss, curie temperature

Procedia PDF Downloads 344
5708 Identification of Viruses Infecting Garlic Plants in Colombia

Authors: Diana M. Torres, Anngie K. Hernandez, Andrea Villareal, Magda R. Gomez, Sadao Kobayashi

Abstract:

Colombian garlic crops exhibited mild mosaic, yellow stripes, and leaf deformation, a group of symptoms that suggests viral infection. Several viruses belonging to the genera Potyvirus, Carlavirus, and Allexivirus are known to infect garlic and lower its yield worldwide, but in Colombia there are no studies of viral infections in this crop; to the best of our knowledge, only leek yellow stripe virus (LYSV) has been reported. Colombia has no management strategies for viral diseases of garlic because of the lack of information about viral infections of this crop, which is reflected in (i) a high prevalence of virus-related symptoms in garlic fields and (ii) a high dispersal rate. For these reasons, the purpose of the present study was to evaluate the viral status of garlic in Colombia, where these viruses can represent a major threat to garlic yield and quality. Fifty-five symptomatic leaf samples were collected for virus detection by RT-PCR and mechanical inoculation. Total RNA isolated from infected samples was subjected to RT-PCR with primers 1-OYDV-G/2-OYDV-G for onion yellow dwarf virus (OYDV) (expected size 774 bp), 1LYSV/2LYSV for LYSV (expected size 1000 bp), SLV 7044/SLV 8004 for shallot latent virus (SLV) (expected size 960 bp), GCL-N30/GCL-C40 for garlic common latent virus (GCLV) (expected size 481 bp), and EF1F/EF1R as an internal control (expected size 358 bp). GCLV, SLV, and LYSV were detected; at least one of the viruses was found in 95.6% of the analyzed samples. GCLV and SLV were detected in single infections with low prevalence (9.3% and 7.4%, respectively). Garlic generally becomes coinfected with several viruses: four viral complexes were identified, three double infections (64% of the analyzed samples) and one triple infection (15%). The most frequent viral complex was SLV + GCLV, infecting 48.1% of the samples.
The other double complexes identified had a prevalence of 7% (GCLV + LYSV and SLV + LYSV), and 5.6% of the samples were free from these viruses. Mechanical transmission experiments were set up using leaf tissue from samples collected in infected fields; different test plants were assessed to determine the host range, but it was restricted to C. quinoa, confirming the presence of the detected viruses, which have a limited host range and were detected in C. quinoa by RT-PCR. The results of the molecular and biological tests confirm the presence of SLV, LYSV, and GCLV; this is the first report of SLV and GCLV in garlic plants in Colombia, and these viruses can represent a serious threat to this crop in the country.

Keywords: SLV, GCLV, LYSV, leek yellow stripe virus, Allium sativum

Procedia PDF Downloads 135
5707 Seismic Design Approach for Areas with Low Seismicity

Authors: Mogens Saberi

Abstract:

The following article focuses on a new seismic design approach for Denmark. Denmark is located in a low-seismicity zone, and until now a general and very simplified approach has been used to accommodate the effect of seismic loading. The currently used method is presented, and it is found that this approach is on the unsafe side for many building types in Denmark. Historical earthquake damage in Denmark is presented, and a seismic map for Denmark is developed. Furthermore, a new design approach is suggested and compared to the existing one. The new approach is relatively simple but captures the effect of seismic loading more realistically than the existing one. The new approach is expected to be incorporated in the Danish design code for building structures.

Keywords: low seismicity, new design approach, earthquakes, Denmark

Procedia PDF Downloads 353
5706 Anatomical Adaptations of Three Astragalus Species under Salt Stress

Authors: Faycal Boughalleb, Raoudha Abdellaoui

Abstract:

The effect of NaCl stress on root and leaf anatomy was investigated in three Astragalus species grown in 0-300 mM NaCl for 30 days under greenhouse conditions. Root cross-section and cortex thickness were reduced under salt stress in the other two species (A. armatus and A. mareoticus), while A. tenuifolius showed a thinner cortex and an unchanged root cross-section. The epidermis and stele thickness were unaffected by salinity in A. armatus and A. tenuifolius but reduced in A. mareoticus, with smaller xylem vessel size. In addition, the vessel density and xylem wall thickness increased under salt conditions in the studied species. The entire lamina and mesophyll of the three species were thinner in salt-stressed plants; A. armatus and A. tenuifolius showed the greater thickness, with an increased size of the lower epidermis. NaCl (300 mM) reduced leaf water content by 41.5% in A. mareoticus, while it was unchanged in the other species. The size of the vascular bundle increased under salinity in A. tenuifolius leaves and was unchanged in the others, and a longer distance between leaf vascular bundles occurred in A. mareoticus. The effects of NaCl on root and leaf ultrastructure are discussed in relation to the degree of salt resistance of these species. Unchanged biomass production under salt stress confirmed the higher tolerance of A. tenuifolius to salinity; A. armatus was moderately salt-tolerant, with a 14.2% decrease in biomass production, while A. mareoticus was considered salt-sensitive, with a decrease in biomass production reaching 56.8%.

Keywords: Astragalus species, leaf ultrastructure, root anatomy, salt stress

Procedia PDF Downloads 376
5705 The Effect of Market Orientation on Business Performance of Auto Parts Industry

Authors: Vithaya Intraphimol

Abstract:

The purpose of this study is to investigate the relationship between market orientation and business performance through innovation, including product innovation and process innovation. Auto parts and accessories companies in Thailand were used as the sample. Survey research with a structured questionnaire was the key instrument for collecting the data, and structural equation modeling (SEM) was used to test the hypotheses; this method requires a minimum sample size of 200, which the study met. The results show that competitor orientation and interfunctional coordination have an effect on product innovation, and that interfunctional coordination has an effect on process innovation and return on assets. This indicates that within-firm coordination is crucial to firm performance. The implication for practice is that firms should support interfunctional coordination, whereby members of different functional areas of an organization communicate and work together to create value for target buyers, as they may thereby achieve better profitability.

Keywords: auto parts industry, business performance, innovations, market orientation

Procedia PDF Downloads 294
5704 Petri Net Modeling and Simulation of a Call-Taxi System

Authors: T. Godwin

Abstract:

A call-taxi system is a type of taxi service in which a taxi can be requested through a phone call or a mobile app. The schematic functioning of a call-taxi system is modeled using a Petri net, which captures the conditions required for a taxi to be assigned by a dispatcher to pick up a customer as well as the conditions for the taxi to be released by the customer. A Petri net is a graphical modeling tool used to understand sequences, concurrences, and confluences of activities in the working of discrete event systems; it uses tokens on a directed bipartite multigraph to simulate the activities of a system. The Petri net model is translated into a simulation model, and a call-taxi system is simulated. The simulation model helps in evaluating the operation of a call-taxi system based on the fleet size as well as the operating policies for call-taxi assignment and empty call-taxi repositioning. The developed Petri-net-based simulation model can be used to decide the fleet size as well as the call-taxi assignment policies for a call-taxi system.
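
The assign/release cycle described above can be sketched as a minimal token game; the place and transition names, fleet size, and call count below are invented for illustration, not taken from the paper's model:

```python
# Places hold token counts; a transition fires when every input place
# has enough tokens, consuming them and producing tokens downstream.
places = {"taxi_idle": 3, "call_waiting": 5, "taxi_busy": 0, "served": 0}
transitions = {
    "assign":  ({"taxi_idle": 1, "call_waiting": 1}, {"taxi_busy": 1}),
    "release": ({"taxi_busy": 1}, {"taxi_idle": 1, "served": 1}),
}

def enabled(name):
    pre, _ = transitions[name]
    return all(places[p] >= n for p, n in pre.items())

def fire(name):
    pre, post = transitions[name]
    for p, n in pre.items():
        places[p] -= n
    for p, n in post.items():
        places[p] += n

# Serve calls until none are waiting: each trip is an assign then a release.
while enabled("assign"):
    fire("assign")
    fire("release")
print(places)
```

A simulation model in the paper's sense would add timing and repositioning policies on top of this firing rule; the token game only encodes the enabling conditions.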

Keywords: call-taxi, discrete event system, petri net, simulation modeling

Procedia PDF Downloads 410
5703 Hansen Solubility Parameters, Quality by Design Tool for Developing Green Nanoemulsion to Eliminate Sulfamethoxazole from Contaminated Water

Authors: Afzal Hussain, Mohammad A. Altamimi, Syed Sarim Imam, Mudassar Shahid, Osamah Abdulrahman Alnemer

Abstract:

The exhaustive use of sulfamethoxazole (SUX) has become a global threat to human health due to water contamination through diverse sources. This study addressed the combined application of Hansen solubility parameters (HSPiP software) and a Quality by Design tool for developing various green nanoemulsions. The HSPiP program helped screen suitable excipients based on Hansen solubility parameters and experimental solubility data. Various green nanoemulsions were prepared and characterized for globular size, size distribution, zeta potential, and removal efficiency. Design Expert (DoE) software further helped identify critical factors with a direct impact on percent removal efficiency, size, and viscosity. Morphology was visualized under transmission electron microscopy (TEM). Finally, the treated water was studied to confirm the absence of the tested drug, employing ICP-OES (inductively coupled plasma optical emission spectroscopy) and HPLC (high-performance liquid chromatography). Results showed that HSPiP predicted a biocompatible lipid, a safe surfactant (lecithin), and propylene glycol (PG). The experimental solubility of the drug in the predicted excipients was quite convincing and vindicated the predictions. Various green nanoemulsions were fabricated and evaluated in vitro. Globular size (100-300 nm), PDI (0.1-0.5), zeta potential (~25 mV), and removal efficiency (%RE = 70-98%) were found to be in an acceptable range for deciding the input factors and levels in DoE. The experimental design tool helped identify the most critical variables controlling %RE and the optimized nanoemulsion content under the set constraints. Dispersion time was varied from 5-30 min. Finally, ICP-OES and HPLC corroborated the absence of SUX in the treated water. Thus, the strategy is simple, economic, selective, and efficient.

Keywords: quality by design, sulfamethoxazole, green nanoemulsion, water treatment, ICP-OES, Hansen program (HSPiP software)

Procedia PDF Downloads 68
5702 Channel Estimation/Equalization with Adaptive Modulation and Coding over Multipath Faded Channels for WiMAX

Authors: B. Siva Kumar Reddy, B. Lakshmi

Abstract:

WiMAX has adopted Adaptive Modulation and Coding (AMC) in OFDM to support higher data rates and error-free transmission. AMC schemes employ Channel State Information (CSI) to utilize the channel efficiently, maximize throughput, and improve spectral efficiency. The CSI is provided to the transmitter by channel estimators. In this paper, LSE (Least Square Error) and MMSE (Minimum Mean Square Error) estimators are suggested, and the BER (Bit Error Rate) performance is analyzed. Channel equalization is also integrated with the AMC-OFDM system and presented with the Constant Modulus Algorithm (CMA) and Least Mean Square (LMS) algorithms, along with an analysis of their convergence rates. Simulation results show that increasing the modulation order improves throughput but also raises the BER; there is thus a trade-off among modulation size, throughput, BER, and spectral efficiency. The results also confirm the need for channel estimation and equalization in high-data-rate systems.
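The two estimators mentioned can be sketched for a single OFDM symbol with known pilots. This is a hedged toy model, not the paper's WiMAX simulation: it assumes all-ones pilots, one uncorrelated Rayleigh tap per subcarrier (so the channel autocorrelation is taken as the identity, a simplification), and 0 dB SNR, where MMSE shrinkage helps most.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single-symbol setup: N pilot subcarriers, Rayleigh taps,
# additive noise at 0 dB SNR. Not the paper's multipath WiMAX channel.
N = 64
X = np.ones(N)                                  # known all-ones pilot symbols
h = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
snr_lin = 1.0                                   # 0 dB
noise_std = np.sqrt(1.0 / (2.0 * snr_lin))
n = noise_std * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
Y = h * X + n                                   # received pilot observations

# LS estimate: divide out the known pilots; needs no channel statistics.
H_ls = Y / X

# MMSE estimate: regularize LS with the channel autocorrelation R_hh and
# the noise level; R_hh = I is a simplifying assumption, not the general case.
R_hh = np.eye(N)
W = R_hh @ np.linalg.inv(R_hh + (1.0 / snr_lin) * np.eye(N))
H_mmse = W @ H_ls

mse_ls = np.mean(np.abs(H_ls - h) ** 2)
mse_mmse = np.mean(np.abs(H_mmse - h) ** 2)
print(mse_ls, mse_mmse)
```

With a correlated multipath channel, R_hh would instead be built from the power delay profile, which is where the MMSE estimator gains most of its advantage over LS.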

Keywords: AMC, CSI, CMA, OFDM, OFDMA, WiMAX

Procedia PDF Downloads 384
5701 Hydrometallurgical Treatment of Abu Ghalaga Ilmenite Ore

Authors: I. A. Ibrahim, T. A. Elbarbary, N. Abdelaty, A. T. Kandil, H. K. Farhan

Abstract:

The present work aims to study the leaching of Abu Ghalaga ilmenite ore by hydrochloric acid, with simultaneous reduction by the iron powder method, to dissolve its titanium and iron contents. The iron content in the produced liquor is separated by solvent extraction using TBP as a solvent. All parameters affecting the efficiency of the dissolution process were studied separately, including the acid concentration, the solid/liquid ratio (which controls the ilmenite/acid molar ratio), temperature, time, and grain size. The optimum conditions at which maximum leaching occurs are 30% HCl acid with a solid/liquid ratio of 1/30 at 80 °C for 4 h using ore ground to -350 mesh size. Likewise, all parameters affecting the solvent extraction and stripping of iron from the produced liquor were studied. Results show that the best extraction is at a solvent/solution ratio of 1/1 with shaking at 240 RPM for 45 minutes at 30 °C, whereas the best stripping of iron is at an H₂O/solvent ratio of 2/1.

Keywords: ilmenite ore, leaching, titanium solvent extraction, Abu Ghalaga ilmenite ore

Procedia PDF Downloads 276
5700 Using The Flight Heritage From >150 Electric Propulsion Systems To Design The Next Generation Field Emission Electric Propulsion Thrusters

Authors: David Krejci, Tony Schönherr, Quirin Koch, Valentin Hugonnaud, Lou Grimaud, Alexander Reissner, Bernhard Seifert

Abstract:

In 2018 the NANO thruster became the first Field Emission Electric Propulsion (FEEP) system ever to be verified in space, in an In-Orbit Demonstration mission conducted together with Fotec. Since then, 160 additional ENPULSION NANO propulsion systems have been deployed in orbit on 73 different spacecraft across multiple customers and missions. These missions included a variety of satellite bus sizes ranging from 3U CubeSats to >100 kg buses, and different orbits in Low Earth Orbit and Geostationary Earth Orbit, providing an abundance of on-orbit data for statistical analysis. This large-scale industrialization and flight heritage allows for a holistic way of gathering data from testing, integration, and operational phases, deriving lessons learnt over a variety of mission types, operator approaches, use cases, and environments. Based on these lessons learnt, a new generation of propulsion systems has been developed, addressing key findings from the large NANO heritage and adding new capabilities, including increased resilience, thrust vector steering, and increased power and thrust levels. Some of these successor products have already been validated in orbit, including the MICRO R3 and the NANO AR3. While the MICRO R3 features increased power and thrust levels, the NANO AR3 is a successor of the heritage NANO thruster with added thrust vectoring capability. Five NANO AR3 units have been launched to date on two different spacecraft. This work presents flight telemetry and on-orbit statistical data of ENPULSION NANO systems, as well as lessons learnt during on-orbit operations, customer assembly, integration and testing support, and ground test campaigns conducted at different facilities. We discuss how lessons learnt and operational improvements have been transferred across independent missions and customers.
Building on these learnings and this exhaustive heritage, we present the design of the new generation of propulsion systems that increases the power and thrust level of FEEP systems to address larger spacecraft buses.

Keywords: FEEP, field emission electric propulsion, electric propulsion, flight heritage

Procedia PDF Downloads 71
5699 Solar and Galactic Cosmic Ray Impacts on Ambient Dose Equivalent Considering a Flight Path Statistic Representative to World-Traffic

Authors: G. Hubert, S. Aubry

Abstract:

The Earth is constantly bombarded by cosmic rays of either galactic or solar origin. Humans are thus exposed to elevated levels of galactic radiation at aircraft altitudes. The typical total ambient dose equivalent for a transatlantic flight is about 50 μSv during quiet solar activity. By contrast, estimates differ by one order of magnitude for the contribution induced by certain solar particle events. Indeed, during a Ground Level Enhancement (GLE) event, the Sun can emit particles of sufficient energy and intensity to raise radiation levels on Earth's surface. Analyses of the characteristics of GLEs occurring since 1942 showed that for the worst of them, the dose level is on the order of 1 mSv and more. The largest of these events was observed in February 1956, for which the ambient dose equivalent rate is on the order of 10 mSv/hr. The extra dose at aircraft altitudes for a flight during this event might have been about 20 mSv, i.e., comparable with the annual limit for aircrew. The most recent GLE occurred in September 2017, resulting from an X-class solar flare, and was measured on the surfaces of both the Earth and Mars using the Radiation Assessment Detector on the Mars Science Laboratory's Curiosity rover. Recently, Hubert et al. proposed a GLE model included in a particle transport platform (named ATMORAD) that describes the extensive air shower characteristics and allows assessment of the ambient dose equivalent. In this approach, the GCR component is based on the force-field approximation model. The physical description of the Solar Cosmic Rays (SCR) considers the primary differential rigidity spectrum and the distribution of primary particles at the top of the atmosphere. ATMORAD determines the spectral fluence rates of secondary particles induced by extensive showers, considering altitudes from ground level to 45 km. The ambient dose equivalent can then be determined using fluence-to-ambient-dose-equivalent conversion coefficients.
The objective of this paper is to analyze the GCR and SCR impacts on ambient dose equivalent considering a statistically large number of world flight paths. Flight trajectories are based on the Eurocontrol Demand Data Repository (DDR) and consider realistic flight plans with and without regulations, or updated with radar data from the CFMU (Central Flow Management Unit). The final paper will present exhaustive analyses of solar impacts on ambient dose equivalent levels and will propose detailed analyses considering route and airplane characteristics (departure, arrival, continent, airplane type, etc.) and the phasing of the solar event. Preliminary results show an important impact of the flight path, particularly the latitude, which drives the cutoff rigidity variations. Moreover, dose values vary drastically during GLE events, on the one hand with the route path (latitude, longitude, altitude), and on the other hand with the phasing of the solar event. Considering the GLE that occurred on 23 February 1956, the average ambient dose equivalent evaluated for a Paris - New York flight is around 1.6 mSv, which is consistent with previous works. This highlights the importance of monitoring these solar events and of developing semi-empirical and particle transport methods to obtain reliable calculations of dose levels.
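The final conversion step mentioned above (fluence-to-dose coefficients applied to binned secondary-particle fluence rates) can be sketched in a few lines. This is an illustrative toy only: the fluence values and coefficients below are made-up placeholders, not ATMORAD outputs or ICRP reference data, and a real calculation would integrate over fine energy bins per particle species.

```python
# Illustrative sketch: ambient dose equivalent rate H*(10) from binned
# secondary-particle fluence rates, H*(10) = sum_i sum_E phi_i(E) * h_i(E).
# All numbers below are hypothetical placeholders.
fluence = {  # particles / (cm^2 * s), per energy bin
    "neutron": [0.012, 0.008, 0.003],
    "proton":  [0.004, 0.002, 0.001],
}
h_coeff = {  # pSv * cm^2, fluence-to-ambient-dose-equivalent per energy bin
    "neutron": [120.0, 300.0, 450.0],
    "proton":  [250.0, 400.0, 500.0],
}

def ambient_dose_rate(fluence, h_coeff):
    """Return the H*(10) rate in uSv/h from binned fluence rates."""
    psv_per_s = sum(
        phi * h
        for particle in fluence
        for phi, h in zip(fluence[particle], h_coeff[particle])
    )
    return psv_per_s * 3600 * 1e-6  # pSv/s -> uSv/h

print(round(ambient_dose_rate(fluence, h_coeff), 3))
```

Integrating such a rate along a flight trajectory (latitude, longitude, altitude versus time) is what couples the dose to the route path and to the phasing of a solar event.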

Keywords: cosmic ray, human dose, solar flare, aviation

Procedia PDF Downloads 199
5698 The Effect of Soil Fractal Dimension on the Performance of Cement Stabilized Soil

Authors: Nkiru I. Ibeakuzie, Paul D. J. Watson, John F. Pescatore

Abstract:

In roadway construction, the cost of soil-cement stabilization per unit area is significantly influenced by the binder content, hence the need to optimise cement usage. This research will characterize the influence of soil fractal geometry on the properties of cement-stabilized soil and strive to determine a correlation between the mechanical properties of cement-stabilized soil and the mass fractal dimension Dₘ indicated by the particle size distribution (PSD) of aggregate mixtures. Since strength development in cemented soil relies not only on cement content but also on soil PSD, this study will investigate the possibility of reducing cement content by changing the PSD of the soil, without compromising on strength, reduced permeability, and compressibility. A series of soil aggregate mixes will be prepared in the laboratory. The mass fractal dimension Dₘ of each mix will be determined from sieve analysis data prior to stabilization with cement. Stabilized soil samples will be tested for strength, permeability, and compressibility.
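The determination of Dₘ from sieve data can be sketched as a log-log regression. This assumes the common power-law grading model M(&lt;r)/M_T = (r/r_max)^(3 - Dₘ), under which the slope of log(percent passing) versus log(sieve size) equals 3 - Dₘ; the sieve sizes and percent-passing values below are hypothetical, not the study's data.

```python
import numpy as np

# Hypothetical sieve analysis data: standard sieve openings (mm) and the
# cumulative percent of mass passing each sieve.
sieve_mm = np.array([0.075, 0.15, 0.3, 0.6, 1.18, 2.36, 4.75])
pct_passing = np.array([8.0, 14.0, 24.0, 40.0, 62.0, 83.0, 100.0])

# Fit log(percent passing) vs log(size); the slope estimates 3 - D_m.
slope, _ = np.polyfit(np.log(sieve_mm), np.log(pct_passing), 1)
D_m = 3.0 - slope
print(round(D_m, 2))
```

Values of Dₘ between about 2 and 3 are typical for granular soils; a well-graded mix (closer to a Fuller curve) sits near Dₘ ≈ 2.5.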

Keywords: fractal dimension, particle size distribution, cement stabilization, cement content

Procedia PDF Downloads 206
5697 The Effect of Size, Thickness, and Type of the Bonding Interlayer on Bullet Proof Glass as per EN 1063

Authors: Rabinder Singh Bharj, Sandeep Kumar

Abstract:

This investigation presents the sample preparation and the results of ballistic impact tests as per EN 1063, examining the size, thickness, number, position, and type of the bonding interlayer (polyvinyl butyral, polycarbonate, and polyurethane) on bullet proof glass. It was observed that the impact energy absorbed by the bullet proof glass increases as the total thickness increases from 33 mm to 42 mm to 51 mm for the three samples, respectively. For a uniform increase in total sample thickness, the absorbed impact energy is greater for samples with a larger number of bonding interlayers than for those with more glass layers. There is no effect on the absorbed impact energy from a change in the position of the bonding interlayer.

Keywords: absorbed energy, bullet proof glass, laminated glass, safety glass

Procedia PDF Downloads 380
5696 Bank Liquidity Creation in a Dual Banking System: An Empirical Investigation

Authors: Lianne M. Q. Lee, Mohammed Sharaf Shaiban

Abstract:

The importance of bank liquidity management took center stage as policy makers promoted a more resilient global banking system after the market turmoil of 2007. The growing recognition of Islamic banks’ function of intermediating funds in the economy warrants investigation of their balance sheet structure, which is distinct from that of their conventional counterparts. Given that asymmetric risk transformation is inevitable, Islamic banks need to identify the liquidity risk within their distinctive balance sheet structure. Thus, there is a strong need to quantify and assess the liquidity position to ensure proper functioning of a financial institution; measuring bank liquidity is vital because liquid banks face less liquidity risk. We examine this issue by using two alternative quantitative measures of liquidity creation, “cat fat” and “cat nonfat”, constructed by Berger and Bouwman (2009). “Cat fat” measures all on-balance-sheet items as well as off-balance-sheet items, whilst the latter measures only on-balance-sheet items. Liquidity creation is measured over the period 2007-2014 in 14 countries where Islamic and conventional commercial banks coexist, and also separately by bank size class, as empirical studies have shown that liquidity creation varies by bank size. An interesting and important finding is that all size classes of Islamic banks have, on average, increased their creation of aggregate liquidity in real dollar terms over the years under both liquidity creation measures, especially large banks, indicating that Islamic banks actually generate more liquidity for the economy than their conventional counterparts, including from off-balance-sheet items. The liquidity creation from off-balance-sheet items by conventional banks may have been affected by the global financial crisis, when derivatives markets were severely hit.
The results also suggest that Islamic banks have higher volumes of assets and deposits, and that borrowing and bond issuance are lower in Islamic banks than in conventional banks because most such products are interest-based. As Islamic banks appear to create more liquidity than conventional banks under both measures, this indicates that the development of Islamic banking has been significant over the decades since its inception. This finding is encouraging as, despite Islamic banking’s overall size, it represents growth opportunities for these countries.
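The Berger-Bouwman measure referred to above is a weighted sum of balance sheet categories. A minimal sketch, assuming the standard published weights (+0.5 for illiquid assets and liquid liabilities, 0 for semiliquid items, -0.5 for liquid assets, illiquid liabilities, and equity, with off-balance-sheet items included only in "cat fat") and a hypothetical bank, might look like:

```python
# Berger-Bouwman (2009) style weights; the item categories and the bank
# figures below are simplified, illustrative placeholders.
WEIGHTS = {
    "illiquid_assets": 0.5, "semiliquid_assets": 0.0, "liquid_assets": -0.5,
    "liquid_liabilities": 0.5, "semiliquid_liabilities": 0.0,
    "illiquid_liabilities": -0.5, "equity": -0.5,
    # off-balance-sheet items, used only by the "cat fat" measure
    "illiquid_guarantees": 0.5, "liquid_derivatives": -0.5,
}
OFF_BALANCE_SHEET = {"illiquid_guarantees", "liquid_derivatives"}

def liquidity_creation(bank, fat=True):
    """Dollar liquidity created; fat=False ignores off-balance-sheet items."""
    return sum(
        WEIGHTS[item] * value
        for item, value in bank.items()
        if fat or item not in OFF_BALANCE_SHEET
    )

bank = {  # hypothetical bank, figures in $ millions
    "illiquid_assets": 600, "semiliquid_assets": 200, "liquid_assets": 200,
    "liquid_liabilities": 700, "semiliquid_liabilities": 100,
    "illiquid_liabilities": 100, "equity": 100,
    "illiquid_guarantees": 150, "liquid_derivatives": 50,
}
print(liquidity_creation(bank, fat=False))  # "cat nonfat"
print(liquidity_creation(bank, fat=True))   # "cat fat"
```

The intuition of the weighting is that a bank creates liquidity when it funds illiquid assets with liquid liabilities, and destroys it when it does the reverse.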

Keywords: financial institution, liquidity creation, liquidity risk, policy and regulation

Procedia PDF Downloads 335
5695 Consumer Experience of 3D Body Scanning Technology and Acceptance of Related E-Commerce Market Applications in Saudi Arabia

Authors: Moudi Almousa

Abstract:

This research paper explores Saudi Arabian female consumers’ experiences using 3D body scanning technology and their level of acceptance of possible market applications of this technology for apparel online shopping. Data were collected from 82 women who were scanned and then viewed a short video explaining three possible scenarios of 3D body scanning applications (size prediction, customization, and virtual try-on) before completing the survey questionnaire. Although respondents had strongly positive responses to the scanning experience, the majority were concerned about their privacy during the scanning process. The results indicated that size prediction and virtual try-on had greater market application potential and a higher chance of adoption based on consumer interest. The results of the study also indicated a strong positive correlation between respondents’ concern about the inability to try on apparel products in online environments and their willingness to use the three possible market applications.

Keywords: 3D body scanning, market applications, online, apparel fit

Procedia PDF Downloads 130
5694 Applying Multiple Intelligences to Teach Buddhist Doctrines in a Classroom

Authors: Phalaunnnaphat Siriwongs

Abstract:

The classroom of the 21st century is an ever-changing forum for new and innovative thoughts and ideas. With increasing technology and opportunity, students have rapid access to information that only decades ago would have taken weeks to obtain. Unfortunately, new techniques and technology are not the cure for the fundamental problems that have plagued the classroom ever since education was established. Class size has long been debated in academia. While it is difficult to pinpoint an exact number, it is clear that in this case more does not mean better. By looking into the successes and pitfalls associated with classroom size, the true advantages of smaller classes become clear. Previously, one class comprised 50 students. Being seventeen- and eighteen-year-old students, they sometimes found it quite difficult to stay focused. To help them understand and gain knowledge, the researcher introduced the theory of multiple intelligences, which enabled students to learn according to their own learning preferences no matter how they were being taught. In this lesson, the researcher designed a cycle of learning activities involving all intelligences so that everyone had equal opportunities to learn.

Keywords: multiple intelligences, role play, performance assessment, formative assessment

Procedia PDF Downloads 263
5693 Pareto System of Optimal Placement and Sizing of Distributed Generation in Radial Distribution Networks Using Particle Swarm Optimization

Authors: Sani M. Lawal, Idris Musa, Aliyu D. Usman

Abstract:

The Pareto approach to optimal solutions, which evolved for multi-objective optimization problems and denotes a set of non-dominated solutions in the search space, is adopted in this paper. The paper presents the optimal placement and sizing of Distributed Generation (DG) in radial distribution networks to minimize power loss and voltage deviation as well as to maximize the voltage profile of the networks. These problems are formulated as a constrained nonlinear optimization problem, with both the locations and sizes of DG being continuous, and solved using particle swarm optimization (PSO). The objective functions adopted are the total active power loss function and the voltage deviation function. The multi-objective nature of the problem made it necessary to form a multi-objective function whose solution consists of both the DG location and size. The proposed PSO algorithm is used to determine the optimal placement and size of DG in a distribution network. The output indicates that the PSO technique has an edge over other search methods due to its effectiveness and computational efficiency. The proposed method is tested on the standard IEEE 34-bus distribution network and validated on the 33-bus test system. Results indicate that the sizing and location of DG are system dependent and should be optimally selected before installing distributed generators in the system; an improvement in the voltage profile and a reduction in power loss have also been achieved.
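The PSO loop itself, with each particle encoding a continuous DG location and size, can be sketched compactly. This is a hedged illustration, not the paper's implementation: the objective below is a toy quadratic proxy for power loss (with a hypothetical optimum at location 0.6, size 1.5), standing in for the load-flow-based loss and voltage deviation functions.

```python
import random

random.seed(42)

def toy_loss(pos):
    """Hypothetical stand-in for power loss: minimized at (0.6, 1.5)."""
    location, size = pos
    return (location - 0.6) ** 2 + (size - 1.5) ** 2

def pso(objective, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    dim = len(bounds)
    X = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]                     # personal best positions
    pbest_val = [objective(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # inertia + cognitive pull + social pull
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                # move and clamp to the feasible range
                X[i][d] = min(max(X[i][d] + V[i][d], bounds[d][0]), bounds[d][1])
            val = objective(X[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = X[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = X[i][:], val
    return gbest, gbest_val

# location normalized to [0, 1] along the feeder, size in MW (illustrative)
best, best_val = pso(toy_loss, bounds=[(0.0, 1.0), (0.0, 5.0)])
print(best, best_val)  # should converge near location 0.6, size 1.5
```

In the multi-objective setting, the scalar objective would be replaced by Pareto dominance checks and an archive of non-dominated particles.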

Keywords: distributed generation, pareto, particle swarm optimization, power loss, voltage deviation

Procedia PDF Downloads 352
5692 White Light Emitting Carbon Dots- Surface Modification of Carbon Dots Using Auxochromes

Authors: Manasa Perikala, Asha Bhardwaj

Abstract:

Fluorescent carbon dots (CDs), young members of the carbon nanomaterial family, have gained a lot of research attention across the globe due to their highly luminescent and stable emission properties, non-toxic behavior, and zero re-absorption loss. These dots have the potential to replace traditional semiconductor quantum dots in light-emitting devices (LEDs, fiber lasers) and other photonic devices (temperature sensors, UV detectors). However, one major drawback of carbon dots is that, to date, the actual mechanism of photoluminescence (PL) in carbon dots is still an open topic of discussion among researchers across the globe. PL mechanisms of CDs based on wide particle size distributions, the effect of surface groups, hybridization in carbon, and charge transfer mechanisms have been proposed. Although these mechanisms explain the PL of CDs to an extent, no universally accepted mechanism that explains the complete PL behavior of these dots has been put forth. In our work, we report the parameters affecting the size and surface of CDs, such as reaction time, synthesis temperature, and precursor concentration, and their effects on the optical properties of the carbon dots. The effect of auxochromes on the emission properties and the re-modification of the carbon surface using an external surface functionalizing agent are discussed in detail. All the explanations are supported by UV-visible absorption and emission spectroscopies, Fourier transform infrared spectroscopy, transmission electron microscopy, and X-ray diffraction. Once the origin of PL in CDs is understood, the parameters affecting the PL centers can be modified to tailor the optical properties of these dots, enhancing their application in the fabrication of LEDs and other photonic devices.

Keywords: carbon dots, photoluminescence, size effects on emission in CDs, surface modification of carbon dots

Procedia PDF Downloads 122