Search results for: experimental and numerical modelling
2661 Advanced Technologies for Detector Readout in Particle Physics
Authors: Y. Venturini, C. Tintori
Abstract:
Given the continuous demand for improved readout performance in particle and dark matter physics, CAEN SpA is pushing forward the development of advanced technologies for detector readout. We present Digitizers 2.0, which builds on the success of the previous generation of digitizers, combining expanded capabilities with a renewed user experience introducing the open FPGA. The first product of the family is the VX2740 (64 ch, 125 MS/s, 16 bit) for advanced waveform recording and Digital Pulse Processing, meeting the special requirements of dark matter and neutrino experiments. In parallel, CAEN is developing the FERS-5200 platform, a Front-End Readout System designed to read out large multi-detector arrays, such as SiPMs, multi-anode PMTs, silicon strip detectors, wire chambers, GEMs, gas tubes, and others. This is a highly scalable distributed platform, based on small front-end cards synchronized and read out by a concentrator board, which makes it possible to build extremely large experimental setups. We plan to develop a complete family of cost-effective front-end cards tailored to specific detectors and applications. The first one available is the A5202, a 64-channel unit for SiPM readout based on the CITIROC ASIC by Weeroc.
Keywords: dark matter, digitizers, front-end electronics, open FPGA, SiPM
Procedia PDF Downloads 130
2660 Multi-Criteria Test Case Selection Using Ant Colony Optimization
Authors: Niranjana Devi N.
Abstract:
Test case selection chooses the subset of fit test cases and removes the unfit, ambiguous, redundant, and unnecessary ones, which in turn improves the quality and reduces the cost of software testing. Test case optimization is the problem of finding the best subset of test cases from a pool of test cases to be audited, such that all testing objectives are met concurrently. However, most research has evaluated the fitness of test cases on a single parameter, fault-detecting capability, and has optimized the test cases using a single objective. In the proposed approach, nine parameters are considered for test case selection, and the best subset of parameters is obtained using an interval type-2 fuzzy rough set. Test case selection is done in two stages. The first stage is a fuzzy entropy-based filtration technique, used for estimating and reducing the ambiguity in test case fitness evaluation and selection. The second stage is an ant colony optimization-based wrapper technique with a forward search strategy, employed to select test cases from the reduced test suite of the first stage. The results are evaluated using the coverage parameters Precision, Recall, F-Measure, APSC, APDC, and SSR. The experimental evaluation demonstrates that considerable computational effort can be avoided by this approach.
Keywords: ant colony optimization, fuzzy entropy, interval type-2 fuzzy rough set, test case selection
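The second-stage wrapper described above can be illustrated with a compact ant colony optimization loop. The sketch below is a simplified, hypothetical version: it uses a plain coverage-based fitness instead of the paper's nine-parameter interval type-2 fuzzy-rough evaluation, and all names and parameters are illustrative.

```python
import random

def aco_select(test_cases, requirements, covers, n_ants=10, n_iter=30,
               evaporation=0.1, seed=0):
    """Select a small subset of test cases covering all requirements with a
    simplified ant colony optimization wrapper (coverage-only fitness, not
    the paper's multi-criteria formulation)."""
    rng = random.Random(seed)
    pheromone = {t: 1.0 for t in test_cases}
    best, best_cost = None, float("inf")
    for _ in range(n_iter):
        for _ in range(n_ants):
            chosen, covered = [], set()
            candidates = list(test_cases)
            while covered < set(requirements) and candidates:
                # pick proportional to pheromone * heuristic (new coverage)
                weights = [pheromone[t] * (1 + len(covers[t] - covered))
                           for t in candidates]
                t = rng.choices(candidates, weights=weights)[0]
                candidates.remove(t)
                if covers[t] - covered:   # forward search: keep only useful cases
                    chosen.append(t)
                    covered |= covers[t]
            if covered == set(requirements) and len(chosen) < best_cost:
                best, best_cost = chosen, len(chosen)
        # evaporate, then reinforce the best subset found so far
        for t in pheromone:
            pheromone[t] *= (1 - evaporation)
        if best:
            for t in best:
                pheromone[t] += 1.0 / best_cost
    return best
```

In a real setting the coverage sets would come from instrumentation, and the heuristic term would be replaced by the fuzzy-rough fitness of each case.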
Procedia PDF Downloads 670
2659 Evaluating Models Through Feature Selection Methods Using Data Driven Approach
Authors: Shital Patil, Surendra Bhosale
Abstract:
Cardiac diseases are among the leading causes of mortality and morbidity in the world and, over recent decades, have emerged as the most life-threatening disorders globally, accounting for a large number of deaths. Machine learning and artificial intelligence have been playing a key role in predicting heart disease. A relevant set of features can be very helpful in predicting the disease accurately. In this study, we propose a comparative analysis of four different feature selection methods and evaluate their performance with both the raw (unbalanced) and sampled (balanced) datasets. The publicly available Z-Alizadeh Sani dataset has been used for this study. Four feature selection methods are used: data analysis, minimum Redundancy Maximum Relevance (mRMR), Recursive Feature Elimination (RFE), and chi-squared. These methods are tested with 8 different classification models to get the best accuracy possible. Using the balanced and unbalanced datasets, the study shows promising results in terms of various performance metrics in accurately predicting heart disease. Experimental results obtained by the proposed method with the raw data reach a maximum AUC of 100%, a maximum F1 score of 94%, a maximum recall of 98%, and a maximum precision of 93%, while with the balanced dataset the results are a maximum AUC of 100%, an F1 score of 95%, a maximum recall of 95%, and a maximum precision of 97%.
Keywords: cardiovascular diseases, machine learning, feature selection, SMOTE
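Of the four feature selection methods, the chi-squared filter is the simplest to state: score each feature by how far its joint distribution with the class label departs from independence. A minimal sketch for categorical features follows (a generic illustration, not the authors' exact pipeline; in practice a library implementation such as scikit-learn's would be used, and the feature names below are invented):

```python
from collections import Counter

def chi2_score(feature, labels):
    """Chi-squared statistic of one categorical feature column against the
    class labels: sum over contingency cells of (observed - expected)^2 / expected."""
    n = len(labels)
    f_counts = Counter(feature)
    l_counts = Counter(labels)
    joint = Counter(zip(feature, labels))
    score = 0.0
    for fv, fc in f_counts.items():
        for lv, lc in l_counts.items():
            observed = joint.get((fv, lv), 0)
            expected = fc * lc / n        # expected count under independence
            score += (observed - expected) ** 2 / expected
    return score

def rank_features(named_columns, labels):
    """Rank (name, column) pairs by decreasing chi-squared score."""
    return sorted(named_columns,
                  key=lambda nc: chi2_score(nc[1], labels),
                  reverse=True)
```

A feature perfectly aligned with the label scores high; a feature independent of the label scores zero, so it is dropped first.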
Procedia PDF Downloads 120
2658 A Study of the Tactile Codification on the Philippine Banknote: Redesigning for the Blind
Authors: Ace Mari S. Simbajon, Rhaella J. Ybañez, Mae G. Nadela, Cherry E. Sagun, Nera Mae A. Puyo
Abstract:
This study determined the usability of the Philippine banknotes. An experimental design was used involving twenty (n=20) randomly selected blind participants. Three aspects of usability were measured: effectiveness, efficiency, and satisfaction. It was found that the effectiveness rate of the current Philippine banknotes ranges from 20 percent to 35 percent, which means they are not effective based on Cauro's threshold for the average effectiveness rate, which is 78 percent. Their efficiency rate ranges from 18.06 to 26.22 seconds per denomination. The average satisfaction rate is 1.45, which means the blind are very dissatisfied. These results were used as a guide in preparing the proposed tactile codification using embossed dots or embossed lines. A round of simulation was conducted with the blind participants to assess the usability of the two proposals. Results were then statistically treated using a t-test. Results show a statistically significant difference between the usability of the current banknotes and the proposed designs. Moreover, it was found that the use of embossed dots is more effective, more efficient, and more satisfying than embossed lines, with an effectiveness rate ranging from 90 percent to 100 percent, an efficiency rate ranging from 6.73 seconds to 12.99 seconds, and a satisfaction rate of 3.4, which means the blind are very satisfied.
Keywords: blind, Philippine banknotes, tactile codification, usability
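The statistical treatment above relied on a t-test comparing current and proposed designs. A minimal sketch of a paired t statistic for per-participant before/after scores follows (illustrative only: the abstract does not state whether a paired or independent test was used, and the numbers in the example are invented, not the study's data):

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(before, after):
    """Paired t statistic (and degrees of freedom) for per-participant scores,
    e.g. task time with current vs. proposed banknotes."""
    diffs = [b - a for b, a in zip(before, after)]
    n = len(diffs)
    t = mean(diffs) / (stdev(diffs) / sqrt(n))  # stdev = sample standard deviation
    return t, n - 1
```

The resulting t value would then be compared against the critical value for the given degrees of freedom to decide significance.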
Procedia PDF Downloads 289
2657 Thermal Reduction of Perfect Well Identified Hexagonal Graphene Oxide Nano-Sheets for Super-Capacitor Applications
Authors: A. N. Fouda
Abstract:
Novel, well-identified hexagonal graphene oxide (GO) nano-sheets were synthesized using a modified Hummers method. Low-temperature thermal reduction at 350°C in ambient air was performed. After thermal reduction, typical few-layer thermally reduced GO (TRGO) with dimensions of a few hundred nanometers was observed using high-resolution transmission electron microscopy (HRTEM). GO has many structural models, owing to variations in the preparation process. Determining the atomic structure of GO is essential for a better understanding of its fundamental properties and for the realization of future technological applications. Structural characterization was carried out by X-ray diffraction (XRD) and Fourier transform infrared spectroscopy (FTIR) measurements. A comparison between the experimental and theoretical IR spectra was made to confirm the match between the experimentally and theoretically proposed GO structures. Partial overlap of the experimental IR spectrum with the theoretical one was confirmed. The electrochemical properties of TRGO nano-sheets as electrode materials for supercapacitors were investigated by cyclic voltammetry and electrochemical impedance spectroscopy (EIS) measurements. An enhancement in supercapacitance after reduction was confirmed, and the area of the CV curve for the TRGO electrode is larger than that for the GO electrode, indicating a higher specific capacitance, which is promising for super-capacitor applications.
Keywords: hexagonal graphene oxide, thermal reduction, cyclic voltammetry
Procedia PDF Downloads 495
2656 Wiedemann-Franz Law Violation Domain for Graphene and Nonrelativistic Systems
Authors: Thandar Zaw Win, Cho Win Aung, Gaurav Khandal, Sabyasachi Ghosh
Abstract:
Systematic and comparative research on Lorenz ratios for graphene and nonrelativistic systems has been carried out to identify their Wiedemann-Franz law violation domain. Fermi energy and temperature are the main governing parameters deciding the value of the Lorenz ratio, which is the thermal conductivity divided by the product of the electrical conductivity, the temperature, and the Lorenz number. Metals, as three-dimensional nonrelativistic electron gases, are located in the higher Fermi-energy-to-temperature domain, where the Lorenz ratio remains one; hence, they obey the Wiedemann-Franz law. By creating higher doping in a two-dimensional graphene system, one can again reach the higher Fermi-energy-to-temperature domain and get a constant Lorenz ratio. For both graphene and nonrelativistic systems, the Lorenz ratio goes below one in the lower Fermi-energy-to-temperature domain, which is reached in the graphene system by decreasing the doping concentration. Experimentally observed Lorenz ratios greater than one in this lower Fermi-energy-to-temperature, or Dirac fluid, domain indicate that non-fluid expressions of the Lorenz ratio should be replaced by fluid-type expressions. We have noticed a divergent trend of the Lorenz ratio in the Dirac fluid domain using its fluid-type expression, and it matches the trend of the experimental data.
Keywords: graphene, Lorenz ratio, specific heat, Wiedemann-Franz law
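The Lorenz ratio as defined above (thermal conductivity over electrical conductivity times temperature times the Lorenz number) is straightforward to evaluate numerically. A small sketch using the Sommerfeld value of the Lorenz number follows; the copper figures in the test are illustrative textbook values, not data from this work:

```python
from math import pi

K_B = 1.380649e-23          # Boltzmann constant, J/K
E_CHARGE = 1.602176634e-19  # elementary charge, C
L0 = (pi ** 2 / 3) * (K_B / E_CHARGE) ** 2  # Sommerfeld value, ~2.44e-8 W·Ω/K²

def lorenz_ratio(kappa, sigma, temperature):
    """Lorenz ratio L/L0 = kappa / (sigma * T * L0); it stays near one
    where the Wiedemann-Franz law holds."""
    return kappa / (sigma * temperature * L0)
```

For an ordinary metal near room temperature the ratio comes out close to one; the Dirac fluid regime discussed above is where measured values depart strongly from this.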
Procedia PDF Downloads 35
2655 Oryzanol Recovery from Rice Bran Oil: Adsorption Equilibrium Models Through Kinetics Data Approachments
Authors: A.D. Susanti, W. B. Sediawan, S.K. Wirawan, Budhijanto, Ritmaleni
Abstract:
Oryzanol, a component of rice bran oil (RBO), naturally has high antioxidant activity. It has been reported to have several health benefits and is of high interest in pharmacy, cosmetics, and nutrition. Because of the low concentration of oryzanol in crude RBO (0.9-2.9%), it needs further processing for practical use, for instance via adsorption. In this study, adsorption equilibrium models were investigated and adjusted through a kinetic data approach. A mathematical model of the kinetics of batch adsorption for oryzanol separation from RBO has been set up and then applied to the equilibrium results. The adsorbent particles used in this case are relatively small, so the concentration within the adsorbent is assumed to be uniform. Hence, the adsorption rate is controlled by the rate of oryzanol mass transfer from the bulk RBO fluid to the surface of the silica gel. In this approach, the rate of mass transfer is assumed to be proportional to the deviation of the concentration from the equilibrium state. The equilibrium models applied were the Langmuir, distribution coefficient, and Freundlich models, with the parameter values obtained from the equilibrium results. It turned out that the models can quantitatively describe the experimental kinetics data, and adjusting the values of the equilibrium isotherm parameters significantly improves the accuracy of the model. The value of the mass transfer coefficient per unit adsorbent mass (kca) is then obtained by curve fitting.
Keywords: adsorption equilibrium, adsorption kinetics, oryzanol, rice bran oil
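One of the equilibrium models named above, the Langmuir isotherm, can be fitted by ordinary least squares on its linearized form c/q = c/q_max + 1/(K_L·q_max). The sketch below uses synthetic data, not the paper's measurements, and the fitting route is a generic one rather than the authors' exact procedure:

```python
def langmuir(c, q_max, k_l):
    """Langmuir isotherm: adsorbed amount q at equilibrium concentration c."""
    return q_max * k_l * c / (1 + k_l * c)

def fit_langmuir(c_data, q_data):
    """Estimate (q_max, K_L) from the linearized form
    c/q = c/q_max + 1/(K_L*q_max) by ordinary least squares."""
    x = list(c_data)
    y = [c / q for c, q in zip(c_data, q_data)]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    intercept = my - slope * mx
    q_max = 1.0 / slope
    k_l = 1.0 / (intercept * q_max)
    return q_max, k_l
```

With noise-free data generated from known parameters, the linearized fit recovers them exactly, which makes a convenient self-check before fitting real isotherm data.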
Procedia PDF Downloads 323
2654 Continuous Fixed Bed Reactor Application for Decolourization of Textile Effluent by Adsorption on NaOH Treated Eggshell
Authors: M. Chafi, S. Akazdam, C. Asrir, L. Sebbahi, B. Gourich, N. Barka, M. Essahli
Abstract:
Fixed bed adsorption has become a frequently used industrial application in wastewater treatment processes. Various low-cost adsorbents have been studied for their applicability in the treatment of different types of effluents. The intention of this work was to explore the efficacy and feasibility of adsorbing the azo dye Acid Orange 7 (AO7) onto a fixed bed column of NaOH-treated eggshell (TES). The effects of various parameters, such as flow rate, initial dye concentration, and bed height, were examined in this study. The studies confirmed that the breakthrough curves were dependent on the flow rate, the initial AO7 concentration, and the bed depth. The Thomas, Yoon-Nelson, and Adams-Bohart models were analysed to evaluate the column adsorption performance. The adsorption capacity, rate constant, and correlation coefficient associated with each model were calculated and reported. The column experimental data were fitted well by the Thomas model, with correlation coefficients R2 ≥ 0.93 under different conditions, but the Yoon-Nelson, BDST, and Bohart-Adams models (R2 = 0.911) predicted the fixed-bed column performance poorly. TES was shown to be a suitable adsorbent for the adsorption of AO7 in a fixed-bed adsorption column.
Keywords: adsorption models, Acid Orange 7, bed depth, breakthrough, dye adsorption, fixed-bed column, treated eggshell
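The Thomas model that fitted the data best predicts the breakthrough curve in closed form: Ct/C0 = 1 / (1 + exp(k_Th·(q0·m − C0·V)/Q)). A minimal sketch follows; the parameter values in the test are invented for illustration, not the fitted values from this study:

```python
from math import exp

def thomas_breakthrough(k_th, q0, m, c0, flow_rate, volume):
    """Thomas model: effluent-to-inlet concentration ratio Ct/C0 after a
    cumulative throughput `volume` has passed through the column.
    k_th: Thomas rate constant (L/(mg*min)), q0: capacity (mg/g),
    m: adsorbent mass (g), c0: inlet concentration (mg/L),
    flow_rate (L/min), volume (L)."""
    return 1.0 / (1.0 + exp(k_th * (q0 * m - c0 * volume) / flow_rate))
```

The curve rises monotonically from near zero toward one, passing Ct/C0 = 0.5 exactly when the cumulative dye fed (c0·V) equals the bed capacity (q0·m).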
Procedia PDF Downloads 378
2653 Image Multi-Feature Analysis by Principal Component Analysis for Visual Surface Roughness Measurement
Authors: Wei Zhang, Yan He, Yan Wang, Yufeng Li, Chuanpeng Hao
Abstract:
Surface roughness is an important index for evaluating surface quality and needs to be accurately measured to ensure the performance of the workpiece. Roughness measurement based on machine vision involves various image features, some of which are redundant. These redundant features affect the accuracy and speed of the visual approach. Previous research used correlation analysis methods to select appropriate features. However, such feature analysis treats each feature independently and cannot fully utilize the information in the data. Besides, blindly discarding features loses a lot of useful information, resulting in unreliable results. Therefore, the focus of this paper is on providing a redundant-feature removal approach for visual roughness measurement. In this paper, statistical methods and the gray-level co-occurrence matrix (GLCM) are employed to effectively extract the texture features of machined images. Then, principal component analysis (PCA) is used to fuse all extracted features into a new one, which reduces the feature dimension and maintains the integrity of the original information. Finally, the relationship between the new features and roughness is established by a support vector machine (SVM). The experimental results show that the approach can effectively resolve the multi-feature information redundancy of machined surface images and provides a new idea for the visual evaluation of surface roughness.
Keywords: feature analysis, machine vision, PCA, surface roughness, SVM
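The PCA fusion step can be shown in closed form for the two-feature case: project the centered feature columns onto the leading eigenvector of their covariance matrix. The full pipeline fuses many GLCM and statistical features with a library PCA; this two-feature sketch only illustrates the projection idea:

```python
from math import sqrt

def pca_first_component(xs, ys):
    """First principal component of two feature columns, via the closed-form
    eigendecomposition of the 2x2 covariance matrix. Returns the unit
    direction and the fused 1-D scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    xs = [x - mx for x in xs]
    ys = [y - my for y in ys]
    a = sum(x * x for x in xs) / (n - 1)              # var(x)
    b = sum(x * y for x, y in zip(xs, ys)) / (n - 1)  # cov(x, y)
    c = sum(y * y for y in ys) / (n - 1)              # var(y)
    lam = (a + c) / 2 + sqrt(((a - c) / 2) ** 2 + b ** 2)  # largest eigenvalue
    # eigenvector for lam; if cov is zero the features are uncorrelated and
    # we simply keep the x-axis (simplification for this sketch)
    vx, vy = (b, lam - a) if b != 0 else (1.0, 0.0)
    norm = sqrt(vx * vx + vy * vy)
    vx, vy = vx / norm, vy / norm
    scores = [x * vx + y * vy for x, y in zip(xs, ys)]  # fused 1-D feature
    return (vx, vy), scores
```

Two perfectly correlated features collapse onto a single component with no information loss, which is exactly the redundancy-removal effect the paper exploits.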
Procedia PDF Downloads 214
2652 Comparison of Double Unit Tunnel Form Building before and after Repair and Retrofit under in-Plane Cyclic Loading
Authors: S. A. Anuar, N. H. Hamid, M. H. Hashim, S. M. D. Salleh
Abstract:
This paper presents experimental work on the seismic performance of a double unit tunnel form building (TFB) subjected to in-plane lateral cyclic loading. A one-third-scale 3-storey double unit TFB is tested at ±0.01%, ±0.1%, ±0.25%, ±0.5%, ±0.75% and ±1.0% drifts until the structure reaches its strength degradation. After that, the TFB is repaired and retrofitted using an additional shear wall, steel angles, and CFRP sheet. A similar testing approach is applied to the specimen after repair and retrofit. The crack patterns, lateral strength, stiffness, ductility, and equivalent viscous damping (EVD) were analyzed and compared before and after repair and retrofit. The results indicate that the lateral strength increases by 22% in the pushing direction and 27% in the pulling direction. Moreover, the stiffness and ductility obtained before and after retrofit increase tremendously, by 87.87% and 39.66%, respectively. Meanwhile, the energy absorption measured by equivalent viscous damping increases after retrofit by 12.34% in the pulling direction. It can be concluded that the proposed retrofit method is capable of increasing the lateral strength capacity, stiffness, and energy absorption of a double unit TFB.
Keywords: tunnel form building, in-plane lateral cyclic loading, crack pattern, lateral strength, stiffness, ductility, equivalent viscous damping, repair and retrofit
Procedia PDF Downloads 355
2651 Integrating Artificial Neural Network and Taguchi Method on Constructing the Real Estate Appraisal Model
Authors: Mu-Yen Chen, Min-Hsuan Fan, Chia-Chen Chen, Siang-Yu Jhong
Abstract:
In recent years, real estate prediction and valuation have been topics of discussion in many developed countries. Improper hype created by investors leads to fluctuating real estate prices, affecting many consumers who wish to purchase their own homes. Therefore, scholars from various countries have conducted research on real estate valuation and prediction. Using the back-propagation neural network, which has been popular in recent years, and the orthogonal array of the Taguchi method, this study aimed to find the optimal parameter combination among the levels of the orthogonal array after the system presented different parameter combinations, so that the artificial neural network obtained the most accurate results. The experimental results demonstrated that the method presented in this study outperformed traditional machine learning. Finally, they also showed that the proposed model had the best predictive effect and could significantly reduce the time cost of simulation runs. The best predictive results could be found more efficiently, with fewer experiments. Thus users could predict a real estate transaction price that is not far from current actual prices.
Keywords: artificial neural network, Taguchi method, real estate valuation model, investors
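The role of the orthogonal array can be shown with the smallest case, L4(2^3): three two-level factors covered in four balanced runs instead of the full 2^3 = 8. The factor values and objective function below are hypothetical placeholders, not the study's actual network hyperparameters:

```python
# L4 orthogonal array for three two-level factors
# (rows = experiments, entries = level index of each factor).
L4 = [
    [0, 0, 0],
    [0, 1, 1],
    [1, 0, 1],
    [1, 1, 0],
]

def taguchi_search(levels, objective):
    """Evaluate only the orthogonal-array combinations (4 runs instead of 8)
    and return the best combination and its score (lower is better)."""
    best_combo, best_score = None, float("inf")
    for row in L4:
        combo = tuple(levels[f][lvl] for f, lvl in enumerate(row))
        score = objective(combo)
        if score < best_score:
            best_combo, best_score = combo, score
    return best_combo, best_score
```

In the paper's setting, `objective` would train the back-propagation network with the given hyperparameters and return its validation error; the array guarantees each factor level is tested equally often.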
Procedia PDF Downloads 491
2650 Thermodynamics of Random Copolymers in Solution
Authors: Maria Bercea, Bernhard A. Wolf
Abstract:
The thermodynamic behavior of solutions of poly(methyl methacrylate-ran-t-butyl methacrylate) of variable composition, as compared with the corresponding homopolymers, was investigated by light scattering measurements on dilute solutions and vapor pressure measurements on concentrated solutions. The complex dependencies of the Flory-Huggins interaction parameter on concentration and copolymer composition in solvents of different polarity (toluene and chloroform) can be understood by taking into account the ability of the polymers to rearrange in response to changes in their molecular surroundings. A recent unified thermodynamic approach was used for modeling the experimental data; it is able to describe the behavior of the different solutions by means of two adjustable parameters, one representing the effective number of solvent segments and the other accounting for the interactions between the components. Thus, it was investigated how the solvent quality changes with the composition of the copolymers through the Gibbs energy of mixing as a function of polymer concentration. The largest reduction of the Gibbs energy at a given composition of the system was observed for the best solvent. The present investigation proves that the new unified thermodynamic approach is a general concept applicable to homo- and copolymers, independent of the chain conformation or shape, the molecular and chemical architecture of the components, and other dissimilarities, such as electrical charges.
Keywords: random copolymers, Flory-Huggins interaction parameter, Gibbs energy of mixing, chemical architecture
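The classic Flory-Huggins baseline behind the interaction parameter discussed above gives the Gibbs energy of mixing per lattice site as dG/RT = (phi/N)·ln(phi) + (1 − phi)·ln(1 − phi) + chi·phi·(1 − phi). A minimal numerical sketch follows; note this is only the textbook expression, not the paper's unified two-parameter approach:

```python
from math import log

def flory_huggins_dg(phi_p, chain_length, chi):
    """Flory-Huggins Gibbs energy of mixing per lattice site, in units of RT.
    phi_p: polymer volume fraction, chain_length: number of segments N,
    chi: interaction parameter."""
    phi_s = 1.0 - phi_p
    return (phi_p / chain_length) * log(phi_p) \
        + phi_s * log(phi_s) \
        + chi * phi_p * phi_s
```

A smaller chi (better solvent) makes the mixing energy more negative at fixed composition, which is the qualitative trend the abstract reports for the best solvent.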
Procedia PDF Downloads 281
2649 Variation of Streamwise and Vertical Turbulence Intensity in a Smooth and Rough Bed Open Channel Flow
Authors: M. Abdullah Al Faruque, Ram Balachandar
Abstract:
An experimental study with four different bed conditions was carried out to understand the effect of roughness in open channel flow at two different Reynolds numbers. The bed conditions include a smooth surface and three roughness conditions generated using sand grains with a median diameter of 2.46 mm: a surface with distributed roughness, a surface with continuously distributed roughness, and a sand bed with a permeable interface. A commercial two-component fibre-optic LDA system was used to conduct the velocity measurements. The variables of interest include the mean velocity, turbulence intensity, the correlation between the streamwise and wall-normal turbulence, Reynolds shear stress, and velocity triple products. Quadrant decomposition was used to extract the magnitude of the Reynolds shear stress of the turbulent bursting events. The effect of roughness was evident throughout the flow depth. The results show that distributed roughness has the greatest roughness effect, followed by the sand bed and the continuous roughness. Compared to the smooth bed, the streamwise turbulence intensity decreases but the vertical turbulence intensity increases at locations very close to the bed due to the introduction of roughness. Although the same sand grains are used to create the three rough bed conditions, the difference in turbulence intensity indicates that the specific geometry of the roughness influences the turbulence structure.
Keywords: open channel flow, smooth and rough bed, Reynolds number, turbulence
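Quadrant decomposition, as used above, sorts each velocity-fluctuation sample (u', w') by sign into the four bursting-event types and splits the Reynolds shear stress among them. A minimal sketch (no hole-size threshold, which a full analysis would normally include):

```python
def quadrant_decomposition(u_fluct, w_fluct):
    """Split the mean u'w' product into the four quadrant contributions
    (Q1: outward interaction, Q2: ejection, Q3: inward interaction,
    Q4: sweep). The contributions sum to mean(u'w')."""
    n = len(u_fluct)
    contrib = {1: 0.0, 2: 0.0, 3: 0.0, 4: 0.0}
    for u, w in zip(u_fluct, w_fluct):
        if u >= 0 and w >= 0:
            q = 1
        elif u < 0 and w >= 0:
            q = 2
        elif u < 0 and w < 0:
            q = 3
        else:
            q = 4
        contrib[q] += u * w / n
    return contrib
```

Over a rough bed, ejections (Q2) and sweeps (Q4) typically dominate the negative u'w' that sustains the Reynolds shear stress, which is why their magnitudes are the quantities of interest.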
Procedia PDF Downloads 340
2648 Soil Liquefaction Hazard Evaluation for Infrastructure in the New Bejaia Quai, Algeria
Authors: Mohamed Khiatine, Amal Medjnoun, Ramdane Bahar
Abstract:
Northern Algeria is a highly seismic zone, as evidenced by its historical seismicity. During the past two decades, it has experienced several moderate to strong earthquakes. Therefore, geotechnical engineering problems that involve dynamic loading of soils and soil-structure interaction require liquefaction studies in the presence of saturated loose sand formations. Bejaia city, located north-east of Algiers, Algeria, is part of an alluvial plain that covers an area of approximately 750 hectares. According to the Algerian seismic code, it is classified as a moderate seismicity zone. In the past, this area saw no urban development because of the various hazards identified by hydraulic and geotechnical studies conducted in the region. The low bearing capacity of the soil, its high compressibility, and the risks of liquefaction and flooding are among these hazards and are a constraint on urbanization. In this area, several structures founded on shallow foundations have suffered damage. Hence, the soils need treatment to reduce the risk. Many field and laboratory investigations, including core drilling, pressuremeter tests, standard penetration tests (SPT), cone penetration tests (CPT), and geophysical downhole tests, were performed at different locations in the area. The major part of the area consists of silty fine sand, sometimes heterogeneous, which has not yet reached a sufficient degree of consolidation. The groundwater depth varies between 1.5 and 4 m. These investigations show that the liquefaction phenomenon is one of the critical problems for geotechnical engineers and one of the obstacles found in the design phase of projects. This paper presents an analysis to evaluate the liquefaction potential using empirical methods based on the standard penetration test (SPT), the cone penetration test (CPT), and shear wave velocity, together with numerical analysis. These liquefaction assessment procedures indicate that liquefaction can occur to considerable depths in the silty sand of the harbor zone of Bejaia.
Keywords: earthquake, modeling, liquefaction potential, laboratory investigations
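SPT/CPT-based liquefaction screening of the kind described above compares the cyclic stress ratio (CSR) imposed by the design earthquake with the soil's cyclic resistance ratio (CRR). A sketch of the simplified Seed-Idriss CSR with a linear stress-reduction factor follows; the input numbers in the test are illustrative, not Bejaia data, and real projects use code-specified coefficients:

```python
def cyclic_stress_ratio(a_max_g, sigma_v, sigma_v_eff, depth_m):
    """Seed-Idriss simplified CSR = 0.65 * (a_max/g) * (sigma_v/sigma_v') * rd,
    with the Liao-Whitman stress-reduction factor rd
    (rd = 1 - 0.00765*z for z <= 9.15 m, else 1.174 - 0.0267*z)."""
    if depth_m <= 9.15:
        rd = 1.0 - 0.00765 * depth_m
    else:
        rd = 1.174 - 0.0267 * depth_m
    return 0.65 * a_max_g * (sigma_v / sigma_v_eff) * rd

def safety_factor(crr, csr):
    """Factor of safety against liquefaction; FS < 1 means liquefaction
    is expected at that depth."""
    return crr / csr
```

The CRR itself would come from SPT blow counts, CPT tip resistance, or shear wave velocity correlations, evaluated depth by depth through the silty sand profile.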
Procedia PDF Downloads 354
2647 The Study of Cost Accounting in S Company Based on TDABC
Authors: Heng Ma
Abstract:
Third-party warehousing logistics plays an important role in the development of external logistics. At present, third-party logistics in our country is still a new industry, and its accounting system has not yet been established. The current financial accounting of third-party warehousing logistics mainly follows the traditional way of thinking: it is only able to provide the total cost information of the entire enterprise during the accounting period and cannot reflect indirect operating cost information. In order to solve the problem of cost information distortion in the third-party logistics industry and improve the level of logistics cost management, this paper combines theoretical research with a case analysis method: it builds a third-party logistics costing model using Time-Driven Activity-Based Costing (TDABC) and takes S company as an example to account for and control warehousing logistics costs. Based on the idea that products consume activities and activities consume resources, TDABC takes time as the main cost driver and uses time equations to assign resource costs to cost objects. In S company, the analysis focuses on three warehouses engaged in warehousing and transportation (the second warehouse, a transport point) services. Each of these three warehouses includes five departments: the Business Unit, the Production Unit, the Settlement Center, the Security Department, and the Equipment Division; the activities in these departments are classified as in-out storage forecasting, in-out storage or transit, and safekeeping work. By computing the capacity cost rate and building the time equations, the paper calculates the final operating cost so as to reveal the real cost. The numerical results show that TDABC can accurately reflect the cost allocation to service customers and reveal the spare capacity cost of the resource centers, verifying the feasibility and validity of TDABC in third-party logistics cost accounting. It encourages enterprises to focus on customer relationship management and to reduce idle cost, strengthening the cost management of third-party logistics enterprises.
Keywords: third-party logistics enterprises, TDABC, cost management, S company
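The two TDABC quantities named above, the capacity cost rate and the time equation, reduce to very little arithmetic. A sketch with invented numbers (not S company's actual figures):

```python
def capacity_cost_rate(total_cost, practical_capacity_minutes):
    """TDABC step 1: cost per minute of capacity supplied by a department."""
    return total_cost / practical_capacity_minutes

def activity_cost(rate, unit_times, quantities):
    """TDABC step 2: time equation. Cost assigned to a cost object is
    rate * sum(unit_time_i * quantity_i) over its activities."""
    return rate * sum(t * q for t, q in zip(unit_times, quantities))
```

Whatever capacity the time equations do not consume is the idle (spare) capacity cost the abstract says TDABC makes visible.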
Procedia PDF Downloads 360
2646 Improving the Method for Characterizing Structural Fabrics for Shear Resistance and Formability
Authors: Dimitrios Karanatsis
Abstract:
Non-crimp fabrics (NCFs) allow high mechanical performance of a manufactured composite component by keeping the fibre reinforcements parallel to each other. The handling of NCFs is enabled by the stitching of the tows. Although the stitching material has a negligible influence on the performance of the manufactured part, it can affect the ability of the structural fabric to shear and drape over the part's geometry. High resistance to shearing is attributed to high tensile strain in the stitching yarn and can cause defects in the fabric. In the current study, a correlation between stitch tension and shear behaviour is examined. The purpose of the research is to investigate the upper and lower limits of non-crimp fabric manufacture and how these affect the shear behaviour of the fabrics. Experimental observations show that the shear behaviour of the fabrics is significantly affected by the stitch tension, which has a linear effect on the degree of shear they experience. It was found that the lowest possible stitch tension setting on the manufacturing line produces an NCF that exhibits very low tensile strain in its yarns and has shear properties similar to a woven fabric. Moreover, the highest allowable stitch tension results in reduced formability of the fabric, as the stitch thread rearranges the fibre filaments, which become packed in a tight formation with constricted movement.
Keywords: carbon fibres, composite manufacture, shear testing, textiles
Procedia PDF Downloads 147
2645 Time/Temperature-Dependent Finite Element Model of Laminated Glass Beams
Authors: Alena Zemanová, Jan Zeman, Michal Šejnoha
Abstract:
The polymer foil used in the manufacture of laminated glass members behaves in a viscoelastic manner with temperature dependence. This contribution aims at incorporating the time/temperature-dependent behavior of the interlayer into our earlier elastic finite element model for laminated glass beams. The model is based on a refined beam theory: each layer behaves according to the finite-strain shear-deformable formulation of Reissner, and the adjacent layers are connected via Lagrange multipliers ensuring the inter-layer compatibility of the laminated unit. The time/temperature-dependent behavior of the interlayer is accounted for by the generalized Maxwell model and by the time-temperature superposition principle due to Williams, Landel, and Ferry. The resulting system is solved by the Newton method with consistent linearization, and the viscoelastic response is determined incrementally by the exponential algorithm. By comparing the model predictions against available experimental data, we demonstrate that the proposed formulation is reliable and accurately reproduces the behavior of laminated glass units.
Keywords: finite element method, finite-strain Reissner model, Lagrange multipliers, generalized Maxwell model, laminated glass, Newton method, Williams-Landel-Ferry equation
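The Williams-Landel-Ferry superposition used for the interlayer maps real time at temperature T onto reduced time at a reference temperature through a shift factor a_T, with log10(a_T) = -C1(T - Tref)/(C2 + T - Tref). A minimal sketch; the "universal" constants are generic defaults, not the fitted values for an actual interlayer material:

```python
from math import log10  # noqa: F401  (shown for context; 10**x is used below)

def wlf_shift(temp_c, ref_temp_c, c1=17.44, c2=51.6):
    """WLF equation: log10 of the shift factor a_T. The defaults C1=17.44,
    C2=51.6 are the classic 'universal' constants (valid with Tref = Tg);
    a real interlayer requires fitted values."""
    dt = temp_c - ref_temp_c
    return -c1 * dt / (c2 + dt)

def reduced_time(t, temp_c, ref_temp_c, **kw):
    """Effective (reduced) time at the reference temperature: t / a_T."""
    return t / 10 ** wlf_shift(temp_c, ref_temp_c, **kw)
```

Above the reference temperature the shift factor is below one, so one second of real time corresponds to much more reduced time, i.e. the Maxwell chain relaxes faster, which is exactly what the incremental exponential algorithm consumes at each step.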
Procedia PDF Downloads 433
2644 An Experimental Study on the Optimum Installation of Fire Detector for Early Stage Fire Detecting in Rack-Type Warehouses
Authors: Ki Ok Choi, Sung Ho Hong, Dong Suck Kim, Don Mook Choi
Abstract:
Rack-type warehouses differ from general buildings in the kinds, amounts, and arrangement of stored goods, so their fire risk differs from that of such buildings. The fire pattern of rack-type warehouses also differs in the combustion characteristics and storage conditions of the stored goods. The initial burning rate depends on the surface condition of the materials, but the progression of the fire is closely related to the kinds of stored materials and the storage conditions. The stored goods of a warehouse consist of diverse combustibles, combustible liquids, and so on. Fire detection may be delayed because there are fewer occupants than in office and commercial buildings. If the fire detectors installed in a rack-type warehouse are unsuitable, a fire may grow into a major fire because of the delayed detection. In this paper, we studied which kinds of fire detectors are best suited for the early detection of rack-type warehouse fires by means of real-scale fire tests. The fire detectors used in the tests are rate-of-rise, fixed-temperature, photoelectric, and aspirating detectors. We propose an optimum fire detection method for rack-type warehouses, suggested by the response characteristics and a comparative analysis of the fire detectors.
Keywords: fire detector, rack, response characteristic, warehouse
Procedia PDF Downloads 747
2643 The Effects of Anthropomorphism on Complex Technological Innovations
Authors: Chyi Jaw
Abstract:
Many companies have suffered from consumers' rejection of complex new products and experienced huge losses in the market. Marketers have to understand what barriers to new-technology adoption or to positive product attitudes may exist in the market. This research examines the effects of techno-complexity and anthropomorphism on consumer psychology and product attitude when new technologies are introduced to the market. This study conducted a pretest and a 2 x 2 between-subjects experiment. Four simulated experimental web pages were constructed to collect data. The empirical analysis tested the moderation-mediation relationships among techno-complexity, technology anxiety, ability, and product attitude. The empirical results indicate that: (1) the techno-complexity of an innovation is negatively related to consumers' product attitude, and it also increases consumers' technology anxiety and reduces their self-perceived ability; (2) consumers' technology anxiety and ability perception towards an innovation completely mediate the relationship between techno-complexity and product attitude; (3) product anthropomorphism is positively related to consumers' attitude towards the new technology and also significantly moderates the effect of techno-complexity in the hypothesized model. In this work, the study presents the moderation-mediation model and the effects of an anthropomorphizing strategy, which describe how managers can better predict and influence the diffusion of complex technological innovations.
Keywords: ability, anthropomorphic effect, innovation, techno-complexity, technology anxiety
Procedia PDF Downloads 193
2642 Analysis of Attention to the Confucius Institute from Domestic and Foreign Mainstream Media
Authors: Wei Yang, Xiaohui Cui, Weiping Zhu, Liqun Liu
Abstract:
The rapid development of the Confucius Institute is attracting more and more attention from mainstream media around the world. Mainstream media play a large role in public information dissemination and in shaping public opinion. This study analyzes the correlation and functional relationships between domestic and foreign mainstream media based on the volume of their reporting on the Confucius Institute. Three correlation measures, the Pearson correlation coefficient (PCC), the Spearman correlation coefficient (SCC), and the Kendall rank correlation coefficient (KCC), were applied to analyze the correlations among mainstream media from three regions: the mainland of China; Hong Kong and Macao (the two special administrative regions of China, denoted as SARs); and overseas countries excluding China, such as the United States, England, and Canada. Further, the paper measures the functional relationships among the regions using a regression model. The experimental analyses found high correlations among mainstream media from the different regions. Additionally, we found a linear relationship between the mainstream media of the overseas countries and those of the SARs by analyzing the volume of reports on the Confucius Institute in a data set obtained by crawling the websites of 106 mainstream media outlets from 2004 to 2014.
Keywords: mainstream media, Confucius Institute, correlation analysis, regression model
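The three correlation measures named above can be sketched in a few lines of plain Python; the two series below are illustrative monthly report counts, not the study's data:

```python
from statistics import mean

def pearson(x, y):
    # PCC: covariance normalized by the product of standard deviations
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    varx = sum((a - mx) ** 2 for a in x)
    vary = sum((b - my) ** 2 for b in y)
    return cov / (varx * vary) ** 0.5

def ranks(x):
    # average ranks (handles ties), 1-based
    order = sorted(range(len(x)), key=lambda i: x[i])
    r = [0.0] * len(x)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and x[order[j + 1]] == x[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    # SCC: Pearson correlation applied to the ranks
    return pearson(ranks(x), ranks(y))

def kendall(x, y):
    # KCC (tau-a): concordant minus discordant pairs over all pairs
    n = len(x)
    s = sum(
        (1 if (x[i] - x[j]) * (y[i] - y[j]) > 0 else
         -1 if (x[i] - x[j]) * (y[i] - y[j]) < 0 else 0)
        for i in range(n) for j in range(i + 1, n)
    )
    return 2 * s / (n * (n - 1))

# illustrative monthly report counts for two media regions
mainland = [12, 15, 9, 20, 25, 18]
overseas = [10, 14, 8, 22, 27, 16]
print(pearson(mainland, overseas), spearman(mainland, overseas), kendall(mainland, overseas))
```

In practice, PCC captures linear association, SCC and KCC capture monotone association, which is why studies of this kind often report all three side by side.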
Procedia PDF Downloads 319
2641 Vibration Energy Harvesting from Aircraft Structure Using Piezoelectric Transduction
Authors: M. Saifudin Ahmed Atique, Santosh Paudyal, Caixia Yang
Abstract:
In an aircraft, a great portion of energy is wasted due to in-flight structural vibration. Structural components vibrate because of aeroelastic instabilities, gust perturbations, and engine rotation at very high rpm. The energy lost to mechanical vibration can be recovered by harvesting it from the aircraft structure as electrical energy. This harvested energy can be stored in battery panels built into the aircraft fuselage and used to power in-flight auxiliary accessories, e.g., lighting and entertainment systems. Moreover, this power can supply a wireless Structural Health Monitoring (SHM) system for the aircraft and serve as an excellent replacement for the aircraft Ground Power Unit (GPU)/Auxiliary Power Unit (APU) while passengers are on board, powering cabin accessories and thereby reducing aircraft ground operation costs significantly. In this paper, we propose the design of a novel aircraft wing in which piezoelectric panels placed under the composite skin generate electrical charge from any in-flight aerodynamic or mechanical vibration and store it in batteries to power auxiliary in-flight systems/accessories as required. Experimental results show that a well-engineered piezoelectric energy harvester based on an aircraft wing can produce adequate energy to support in-flight lighting and auxiliary cabin accessories.
Keywords: vibration energy, aircraft wing, piezoelectric material, inflight accessories
Procedia PDF Downloads 160
2640 Speeding Up Lenia: A Comparative Study Between Existing Implementations and CUDA C++ with OpenGL Interop
Authors: L. Diogo, A. Legrand, J. Nguyen-Cao, J. Rogeau, S. Bornhofen
Abstract:
Lenia is a system of cellular automata with continuous states, space and time, which surprises not only with the emergence of interesting life-like structures but also with its beauty. This paper reports ongoing research on a GPU implementation of Lenia using CUDA C++ and OpenGL Interoperability. We demonstrate how CUDA, as a low-level GPU programming paradigm, allows optimizing the performance and memory usage of the Lenia algorithm. A comparative analysis through experimental runs against existing implementations shows that the CUDA implementation outperforms the others by one order of magnitude or more. Cellular automata hold significant interest due to their ability to model complex phenomena in systems with simple rules and structures. They allow exploring emergent behavior such as self-organization and adaptation, and find applications in various fields, including computer science, physics, biology, and sociology. Unlike classic cellular automata, which rely on discrete cells and values, Lenia generalizes the concept of cellular automata to continuous space, time and states, thus providing additional fluidity and richness in emerging phenomena. In the current literature, there are many implementations of Lenia using various programming languages and visualization libraries. However, each implementation also presents certain drawbacks, which serve as motivation for further research and development. In particular, speed is a critical factor when studying Lenia, for several reasons. Rapid simulation allows researchers to observe the emergence of patterns and behaviors in more configurations, on bigger grids and over longer periods, without long waiting times, thereby enabling the exploration and discovery of new species within the Lenia ecosystem more efficiently. Moreover, faster simulations are beneficial when additional time-consuming algorithms, such as computer vision or machine learning, are included to evolve and optimize specific Lenia configurations.
We developed a Lenia implementation for the GPU using the C++ and CUDA programming languages, with CUDA/OpenGL Interoperability for immediate rendering. The goal of our experiment is to benchmark this implementation against the existing ones in terms of speed, memory usage, configurability and scalability. In our comparison we focus on the most important Lenia implementations, selected for their prominence, accessibility and widespread use in the scientific community. The implementations include MATLAB, JavaScript, ShaderToy GLSL, Jupyter, Rust and R. The list is not exhaustive but provides a broad view of the principal current approaches and their respective strengths and weaknesses. Our comparison primarily considers computational performance and memory efficiency, as these factors are critical for large-scale simulations, but we also investigate ease of use and configurability. The experimental runs conducted so far demonstrate that the CUDA C++ implementation outperforms the other implementations by one order of magnitude or more. The benefits of using the GPU become apparent especially with larger grids and convolution kernels. However, our research is still ongoing. We are currently exploring the impact of several software design choices and optimization techniques, such as convolution with Fast Fourier Transforms (FFT), various GPU memory management scenarios, and the trade-off between speed and accuracy of single versus double precision floating point arithmetic. The results will give valuable insights into the practice of parallel programming of the Lenia algorithm, and all conclusions will be thoroughly presented in the conference paper. The final version of our CUDA C++ implementation will be published on GitHub and made freely accessible to the ALife community for further development.
Keywords: artificial life, cellular automaton, GPU optimization, Lenia, comparative analysis
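For readers unfamiliar with what each benchmarked implementation actually computes, a minimal CPU sketch of one Lenia time step is shown below: FFT-based convolution of the state with a ring-shaped kernel, followed by a Gaussian growth mapping and clipping. All parameter values (grid size, kernel radius, mu, sigma, dt) are illustrative, and this NumPy code is a reference sketch, not the paper's CUDA implementation:

```python
import numpy as np

def ring_kernel(size, R):
    # smooth ring-shaped kernel of radius R, normalized to sum to 1
    y, x = np.ogrid[-size // 2:size // 2, -size // 2:size // 2]
    r = np.sqrt(x * x + y * y) / R
    K = np.exp(-((r - 0.5) ** 2) / (2 * 0.15 ** 2)) * (r < 1)
    return K / K.sum()

def growth(u, mu=0.15, sigma=0.015):
    # Gaussian growth mapping, values in [-1, 1]
    return 2.0 * np.exp(-((u - mu) ** 2) / (2 * sigma ** 2)) - 1.0

def lenia_step(A, K_fft, dt=0.1):
    # convolve state with kernel via FFT (periodic boundary), then grow and clip
    U = np.real(np.fft.ifft2(np.fft.fft2(A) * K_fft))
    return np.clip(A + dt * growth(U), 0.0, 1.0)

N, R = 128, 13
rng = np.random.default_rng(0)
A = rng.random((N, N))
K = ring_kernel(N, R)
K_fft = np.fft.fft2(np.fft.ifftshift(K))  # precompute kernel spectrum once
for _ in range(10):
    A = lenia_step(A, K_fft)
```

The per-step cost is dominated by the two FFTs over the whole grid, which is exactly the workload that benefits from the GPU parallelism discussed above.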
Procedia PDF Downloads 44
2639 Piping Fragility Composed of Different Materials by Using OpenSees Software
Authors: Woo Young Jung, Min Ho Kwon, Bu Seog Ju
Abstract:
A failure of a non-structural component can cause significant damage in critical facilities such as nuclear power plants and hospitals. Historically, it was reported that damage from leaking sprinkler systems during the 1971 San Fernando and 1994 Northridge earthquakes resulted in the shutdown of hospitals for several weeks. In most cases, water leakage was observed at cross joints, sprinkler heads, and T-joint connections in piping systems during and after the seismic events. Hence, the primary objective of this study was to understand the seismic performance of T-joint connections and to develop an analytical Finite Element (FE) model of the T-joint systems of a 2-inch fire protection piping system in hospitals subjected to seismic ground motions. To evaluate the FE models of the piping systems in OpenSees, two material models were used: 1) the Steel02 material and 2) the Pinching4 material. The results of the current study revealed that the nonlinear moment-rotation FE models for the threaded T-joint reconciled well with the experimental results for both FE material models. However, the system-level fragility determined from multiple nonlinear time history analyses at the threaded T-joint differed slightly: the fragility determined with the Pinching4 material was more conservative than that obtained with the Steel02 material.
Keywords: fragility, T-joint, piping, leakage, sprinkler
Procedia PDF Downloads 305
2638 Training Can Increase Knowledge and Skills of Teachers on Measurement and Assessment of Children's Nutritional Status
Authors: Herawati Tri Siswati, Nurhidayat Ana Sıdık Fatimah
Abstract:
The Indonesia Basic Health Research 2013 showed that among children 6–12 years old the prevalence of stunting was 35.6%, of wasting 12.2%, and of obesity 9.2%. The Indonesian Government runs a School Health Program, held in a coordinated, planned, directed, and accountable manner to develop and implement student health. However, its implementation is still below expectations, and the Indonesian Ministry of Health has therefore initiated an acceleration of the School Health Program. This study aims to determine the influence of training on the knowledge and skills of elementary school teachers regarding the measurement and assessment of children's nutritional status. The research is quasi-experimental with a pre-post design, conducted in Sleman district, Yogyakarta province, Indonesia, in 2015. The subjects were all elementary school teachers responsible for the School Health Program in Gamping sub-district, Sleman, Yogyakarta, i.e., 32 persons. The independent variable is training, while the dependent variables are the teachers' knowledge and skills in measuring and assessing children's nutritional status. The data were analyzed by t-test. The results showed that the knowledge score before training was 31.6±9.7 and after training 56.4±12.6, an increase of 24.8±15.7 (p=0.00). The skill score before training was 46.6±11.1 and after training 61.7±13, an increase of 15.2±14.2 (p=0.00). Training can increase teachers' knowledge and skills in measuring and assessing nutritional status.
Keywords: training, school health program, nutritional status, children
Procedia PDF Downloads 394
2637 Fatigue Analysis and Life Estimation of the Helicopter Horizontal Tail under Cyclic Loading by Using Finite Element Method
Authors: Defne Uz
Abstract:
The horizontal tail of a helicopter is exposed to repeated oscillatory loading generated by aerodynamic and inertial loads and by bending moments that depend on the operating conditions and maneuvers of the helicopter. To ensure that maximum stress levels do not exceed the fatigue limit of the material and to prevent damage, a numerical approach based on the Finite Element Method can be employed. Therefore, in this paper, a fatigue analysis of a horizontal tail model is carried out numerically to predict its high-cycle and low-cycle fatigue life under the defined loading. The analysis estimates the stress field at stress concentration regions, such as around fastener holes, where the maximum principal stresses are considered for each load case. Critical elements of the main load-carrying structural components with rivet holes are identified in a post-processing step, since the critical regions with high stress values serve as the input for the fatigue life calculation. Once the maximum stress at the critical element and its mean and alternating components are obtained, they are compared with the endurance limit by applying the Soderberg approach. The constant-life straight line provides the limit for combinations of mean and alternating stress. A life calculation based on the S-N (stress versus number of cycles) curve is also applied under fully reversed loading to determine the number of cycles corresponding to the oscillatory stress with zero mean. The results establish whether the design of the model is adequate in terms of fatigue strength and how many cycles the model can withstand at the calculated stress. The effect of correctly identifying the critical rivet holes is investigated by analyzing stresses at different structural parts of the model. In the case of a low life prediction, alternative design solutions are developed, and flight hours can be estimated for fatigue-safe operation of the model.
Keywords: fatigue analysis, finite element method, helicopter horizontal tail, life prediction, stress concentration
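The Soderberg comparison and the S-N life calculation described above can be sketched as follows. The material properties and stresses are illustrative values, not those of the analyzed tail, and the Basquin power-law form of the S-N curve is an assumption of this sketch:

```python
def soderberg_safety_factor(sigma_a, sigma_m, S_e, S_y):
    # Soderberg line: sigma_a/S_e + sigma_m/S_y = 1/n, solved for safety factor n
    return 1.0 / (sigma_a / S_e + sigma_m / S_y)

def basquin_life(sigma_ar, sigma_f, b):
    # Basquin S-N relation: sigma_ar = sigma_f * (2N)^b, solved for cycles N
    return 0.5 * (sigma_ar / sigma_f) ** (1.0 / b)

# illustrative values (MPa) for an aluminium alloy near a rivet hole
sigma_a, sigma_m = 80.0, 40.0   # alternating / mean stress at the critical element
S_e, S_y = 140.0, 280.0         # endurance limit / yield strength
n = soderberg_safety_factor(sigma_a, sigma_m, S_e, S_y)

# equivalent fully reversed amplitude via the Soderberg mean-stress correction,
# then the S-N life for that zero-mean oscillatory stress
sigma_ar = sigma_a / (1.0 - sigma_m / S_y)
N = basquin_life(sigma_ar, sigma_f=900.0, b=-0.1)
print(f"safety factor n = {n:.2f}, life = {N:.3g} cycles")
```

A safety factor above 1 places the (mean, alternating) stress pair inside the Soderberg constant-life line; the life estimate only becomes the design driver when the corrected amplitude exceeds the endurance limit.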
Procedia PDF Downloads 148
2636 The Impact of Text Modifications on Ethiopian Students’ Reading Comprehension and Motivation
Authors: Asefa Kenefergib, Dawit Amogne, Yinager Teklesellassie
Abstract:
This study investigated the effects of text modification on reading comprehension and reading motivation among Ethiopian secondary school students. A total of 120 students participated, initially taking a reading comprehension pretest and completing a reading motivation questionnaire. They were then divided into three groups: control, simplified text, and elaborated text. After an eight-week instructional intervention, each group took a reading comprehension posttest and completed the reading motivation questionnaire again. Despite initial differences, the simplified and elaborated text groups showed comparable levels of reading motivation and comprehension. The data were analyzed using SPSS version 25, with one-way ANOVA used to assess the effectiveness of the modified texts in enhancing reading comprehension. The results indicated that the experimental groups performed significantly better on the posttest than the control group, suggesting that text modification can positively influence students' comprehension skills. The impact of text modification on reading motivation was likewise assessed with one-way ANOVA. The findings revealed that both the elaborated and simplified text groups scored higher than the control group on several dimensions of reading motivation, including reading efficacy, curiosity, challenge, compliance, and reading work avoidance, while the control and simplified groups had nearly identical mean scores on the reading competition dimension. These results demonstrate that modifying texts can enhance EFL learners' reading motivation and comprehension.
Keywords: simplification, elaboration, reading motivation, reading comprehension
Procedia PDF Downloads 43
2635 Design and Control of a Knee Rehabilitation Device Using an MR-Fluid Brake
Authors: Mina Beheshti, Vida Shams, Mojtaba Esfandiari, Farzaneh Abdollahi, Abdolreza Ohadi
Abstract:
Most people who survive a stroke need rehabilitation tools to regain their mobility, and the core component of such devices is a brake actuator. The goal of this study is to design and control a magnetorheological brake that can be used as a rehabilitation tool. The fluid used in this brake is a magnetorheological (MR) fluid, whose properties change with the applied magnetic field; this feature allows the braking behavior to be controlled. In this research, different MR brake designs are first introduced, and for each design the brake dimensions are determined from the torque required for foot movement. To calculate the brake dimensions, it is assumed that the shear stress distribution in the fluid is uniform and that the fluid is in its saturated state. After designing the rehabilitation brake, a mathematical model of the movement of a healthy person is extracted. Due to the nonlinear and time-varying nature of the system, adaptive, neural network, and robust controllers have been implemented to estimate the parameters and control the system. After calculating the torque and control current, the controller that performs best in terms of error and control current is selected. Finally, this controller is applied to experimental data of the patient's movements, and the control current required to achieve the desired torque and motion is calculated.
Keywords: rehabilitation, magnetorheological fluid, knee, brake, adaptive control, robust control, neural network control, torque control
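The sizing step described above — choosing brake dimensions from the required torque under the uniform-shear, saturated-fluid assumption — can be sketched for a disc-type MR brake with the standard Bingham-plastic torque integral; all dimensions and fluid properties below are illustrative, not the values used in the study:

```python
import math

def mr_disc_brake_torque(tau_y, mu, omega, h, r_i, r_o, n_faces=2):
    # Bingham-plastic shear stress across the gap: tau(r) = tau_y + mu * omega * r / h
    # Torque per face: T = 2*pi * integral of tau(r) * r^2 dr over [r_i, r_o]
    yield_term = tau_y * (r_o**3 - r_i**3) / 3.0
    viscous_term = mu * omega * (r_o**4 - r_i**4) / (4.0 * h)
    return n_faces * 2.0 * math.pi * (yield_term + viscous_term)

# illustrative values for a small knee-rehabilitation brake
tau_y = 40e3           # field-induced yield stress of the saturated MR fluid, Pa
mu = 0.2               # plastic viscosity, Pa*s
omega = 2.0            # shaft speed, rad/s
h = 1e-3               # fluid gap, m
r_i, r_o = 0.01, 0.04  # inner / outer disc radii, m

T = mr_disc_brake_torque(tau_y, mu, omega, h, r_i, r_o)
print(f"braking torque ~ {T:.1f} N*m")
```

The field-dependent yield term dominates at rehabilitation speeds, which is why controlling the coil current (and hence tau_y) gives direct control over the resisting torque.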
Procedia PDF Downloads 154
2634 Changes in the Body Weight and Wound Contraction Rate Following Treatment with Piper betel Extract in Diabetic Wounds
Authors: Nurul Z. Sani, Amalina N. Ghazali, Azree Elmy, Lee C. Yuen, Zar C. Thent
Abstract:
Piper betel (P. betel) leaves are widely used in Asian countries for treating diabetes mellitus and its complications. In our previous study, we observed a positive effect of P. betel extract on diabetic wounds after 3 and 7 days of treatment. The aim of the present study was to determine the effect of P. betel leaf extract on diabetic rats, in terms of body weight and wound contraction rate, after 5 days of treatment. A total of 64 male Sprague-Dawley rats were used; the experimental rats received a single intraperitoneal dose of 60 mg/kg of Streptozotocin (STZ). Four full-thickness (6 mm) cutaneous wounds were created on the dorsum of each rat. The rats were divided into four groups (n=8): Non-treated Control (NC), Non-treated Diabetic (ND), diabetic treated with a commercial cream (SN), and diabetic treated with 50 mg/kg of P. betel extract (PB). The rats were sacrificed on days 0 and 5 post-wounding. A significant increase in wound closure rate and body weight was observed in the PB group compared to the ND group, and histological deterioration was restored in the P. betel-treated wounds. It is concluded that topical application of P. betel extract for 5 days post-wounding promotes wound healing in diabetic rats.
Keywords: diabetes, Piper betel, wound healing, body weight, morphology
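The wound contraction rate reported in such studies is commonly computed from the initial and remaining wound areas; below is a minimal sketch using the standard formula from the wound-healing literature (the areas are illustrative, not the study's measurements):

```python
import math

def wound_contraction_percent(area_day0, area_dayN):
    # percentage contraction relative to the initial wound area
    return (area_day0 - area_dayN) / area_day0 * 100.0

# illustrative areas (mm^2) for a 6 mm circular punch wound
a0 = math.pi * 3.0 ** 2   # day 0: full 6 mm diameter wound, ~28.3 mm^2
a5 = 12.0                 # day 5: measured remaining open area (illustrative)
print(f"contraction after 5 days: {wound_contraction_percent(a0, a5):.1f}%")
```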
Procedia PDF Downloads 555
2633 Anti-Implantation Activity of Kepel (Stelechocarpus burahol) Pulp Ethanol Extract in Female Mice
Authors: Suparmi, Israhnanto Isradji, Dina Fatmawati, Iwang Yusuf
Abstract:
Kepel (Stelechocarpus burahol) is a traditional plant originating from Indonesia that is used to prevent pregnancy and to treat urinary complaints and kidney inflammation. Kepel pulp contains alkaloids, triterpenoids, tannins, saponins, and flavonoids, which can exert hormonal and cytotoxic effects. This study aimed to evaluate the anti-implantation activity of the ethanol extract of kepel pulp in vivo. In this experimental study with a post-test-only control group design, 20 female mice were randomly divided into 4 groups: a control group and groups given 0.65 mg, 1.3 mg, and 2.6 mg doses of kepel pulp extract. The extract was dissolved in DMSO and given at 1 ml per mouse, from 10 days before copulation until day 18 of pregnancy. The number of implantations, the presence of fetuses, and embryo resorptions were then recorded and used to calculate the percentage anti-implantation effect. The results were tested by one-way ANOVA. The mean numbers of implantations in the control, 0.65 mg, 1.3 mg, and 2.6 mg groups were 5.60±1.14, 6.20±1.64, 7.60±1.51, and 8.00±1.58, respectively. The one-way ANOVA test showed no significant difference in the number of implantations between the groups (p > 0.05). The administration of kepel pulp ethanol extract had no effect on the percentage anti-implantation effect or on the number of embryo resorptions.
Keywords: anti-implantation, fetus, Stelechocarpus burahol, flavonoid
Procedia PDF Downloads 438
2632 Evaluation of the Internal Quality for Pineapple Based on the Spectroscopy Approach and Neural Network
Authors: Nonlapun Meenil, Pisitpong Intarapong, Thitima Wongsheree, Pranchalee Samanpiboon
Abstract:
In Thailand, once pineapples are harvested, they must be classified into two classes based on their sweetness: sweet and unsweet. This paper studies and develops an assessment of the internal quality of pineapples using a low-cost compact spectroscopy sensor together with a neural network (NN). Batavia pineapples were used in the experiments, yielding 100 samples. The extracted juice of each sample was used to determine the Soluble Solid Content (SSC), labeling the samples into sweet and unsweet classes. As experimental equipment, a sensor cover was specifically designed to hold the sensor and light source so that reflectance could be read at a 5 mm depth in the pineapple flesh. Using the spectroscopy sensor, visible and near-infrared reflectance (Vis-NIR) data were collected, and the NN was used to classify the pineapple classes. Before the classification step, the preprocessing methods of class balancing, data shuffling, and standardization were applied. The reflectance values at 510 nm and 900 nm from the middle part of each pineapple were used as the features of the NN. With a Sequential model and the ReLU activation function, 100% accuracy on the training set and 76.67% accuracy on the test set were achieved. These results show that a low-cost compact spectroscopy sensor can classify the sweetness of the two classes of pineapples with favorable results.
Keywords: neural network, pineapple, soluble solid content, spectroscopy
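As a rough illustration of the pipeline described above — two reflectance features, shuffling, standardization, and a small ReLU network — the sketch below trains a tiny NumPy network on synthetic stand-in data. The feature distributions and the 2-8-1 architecture are assumptions of this sketch, not the paper's model or data:

```python
import numpy as np

rng = np.random.default_rng(42)

# synthetic stand-in for the two features used in the paper:
# reflectance at 510 nm and 900 nm (class means/spreads are illustrative)
n = 200
sweet = rng.normal([0.55, 0.40], 0.05, (n // 2, 2))
unsweet = rng.normal([0.45, 0.50], 0.05, (n // 2, 2))
X = np.vstack([sweet, unsweet])
y = np.array([1] * (n // 2) + [0] * (n // 2))

# preprocessing as described: shuffle, then standardize each feature
idx = rng.permutation(n)
X, y = X[idx], y[idx]
X = (X - X.mean(axis=0)) / X.std(axis=0)

# tiny 2-8-1 network: ReLU hidden layer, sigmoid output, full-batch gradient descent
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(500):
    H = np.maximum(0, X @ W1 + b1)            # hidden activations (ReLU)
    p = 1 / (1 + np.exp(-(H @ W2 + b2)))      # predicted P(sweet)
    g = (p - y[:, None]) / n                  # d(cross-entropy)/d(logit)
    gH = (g @ W2.T) * (H > 0)                 # backprop through ReLU
    W2 -= lr * H.T @ g; b2 -= lr * g.sum(0)
    W1 -= lr * X.T @ gH; b1 -= lr * gH.sum(0)

acc = float(((p[:, 0] > 0.5) == y).mean())
print(f"training accuracy: {acc:.2f}")
```

The gap between the paper's perfect training accuracy and its 76.67% test accuracy is the usual sign of a small dataset; held-out evaluation, as done in the paper, is what makes the reported number meaningful.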
Procedia PDF Downloads 80