Search results for: large deviation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7787

7487 Reliability Analysis of Variable Stiffness Composite Laminate Structures

Authors: A. Sohouli, A. Suleman

Abstract:

This study focuses on the reliability analysis of variable stiffness composite laminate structures to investigate the potential structural improvement compared to conventional (straight fiber) composite laminate structures. A computational framework was developed that consists of a deterministic design step and a reliability analysis step. The optimization step uses Discrete Material Optimization (DMO), and the reliability of the structure is computed by Monte Carlo Simulation (MCS) applied after a Stochastic Response Surface Method (SRSM). The design driver in the deterministic optimization is maximum stiffness, while the optimization also enforces certain manufacturing constraints to attain industrial relevance. These manufacturing constraints are that the change of orientation between adjacent patches cannot be too large and that the number of successive plies with the same fiber orientation should not be too high. Variable stiffness composites may be manufactured by Automated Fiber Placement (AFP) machines, which provide consistent quality at good production rates. However, laps and gaps are the most important challenges in steering fibers, and they affect the performance of the structures. In this study, the optimal curved fiber paths at each layer of the composite are designed first by DMO, and then the reliability analysis is applied to investigate the sensitivity of the structure for different standard deviations, compared to straight-fiber composites. The random variables are the material properties and the loads on the structure. The results show that the variable stiffness composite laminate structures are much more reliable than the conventional composite laminate structures, even for a high standard deviation of the material properties. The reason is that variable stiffness composite laminates allow tailoring of the stiffness and provide the possibility of adjusting the stress and strain distributions favorably in the structure.
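
As an illustration of the reliability step described above, the following minimal Python sketch estimates a failure probability by Monte Carlo simulation of a response-surface surrogate; the limit-state function, coefficients, and distributions are hypothetical placeholders and not the authors' actual model.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical response surface g(E, P) of the kind produced by SRSM:
# g > 0 means the laminate survives, g <= 0 means failure.
def limit_state(E, P):
    return 1.0 + 0.8 * E - 1.2 * P - 0.05 * P**2   # placeholder coefficients

n_samples = 1_000_000
# Random variables: normalized material stiffness E and load P
E = rng.normal(loc=0.0, scale=0.10, size=n_samples)   # 10% standard deviation
P = rng.normal(loc=0.0, scale=0.20, size=n_samples)   # 20% standard deviation

g = limit_state(E, P)
p_failure = np.mean(g <= 0.0)
print(f"failure probability ~ {p_failure:.2e}, reliability ~ {1.0 - p_failure:.6f}")
```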

Keywords: material optimization, Monte Carlo simulation, reliability analysis, response surface method, variable stiffness composite structures

Procedia PDF Downloads 495
7486 Predictive Analysis of Chest X-rays Using NLP and Large Language Models with the Indiana University Dataset and Random Forest Classifier

Authors: Azita Ramezani, Ghazal Mashhadiagha, Bahareh Sanabakhsh

Abstract:

This study investigates the combination of Random Forest classifiers with large language models (LLMs) and natural language processing (NLP) to improve diagnostic accuracy in chest X-ray analysis using the Indiana University dataset. Utilizing advanced NLP techniques, the research preprocesses textual data from radiological reports to extract key features, which are then merged with image-derived data. This enriched dataset is analyzed with Random Forest classifiers to predict specific clinical results, focusing on the identification of health issues and the estimation of case urgency. The findings reveal that the combination of NLP, LLMs, and machine learning increases not only diagnostic precision but also reliability, especially in quickly identifying critical conditions. Achieving an accuracy of 99.35%, the model shows significant advancements over conventional diagnostic techniques. The results emphasize the large potential of machine learning in medical imaging, suggesting that these technologies could greatly enhance clinician judgment and patient outcomes by offering quicker and more precise diagnostic approximations.
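
A minimal sketch of the text side of such a pipeline, assuming hypothetical column names (report_text, finding) and a local CSV export of the reports; it uses scikit-learn's TF-IDF features and a Random Forest, and is illustrative only, not the authors' actual feature set or data split.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Hypothetical table of radiology reports and labels
df = pd.read_csv("indiana_reports.csv")            # assumed columns: report_text, finding

X_train, X_test, y_train, y_test = train_test_split(
    df["report_text"], df["finding"], test_size=0.2, random_state=0, stratify=df["finding"]
)

# TF-IDF features extracted from the report text feed a Random Forest classifier
model = make_pipeline(
    TfidfVectorizer(max_features=5000, ngram_range=(1, 2), stop_words="english"),
    RandomForestClassifier(n_estimators=300, random_state=0),
)
model.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```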

Keywords: natural language processing (NLP), large language models (LLMs), random forest classifier, chest x-ray analysis, medical imaging, diagnostic accuracy, indiana university dataset, machine learning in healthcare, predictive modeling, clinical decision support systems

Procedia PDF Downloads 15
7485 The Search of Anomalous Higgs Boson Couplings at the Large Hadron Electron Collider and Future Circular Electron Hadron Collider

Authors: Ilkay Turk Cakir, Murat Altinli, Zekeriya Uysal, Abdulkadir Senol, Olcay Bolukbasi Yalcinkaya, Ali Yilmaz

Abstract:

The Higgs boson was discovered by the ATLAS and CMS experimental groups in 2012 at the Large Hadron Collider (LHC). Production and decay properties of the Higgs boson, Standard Model (SM) couplings, and limits on the effective scale of the Higgs boson's couplings with other bosons are investigated at particle colliders. Deviations from SM estimates are parametrized by effective Lagrangian terms to investigate Higgs couplings. This is a model-independent method for describing new physics. In this study, the sensitivity to anomalous couplings of the neutral gauge bosons with the Higgs boson is investigated using the parameters of the Large Hadron electron Collider (LHeC) and the Future Circular electron-hadron Collider (FCC-eh) with a model-independent approach. By using the MadGraph5_aMC@NLO multi-purpose event generator with the parameters of the LHeC and FCC-eh, the bounds on the anomalous Hγγ, HγZ, and HZZ couplings in the e⁻p → e⁻qH process are obtained. Detector simulations are also taken into account in the calculations.

Keywords: anomalous couplings, FCC-eh, Higgs, Z boson

Procedia PDF Downloads 190
7484 Shape Management Method of Large Structure Based on Octree Space Partitioning

Authors: Gichun Cha, Changgil Lee, Seunghee Park

Abstract:

The objective of this study is to construct a shape management method that contributes to the safety of large structures. In Korea, research on shape management is scarce because the technology has only recently been attempted. Terrestrial Laser Scanning (TLS) is used for measurements of large structures. TLS provides an efficient way to actively acquire accurate point clouds of object surfaces or environments. The point clouds provide a basis for rapid modeling in industrial automation, architecture, construction, and maintenance of civil infrastructure. However, TLS produces a huge amount of point cloud data, and registration, extraction, and visualization therefore require the processing of a massive amount of scan data. The octree can be applied to the shape management of large structures because the scan data are reduced in size while the data attributes are maintained. Octree space partitioning generates voxels of 3D space, and each voxel is recursively subdivided into eight sub-voxels. The point cloud of the scan data was converted to voxels and sampled. The experimental site is located at Sungkyunkwan University. The scanned structure is a steel-frame bridge. The TLS used was a Leica ScanStation C10/C5. The scan data were condensed by 92%, and the octree model was constructed with a resolution of 2 millimeters. This study presents octree space partitioning for handling point clouds and provides a basis for the shape management of large structures such as double-deck tunnels, buildings, and bridges. The research is expected to improve the efficiency of structural health monitoring and maintenance. "This work is financially supported by 'U-City Master and Doctor Course Grant Program' and the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (NRF-2015R1D1A1A01059291)."
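
The sketch below illustrates the octree idea described in the abstract: a cubic voxel is recursively subdivided into eight sub-voxels until a target resolution is reached, and each occupied leaf is replaced by its centroid, so the data size shrinks while the shape is preserved. The synthetic point cloud and the 0.1 m leaf size are placeholders for demonstration, not the 2 mm bridge scan.

```python
import numpy as np

def octree_downsample(points, bounds_min, size, resolution):
    """Recursively split a cubic voxel into 8 sub-voxels; keep one centroid per leaf."""
    if len(points) == 0:
        return []
    if size <= resolution or len(points) == 1:
        return [points.mean(axis=0)]                 # leaf voxel -> centroid
    half = size / 2.0
    center = bounds_min + half
    octant = (points >= center).astype(int)          # (n, 3) array of 0/1 per axis
    codes = octant[:, 0] * 4 + octant[:, 1] * 2 + octant[:, 2]
    leaves = []
    for code in range(8):
        child = points[codes == code]
        offset = np.array([(code >> 2) & 1, (code >> 1) & 1, code & 1]) * half
        leaves += octree_downsample(child, bounds_min + offset, half, resolution)
    return leaves

# Synthetic "scan": 100,000 points on a thin 10 m x 10 m surface
rng = np.random.default_rng(0)
xy = rng.random((100_000, 2)) * 10.0
cloud = np.column_stack([xy, 0.01 * rng.standard_normal(100_000)])

mins = cloud.min(axis=0)
extent = (cloud.max(axis=0) - mins).max()
sampled = np.array(octree_downsample(cloud, mins, extent, resolution=0.1))
print(f"{len(cloud)} points -> {len(sampled)} voxels "
      f"({100 * (1 - len(sampled) / len(cloud)):.1f}% condensed)")
```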

Keywords: 3D scan data, octree space partitioning, shape management, structural health monitoring, terrestrial laser scanning

Procedia PDF Downloads 278
7483 Development of Family Quality of Life Scale for a Family Which Has a Person with Disability: Results of a Delphi Study

Authors: Thirakorn Maneerat, Darunee Jongudomkarn, Jiraporn Khiewyoo

Abstract:

The family quality of life of families who have persons with disabilities is a core concern in government services and community health promotion to deal with the multidimensionality of today's health and societal issues. The number of families who have persons with disabilities in Thailand is gradually increasing. However, facilitation and evaluation of such family quality of life are limited by the lack of feasible tools. As a consequence, services provided for these families are not optimally facilitated and evaluated. This paper is part of a larger project aimed at developing a scale for measuring the family quality of life of families who have persons with developmental disabilities in Thailand, presenting the results of a three-round Delphi method involving 11 experts. The study was conducted from December 2013 to May 2014. The first round consisted of an open-ended questionnaire and content analysis of the answers. The second round comprised a 5-point Likert scale structured questionnaire based on the first-round analysis, which required the experts to identify the most relevant aspects of the studied tool. Their levels of agreement were statistically analyzed using the median, interquartile range, and quartile deviation. The criteria for item acceptance were a median greater than 3.50, an interquartile range less than 1.50, and a quartile deviation of 0.65 or less. Finally, the proposed questionnaire was structured and validated by the experts in the third round. Across all three rounds, the experts achieved 100% agreement on the five factors regarding the quality of life of a family that has a person with a disability. These five factors, with 38 items in total, were: 1) 10 items on family interactions; 2) 9 items on child rearing; 3) 7 items on physical and material resources; 4) 5 items on social-emotional status; and 5) 7 items on disability-related services and welfare. The next step of the study was to examine the construct validity using factor analysis methods.
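
For clarity, the item-acceptance rule quoted above (median > 3.50, interquartile range < 1.50, quartile deviation ≤ 0.65) reduces to a small calculation per item, sketched below; the expert ratings shown are invented for illustration.

```python
import numpy as np

def delphi_item_accepted(ratings, med_cut=3.50, iqr_cut=1.50, qd_cut=0.65):
    """Apply the Delphi acceptance criteria to one item's 5-point Likert ratings."""
    q1, med, q3 = np.percentile(np.asarray(ratings, dtype=float), [25, 50, 75])
    iqr = q3 - q1
    qd = iqr / 2.0                          # quartile deviation = half the IQR
    accepted = (med > med_cut) and (iqr < iqr_cut) and (qd <= qd_cut)
    return med, iqr, qd, accepted

# Hypothetical ratings from 11 experts for one candidate item
print(delphi_item_accepted([5, 4, 5, 4, 4, 5, 5, 4, 3, 5, 4]))   # (4.0, 1.0, 0.5, True)
```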

Keywords: tool development, family quality of life scale, person with disability, Delphi study

Procedia PDF Downloads 335
7482 A Safety-Door for Earthquake Disaster Prevention - Part II

Authors: Daniel Y. Abebe, Jaehyouk Choi

Abstract:

The safety of doors has not been given much attention. The main problem with doors during and after an earthquake is that they cannot be opened because the lateral load deviates the frame from its original position. The aim of this research is to develop and evaluate a safety door that keeps the door frame in its original position, or keeps its edge angles perpendicular, during and after an earthquake. Nonlinear finite element analysis was conducted in order to evaluate the structural performance and behavior of the proposed door under both monotonic and cyclic loading.

Keywords: safety-door, earthquake disaster, low yield point steel, passive energy dissipating device, FE analysis

Procedia PDF Downloads 451
7481 Imputation of Incomplete Large-Scale Monitoring Count Data via Penalized Estimation

Authors: Mohamed Dakki, Genevieve Robin, Marie Suet, Abdeljebbar Qninba, Mohamed A. El Agbani, Asmâa Ouassou, Rhimou El Hamoumi, Hichem Azafzaf, Sami Rebah, Claudia Feltrup-Azafzaf, Nafouel Hamouda, Wed a.L. Ibrahim, Hosni H. Asran, Amr A. Elhady, Haitham Ibrahim, Khaled Etayeb, Essam Bouras, Almokhtar Saied, Ashrof Glidan, Bakar M. Habib, Mohamed S. Sayoud, Nadjiba Bendjedda, Laura Dami, Clemence Deschamps, Elie Gaget, Jean-Yves Mondain-Monval, Pierre Defos Du Rau

Abstract:

In biodiversity monitoring, large datasets are becoming more and more widely available and are increasingly used globally to estimate species trends and conservation status. These large-scale datasets challenge existing statistical analysis methods, many of which are not adapted to their size, incompleteness and heterogeneity. The development of scalable methods to impute missing data in incomplete large-scale monitoring datasets is crucial to balance sampling in time or space and thus better inform conservation policies. We developed a new method based on penalized Poisson models to impute and analyse incomplete monitoring data in a large-scale framework. The method allows parameterization of (a) space and time factors, (b) the main effects of predictor covariates, as well as (c) space–time interactions. It also benefits from robust statistical and computational capability in large-scale settings. The method was tested extensively on both simulated and real-life waterbird data, with the findings revealing that it outperforms six existing methods in terms of missing data imputation errors. Applying the method to 16 waterbird species, we estimated their long-term trends for the first time at the entire North African scale, a region where monitoring data suffer from many gaps in space and time series. This new approach opens promising perspectives to increase the accuracy of species-abundance trend estimations. We made it freely available in the R package 'lori' (https://CRAN.R-project.org/package=lori) and recommend its use for large-scale count data, particularly in citizen science monitoring programmes.

Keywords: biodiversity monitoring, high-dimensional statistics, incomplete count data, missing data imputation, waterbird trends in North-Africa

Procedia PDF Downloads 127
7480 Using the Semantic Web Technologies to Bring Adaptability in E-Learning Systems

Authors: Fatima Faiza Ahmed, Syed Farrukh Hussain

Abstract:

The last few decades have seen a large proportion of our population turning towards e-learning technologies, starting from learning tools used in primary and elementary schools to competency-based e-learning systems specifically designed for applications like finance and marketing. The huge diversity of this audience brings about a large number of challenges for the designers of e-learning systems, one of which is adaptability. This paper focuses on the adaptability of the learning material in an e-learning course and on how artificial intelligence and the semantic web can be used as effective tools for this purpose. The study showed that the semantic web, still a hot topic in computer science, can prove to be a powerful tool in designing and implementing adaptable e-learning systems.

Keywords: adaptable e-learning, HTMLParser, information extraction, semantic web

Procedia PDF Downloads 296
7479 Exploring the Sources of Innovation in Food Processing SMEs of Kerala

Authors: Bhumika Gupta, Jeayaram Subramanian, Hardik Vachhrajani, Avinash Shivdas

Abstract:

The Indian food processing industry is one of the largest in the world in terms of production, consumption, exports, and growth opportunities, and SMEs play a crucial role within it. Innovation studies in India are largely dominated by large manufacturing firms, yet the innovation sources used by SMEs are often different from those of large firms. This paper focuses on exploring the various sources of innovation adopted by food processing SMEs in Kerala, South India. The findings suggest that SMEs use various sources such as suppliers, competitors, employees, government/research institutions, and customers to get new ideas.

Keywords: food processing, innovation, SMEs, sources of innovation

Procedia PDF Downloads 393
7478 An Association Model to Correlate the Experimentally Determined Mixture Solubilities of Methyl 10-Undecenoate with Methyl Ricinoleate in Supercritical Carbon Dioxide

Authors: V. Mani Rathnam, Giridhar Madras

Abstract:

Fossil fuels are depleting rapidly as the demand for energy and its allied chemicals continuously increases in the modern world. Therefore, sustainable renewable energy sources based on non-edible oils are being explored as a viable option, as they do not compete with food commodities. Oils such as castor oil are rich in fatty acids and thus can be used for the synthesis of biodiesel, bio-lubricants, and many other fine industrial chemicals. There are several processes available for the synthesis of different chemicals obtained from castor oil. One such process is the transesterification of castor oil, which results in a mixture of fatty acid methyl esters. The main products of this reaction are methyl ricinoleate and methyl 10-undecenoate. To separate these compounds, supercritical carbon dioxide (SCCO₂) was used as a green solvent. SCCO₂ was chosen as a solvent due to its easy availability, non-toxicity, non-flammability, and low cost. In order to design any separation process, the preliminary requirement is solubility or phase equilibrium data. Therefore, the solubility of a mixture of methyl ricinoleate with methyl 10-undecenoate in SCCO₂ was determined in the present study. The temperature and pressure ranges selected for the investigation were T = 313 K to 333 K and P = 10 MPa to 18 MPa. It was observed that the solubility (mol·mol⁻¹) of methyl 10-undecenoate varied from 2.44 x 10⁻³ to 8.42 x 10⁻³, whereas it varied from 0.203 x 10⁻³ to 6.28 x 10⁻³ for methyl ricinoleate within the chosen operating conditions. These solubilities followed retrograde behavior (characterized by a decrease in solubility with an increase in temperature) throughout the range of investigated operating conditions. An association theory model, coupled with regular solution theory for the activity coefficients, was developed in the present study. The deviation of this model from the experimental data can be quantified using the average absolute relative deviation (AARD). The AARD% for the present mixture of methyl ricinoleate and methyl 10-undecenoate is 4.69 for methyl 10-undecenoate and 8.08 for methyl ricinoleate. The maximum solubility enhancement of 32% was observed for methyl ricinoleate in the mixture, and the highest selectivity of SCCO₂ in the mixture was observed to be 12 for methyl 10-undecenoate.
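
The AARD figure quoted above follows the usual definition, sketched below; the experimental and model solubility values in the example are placeholders rather than the measured data.

```python
import numpy as np

def aard_percent(y_exp, y_calc):
    """Average absolute relative deviation (%) between experimental and model values."""
    y_exp = np.asarray(y_exp, dtype=float)
    y_calc = np.asarray(y_calc, dtype=float)
    return 100.0 * np.mean(np.abs(y_exp - y_calc) / y_exp)

# Hypothetical mole-fraction solubilities: experimental vs. association-model values
y_exp = [2.44e-3, 4.10e-3, 6.05e-3, 8.42e-3]
y_model = [2.30e-3, 4.35e-3, 5.80e-3, 8.60e-3]
print(f"AARD = {aard_percent(y_exp, y_model):.2f} %")
```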

Keywords: association theory, liquid mixtures, solubilities, supercritical carbon dioxide

Procedia PDF Downloads 115
7477 Hyperspectral Image Classification Using Tree Search Algorithm

Authors: Shreya Pare, Parvin Akhter

Abstract:

Remote sensing image classification is a very challenging task owing to the high dimensionality of hyperspectral images. Pixel-wise classification methods fail to account for the spatial structure information of an image. Therefore, to improve classification performance, spatial information can be integrated into the classification process. In this paper, a multilevel thresholding algorithm based on a modified fuzzy entropy (MFE) function is used to perform the segmentation of hyperspectral images. The fuzzy parameters of the MFE function are optimized by using a new meta-heuristic based on the tree search algorithm. The segmented image is classified by a large distribution machine (LDM) classifier. Experimental results are shown on a hyperspectral image dataset. The experimental outputs indicate that the proposed technique (MFE-TSA-LDM) achieves much higher classification accuracy for hyperspectral images when compared to state-of-the-art classification techniques. The proposed algorithm provides accurate segmentation and classification maps, making it more suitable for the classification of images with large spatial structures.

Keywords: classification, hyperspectral images, large distribution margin, modified fuzzy entropy function, multilevel thresholding, tree search algorithm, hyperspectral image classification using tree search algorithm

Procedia PDF Downloads 147
7476 Steepest Descent Method with New Step Sizes

Authors: Bib Paruhum Silalahi, Djihad Wungguli, Sugi Guritman

Abstract:

The steepest descent method is a simple gradient method for optimization. This method converges slowly toward the optimal solution because of the zigzag form of its steps. Barzilai and Borwein modified the algorithm so that it performs well for problems with large dimensions. The Barzilai and Borwein results have sparked a lot of research on the steepest descent method, including the alternate minimization gradient method and the Yuan method. Inspired by previous works, we modified the step size of the steepest descent method. We then compared the modified method against the Barzilai and Borwein method, the alternate minimization gradient method, and the Yuan method for quadratic function cases in terms of the number of iterations and the running time. The average results indicate that the steepest descent method with the new step sizes provides good results for small dimensions and is able to compete with the Barzilai and Borwein method and the alternate minimization gradient method for large dimensions. The new step sizes converge faster than the other methods, especially for cases with large dimensions.
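
As a reference point for the step-size comparison described above, the sketch below implements gradient descent with the classical Barzilai-Borwein (BB1) step size on a quadratic test function; the matrix, dimension, and tolerance are arbitrary illustrative choices, not the paper's test problems or its new step sizes.

```python
import numpy as np

def bb_gradient_descent(A, b, x0, tol=1e-8, max_iter=10_000):
    """Minimize f(x) = 0.5 x^T A x - b^T x (A symmetric positive definite)
    using the Barzilai-Borwein step size alpha_k = (s^T s) / (s^T y)."""
    x = x0.copy()
    g = A @ x - b                            # gradient of the quadratic
    alpha = 1.0 / np.linalg.norm(A, 2)       # conservative first step
    for k in range(max_iter):
        if np.linalg.norm(g) < tol:
            return x, k
        x_new = x - alpha * g
        g_new = A @ x_new - b
        s, y = x_new - x, g_new - g
        alpha = (s @ s) / (s @ y)            # BB1 step size
        x, g = x_new, g_new
    return x, max_iter

# Example: a well-conditioned 1000-dimensional quadratic
rng = np.random.default_rng(1)
M = rng.standard_normal((1000, 1000))
A = M @ M.T + 1000 * np.eye(1000)
b = rng.standard_normal(1000)
x_star, iters = bb_gradient_descent(A, b, np.zeros(1000))
print("iterations:", iters, " residual:", np.linalg.norm(A @ x_star - b))
```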

Keywords: steepest descent, line search, iteration, running time, unconstrained optimization, convergence

Procedia PDF Downloads 522
7475 Distribution of Micro Silica Powder at a Ready Mixed Concrete

Authors: Kyong-Ku Yun, Dae-Ae Kim, Kyeo-Re Lee, Kyong Namkung, Seung-Yeon Han

Abstract:

Micro silica is collected as a by-product of silicon and ferrosilicon alloy production in electric arc furnaces using highly pure quartz, wood chips, coke, and the like. It consists of about 85% silicon oxide and has spherical particles with an average particle size of 150 μm. The bulk density of micro silica varies from 150 to 700 kg/m³, and the fineness ranges from 150,000 to 300,000 cm²/g. The amorphous structure and high silicon oxide content of micro silica, combined with its large surface area (about 20 m²/g), induce an active reaction with the calcium hydroxide (Ca(OH)₂) generated by cement hydration, forming calcium silicate hydrate (C-S-H). Micro silica tends to act as a filler because of its fine particles and spherical shape. These particles are not covered by water and fit well into the spaces between the relatively rough cement grains, which does not freely fluidize the concrete. On the contrary, the water demand increases, since micro silica particles tend to absorb water because of their large surface area. The overall effect of micro silica depends on the amount added, on other parameters such as the water-to-(cement + micro silica) ratio, and on the availability of superplasticizer. This research studied cellular sprayed concrete. This method involves directly converting ready-mixed concrete into high-performance concrete at the job site. It can reduce the cost of construction by adding cellular foam and micro silica into a ready-mixed concrete truck in the field. Also, micro silica, which is difficult to mix in the field due to its high fineness, can be added and dispersed in the concrete by increasing the fluidity of the ready-mixed concrete through the surface activity of the foam. The increased air content converges to a certain level during spraying, and the remixing of powders in the spraying process also produces high-performance concrete. Since no field mixing equipment is used, the cost of construction decreases, and the method can be applied after installing a special spray machine on a commercial pump car. Therefore, the use of special equipment is minimized, providing economic feasibility through the utilization of existing equipment. This study was carried out to evaluate a highly reliable method of confirming dispersion in high-performance cellular sprayed concrete. A mixture of 25 mm coarse aggregate and river sand was used in the concrete. In addition, by applying silica fume and foam, the dispersion of silica fume was examined as a function of foam mixing; the mean and standard deviation were obtained, and the coefficient of variation was then calculated to evaluate the dispersion. Comparisons before and after spraying were conducted for the experimental variables of 21 L and 35 L of foam with 7% and 14% silica fume, respectively. Taking foam and silica fume as variables, the experiment proceeded: a specimen was cast for each variable, and a five-day sample was taken from each specimen for EDS testing. The study thus examined the experimental materials, plan and mix design, test methods, and equipment for evaluating dispersion as a function of micro silica and foam.
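
The dispersion measure used in the study, the coefficient of variation of the silica readings, amounts to the short calculation below; the EDS values shown are invented purely to illustrate the arithmetic.

```python
import numpy as np

def coefficient_of_variation(values):
    """Coefficient of variation (%) = sample standard deviation / mean * 100."""
    values = np.asarray(values, dtype=float)
    return values.std(ddof=1) / values.mean() * 100.0

# Hypothetical EDS silica contents (%) from five-day samples, before and after spraying
before = [6.2, 8.9, 5.1, 9.4, 7.0]
after = [7.1, 7.4, 6.8, 7.2, 7.0]
print(f"CV before spraying: {coefficient_of_variation(before):.1f} %")
print(f"CV after spraying:  {coefficient_of_variation(after):.1f} %  (lower CV = better dispersion)")
```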

Keywords: micro silica, distribution, ready mixed concrete, foam

Procedia PDF Downloads 192
7474 GBKMeans: A Genetic Based K-Means Applied to the Capacitated Planning of Reading Units

Authors: Anderson S. Fonseca, Italo F. S. Da Silva, Robert D. A. Santos, Mayara G. Da Silva, Pedro H. C. Vieira, Antonio M. S. Sobrinho, Victor H. B. Lemos, Petterson S. Diniz, Anselmo C. Paiva, Eliana M. G. Monteiro

Abstract:

In Brazil, the National Electric Energy Agency (ANEEL) establishes that electrical energy companies are responsible for measuring and billing their customers. Among these regulations, it is defined that a company must bill its customers within 27-33 days. If a relocation or a change of period is required, the consumer must be notified in writing, in advance of a billing period. To make it easier to organize a workday's measurements, these companies create a reading plan. These plans consist of grouping customers into reading groups, which are visited by an employee responsible for measuring consumption and billing. Creating a plan efficiently and optimally is a capacitated clustering problem with constraints related to homogeneity and compactness, that is, the employee's workload and the geographical position of the consuming units. This process is carried out manually by several experts with experience in the geography of the region; it takes a large number of days to complete the final planning, and because it is a human activity, there is no guarantee of finding the best plan. In this paper, the GBKMeans method presents a technique based on K-Means and genetic algorithms for creating capacitated clusters that respect the established constraints in an efficient and balanced manner, minimizing the cost of relocating consumer units and the time required to create the final plan. The results obtained by the presented method are compared with the current planning of a real city, showing an improvement of 54.71% in the standard deviation of the workload and 11.97% in the compactness of the groups.
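
A toy sketch of the general idea, a genetic algorithm evolving K-Means centroid seeds while a fitness function penalizes workload imbalance, group spread, and capacity violations; the operators, weights, and data are simplified illustrations and not the GBKMeans implementation evaluated in the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

def fitness(labels, points, k, capacity):
    """Lower is better: workload imbalance + spread of groups + capacity violations."""
    sizes = np.bincount(labels, minlength=k)
    spread = 0.0
    for g in range(k):
        members = points[labels == g]
        if len(members):
            spread += np.linalg.norm(members - members.mean(axis=0), axis=1).sum()
    over = np.clip(sizes - capacity, 0, None).sum()
    return sizes.std() + spread / len(points) + 10.0 * over

def gb_kmeans(points, k, capacity, pop_size=20, generations=30):
    """Toy GA over K-Means seeds: evolve centroid sets, refine each with Lloyd's algorithm."""
    n, d = points.shape
    population = [points[rng.choice(n, k, replace=False)] for _ in range(pop_size)]
    best_model, best_fit = None, np.inf
    for _ in range(generations):
        scored = []
        for seeds in population:
            km = KMeans(n_clusters=k, init=seeds, n_init=1).fit(points)
            f = fitness(km.labels_, points, k, capacity)
            scored.append((f, km.cluster_centers_))
            if f < best_fit:
                best_fit, best_model = f, km
        scored.sort(key=lambda pair: pair[0])
        parents = [centers for _, centers in scored[: pop_size // 2]]
        children = []
        while len(parents) + len(children) < pop_size:
            i, j = rng.choice(len(parents), 2, replace=False)
            mask = rng.random(k) < 0.5                       # uniform crossover on centroids
            child = np.where(mask[:, None], parents[i], parents[j]).copy()
            child[rng.integers(k)] += rng.normal(scale=0.1, size=d)  # mutate one centroid
            children.append(child)
        population = parents + children
    return best_model, best_fit

# Example: 500 hypothetical consumer-unit coordinates split into 8 reading groups of <= 80 units
units = rng.random((500, 2))
model, score = gb_kmeans(units, k=8, capacity=80)
print("group sizes:", np.bincount(model.labels_, minlength=8), " fitness:", round(score, 3))
```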

Keywords: capacitated clustering, k-means, genetic algorithm, districting problems

Procedia PDF Downloads 177
7473 Factors Influencing Milk Yield, Quality, and Revenue of Dairy Farms in Southern Vietnam

Authors: Ngoc-Hieu Vu

Abstract:

Dairy production in Vietnam is a relatively new agricultural activity, and milk production has increased remarkably in recent years. Smallholders are still the main drivers of this development, especially in the southern part of the country. However, information on farming practices is very limited. Therefore, this study aimed to determine the factors influencing milk yield and quality (milk fat, total solids, solids-not-fat, total number of bacteria, and somatic cell count) and the revenue of dairy farms in Southern Vietnam. Data were collected at the farm level; individual animal records were unavailable. The 539 studied farms were located in the provinces Lam Dong (N=111 farms), Binh Duong (N=69 farms), Long An (N=174 farms), and Ho Chi Minh City (N=185 farms). The dataset included 9221 monthly test-day records of the farms from January 2013 to May 2015. Seasons were defined as rainy and dry. Farm sizes were classified as small (< 10 milking cows), medium (10 to 19 milking cows), and large (≥ 20 milking cows). The model for each trait contained year-season and farm region-farm size as subclass fixed effects, and individual farm and residual as random effects. Results showed that year-season, region, and farm size were determining sources of variation affecting all studied traits. Milk yield was higher in dry than in rainy seasons (P < 0.05), and it tended to increase from 2013 to 2015. Large farms had higher yields (445.6 kg/cow) than small (396.7 kg/cow) and medium (428.0 kg/cow) farms (P < 0.05). Small farms, in contrast, were superior to large farms in terms of milk fat, total solids, solids-not-fat, total number of bacteria, and somatic cell count (P < 0.05). Revenue per cow was higher in large farms compared with medium and small farms. In conclusion, large farms achieved higher milk yields and revenues per cow, while small farms were superior in milk quality. Overall, milk yields were low, and better training, financial support, and marketing opportunities for farmers are needed to improve dairy production and increase farm revenues in Southern Vietnam.
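
A sketch of the kind of mixed model described above (year-season and region-farm size subclasses as fixed effects, individual farm as a random effect), written with statsmodels; the file name and column names (milk_yield, year_season, region_size, farm_id) are hypothetical, not the study's dataset.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical monthly test-day records; columns assumed:
# milk_yield, year_season, region_size (region x farm-size class), farm_id
df = pd.read_csv("testday_records.csv")

# Fixed effects: year-season and region-farm size subclasses;
# random effect: individual farm (the residual is implicit)
model = smf.mixedlm("milk_yield ~ C(year_season) + C(region_size)",
                    data=df, groups=df["farm_id"])
result = model.fit()
print(result.summary())
```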

Keywords: farm size, milk yield and quality, season, Southern Vietnam

Procedia PDF Downloads 336
7472 An Analysis of Privacy and Security for Internet of Things Applications

Authors: Dhananjay Singh, M. Abdullah-Al-Wadud

Abstract:

The Internet of Things (IoT) is a concept of a large-scale ecosystem of wireless actuators. The actuators are defined as things in the IoT: those which contribute or produce some data to the ecosystem. However, ubiquitous data collection, data security, privacy preservation, large-volume data processing, and intelligent analytics are some of the key challenges in IoT technologies. In order to address the security requirements, challenges, and threats in the IoT, we discuss a message authentication mechanism for IoT applications. Finally, we discuss a data encryption mechanism for message authentication before messages propagate into IoT networks.
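
As one common, minimal realization of message authentication for constrained IoT nodes, the sketch below appends an HMAC-SHA256 tag computed with a pre-shared key; key provisioning, framing, encryption, and replay protection are omitted, and the key shown is a placeholder.

```python
import hashlib
import hmac

PRE_SHARED_KEY = b"replace-with-a-provisioned-device-key"   # placeholder key

def tag_message(payload):
    """Sender: append an HMAC-SHA256 tag so the receiver can verify integrity and origin."""
    return payload + hmac.new(PRE_SHARED_KEY, payload, hashlib.sha256).digest()

def verify_message(frame):
    """Receiver: recompute the tag in constant time; return the payload only if it matches."""
    payload, tag = frame[:-32], frame[-32:]
    expected = hmac.new(PRE_SHARED_KEY, payload, hashlib.sha256).digest()
    return payload if hmac.compare_digest(tag, expected) else None

frame = tag_message(b'{"sensor":"t1","temp":21.7}')
print(verify_message(frame))                 # original payload: tag verifies
print(verify_message(frame[:-1] + b"X"))     # None: tampered frame is rejected
```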

Keywords: Internet of Things (IoT), message authentication, privacy, security

Procedia PDF Downloads 353
7471 Understanding and Political Participation in Constitutional Monarchy of Dusit District Residents

Authors: Sudaporn Arundee

Abstract:

The purposes of this research were: (1) to study political understanding of and participation in the constitutional monarchy, and (2) to study the level of participation. This paper drew upon data collected from 395 Dusit residents using a questionnaire. In addition, simple random sampling was utilized to collect the data. The findings revealed that 94 percent of respondents had a very good understanding of the constitutional monarchy, with a mean of 4.8. However, the respondents overall had a very low level of participation, with a mean score of 1.69 and a standard deviation of 0.719.

Keywords: political participation, constitutional monarchy, management and social sciences

Procedia PDF Downloads 236
7470 Determination of Four Anions in the Ground Layer of Tomb Murals by Ion Chromatography

Authors: Liping Qiu, Xiaofeng Zhang

Abstract:

The ion chromatography method for the rapid determination of four anions (F⁻, Cl⁻, SO₄²⁻, NO₃⁻) in the ground layer of tomb murals was optimized. An L₉(3⁴) orthogonal test was used to determine the optimal sample pretreatment parameters: accurately weigh 2.000 g of sample, add 10 mL of ultrapure water, and extract for 40 min at a shaking temperature of 40 °C and a shaking speed of 180 r·min⁻¹. The eluent was a 25 mmol/L KOH solution, the analytical column was an Ion Pac® AS11-SH (250 mm × 4.0 mm), and the purified filtrate was measured with a conductivity detector. Under this method, the detection limit for each ion is 0.066-0.078 mg/kg, the relative standard deviation is 0.86%-2.44% (n=7), and the recovery rate is 94.6%-101.9%.

Keywords: ion chromatography, tomb, anion (F⁻, Cl⁻, SO₄²⁻, NO₃⁻), environmental protection

Procedia PDF Downloads 79
7469 Production and Distribution Network Planning Optimization: A Case Study of Large Cement Company

Authors: Lokendra Kumar Devangan, Ajay Mishra

Abstract:

This paper describes the implementation of a large-scale SAS/OR model with significant pre-processing, scenario analysis, and post-processing work done using SAS. A large cement manufacturer, with ten geographically distributed manufacturing plants for two variants of cement, around 400 warehouses serving as transshipment points, and several thousand distributor locations generating demand, needed to optimize this multi-echelon, multi-modal transport supply chain separately for planning and allocation purposes. For monthly planning as well as daily allocation, the demand is deterministic. Rail and road networks connect any two points in this supply chain, creating tens of thousands of such connections. Constraints include plant production capacities, transportation capacities, and rail wagon batch sizes. Each demand point has a minimum and a maximum for shipments received. Price varies across demand locations due to local factors. A large mixed integer programming model built using PROC OPTMODEL decides the production at plants, the demand fulfilled at each location, and the shipment routes to demand locations so as to maximize the profit contribution. Using Base SAS, we did significant pre-processing of the data and created the inputs for the optimization. Using the outputs generated by OPTMODEL and other processing completed in Base SAS, we generated several reports that went into their enterprise system and created tables for easy consumption of the optimization results by operations.
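
A heavily simplified sketch of such a profit-maximizing production-and-shipment model, written here with the open-source PuLP library instead of SAS PROC OPTMODEL; the plants, markets, capacities, prices, and freight rates are invented, and the integer rail-wagon batch variables of the real model are omitted.

```python
from pulp import LpMaximize, LpProblem, LpStatus, LpVariable, lpSum

plants = {"P1": 900, "P2": 700}                              # production capacity (tonnes)
markets = {"M1": (400, 110.0), "M2": (500, 95.0), "M3": (450, 120.0)}  # (max demand, price)
freight = {("P1", "M1"): 12.0, ("P1", "M2"): 18.0, ("P1", "M3"): 25.0,
           ("P2", "M1"): 22.0, ("P2", "M2"): 10.0, ("P2", "M3"): 15.0}
production_cost = {"P1": 60.0, "P2": 55.0}

ship = LpVariable.dicts("ship", list(freight), lowBound=0)   # tonnes shipped plant -> market

prob = LpProblem("cement_network", LpMaximize)
# Objective: profit contribution = revenue - production cost - freight
prob += lpSum(ship[p, m] * (markets[m][1] - production_cost[p] - freight[p, m])
              for (p, m) in freight)
for p, cap in plants.items():                                # plant capacity limits
    prob += lpSum(ship[p, m] for m in markets) <= cap
for m, (demand, _) in markets.items():                       # market demand ceilings
    prob += lpSum(ship[p, m] for p in plants) <= demand

prob.solve()
print(LpStatus[prob.status])
for (p, m) in freight:
    print(p, "->", m, ship[p, m].value())
```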

Keywords: production planning, mixed integer optimization, network model, network optimization

Procedia PDF Downloads 41
7468 Analysis of 3 dB Directional Coupler Based On Silicon-On-Insulator (SOI) Large Cross-Section Rib Waveguide

Authors: Nurdiani Zamhari, Abang Annuar Ehsan

Abstract:

The 3 dB directional coupler is designed using a silicon-on-insulator (SOI) large cross-section rib waveguide and simulated by the Beam Propagation Method at the communication wavelengths of 1.55 µm and 1.48 µm. The geometry has a rib height (H) of 6 µm and is varied in step factor (r) over 0.5, 0.6, 0.7, and 0.8. The waveguide spacing is fixed at 5 µm, and the slab width is symmetrical. In general, the 3 dB coupling lengths for the four different cross-sections are several millimetres long. The 1.48 µm wavelength gives a longer coupling length than 1.55 µm at the same step factor (r). In addition, low-loss propagation is achieved, with less than 2% propagation loss.

Keywords: 3 dB directional couplers, silicon-on-insulator, symmetrical rib waveguide, OptiBPM 9

Procedia PDF Downloads 496
7467 Approximate-Based Estimation of Single Event Upset Effect on Static Random-Access Memory-Based Field-Programmable Gate Arrays

Authors: Mahsa Mousavi, Hamid Reza Pourshaghaghi, Mohammad Tahghighi, Henk Corporaal

Abstract:

Recently, Static Random-Access Memory-based (SRAM-based) Field-Programmable Gate Arrays (FPGAs) have been widely used in aeronautics and space systems, where high dependability is demanded and considered a mandatory requirement. Since the design's circuit is stored in configuration memory in SRAM-based FPGAs, they are very sensitive to Single Event Upsets (SEUs). In addition, the adverse effects of SEUs on the electronics used in space are much higher than on Earth. Thus, developing fault-tolerant techniques plays a crucial role in the use of SRAM-based FPGAs in space. However, fault tolerance techniques introduce additional penalties in system parameters, e.g., area, power, performance, and design time. In this paper, an accurate estimation of configuration memory vulnerability to SEUs is proposed for approximate-tolerant applications. This vulnerability estimation is required to trade off the overhead introduced by fault tolerance techniques against system robustness. We study applications in which the exact final output value is not always a concern, meaning that some of the SEU-induced changes in output values are negligible. We therefore define and propose an Approximate-based Configuration Memory Vulnerability Factor (ACMVF) estimation to avoid overestimating the configuration memory vulnerability to SEUs. We assess the vulnerability of the configuration memory by injecting SEUs into configuration memory bits and comparing the output values of a given circuit in the presence of SEUs with the expected correct output. In contrast to conventional vulnerability factor calculation methods, which count any deviation from the expected value as a failure, our proposed method considers a threshold margin that depends on the use-case application. Given this threshold margin, a failure occurs only when the difference between the erroneous output value and the expected output value exceeds the margin. The ACMVF is subsequently calculated as the ratio of failures to the total number of SEU injections. A test bench for emulating SEUs and calculating the ACMVF is implemented on a Zynq-7000 FPGA platform. This system makes use of the Single Event Mitigation (SEM) IP core to inject SEUs into the configuration memory bits of the target design implemented in the Zynq-7000 FPGA. Experimental results for a 32-bit adder show that, when a 1% to 10% deviation from the correct output is tolerated, the counted number of failures is reduced by 41% to 59% compared with the number counted by the conventional vulnerability factor calculation. This means that the estimation accuracy of the configuration memory vulnerability to SEUs is improved by up to 58% when a 10% deviation in the output results is acceptable. Note that a deviation of less than 10% in an addition result is reasonably tolerable for many applications in the approximate computing domain, such as Convolutional Neural Networks (CNNs).
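
The ACMVF definition above reduces to a short calculation once the injection outputs are available, as sketched below; the randomly generated adder outputs stand in for the real SEM fault-injection results.

```python
import numpy as np

def acmvf(expected, observed, rel_margin):
    """Approximate-based Configuration Memory Vulnerability Factor: an injection
    counts as a failure only if the output deviates from the expected value by
    more than the given relative threshold margin."""
    expected = np.asarray(expected, dtype=float)
    observed = np.asarray(observed, dtype=float)
    deviation = np.abs(observed - expected) / np.maximum(np.abs(expected), 1.0)
    return np.mean(deviation > rel_margin)

# Stand-in data: 10,000 SEU injections on a 32-bit adder output, one bit flipped each time
rng = np.random.default_rng(7)
expected = rng.integers(0, 2**31, size=10_000).astype(float)
observed = expected + 2.0 ** rng.integers(0, 31, size=10_000)

for margin in (0.0, 0.01, 0.10):      # 0% margin corresponds to the conventional factor
    print(f"margin {margin:4.0%}: ACMVF = {acmvf(expected, observed, margin):.3f}")
```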

Keywords: fault tolerance, FPGA, single event upset, approximate computing

Procedia PDF Downloads 172
7466 Numerical Modeling of Large Scale Dam Break Flows

Authors: Amanbek Jainakov, Abdikerim Kurbanaliev

Abstract:

The work presents the results of mathematical modeling of large-scale flows in areas with a complex topographic relief. The Reynolds-averaged Navier-Stokes equations constitute the basis of the three-dimensional unsteady modeling. The well-known Volume of Fluid method, implemented in the interFoam solver of the open-source package OpenFOAM 2.3, is used to track the free-boundary location. The adequacy of the mathematical model is checked by comparison with experimental data. The efficiency of the applied technology is illustrated by the example of modeling the break of the dams of the Andijan (Uzbekistan) and Papan (near the town of Osh, Kyrgyzstan) reservoirs.

Keywords: three-dimensional modeling, free boundary, the volume-of-fluid method, dam break, flood, OpenFOAM

Procedia PDF Downloads 378
7465 Zonal and Sequential Extraction Design for Large Flat Space to Achieve Perpetual Tenability

Authors: Mingjun Xu, Man Pun Wan

Abstract:

This study proposes an effective smoke control strategy for large flat spaces with low ceilings to achieve the requirement of perpetual tenability. In a large flat space with a low ceiling, the depth of the smoke reservoir is very shallow, and it is difficult to perpetually constrain the smoke within a limited space. A series of numerical tests were conducted to determine the smoke control strategy. A zonal design, i.e., the fire zone and two adjacent zones, was proposed and validated to be effective in controlling smoke. Once a fire happens in a compartment space, the Engineered Smoke Control (ESC) system is activated in three zones, i.e., the fire zone, in which the fire happened, and the two adjacent zones. The smoke can then be perpetually constrained within the three smoke zones. To further improve the extraction efficiency, sequential activation of the ESC system within the three zones turned out to be more efficient than simultaneous activation. Additionally, the proposed zonal and sequential extraction design can reduce the mechanical extraction flow rate by up to 40.7% compared to the conventional method, making it much more economical.

Keywords: performance-based design, perpetual tenability, smoke control, fire plume

Procedia PDF Downloads 50
7464 Shear Buckling of a Large Pultruded Composite I-Section under Asymmetric Loading

Authors: Jin Y. Park, Jeong Wan Lee

Abstract:

Experimental and analytical research on the shear buckling of a comparatively large polymer composite I-section is presented. It is known that the shear buckling load of a large-span composite beam is difficult to determine experimentally. In order to sensitively detect shear buckling of the tested I-section, twenty strain rosettes and eight displacement sensors were attached to the web and flange surfaces. The tested specimen was a pultruded composite beam made of vinylester resin, E-glass, carbon fibers, and micro-fillers. Various coupon tests were performed before the shear buckling test to obtain the fundamental material properties of the I-section. An asymmetric four-point bending loading scheme was utilized for the shear test. The loading scheme resulted in a high shear and nearly zero moment condition at the center of the web panel. The shear buckling load was successfully determined after analyzing the test data obtained from the strain rosettes and displacement sensors. An analytical approach was also carried out to verify the experimental results and to support the discussed experimental program.

Keywords: strain sensor, displacement sensor, shear buckling, polymer composite I-section, asymmetric loading

Procedia PDF Downloads 430
7463 Conductivity-Depth Inversion of Large Loop Transient Electromagnetic Sounding Data over Layered Earth Models

Authors: Ravi Ande, Mousumi Hazari

Abstract:

One of the common geophysical techniques for mapping subsurface geo-electrical structures, for extensive hydro-geological research, and for engineering and environmental geophysics applications is the use of time domain electromagnetic (TDEM)/transient electromagnetic (TEM) soundings. A large loop TEM system consists of a large transmitter loop for energising the ground and a small receiver loop or magnetometer for recording the transient voltage or magnetic field in the air or on the surface of the earth, with the receiver at the center of the loop or at any point inside or outside the source loop. In general, one can acquire data using one of the following configurations with a large loop source: with the receiver at the center point of the loop (central-loop method), at an arbitrary in-loop point (in-loop method), coincident with the transmitter loop (coincident-loop method), or at an arbitrary offset point (offset-loop method). Because of the mathematical simplicity of the expressions for the EM fields, as compared to the in-loop and offset-loop systems, the central-loop system (for ground surveys) and the coincident-loop system (for ground as well as airborne surveys) have been developed and used extensively for the exploration of mineral and geothermal resources, for mapping groundwater contaminated by hazardous waste, and for mapping the thickness of the permafrost layer. Because a proper analytical expression for the TEM response of the large loop system over a layered earth model does not exist, the forward problem used in this inversion scheme is first formulated in the frequency domain and then transformed into the time domain using Fourier cosine or sine transforms. The forward computation is initially carried out in the frequency domain using the EMLCLLER algorithm, which modifies the forward calculation scheme in NLSTCI to compute frequency-domain responses before converting them to the time domain with Fourier cosine and/or sine transforms.

Keywords: time domain electromagnetic (TDEM), TEM system, geoelectrical sounding structure, Fourier cosine

Procedia PDF Downloads 72
7462 Illegal Anthropogenic Activity Drives Large Mammal Population Declines in an African Protected Area

Authors: Oluseun A. Akinsorotan, Louise K. Gentle, Md. Mofakkarul Islam, Richard W. Yarnell

Abstract:

High levels of anthropogenic activity such as habitat destruction, poaching and encroachment into natural habitat have resulted in significant global wildlife declines. In order to protect wildlife, many protected areas such as national parks have been created. However, it is argued that many protected areas are only protected in name and are often exposed to continued, and often illegal, anthropogenic pressure. In West African protected areas, declines of large mammals have been documented between 1962 and 2008. This study aimed to produce occupancy estimates of the remaining large mammal fauna in the third largest national park in Nigeria, Old Oyo, and to compare the estimates with historic estimates while also attempting to quantify levels of illegal anthropogenic activity using a multi-disciplinary approach. Large mammal populations and levels of illegal anthropogenic activity were assessed using empirical field data (camera trapping and transect surveys) in combination with data from questionnaires completed by local villagers and park rangers. Four of the historically recorded species in the park, lion (Panthera leo), hunting dog (Lycaon pictus), elephant (Loxodonta africana) and buffalo (Syncerus caffer), were not detected during the field studies nor were they reported by respondents. In addition, occupancy estimates of hunters and illegal grazers were higher than those of the majority of large mammal species inside the park. This finding was reinforced by responses from the villagers and rangers, whose perception was that large mammal densities in the park were declining, and that a large proportion of the local people were entering the park to hunt wild animals and graze their domestic livestock. Our findings also suggest that widespread poverty and a lack of alternative livelihood opportunities, a culture of consuming bushmeat, a lack of education and awareness of the value of protected areas, and weak law enforcement are some of the reasons for the illegal activity. Law enforcement authorities were often constrained by insufficient on-site personnel and a lack of modern equipment and infrastructure to deter illegal activities. We conclude that there is a need to address the issue of illegal hunting and livestock grazing, via the provision of alternative livelihoods, in combination with community outreach programmes that aim to improve conservation education and awareness and develop the capacity of the conservation authorities in order to achieve conservation goals. Our findings have implications for the conservation management of all protected areas that are available for exploitation by local communities.

Keywords: camera trapping, conservation, extirpation, illegal grazing, large mammals, national park, occupancy estimates, poaching

Procedia PDF Downloads 272
7461 Challenge of Baseline Hydrology Estimation at Large-Scale Watersheds

Authors: Can Liu, Graham Markowitz, John Balay, Ben Pratt

Abstract:

Baseline or natural hydrology is commonly employed for hydrologic modeling and for quantifying hydrologic alteration due to man-made activities. It can inform planning and policy-related efforts by various state and federal water resource agencies to restore natural streamflow regimes. A common challenge faced by hydrologists is how to replicate unaltered streamflow conditions, particularly in large watershed settings prone to development and regulation. Three different methods were employed to estimate baseline streamflow conditions for six major subbasins of the Susquehanna River Basin, namely: 1) incorporating consumptive water use and reservoir operations back into regulated gaged records; 2) using a map correlation method and flow duration (exceedance probability) regression equations; and 3) extending the pre-regulation streamflow records based on the relationship between concurrent streamflows at unregulated and regulated gage locations. Parallel analyses were performed for the three methods, and the limitations associated with each are presented. Results from these analyses indicate that generating baseline streamflow records for large-scale watersheds remains challenging, even with long-term continuous stream gage records available.

Keywords: baseline hydrology, streamflow gage, subbasin, regression

Procedia PDF Downloads 303
7460 Effectiveness of Medication and Non-Medication Therapy on Working Memory of Children with Attention Deficit and Hyperactivity Disorder

Authors: Mohaammad Ahmadpanah, Amineh Akhondi, Mohammad Haghighi, Ali Ghaleiha, Leila Jahangard, Elham Salari

Abstract:

Background: Working memory includes the capability to hold and manipulate information over a short period of time. This capability is the basis of complex judgments and has been regarded as a specific and stable characteristic of individuals. Children with attention deficit and hyperactivity are among those suffering from deficits in working memory, and this deficiency has been attributed to problems of the frontal lobe. This study utilizes a new approach with suitable tasks and methods for training working memory and assessing the effects of the training. Participants: The children participating in this study were 7-15 years of age and were diagnosed as having attention deficit and hyperactivity by a psychiatrist and a psychologist based on DSM-IV criteria. The intervention group consisted of 8 boys and 6 girls with an average age of 11 years and a standard deviation of 2, and the control group consisted of 2 girls and 5 boys with an average age of 11.4 and a standard deviation of 3. Three children in the test group and two in the control group were under medication. Results: Working memory training meaningfully improved performance in untrained areas such as visual-spatial working memory, as well as performance on Raven progressive tests, which are a classic example of non-verbal, complex reasoning tasks. In addition, motor activity, measured by the number of head movements during the computerized measuring program, was meaningfully reduced in the medication group. The results of the second test showed that similar training exercises given to teenagers and adults result in the improvement of cognitive functions, as in hyperactive people. Discussion: The results of this study showed that working memory performance is improved through training, and that these gains extend and generalize to other areas of cognitive function that did not receive any training. Training resulted in improved performance in tasks related to the prefrontal cortex. It also had a positive and meaningful impact on the motor activity of hyperactive children.

Keywords: attention deficit hyperactivity disorder, working memory, non-medical treatment, children

Procedia PDF Downloads 342
7459 Execution of Joinery in Large Scale Projects: Middle East Region as a Case Study

Authors: Arsany Philip Fawzy

Abstract:

This study addresses the hurdles of project management in the joinery field. It is divided into two sections. The first sheds light on how to execute large-scale projects, with a specific focus on the Middle East region; it also raises major obstacles that the joinery team may face, from site clearance to coordination between the joinery team and the construction team. The second section technically analyzes the commercial side of joinery and how to control the main cost of the project to avoid financial problems. It also suggests empirical solutions to monitor the cost impact (e.g., variation of contract quantities and claims).

Keywords: clearance, quality, cost, variation, claim

Procedia PDF Downloads 77
7458 Evaluation of the Efficacy and Tolerance of Gabapentin in the Treatment of Neuropathic Pain

Authors: A. Ibovi Mouondayi, S. Zaher, R. Assadi, K. Erraoui, S. Sboul, J. Daoudim, S. Bousselham, K. Nassar, S. Janani

Abstract:

INTRODUCTION: Neuropathic pain (NP), caused by damage to the somatosensory nervous system, has a significant impact on quality of life and is associated with a high economic burden on the individual and society. The treatment of neuropathic pain relies on a wide range of therapeutic agents, including gabapentin. OBJECTIVE: The objective of this study was to evaluate the efficacy and tolerance of gabapentin in the treatment of neuropathic pain. MATERIAL AND METHOD: This is a monocentric, cross-sectional, descriptive, retrospective study conducted in our department over a period of 19 months, from October 2020 to April 2022. Missing parameters were collected through phone calls to the patients concerned. The diagnostic tool adopted was the DN4 questionnaire in its dialectal Arabic version. The impact of NP was assessed by the visual analog scale (VAS) for pain, sleep, and function. The impact of NP on mood was assessed by the Hospital Anxiety and Depression Scale (HAD) score in its validated Arabic version. The exclusion criteria were patients followed up for depression or other psychiatric pathologies. RESULTS: Data from a total of 67 patients were collected. The average age was 64 years (+/- 15 years), with extremes ranging from 26 to 94 years. There were 58 women and 9 men, with an M/F sex ratio of 0.15. Cervical radiculopathy was found in 21% of this population, and lumbosacral radiculopathy in 61%. Gabapentin was introduced at doses ranging from 300 to 1800 mg per day, with an average dose of 864 mg (+/- 346) per day for an average duration of 12.6 months. Before treatment, 93% of patients had non-restorative sleep quality (VAS>3), 54% of patients had a pain VAS greater than 5, and function was normal in only 9% of patients. The mean anxiety score was 3.25 (standard deviation: 2.70), and the mean HAD depression score was 3.79 (standard deviation: 1.79). After treatment, all patients had improved sleep quality (p<0.0001). A significant difference was also noted in pain VAS, function, and the HAD anxiety and depression scores. Gabapentin was stopped in cases of side effects (dizziness and drowsiness) and/or an unsatisfactory response. CONCLUSION: Our data demonstrate a favorable effect of gabapentin on the management of neuropathic pain, with a significant difference before and after treatment in patients' quality of life, associated with an acceptable tolerance profile.

Keywords: neuropathic pain, chronic pain, treatment, gabapentin

Procedia PDF Downloads 80