Search results for: loss models
8977 Drying Kinetics of Soybean Seeds
Authors: Amanda Rithieli Pereira Dos Santos, Rute Quelvia De Faria, Álvaro De Oliveira Cardoso, Anderson Rodrigo Da Silva, Érica Leão Fernandes Araújo
Abstract:
The study of drying kinetics is of great importance for mathematical modeling, as it provides insight into the heat and mass transfer processes between product and dryer and supports the design of new drying technologies. The present work aimed to study the drying kinetics of soybean seeds and to fit different statistical models to the experimental data, varying cultivar and temperature. Soybean seeds were pre-dried in a natural environment in order to reduce and homogenize the water content to 14% (dry basis). Drying was then carried out in a forced-air circulation oven at controlled temperatures of 38, 43, 48, 53 and 58 ± 1 °C, using two soybean cultivars, BRS 8780 and Sambaíba, until hygroscopic equilibrium was reached. The experimental design was completely randomized in a 5 x 2 factorial (temperature x cultivar) with 3 replicates. Eleven statistical models commonly used to describe the drying of agricultural products were fitted to the experimental data. Regression analysis was performed using the least-squares Gauss-Newton algorithm to estimate the parameters. Goodness of fit was evaluated using the coefficient of determination (R²), the adjusted coefficient of determination (R² Adj.) and the standard error (S.E.). The models that best represent the drying kinetics of soybean seeds are the Midilli and Logarithmic models.
Keywords: curve of drying seeds, Glycine max L., moisture ratio, statistical models
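The Gauss-Newton least-squares fitting described in this abstract can be sketched as follows. The Logarithmic thin-layer model MR = a·exp(-k·t) + c, the synthetic drying curve and all parameter values below are illustrative assumptions, not the study's data.

```python
import numpy as np

def gauss_newton(f, p0, t, y, iters=100, eps=1e-6):
    """Plain Gauss-Newton least squares with a central-difference Jacobian."""
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        r = y - f(t, p)                      # residual vector
        J = np.empty((t.size, p.size))
        for j in range(p.size):              # numeric Jacobian of f w.r.t. p
            dp = np.zeros_like(p)
            dp[j] = eps
            J[:, j] = (f(t, p + dp) - f(t, p - dp)) / (2 * eps)
        step, *_ = np.linalg.lstsq(J, r, rcond=None)
        p = p + step
        if np.linalg.norm(step) < 1e-12:
            break
    return p

# Logarithmic thin-layer drying model: MR = a*exp(-k*t) + c
log_model = lambda t, p: p[0] * np.exp(-p[1] * t) + p[2]

# Synthetic "drying curve" with known parameters (illustrative values)
t = np.linspace(0.0, 50.0, 60)
true_p = np.array([1.05, 0.10, -0.05])
mr = log_model(t, true_p)

fit = gauss_newton(log_model, [1.0, 0.05, 0.0], t, mr)
```

On noise-free synthetic data the algorithm recovers the generating parameters; on real moisture-ratio measurements the same loop would minimize the residual sum of squares instead of driving it to zero.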
Procedia PDF Downloads 626
8976 Effect of Welding Parameters on Dilution and Bead Height for Variable Plate Thickness in Submerged Arc Welding
Authors: Harish Kumar Arya, Kulwant Singh, R. K Saxena, Deepti Jaiswal
Abstract:
The heat flow in a weldment changes its nature from 2D to 3D as plate thickness increases. When welding thicker plates, heat loss in the thickness direction increases the cooling rate of the plate. As the cooling rate changes, bead parameters such as bead penetration, bead height and bead width are also affected. The present study examines the effect of variable plate thickness on bead geometry and dilution. Penetration decreases with increasing plate thickness due to heat loss in the thickness direction, while bead width and reinforcement increase for thicker plates due to faster cooling.
Keywords: submerged arc welding, plate thickness, bead geometry, cooling rate
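The link between bead geometry and dilution reported above can be made concrete with a small sketch; dilution is commonly estimated as the share of the fused zone supplied by melted base metal. The cross-section areas below are hypothetical values, not measurements from this study.

```python
def dilution_percent(penetration_area_mm2, reinforcement_area_mm2):
    """Dilution estimated from weld cross-section areas: the area melted
    below the original plate surface (penetration) over the total fused
    area (penetration + reinforcement)."""
    total = penetration_area_mm2 + reinforcement_area_mm2
    return 100.0 * penetration_area_mm2 / total

# Illustrative values (mm^2): a thicker plate with shallower penetration
# and taller reinforcement yields lower dilution.
print(round(dilution_percent(40.0, 20.0), 1))  # 66.7
print(round(dilution_percent(25.0, 30.0), 1))  # 45.5
```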
Procedia PDF Downloads 286
8975 Design of a High Performance T/R Switch for 2.4 GHz RF Wireless Transceiver in 0.13 µm CMOS Technology
Authors: Mohammad Arif Sobhan Bhuiyan, Mamun Bin Ibne Reaz
Abstract:
The rapid advancement of CMOS technology in recent years has enabled scientists to fabricate wireless transceivers fully on-chip, resulting in smaller and lower-cost wireless communication devices with acceptable performance characteristics. Moreover, the performance of a wireless transceiver depends critically on the performance of its first block, the T/R switch. This article proposes a design of a high performance T/R switch for 2.4 GHz RF wireless transceivers in 0.13 µm CMOS technology. The switch exhibits 1-dB insertion loss and 37.2-dB isolation in transmit mode, and 1.4-dB insertion loss and 25.6-dB isolation in receive mode. The switch has a power handling capacity (P1dB) of 30.9-dBm. Besides, by avoiding bulky inductors and capacitors, the size of the switch is drastically reduced; it occupies only 0.00296 mm², the lowest ever reported in this frequency band. Therefore, the simplicity and low chip area of the circuit will trim down the cost of fabrication as well as of the whole transceiver.
Keywords: CMOS, ISM band, SPDT, T/R switch, transceiver
Procedia PDF Downloads 446
8974 Cost Sensitive Feature Selection in Decision-Theoretic Rough Set Models for Customer Churn Prediction: The Case of Telecommunication Sector Customers
Authors: Emel Kızılkaya Aydogan, Mihrimah Ozmen, Yılmaz Delice
Abstract:
The telecommunications sector in the global market has been changing and developing continuously in recent years. In this sector, churn analysis techniques are commonly used to analyse why some customers terminate their service subscriptions prematurely. Customer churn is of utmost significance in this sector, since it causes considerable business loss, and many companies conduct research in order to prevent losses while increasing customer loyalty. Although a large quantity of accumulated data is available in this sector, its usefulness is limited by data quality and relevance. In this paper, a cost-sensitive feature selection framework is developed to obtain the feature reducts used to predict customer churn. The framework is a cost-based, optional pre-processing stage that removes redundant features for churn management. This cost-based feature selection algorithm was applied in a telecommunication company in Turkey, and the results obtained with the algorithm are presented.
Keywords: churn prediction, data mining, decision-theoretic rough set, feature selection
Procedia PDF Downloads 444
8973 Tribological Investigation of Piston Ring Liner Assembly
Authors: Bharatkumar Sutaria, Tejaskumar Chaudhari
Abstract:
Engine performance can be increased by minimizing losses. Various losses are observed in engines, i.e. thermal losses, heat losses and mechanical losses. Mechanical losses account for around 15 to 20% of the overall losses, and the piston ring assembly contributes the largest share of friction among the mechanical frictional losses. Because piston speed varies over the stroke length, friction force development is not uniform. In the present work, a comparison has been made between theoretical and experimental friction force under different operating conditions. The experiments were performed using variable operating parameters such as load, speed, temperature and lubricants. It is found that the decreasing trend of friction force and friction coefficient agrees well with the mixed lubrication regime of the Stribeck curve. Overall, laboratory tests of the segmented piston ring assembly using multi-grade oil show reasonably good performance at room and elevated temperatures.
Keywords: friction force, friction coefficient, piston rings, Stribeck curve
Procedia PDF Downloads 480
8972 Gear Wear Product Analysis as Applied for Tribological Maintenance Diagnostics
Authors: Surapol Raadnui
Abstract:
This paper describes an experimental investigation of a pair of gears in which wear and pitting were intentionally allowed to occur, namely moisture corrosion pitting, acid-induced corrosion pitting, hard contaminant-related pitting and mechanically induced wear. A back-to-back spur gear test rig was used. Samples of the wear debris were collected and assessed using an optical microscope in order to correlate and compare the debris morphology with the pitting and wear degradation of the worn gears. In addition, weight loss from all test gear pairs was assessed with the aid of a statistical design of experiments. It can be deduced that the wear debris characteristics exhibited a direct relationship with the different pitting and wear modes. Thus, it should be possible to detect and diagnose gear pitting and wear by utilizing the worn surfaces, the generated wear debris and quantitative measurements such as weight loss.
Keywords: tribology, spur gear wear, predictive maintenance, wear particle analysis
Procedia PDF Downloads 249
8971 Heating Behavior of Ni-Embedded Thermoplastic Polyurethane Adhesive Film by Induction Heating
Authors: DuckHwan Bae, YongSung Kwon, Min Young Shon, SanTaek Oh, GuNi Kim
Abstract:
The heating behavior of nanometer- and micrometer-sized nickel particle-embedded thermoplastic polyurethane (TPU) adhesive under induction heating is examined in the present study. The effects of particle size and content and of TPU film thickness on the heating behavior were examined, and the correlation between heating behavior and the magnetic properties of the nickel particles was also studied. From the results, heat generation increased with increasing nickel content and film thickness. In terms of particle size, however, heat generation of the nickel-embedded TPU films was in the order 70 nm > 1 µm > 20 µm > 70 µm; this result can be explained by the increasing ratio of eddy-current heating to hysteresis heating as particle size increases.
Keywords: induction heating, thermoplastic polyurethane, nickel, composite, hysteresis loss, eddy current loss, Curie temperature
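The size dependence of the eddy-to-hysteresis heating ratio invoked above can be sketched with the classical per-volume loss estimates; the formulas and every numeric value below (frequency, flux density, resistivity, loop energy) are illustrative assumptions, not data from the study.

```python
import math

def eddy_to_hysteresis_ratio(d_m, f_hz, b_t, rho_ohm_m, w_h):
    """Rough per-volume loss ratio for a conductive magnetic particle.
    Classical estimates: P_eddy ~ (pi*f*B*d)^2 / (6*rho)  [W/m^3],
    P_hyst ~ f * W_h, with W_h the hysteresis-loop energy density [J/m^3].
    The ratio therefore scales with d^2."""
    p_eddy = (math.pi * f_hz * b_t * d_m) ** 2 / (6.0 * rho_ohm_m)
    p_hyst = f_hz * w_h
    return p_eddy / p_hyst

# Larger Ni particles shift the heating mechanism toward eddy currents
sizes = [70e-9, 1e-6, 20e-6, 70e-6]          # the four particle sizes above
ratios = [eddy_to_hysteresis_ratio(d, 300e3, 0.05, 7e-8, 500.0)
          for d in sizes]
```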
Procedia PDF Downloads 360
8970 Can Bone Resorption Reduce with Nanocalcium Particles in Astronauts?
Authors: Ravi Teja Mandapaka, Prasanna Kumar Kukkamalla
Abstract:
Poor absorption of calcium, elevated serum calcium levels and loss of bone are major problems for astronauts during space travel, and calcium supplementation has not resolved them. Under normal conditions, only about 33% of calcium is absorbed from dietary sources. In this paper, the effect of the space environment on calcium metabolism is discussed; many surprising findings emerged during the literature survey. Trials on ovariectomized mice showed that reducing calcium particles to the nano scale makes them more absorbable and bioavailable. Control of bone loss in astronauts is critically important. Fortification of milk with nano calcium particles was shown to reduce urinary pyridinoline and deoxypyridinoline levels. Dietary calcium and supplementation do not achieve much calcium retention in a zero-gravity environment where absorption is limited. The fortification of foods with nano calcium particles therefore appears beneficial for astronauts during and after space travel and for their speedy recovery.
Keywords: nano calcium, astronauts, fortification, supplementation
Procedia PDF Downloads 492
8969 UNIX Source Code Leak: Evaluation and Feasible Solutions
Authors: Gu Dongxing, Li Yuxuan, Nong Tengxiao, Burra Venkata Durga Kumar
Abstract:
Since computers are widely used in business, more and more companies choose to store important information on computers to improve productivity. However, this information can be compromised in many cases, for example when it is stored locally on the company's computers or when it is transferred between servers and clients. Of these information leaks, source code leaks are probably the most costly. Because the source code often represents the core technology of a company, especially for Internet companies, source code leakage may cause the company's core products to lose market competitiveness and may even lead to the bankruptcy of the company. In recent years, large companies such as Microsoft and AMD have experienced source code leakage events and suffered huge losses. This reveals the importance and necessity of preventing source code leakage. This paper aims to find ways to prevent source code leakage from the perspective of the operating system and, based on the fact that most companies use Linux or Linux-like systems to realize the interconnection between server and client, discusses how to reduce the possibility of source code leakage during data transmission.
Keywords: data transmission, Linux, source code, operating system
Procedia PDF Downloads 268
8968 Infilling Strategies for Surrogate Model Based Multi-disciplinary Analysis and Applications to Velocity Prediction Programs
Authors: Malo Pocheau-Lesteven, Olivier Le Maître
Abstract:
Engineering and optimisation of complex systems is often achieved through multi-disciplinary analysis, where each subsystem is modeled and interacts with other subsystems to model the complete system. Coherence of the outputs of the different subsystems is achieved through compatibility constraints, which enforce the coupling between the subsystems. Due to the complexity of some subsystems and the computational cost of evaluating their respective models, it is often necessary to build surrogate models of these subsystems to allow repeated evaluation at a relatively low computational cost. In this paper, Gaussian processes are used, as their probabilistic nature can be leveraged to evaluate the likelihood of satisfying the compatibility constraints. This paper presents infilling strategies to build accurate surrogate models of the subsystems in areas where they are likely to meet the compatibility constraint. It is shown that these infilling strategies can reduce the computational cost of building surrogate models for a given level of accuracy. An application of these methods to velocity prediction programs used in offshore racing naval architecture further demonstrates their applicability in a real engineering context. Some examples of the application of uncertainty quantification to the field of naval architecture are also presented.
Keywords: infilling strategy, Gaussian process, multi-disciplinary analysis, velocity prediction program
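The core idea, using the Gaussian posterior to score how likely a candidate point is to satisfy a compatibility constraint, can be sketched in a few lines of numpy. The toy one-dimensional constraint g(x) = 0, the squared-exponential kernel and its length-scale are assumptions for illustration; the paper's actual subsystem models and kernel choices are not specified here.

```python
import math
import numpy as np

def rbf(a, b, ls=0.5):
    """Squared-exponential kernel matrix between 1-D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(x_train, y_train, x_new, noise=1e-6):
    """Posterior mean and standard deviation of a zero-mean GP at x_new."""
    k = rbf(x_train, x_train) + noise * np.eye(x_train.size)
    ks = rbf(x_train, x_new)
    sol = np.linalg.solve(k, ks)
    mean = sol.T @ y_train
    var = np.clip(np.diag(rbf(x_new, x_new) - ks.T @ sol), 0.0, None)
    return mean, np.sqrt(var)

def prob_feasible(mean, std, tol=0.05):
    """P(|g(x)| <= tol) under the Gaussian posterior: candidate infill
    points scoring high are likely to meet the compatibility constraint."""
    std = np.maximum(std, 1e-12)
    phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return np.array([phi((tol - m) / s) - phi((-tol - m) / s)
                     for m, s in zip(mean, std)])

# Toy constraint g(x) = x - 0.6 observed at a few training points;
# the candidate x = 0.6 should score far higher than 0.1 or 0.9.
xt = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
m, s = gp_posterior(xt, xt - 0.6, np.array([0.1, 0.6, 0.9]))
p = prob_feasible(m, s)
```

An infilling strategy would then add new model evaluations where this probability (possibly combined with predictive variance) is largest.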
Procedia PDF Downloads 156
8967 Designing Elevations by Photocatalysis of Precast Concrete Materials in Reducing Energy Consumption of Buildings: Case Study of Tabriz
Authors: Mahsa Faramarzi Asli, Mina Sarabi
Abstract:
One of the important issues addressed in most advanced industrial countries in recent decades is minimizing heat losses through buildings, and the most influential parameter in the calculation of building energy consumption is the heat exchange that takes place between the interior and the outer space. One of the solutions to reduce heat loss is using materials with low thermal conductivity. The purpose of this article is to examine the effect of using façades built with nano-concrete photocatalytic precast materials on reducing energy consumption in buildings. For this purpose, the energy dissipation through a façade built with nano-concrete photocatalytic precast materials was estimated for a sample building in the city of Tabriz using the BCS 19 software (Topic 19 simulation), and the results demonstrate reduced heat loss through the nano-concrete façade.
Keywords: nano materials, optimize energy consumption, thermal stability
Procedia PDF Downloads 562
8966 Legal Considerations in Fashion Modeling: Protecting Models' Rights and Ensuring Ethical Practices
Authors: Fatemeh Noori
Abstract:
The fashion industry is a dynamic and ever-evolving realm that continuously shapes societal perceptions of beauty and style. Within this industry, fashion modeling plays a crucial role, acting as the visual representation of brands and designers. However, behind the glamorous façade lies a complex web of legal considerations that govern the rights, responsibilities, and ethical practices within the field. This paper aims to explore the legal landscape surrounding fashion modeling, shedding light on key issues such as contract law, intellectual property, labor rights, and the increasing importance of ethical considerations in the industry. Fashion modeling involves the collaboration of various stakeholders, including models, designers, agencies, and photographers. To ensure a fair and transparent working environment, it is imperative to establish a comprehensive legal framework that addresses the rights and obligations of each party involved. One of the primary legal considerations in fashion modeling is the contractual relationship between models and agencies. Contracts define the terms of engagement, including payment, working conditions, and the scope of services. This section will delve into the essential elements of modeling contracts, the negotiation process, and the importance of clarity to avoid disputes. Models are not just individuals showcasing clothing; they are integral to the creation and dissemination of artistic and commercial content. Intellectual property rights, including image rights and the use of a model's likeness, are critical aspects of the legal landscape. This section will explore the protection of models' image rights, the use of their likeness in advertising, and the potential for unauthorized use. Models, like any other professionals, are entitled to fair and ethical treatment. This section will address issues such as working conditions, hours, and the responsibility of agencies and designers to prioritize the well-being of models. 
Additionally, it will explore the global movement toward inclusivity, diversity, and the promotion of positive body image within the industry. The fashion industry has faced scrutiny for perpetuating harmful standards of beauty and fostering a culture of exploitation. This section will discuss the ethical responsibilities of all stakeholders, including the promotion of diversity, the prevention of exploitation, and the role of models as influencers for positive change. In conclusion, the legal considerations in fashion modeling are multifaceted, requiring a comprehensive approach to protect the rights of models and ensure ethical practices within the industry. By understanding and addressing these legal aspects, the fashion industry can create a more transparent, fair, and inclusive environment for all stakeholders involved in the art of modeling.
Keywords: fashion modeling contracts, image rights in modeling, labor rights for models, ethical practices in fashion, diversity and inclusivity in modeling
Procedia PDF Downloads 74
8965 Deep Learning-Based Classification of 3D CT Scans with Real Clinical Data: Impact of Image Format
Authors: Maryam Fallahpoor, Biswajeet Pradhan
Abstract:
Background: Artificial intelligence (AI) serves as a valuable tool in mitigating the scarcity of human resources required for the evaluation and categorization of vast quantities of medical imaging data. When AI operates with optimal precision, it minimizes the demand for human interpretation and thereby reduces the burden on radiologists. Among various AI approaches, deep learning (DL) stands out, as it obviates the need for feature extraction, a process that can impede classification, especially with intricate datasets. The advent of DL models has ushered in a new era in medical imaging, particularly in the context of COVID-19 detection. Traditional 2D imaging techniques exhibit limitations when applied to volumetric data, such as Computed Tomography (CT) scans. Medical images predominantly exist in one of two formats: Neuroimaging Informatics Technology Initiative (NIfTI) and Digital Imaging and Communications in Medicine (DICOM). Purpose: This study aims to employ DL for the classification of COVID-19-infected pulmonary patients and normal cases based on 3D CT scans, while investigating the impact of image format. Material and Methods: The dataset used for model training and testing consisted of 1245 patients from IranMehr Hospital. All scans shared a matrix size of 512 × 512, although they exhibited varying slice numbers. Consequently, after loading the DICOM CT scans, image resampling and interpolation were performed to standardize the slice count. All images underwent cropping and resampling, resulting in uniform dimensions of 128 × 128 × 60. Resolution uniformity was achieved through resampling to 1 mm × 1 mm × 1 mm, and image intensities were confined to the range of (−1000, 400) Hounsfield units (HU). For classification purposes, positive pulmonary COVID-19 involvement was designated as 1, while normal images were assigned a value of 0. Subsequently, a U-net-based lung segmentation module was applied to obtain 3D segmented lung regions.
The pre-processing stage included normalization, zero-centering, and shuffling. Four distinct 3D CNN models (ResNet152, ResNet50, DenseNet169, and DenseNet201) were employed in this study. Results: The findings revealed that the segmentation technique yielded superior results for DICOM images, which could be attributed to the potential loss of information during the conversion of the original DICOM images to NIfTI format. Notably, ResNet152 and ResNet50 exhibited the highest accuracy at 90.0%, and the same models achieved the best F1 score at 87%. ResNet152 also secured the highest Area under the Curve (AUC) at 0.932. Regarding sensitivity and specificity, DenseNet201 achieved the highest values at 93% and 96%, respectively. Conclusion: This study underscores the capacity of deep learning to classify COVID-19 pulmonary involvement using real 3D hospital data. The results underscore the significance of employing DICOM format 3D CT images alongside appropriate pre-processing techniques when training DL models for COVID-19 detection. This approach enhances the accuracy and reliability of diagnostic systems for COVID-19 detection.
Keywords: deep learning, COVID-19 detection, NIfTI format, DICOM format
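The HU windowing, normalization and zero-centering steps described above can be sketched as follows; the toy four-voxel "volume" is illustrative (the actual pipeline operates on resampled 128 × 128 × 60 volumes).

```python
import numpy as np

def preprocess_ct(volume_hu, hu_min=-1000.0, hu_max=400.0):
    """Clip a CT volume to the stated HU window, scale to [0, 1],
    then zero-center, mirroring the clipping / normalization /
    zero-centering steps above (resampling is done upstream)."""
    v = np.clip(volume_hu.astype(np.float32), hu_min, hu_max)
    v = (v - hu_min) / (hu_max - hu_min)   # normalize to [0, 1]
    return v - v.mean()                    # zero-center

# Toy voxels spanning air (-1200, clipped to -1000) to bone (500 -> 400)
vol = preprocess_ct(np.array([-1200.0, -1000.0, 40.0, 500.0]))
```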
Procedia PDF Downloads 85
8964 Deep Learning for Image Correction in Sparse-View Computed Tomography
Authors: Shubham Gogri, Lucia Florescu
Abstract:
Medical diagnosis and radiotherapy treatment planning using Computed Tomography (CT) rely on the quantitative accuracy and quality of the CT images. At the same time, requirements for CT imaging include reducing the radiation dose exposure to patients and minimizing scanning time. A solution to this is the sparse-view CT technique, based on a reduced number of projection views. This, however, introduces a new problem: the incomplete projection data result in lower quality of the reconstructed images. To tackle this issue, deep learning methods have been applied to enhance the quality of sparse-view CT images. A first approach involved employing Mir-Net, a dedicated deep neural network designed for image enhancement. This showed promise, utilizing an intricate architecture comprising encoder and decoder networks, along with the incorporation of the Charbonnier loss. However, this approach was computationally demanding. Subsequently, a specialized Generative Adversarial Network (GAN) architecture, rooted in the Pix2Pix framework, was implemented. This GAN framework involves a U-Net-based Generator and a Discriminator based on Convolutional Neural Networks. To bolster the GAN's performance, both Charbonnier and Wasserstein loss functions were introduced, collectively focusing on capturing minute details while ensuring training stability. The integration of the perceptual loss, calculated based on feature vectors extracted from the VGG16 network pretrained on the ImageNet dataset, further enhanced the network's ability to synthesize relevant images. A series of comprehensive experiments with clinical CT data were conducted, exploring various GAN loss functions, including Wasserstein, Charbonnier, and perceptual loss. The outcomes demonstrated significant image quality improvements, confirmed through pertinent metrics such as the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index (SSIM) between the corrected images and the ground truth.
Furthermore, learning curves and qualitative comparisons added evidence of the enhanced image quality and the network's increased stability, while preserving pixel value intensity. The experiments underscored the potential of deep learning frameworks in enhancing the visual interpretation of CT scans, achieving outcomes with SSIM values close to one and PSNR values reaching up to 76.
Keywords: generative adversarial networks, sparse-view computed tomography, CT image correction, Mir-Net
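The Charbonnier loss and the PSNR metric used above have simple closed forms; a numpy sketch follows (training would use framework tensors, and the image values here are illustrative).

```python
import numpy as np

def charbonnier(pred, target, eps=1e-3):
    """Charbonnier loss: a smooth, differentiable variant of L1,
    mean of sqrt(diff^2 + eps^2) over pixels."""
    return np.mean(np.sqrt((pred - target) ** 2 + eps ** 2))

def psnr(pred, target, max_val=1.0):
    """Peak signal-to-noise ratio in dB for images in [0, max_val]."""
    mse = np.mean((pred - target) ** 2)
    return 20.0 * np.log10(max_val) - 10.0 * np.log10(mse)

gt = np.zeros((8, 8))
noisy = gt + 0.1        # uniform error of 0.1 -> MSE = 0.01 -> PSNR = 20 dB
```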
Procedia PDF Downloads 159
8963 Effect of Moisture Removal from Molten Salt on Corrosion of Alloys
Authors: Bhavesh D. Gajbhiye, Divya Raghunandanan, C. S. Sona, Channamallikarjun S. Mathpati
Abstract:
Molten fluoride salt FLiNaK (LiF-NaF-KF: 46.5-11.5-42 mol %) is a promising candidate as a high temperature coolant for next generation nuclear reactors due to its superior thermophysical properties. Corrosion of alloys in molten FLiNaK has, however, been recognized as a serious issue in the selection of structural materials. Corrosion experiments on the alloys Inconel-625 (Fe-Ni alloy) and Hastelloy-B (Ni-Mo alloy) were performed in FLiNaK salt. The tests were carried out at a temperature of 650°C in graphite crucibles for 60 hours under an inert atmosphere. The corrosion experiments were performed to study the effect of removing moisture from the salt by pre-heating and vacuum drying. The weight loss of the alloy samples due to corrosion was measured and the corrosion rate was estimated. The surface morphology of the alloy samples was analyzed by Scanning Electron Microscopy. A significant decrease in the corrosion rate was observed for the alloys studied in the moisture-removed salt.
Keywords: FLiNaK, Hastelloy, Inconel, weight loss
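Converting measured coupon weight loss to a corrosion rate typically follows the ASTM G31-style formula; the coupon values below are hypothetical, not the study's measurements.

```python
def corrosion_rate_mm_per_year(weight_loss_g, area_cm2, hours, density_g_cm3):
    """ASTM G31-style conversion of coupon weight loss to corrosion rate:
    CR = (K * W) / (A * T * D), with K = 8.76e4 giving mm/year for
    W in grams, A in cm^2, T in hours and D in g/cm^3."""
    K = 8.76e4
    return K * weight_loss_g / (area_cm2 * hours * density_g_cm3)

# Hypothetical coupon: 5 mg loss, 4 cm^2, 60 h exposure, Ni alloy ~8.9 g/cm^3
rate = corrosion_rate_mm_per_year(0.005, 4.0, 60.0, 8.9)
```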
Procedia PDF Downloads 493
8962 Determining the Sources of Sediment at Different Areas of the Catchment: A Case Study of Welbedacht Reservoir, South Africa
Authors: D. T. Chabalala, J. M. Ndambuki, M. F. Ilunga
Abstract:
Sedimentation includes the processes of erosion, transportation, deposition and the compaction of sediment. Sedimentation in a reservoir results in a decrease in water storage capacity, downstream problems involving aggradation and degradation, blockage of the intake, and changes in water quality. A study was conducted in the Caledon River catchment upstream of the Welbedacht Reservoir, located in the south-eastern part of the Free State province, South Africa. The aim of this research was to develop an integrated catchment model of the sedimentation processes and their management for the Welbedacht Reservoir. The Revised Universal Soil Loss Equation (RUSLE) was applied to determine the sources of sediment in different areas of the catchment, and the model was also used to determine the impact of changes in management practice on erosion generation. The results revealed that the main sources of sediment in the watershed are cultivated land (273 tons per hectare), built-up and forest land (103.3 tons per hectare), and grassland, degraded land, and mining and quarry areas (3.9, 9.8 and 5.3 tons per hectare, respectively). After soil conservation practices were applied to the developed RUSLE model, the results revealed that the total average annual soil loss in the catchment decreased by 76% and the sediment yield from cultivated land decreased by 75%, while those from built-up areas and forest decreased by 42% and 99%, respectively. The results of this study will be used by government departments in order to develop sustainable policies.
Keywords: Welbedacht reservoir, sedimentation, RUSLE, Caledon River
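The RUSLE estimate applied in this study is the product of five factors, A = R·K·LS·C·P. A sketch with illustrative factor values (not the Welbedacht inputs) shows how conservation practice, i.e. lower cover-management (C) and support-practice (P) factors, reduces the estimated loss:

```python
def rusle_soil_loss(r, k, ls, c, p):
    """RUSLE: A = R * K * LS * C * P, average annual soil loss
    (e.g. tons/ha/yr, given consistent units for the factors)."""
    return r * k * ls * c * p

# Illustrative factor values, not the study's inputs:
bare_cultivated = rusle_soil_loss(r=500.0, k=0.3, ls=2.0, c=0.5, p=1.0)
with_conservation = rusle_soil_loss(r=500.0, k=0.3, ls=2.0, c=0.15, p=0.5)
```

Here the conservation scenario cuts the estimate from 150 to 22.5, an 85% reduction, illustrating the kind of scenario comparison reported in the abstract.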
Procedia PDF Downloads 192
8961 A Stepwise Approach to Automate the Search for Optimal Parameters in Seasonal ARIMA Models
Authors: Manisha Mukherjee, Diptarka Saha
Abstract:
Reliable forecasts of univariate time series data are often necessary in several contexts, and ARIMA models are quite popular among practitioners in this regard. Hence, choosing correct parameter values for ARIMA is a challenging yet imperative task. A stepwise algorithm is therefore introduced to provide automatic and robust estimates for the parameters (p, d, q)(P, D, Q) used in seasonal ARIMA models. The process focuses on improving the overall quality of the estimates, and it alleviates the problems induced by the unidimensional nature of currently used methods such as auto.arima. The fast and automated search of the parameter space also ensures reliable estimates of the parameters with several desirable qualities, consequently resulting in higher test accuracy, especially in the case of noisy data. After vigorous testing on real as well as simulated data, the algorithm not only performs better than current state-of-the-art methods, it also completely obviates the need for human intervention due to its automated nature.
Keywords: time series, ARIMA, auto.arima, ARIMA parameters, forecast, R function
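The shape of such a parameter search can be sketched as follows. For brevity this sketch exhaustively enumerates a small order grid with a toy criterion; the paper's actual stepwise procedure and real ARIMA fits (e.g. via statsmodels or R) are stand-ins here.

```python
import itertools

def search_sarima_orders(score, p_max=2, d_max=1, q_max=2,
                         P_max=1, D_max=1, Q_max=1):
    """Grid-search stand-in for the stepwise search described above:
    `score` maps ((p, d, q), (P, D, Q)) to a criterion such as AIC,
    and the orders with the lowest score win. A real implementation
    would fit a seasonal ARIMA at each candidate instead of calling
    a toy criterion."""
    grid = itertools.product(range(p_max + 1), range(d_max + 1),
                             range(q_max + 1), range(P_max + 1),
                             range(D_max + 1), range(Q_max + 1))
    best = min(grid, key=lambda g: score(g[:3], g[3:]))
    return best[:3], best[3:]

# Toy criterion whose unique minimum sits at (1,1,1)(0,1,1)
target = (1, 1, 1, 0, 1, 1)
toy_aic = lambda order, seasonal: sum((a - b) ** 2
                                      for a, b in zip(order + seasonal, target))
orders = search_sarima_orders(toy_aic)
```

A stepwise variant would start from a few seed orders and move one parameter at a time, re-scoring neighbours until no improvement remains, rather than scoring the full grid.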
Procedia PDF Downloads 161
8960 MCDM Spectrum Handover Models for Cognitive Wireless Networks
Authors: Cesar Hernández, Diego Giral, Fernando Santa
Abstract:
Spectrum handoff is important in cognitive wireless networks to ensure adequate quality of service and performance for secondary user communications. This work proposes a performance benchmarking of three spectrum handoff models: VIKOR, SAW and MEW. Four evaluation metrics are used: the cumulative average of failed handoffs, the cumulative average of handoffs performed, the cumulative average of transmission bandwidth and the cumulative average of transmission delay. In contrast to related work, the performance of the three spectrum handoff models was validated with captured data of spectral occupancy from experiments carried out in the GSM frequency band (824 MHz-849 MHz). These data represent the actual behavior of the licensed users in this wireless frequency band. The results of the comparison show that the VIKOR algorithm provides a 15.8% performance improvement compared to the SAW algorithm and performs 12.1% better than the MEW algorithm.
Keywords: cognitive radio, decision making, MEW, SAW, spectrum handoff, VIKOR
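Of the three models compared, SAW (Simple Additive Weighting) is the simplest to sketch; the channel values and weights below are illustrative, not the captured GSM-band data.

```python
import numpy as np

def saw_rank(matrix, weights, benefit):
    """Simple Additive Weighting: normalize each criterion column
    (value/max for benefit criteria, min/value for cost criteria),
    then rank alternatives by their weighted sum."""
    m = np.asarray(matrix, dtype=float)
    norm = np.empty_like(m)
    for j, is_benefit in enumerate(benefit):
        col = m[:, j]
        norm[:, j] = col / col.max() if is_benefit else col.min() / col
    scores = norm @ np.asarray(weights, dtype=float)
    return scores, np.argsort(-scores)           # best alternative first

# Toy channels scored on bandwidth (benefit) and delay (cost, ms)
scores, order = saw_rank([[8.0, 20.0],
                          [6.0, 10.0],
                          [4.0, 40.0]],
                         weights=[0.6, 0.4],
                         benefit=[True, False])
```

VIKOR and MEW differ in the aggregation step (compromise ranking and weighted product, respectively) but consume the same kind of decision matrix.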
Procedia PDF Downloads 436
8959 Nonlinear Estimation Model for Rail Track Deterioration
Authors: M. Karimpour, L. Hitihamillage, N. Elkhoury, S. Moridpour, R. Hesami
Abstract:
Rail transport authorities around the world have long faced a significant challenge in predicting rail infrastructure maintenance work. Generally, maintenance monitoring and prediction is conducted manually. With economic restrictions, rail transport authorities are in pursuit of improved modern methods that can provide precise prediction of rail maintenance time and location. The expectation of such methods is to develop models that minimize the human error strongly associated with manual prediction. Such models will help authorities understand how track degradation occurs over time under changing conditions (e.g. rail load, rail type, rail profile). A well-structured technique is needed to identify the precise time at which rail tracks fail, in order to minimize maintenance cost and time and to keep vehicles safe. The rail track characteristics that have been collected over the years will be used in developing rail track degradation prediction models. Since these data have been collected in large volumes, both electronically and manually, some errors are possible, and sometimes these errors make the data impossible to use in prediction model development. This is one of the major drawbacks in rail track degradation prediction. An accurate model can play a key role in the estimation of the long-term behavior of rail tracks; accurate models increase track safety and decrease the cost of maintenance in the long term. In this research, a short review of rail track degradation prediction models is given before rail track degradation is estimated for the curved sections of the Melbourne tram track system using an Adaptive Network-based Fuzzy Inference System (ANFIS) model.
Keywords: ANFIS, MGT, prediction modeling, rail track degradation
Procedia PDF Downloads 333
8958 Parametric Study for Obtaining the Structural Response of Segmental Tunnels in Soft Soil by Using Non-Linear Numerical Models
Authors: Arturo Galván, Jatziri Y. Moreno-Martínez, Israel Enrique Herrera Díaz, José Ramón Gasca Tirado
Abstract:
In recent years, one of the methods most used for the construction of tunnels in soft soil is shield-driven tunneling. The advantage of this construction technique is that it allows excavating the tunnel while a primary lining, consisting of precast segments, is placed at the same time. There are joints between segments, also called longitudinal joints, and joints between rings (called circumferential joints). For this reason, this type of construction cannot be considered a continuous structure. These joints influence the rigidity of the segmental lining and therefore its structural response. A parametric study was performed to take into account the effect of different parameters on the structural response of typical segmental tunnels built in soft soil, using non-linear numerical models based on the Finite Element Method by means of the software package ANSYS v. 11.0. In the first part of this study, two types of numerical models were built. In the first one, the segments were modeled using beam elements based on Timoshenko beam theory, whilst the segment joints were modeled using inelastic rotational springs following the constitutive moment-rotation relation proposed by Gladwell; in this way, the mechanical behavior of the longitudinal joints was simulated. For simulating the mechanical behavior of the circumferential joints, elastic springs were considered, and the support given by the soil was modeled by means of linear-elastic springs. In the second type of models, the segments were modeled by means of three-dimensional solid elements and the joints with contact elements. In these models, the zone of the joints is modeled as a discontinuity (increasing the computational effort), and therefore a discrete model is obtained.
With these contact elements, the mechanical behavior of the joints is simulated under the assumption that a closed joint transmits compressive and shear stresses but no tensile stresses, while an open joint transmits no stresses at all. This type of model can detect changes in geometry caused by the relative movement of the elements that form the joints. A comparison between the numerical results of the two types of models was carried out; in this way, the hypotheses adopted in the simplified models were validated. In addition, the numerical models were calibrated against laboratory-based experimental results from the literature for a typical tunnel built in Europe. In the second part of this work, a parametric study was performed using the simplified models, owing to their lower computational cost compared with the complex models. The parametric study examined the effect of material properties, tunnel geometry, the arrangement of the longitudinal joints, and the coupling between rings. Finally, it was concluded that the mechanical behavior of the segment and ring joints and the arrangement of the segment joints affect the global behavior of the lining, and that the coupling between rings modifies the structural capacity of the lining.
Keywords: numerical models, parametric study, segmental tunnels, structural response
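The joint behavior described above (compressive and shear stresses transmitted when the joint is closed, nothing when it is open) can be sketched as a simple unilateral contact-spring law. This is an illustrative stand-in only, not the actual ANSYS contact formulation or Gladwell's moment-rotation relation; the function name and stiffness values are hypothetical:

```python
def contact_joint_force(gap, k_normal, k_shear, slip):
    """Illustrative unilateral contact law for a circumferential joint.

    gap < 0  : joint closed (overlap) -> compressive and shear stiffness act
    gap >= 0 : joint open             -> no stress transmission at all
    """
    if gap < 0:
        normal_force = k_normal * (-gap)  # compression only; tension is never transmitted
        shear_force = k_shear * slip
    else:
        normal_force = 0.0
        shear_force = 0.0
    return normal_force, shear_force

# Closed joint transmits compression and shear; open joint transmits nothing.
print(contact_joint_force(-0.5, 1000.0, 400.0, 0.25))  # -> (500.0, 100.0)
print(contact_joint_force(0.1, 1000.0, 400.0, 0.25))   # -> (0.0, 0.0)
```

In a full model this law would be evaluated at every contact point of the solid-element mesh at each load step, which is what makes the discrete models computationally expensive relative to the beam-and-spring idealization.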
Procedia PDF Downloads 228
8957 Bridging the Gap between M and E, and KM: Towards the Integration of Evidence-Based Information and Policy Decision-Making
Authors: Xueqing Ivy Chen, Christo De Coning
Abstract:
It is clear from practice that a gap exists between Results-Based Monitoring and Evaluation (RBME) as a discipline on the one hand and Knowledge Management (KM) on the other. Whereas various government departments have institutionalised these functions, KM and M&E have, in practice, operated in isolation from each other in the public sector. It is therefore necessary to explore the relationship between KM and M&E and the need for their integration, so that a convergence of these disciplines can be established. An integration of KM and M&E will lead to the integration and improvement of evidence-based information and policy decision-making. M&E and KM process models are available, but the complementarity between specific steps of these process models is not exploited. A need exists to clarify the relationships between these functions in order to ensure evidence-based information and policy decision-making. This paper departs from well-known policy process models, such as the generic model, and considers recent work on the interface between policy, M&E and KM.
Keywords: results-based monitoring and evaluation, RBME, knowledge management, KM, evidence-based decision-making, public policy, information systems, institutional arrangement
Procedia PDF Downloads 152
8956 Prediction of Mechanical Strength of Multiscale Hybrid Reinforced Cementitious Composite
Authors: Salam Alrekabi, A. B. Cundy, Mohammed Haloob Al-Majidi
Abstract:
Novel multiscale hybrid reinforced cementitious composites based on carbon nanotubes (MHRCC-CNT) and carbon nanofibers (MHRCC-CNF) are new types of cement-based material fabricated with micro steel fibers and nanofilaments, featuring superior strain hardening, ductility, and energy absorption. This study focused on establishing models to predict the compressive strength and the direct and splitting tensile strengths of the produced cementitious composites. The analysis was carried out on the basis of the experimental data presented in the authors' previous study, regression analysis, and established models available in the literature. The obtained models showed small differences between predictions and target values, and experimental verification indicated that the mechanical properties can be estimated with good accuracy.
Keywords: multiscale hybrid reinforced cementitious composites, carbon nanotubes, carbon nanofibers, mechanical strength prediction
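As a minimal illustration of the regression-based prediction approach, an ordinary least-squares fit of strength against fiber dosage might look like the sketch below. The data points are invented for illustration, not the study's measurements:

```python
def fit_line(x, y):
    """Ordinary least-squares fit of y = a + b*x, in closed form."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b

# Hypothetical data: steel-fiber volume fraction (%) vs compressive strength (MPa).
dosage = [0.0, 1.0, 2.0, 3.0]
strength = [50.0, 58.0, 66.0, 74.0]
a, b = fit_line(dosage, strength)
print(a, b)  # intercept (MPa) and slope (MPa per % fiber)
predicted = a + b * 1.5  # predicted strength at an untested dosage
```

In practice the published models would involve several predictors (CNT/CNF dosage, fiber content, curing age) and a goodness-of-fit check via R-squared, but the fitting principle is the same.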
Procedia PDF Downloads 160
8955 A Multiple Case Study of How Bilingual-Bicultural Teachers' Language Shame and Loss Affects Teaching English Language Learners
Authors: Lisa Winstead, Penny Congcong Wang
Abstract:
This two-year multiple case study of eight Spanish-English speaking teachers explores bilingual-bicultural Latino teachers' lived experiences as English Language Learners and, more recently, as adult teachers who work with English Language Learners in mainstream schools. The research questions include: How do bilingual-bicultural teachers perceive their native language use and sense of self within society from childhood to adulthood? Correspondingly, what are bilingual teachers' perceptions of how their own language learning experience might affect their teaching of students from similar linguistic and cultural backgrounds? This study took place in an urban area in the Pacific Southwest of the United States. Participants were K-8 teachers enrolled in a Spanish-English bilingual authorization program. Data were collected from journals, focus group interviews, field notes, and class artifacts. Within-case and cross-case analysis revealed that the participants were shamed about their language use as children, which contributed to their primary language loss. They similarly reported how language shaming by mainstream educators and administrators invalidated their ability to provide support for Latino heritage ELLs, despite their bilingual-bicultural expertise. However, participants reported that counter-narratives from the bilingual authorization program, parents, community and church organizations, and culturally responsive teachers were effective in promoting their language retention, pride, and feelings of well-being.
Keywords: teacher education, bilingual education, English language learners, emergent bilinguals, social justice, language shame, language loss, translanguaging
Procedia PDF Downloads 187
8954 Assisting Dating of Greek Papyri Images with Deep Learning
Authors: Asimina Paparrigopoulou, John Pavlopoulos, Maria Konstantinidou
Abstract:
Dating papyri accurately is crucial not only for editing their texts but also for our understanding of palaeography and the history of writing, ancient scholarship, material culture, networks in antiquity, etc. Most ancient manuscripts offer little evidence regarding the time of their production, forcing papyrologists to date them on palaeographical grounds, a method often criticized for its subjectivity. By experimenting with data obtained from the Collaborative Database of Dateable Greek Bookhands and the PapPal online collections of objectively dated Greek papyri, this study shows that deep learning dating models, pre-trained on generic images, can achieve accurate chronological estimates for a test subset (67.97% accuracy for book hands and 55.25% for documents). To compare the estimates of these models with those of humans, experts were asked to complete a questionnaire in which samples of literary and documentary hands had to be sorted chronologically by century. The same samples were dated by the models in question. The results are presented and analysed.
Keywords: image classification, papyri images, dating
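Since both the models and the experts are scored at century granularity, the accuracy figures above amount to counting century-level agreement with the objectively dated ground truth. A toy sketch of that metric (the sample years are invented, and the helper assumes CE dates only, which is a simplification for papyri that may be BCE):

```python
def century(year):
    """Map a CE year to its century, e.g. 150 -> 2nd century."""
    return (year - 1) // 100 + 1

def century_accuracy(predicted_years, true_years):
    """Fraction of samples whose predicted and true centuries agree."""
    hits = sum(century(p) == century(t)
               for p, t in zip(predicted_years, true_years))
    return hits / len(true_years)

# Hypothetical model estimates vs. objectively dated papyri (years CE).
pred = [120, 250, 340, 410]
true = [150, 260, 290, 405]
print(century_accuracy(pred, true))  # 3 of 4 centuries match -> 0.75
```

The deep learning models themselves would be image classifiers fine-tuned from generic pre-trained weights, with one class per century bucket; this snippet only shows how their output is scored.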
Procedia PDF Downloads 78
8953 A Systemic Maturity Model
Authors: Emir H. Pernet, Jeimy J. Cano
Abstract:
Maturity models, used descriptively to explain changes in reality or normatively to guide managers in making interventions that render organizations more effective and efficient, are based on the principles of statistical quality control promulgated by Shewhart in the 1930s and on the principles of PDCA continuous improvement (Plan, Do, Check, Act) developed by Deming and Juran. Frameworks developed around the concept of maturity models include COBIT, CMM, and ITIL. This paper presents some limitations of traditional maturity models, most of them based on points of reflection and analysis raised by other authors. Almost all of these limitations relate to the mechanistic and reductionist approach of the principles on which those models are built. As Systems Theory helps in understanding the dynamics of organizations and organizational change, the development of a systemic maturity model can help to overcome some of those limitations. This document proposes a systemic maturity model, based on a systemic conceptualization of organizations, focused on the study of the functioning of the parts, the relationships among them, and their behavior as a whole. From the systems theory perspective, maturity is conceptually defined as an emergent property of the organization, which arises as a result of the degree of alignment and integration of its processes. This concept is operationalized through a systemic function that measures the maturity of an organization, and is finally validated by measuring maturity in organizations. For its operationalization and validation, the model was applied to measure the maturity of organizational Governance, Risk and Compliance (GRC) processes.
Keywords: GRC, maturity model, systems theory, viable system model
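The idea of maturity as an emergent property of process alignment and integration could be operationalized, in a deliberately simplified form, as a function that is high only when both ingredients are high. The function, its inputs, and the multiplicative aggregation below are all hypothetical illustrations, not the paper's actual systemic function:

```python
def systemic_maturity(alignment_scores, integration_matrix):
    """Toy maturity index: mean process alignment scaled by mean pairwise integration.

    alignment_scores   : per-process alignment with organizational goals, each in [0, 1].
    integration_matrix : integration_matrix[i][j] in [0, 1] between processes i and j.
    """
    n = len(alignment_scores)
    alignment = sum(alignment_scores) / n
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    integration = sum(integration_matrix[i][j] for i, j in pairs) / len(pairs)
    return alignment * integration  # emergent: high only if BOTH are high

scores = [1.0, 1.0, 0.5]           # processes fairly well aligned individually...
links = [[1.0, 0.2, 0.2],
         [0.2, 1.0, 0.2],
         [0.2, 0.2, 1.0]]          # ...but poorly integrated with each other
m = systemic_maturity(scores, links)
print(m)  # low overall maturity despite good per-process alignment
```

The multiplicative form captures the systemic claim in the abstract: optimizing parts in isolation (high alignment, low integration) does not yield organizational maturity.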
Procedia PDF Downloads 310
8952 Quality of Bali Beef and Broiler after Immersion in Liquid Smoke on Different Concentrations and Storage Times
Authors: E. Abustam, M. Yusuf, H. M. Ali, M. I. Said, F. N. Yuliati
Abstract:
The aim of this study was to improve the durability and quality of Bali beef (M. longissimus dorsi) and broiler carcass through the addition of liquid smoke as a natural preservative. The study used the Longissimus dorsi muscle from male Bali cattle aged 3 years, and broiler breast and thigh from birds aged 40 days. The three types of meat were marinated in liquid smoke at concentrations of 0, 5, and 10% for 30 minutes, at a level of 20% of the sample weight (w/w). The samples were stored at 2-5°C for 1 month. The study was designed as a 3 x 3 x 4 factorial experiment based on a completely randomized design with 5 replications: the first factor was meat type (beef, chicken breast, and chicken thigh); the second factor was liquid smoke concentration (0, 5, and 10%); and the third factor was storage duration (1, 2, 3, and 4 weeks). The parameters measured were TBA value, total bacterial colonies, water holding capacity (WHC), shear force value both before and after cooking (80°C for 15 min.), and cooking loss. The results showed that WHC, shear force value, cooking loss, and TBA differed among the three types of meat. At higher concentrations of liquid smoke, WHC, shear force value, TBA, and total bacterial colonies decreased; at a liquid smoke concentration of 10%, total bacterial colonies decreased by 57.3% relative to samples untreated with liquid smoke. With longer storage, total bacterial colonies and WHC increased, while shear force value and cooking loss decreased. It can be concluded that a 10% concentration of liquid smoke was able to limit fat oxidation and bacterial growth in Bali beef and in chicken breast and thigh.
Keywords: Bali beef, chicken meat, liquid smoke, meat quality
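Cooking loss, one of the parameters measured above, is conventionally expressed as the percentage of weight lost during cooking relative to the raw sample weight. A minimal sketch, with made-up weights rather than the study's data:

```python
def cooking_loss_pct(raw_weight_g, cooked_weight_g):
    """Cooking loss as a percentage of the raw sample weight."""
    return (raw_weight_g - cooked_weight_g) * 100.0 / raw_weight_g

# Hypothetical sample cooked at 80 degrees C for 15 min, as in the protocol above:
# a 100 g raw sample weighing 72 g after cooking lost 28% of its weight.
print(cooking_loss_pct(100.0, 72.0))  # -> 28.0
```

Lower cooking loss (as observed with longer storage) generally goes hand in hand with the higher water holding capacity also reported in the abstract.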
Procedia PDF Downloads 390
8951 The Evolution of Domestic Terrorism: Global Contemporary Models
Authors: Bret Brooks
Abstract:
As the international community has focused its attention in recent times on international and transnational terrorism, many nations have ignored their own domestic terrorist groups. Domestic terrorism has evolved significantly over the last 15 years, and nation states must therefore adequately understand their own individual issues as well as the broader worldwide perspective. Contemporary models show that peace with domestic groups is not only the end goal but also attainable. By evaluating modern examples and incorporating successful strategies, countries around the world have the ability to bring about a diplomatic resolution to domestic extremism and domestic terrorism.
Keywords: domestic, evolution, peace, terrorism
Procedia PDF Downloads 518
8950 Consideration of Starlight Waves Redshift as Produced by Friction of These Waves on Its Way through Space
Authors: Angel Pérez Sánchez
Abstract:
In 1929, a redshift was discovered in the light of distant galaxies and was interpreted as being produced by galaxies moving away from each other at high speed. This interpretation later led to the consideration of a new source of energy, which was called Dark Energy. Redshift is a loss of light wave frequency, attributed to galaxies receding at high speed, but a loss of frequency could also be produced by the friction of light waves on their way to Earth. Such friction would be impossible if outer space were truly empty, but if a medium existed in this seemingly empty space, it would be possible. The consequences would be extraordinary, because the acceleration of the Universe and Dark Energy would be called into doubt. This article presents evidence that empty space is actually a medium occupied by different particles, among which the most significant would be the graviton or the Higgs boson; let us not forget that gravity also affects empty space.
Keywords: Big Bang, dark energy, Doppler effect, redshift, starlight frequency reduction, universe acceleration
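For reference, the redshift z quantifies the fractional change in wavelength (equivalently, the loss of frequency), and under the conventional Doppler interpretation the recession velocity at low z is approximately v = cz. A quick sketch of both standard quantities, with a hypothetical observed wavelength:

```python
C = 299_792_458.0  # speed of light, m/s

def redshift(emitted_wavelength_nm, observed_wavelength_nm):
    """z = (lambda_obs - lambda_emit) / lambda_emit."""
    return (observed_wavelength_nm - emitted_wavelength_nm) / emitted_wavelength_nm

def doppler_velocity(z):
    """Low-z Doppler approximation of the recession velocity, v = c * z."""
    return C * z

# H-alpha line emitted at 656.3 nm, observed shifted in a hypothetical galaxy.
z = redshift(656.3, 662.9)
print(z, doppler_velocity(z))
```

The article's thesis is that the same measured z could instead arise from frequency loss along the path, in which case the velocity inferred by the Doppler formula would not correspond to any real recession.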
Procedia PDF Downloads 63
8949 The Role of NAD+ and Nicotinamide (Vitamin B3) in Glaucoma: A Literature Review
Authors: James Pietris
Abstract:
Glaucoma is a collection of irreversible optic neuropathies which, if left untreated, lead to severe visual field loss. These diseases are a leading cause of blindness across the globe and are estimated to affect approximately 80 million people, particularly women and people of Asian descent. This represents a major burden on healthcare systems worldwide. Recently, there has been increasing interest in the potential of nicotinamide (vitamin B3) as a novel option in the management of glaucoma. This review analyses the currently available literature to determine whether there is evidence of an association between nicotinamide adenine dinucleotide (NAD+) and glaucomatous optic neuropathy, and whether nicotinamide has the potential to prevent or reverse these effects. The literature showed a strong connection between reduced NAD+ levels and retinal ganglion cell dysfunction across multiple studies. There is also evidence of a positive effect of nicotinamide supplementation on retinal ganglion cell function in mouse models of glaucoma and in one study involving humans. Based on these findings, it is recommended that more research into the efficacy, appropriate dosing, and potential side effects of nicotinamide supplementation be conducted before it can be definitively determined whether it is appropriate for widespread prophylactic and therapeutic use against glaucoma in humans.
Keywords: glaucoma, nicotinamide, vitamin B3, optic neuropathy
Procedia PDF Downloads 106
8948 Fuzzy Time Series Forecasting Based on Fuzzy Logical Relationships, PSO Technique, and Automatic Clustering Algorithm
Authors: A. K. M. Kamrul Islam, Abdelhamid Bouchachia, Suang Cang, Hongnian Yu
Abstract:
Forecasting models have a great impact on prediction and will continue to do so in the future. Although many forecasting models have been studied in recent years, most researchers focus on forecasting methods based on fuzzy time series. A forecasting model's accuracy depends largely on two factors: the length of the intervals in the universe of discourse and the content of the forecast rules. Moreover, a hybrid forecasting method can be a more effective and efficient way to improve forecasts than an individual forecasting model. Several existing hybrid models combine fuzzy time series with evolutionary algorithms, but their performance is not quite satisfactory. In this paper, we propose a hybrid forecasting model that deals with first-order as well as high-order fuzzy time series and uses particle swarm optimization to improve forecasting accuracy. The proposed method used the historical enrollments of the University of Alabama as the dataset in the forecasting process. First, we applied an automatic clustering algorithm to calculate appropriate intervals for the historical enrollments. Then, particle swarm optimization and fuzzy time series were combined, which yields better forecasting accuracy than other existing forecasting models.
Keywords: fuzzy time series (FTS), particle swarm optimization, clustering algorithm, hybrid forecasting model
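A minimal first-order fuzzy time series forecast in the classic Song-Chissom/Chen style can be sketched as below. Note this uses fixed equal-width intervals rather than the automatic clustering and PSO-tuned intervals the paper proposes, and the toy series is invented (enrollment-like numbers, not the actual Alabama dataset):

```python
def fts_forecast(series, n_intervals):
    """Toy first-order fuzzy time series forecaster (Chen-style).

    Partitions the universe of discourse into equal-width intervals,
    builds fuzzy logical relationship groups A_i -> {A_j}, and forecasts
    the next value as the mean midpoint of the successor group of the
    last observed fuzzy state.
    """
    lo, hi = min(series), max(series)
    width = (hi - lo) / n_intervals

    def fuzzify(x):
        # Index of the interval containing x (clamped to the last interval).
        return min(int((x - lo) / width), n_intervals - 1)

    mid = [lo + (i + 0.5) * width for i in range(n_intervals)]

    # Fuzzy logical relationship groups: state -> list of successor states.
    labels = [fuzzify(x) for x in series]
    groups = {}
    for cur, nxt in zip(labels, labels[1:]):
        groups.setdefault(cur, []).append(nxt)

    last = labels[-1]
    successors = groups.get(last, [last])
    return sum(mid[j] for j in successors) / len(successors)

# Toy enrollment-like series (hypothetical values).
data = [13055, 13563, 13867, 14696, 15460, 15311, 15603, 15861, 16807, 16919]
print(fts_forecast(data, 7))
```

The hybrid method in the paper improves on exactly the two weak points of this sketch: the interval boundaries (via automatic clustering) and the rule weighting (via particle swarm optimization), and extends it to high-order relationships.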
Procedia PDF Downloads 249