Search results for: single well model experiments of vacuum preloading technology
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 27822


26892 Different Stages for the Creation of Electric Arc Plasma through Slow Rate Current Injection to Single Exploding Wire, by Simulation and Experiment

Authors: Ali Kadivar, Kaveh Niayesh

Abstract:

This work simulates the voltage drop and resistance during the explosion of copper wires of diameters 25, 40, and 100 µm, surrounded by nitrogen at 1 bar and carrying a 150 A current, prior to plasma formation. The absorption of electrical energy in an exploding wire is greatly diminished once the plasma is formed. This study shows the importance of considering radiation and heat conductivity for the accuracy of the circuit simulations. The radiation of the dense plasma formed on the wire surface is modeled with the Net Emission Coefficient (NEC) and combined with heat conductivity through PLASIMO® software. A time-transient code for analyzing wire explosions driven by a slow current rise rate is developed. It solves a circuit equation coupled with one-dimensional (1D) equations for the copper electrical conductivity as a function of its physical state and NEC radiation. At first, the initial voltage drop over the copper wire, current, and temperature distribution at the time of expansion are derived. The experiments have demonstrated that wires remain rather uniform lengthwise during the explosion and can therefore be treated with 1D simulations. Data from the first stage are then used as the initial conditions of the second stage, in which a simplified 1D model for high-Mach-number flows is adopted to describe the expansion of the core: the current is carried by the vaporized wire material before it is dispersed in nitrogen by the shock wave. In the third stage, using a three-dimensional model of the test bench, the streamer threshold is estimated. The electrical breakdown voltage is calculated without solving a full-blown plasma model by integrating Townsend growth coefficients (TdGC) along electric field lines. The BOLSIG⁺ and LAPLACE databases are used to calculate the TdGC at different mixture ratios of nitrogen/copper vapor.
The simulations show that both radiation and heat conductivity should be considered for an adequate description of the wire resistance, and that gaseous discharges start at lower voltages than expected due to ultraviolet radiation and the exploding shocks, which may have ionized the nitrogen.
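
The breakdown estimate described above can be sketched in miniature: integrate an effective Townsend growth coefficient along a field line and find the voltage at which the avalanche-growth criterion is met. The coefficient `alpha_eff`, the gap geometry, and the criterion value below are toy placeholders, not the BOLSIG⁺/LAPLACE data or the 3D test-bench field used in the paper.

```python
# Hypothetical sketch of a streamer-threshold estimate via the Townsend
# integral criterion. alpha_eff(E) is a made-up Townsend-like form, not
# the nitrogen/copper-vapor data used in the study.

import math

def alpha_eff(e_field):
    """Toy effective ionization coefficient (1/m) vs field (V/m)."""
    # Crude Townsend-like form alpha = A * exp(-B / E); A and B are invented.
    return 1.0e5 * math.exp(-2.0e7 / e_field)

def avalanche_integral(voltage, gap=1.0e-3, steps=1000):
    """Integrate alpha_eff along a (uniform-field) line across the gap."""
    e_field = voltage / gap          # uniform-field approximation
    dl = gap / steps
    return sum(alpha_eff(e_field) * dl for _ in range(steps))

def breakdown_voltage(criterion=18.0, v_lo=100.0, v_hi=1.0e5):
    """Bisect for the voltage where the integral reaches the criterion."""
    for _ in range(60):
        v_mid = 0.5 * (v_lo + v_hi)
        if avalanche_integral(v_mid) >= criterion:
            v_hi = v_mid
        else:
            v_lo = v_mid
    return v_hi

v_bd = breakdown_voltage()
print(round(v_bd))  # threshold voltage for the toy coefficient
```

In the paper the field is non-uniform, so the integral runs along traced field lines rather than a straight gap; the bisection idea carries over unchanged.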

Keywords: exploding wire, Townsend breakdown mechanism, streamer, metal vapor, shock waves

Procedia PDF Downloads 71
26891 Disassociating Preferences from Evaluations Towards Pseudo Drink Brands

Authors: Micah Amd

Abstract:

Preferences towards unfamiliar drink brands can be predictably influenced following correlations of subliminally-presented brands (CS) with positively valenced attributes (US). Alternatively, evaluations towards subliminally-presented CS may be more variable, suggesting that CS-evoked evaluations may disassociate from CS-associated preferences following subliminal CS-US conditioning. We assessed this hypothesis over three experiments (Ex1, Ex2, Ex3). Across each experiment, participants first provided preferences and evaluations towards meaningless trigrams (CS) as a baseline, followed by conditioning and a final round of preference and evaluation measurements. During conditioning, four pairs of subliminal and supraliminal/visible CS were respectively correlated with four US categories varying along aggregate valence (e.g., 100% positive, 80% positive, 40% positive, 0% positive – for Ex1 and Ex2). Across Ex1 and Ex2, presentation durations for subliminal CS were 34 and 17 milliseconds, respectively. Across Ex3, aggregate valences of the four US categories were altered (75% positive, 55% positive, 45% positive, 25% positive). Valence across US categories was manipulated to address a supplemental query of whether US-to-CS valence transfer was summative or integrative. During analysis, we computed two sets of difference scores reflecting pre-post preference and evaluation performances, respectively. These were subjected to Bayes tests. Across all experiments, results illustrated US-to-CS valence transfer was most likely to shift evaluations for visible CS, but least likely to shift evaluations for subliminal CS. Alternatively, preferences were likely to shift following correlations with single-valence categories (e.g., 100% positive, 100% negative) across both visible and subliminal CS. Our results suggest that CS preferences can be influenced through subliminal conditioning even as CS evaluations remain unchanged, supporting our central hypothesis. 
As for whether transfer effects are summative or integrative, our results were more mixed; a comparison of relative likelihoods revealed that preferences are more likely to reflect summative effects, whereas evaluations reflect integration, independent of visibility condition.

Keywords: subliminal conditioning, evaluations, preferences, valence transfer

Procedia PDF Downloads 140
26890 The Investigation of Oil Price Shocks by Using a Dynamic Stochastic General Equilibrium: The Case of Iran

Authors: Bahram Fathi, Karim Alizadeh, Azam Mohammadbagheri

Abstract:

The aim of this paper is to investigate the role of oil price shocks in explaining business cycles in Iran using a dynamic stochastic general equilibrium (DSGE) approach. The model incorporates both productivity and oil revenue shocks. The results indicate that productivity shocks are relatively more important to business cycles than oil shocks. The model with two shocks produces different values for volatility, but these values have the same ranking as the actual data for most variables. In addition, the ratios of standard deviations to that of output obtained from the model with two shocks are close to the actual data. The model with only a productivity shock produces the figures most similar in volatility magnitude to the actual data. Next, we use Impulse Response Functions (IRF) to evaluate the capability of the model. The IRF shows no effect of an oil shock on the capital stock or on labor hours, which is a feature of the model: when the log-linearized system of equations is solved numerically, investment and labor hours are not functions of the oil shock. This research recommends using different techniques to check the model's robustness. One method is to make all decision variables functions of the oil shock by inducing stationarity in the model differently; another is to impose a bond adjustment cost. To achieve our objective, we first derive a DSGE model that allows for world oil price and productivity shocks. Second, we calibrate the model to the Iranian economy. Next, we compare the moments from the theoretical model, with both single and multiple shocks, with those obtained from the actual data to see the extent to which business cycles in Iran can be explained by the total oil revenue shock.
Then, we use an impulse response function to evaluate the role of world oil price shocks. Finally, we present implications of the findings and interpretations in accordance with economic theory.
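
The impulse response functions discussed above can be illustrated with their simplest building block: the IRF of an AR(1) shock process to a unit innovation. The persistence `rho` and the pass-through elasticity below are invented numbers for illustration, not the calibration used for the Iranian economy.

```python
# Illustrative sketch only: the IRF of an AR(1) process x_t = rho*x_{t-1} + e_t
# to a one-time unit shock, plus a toy pass-through into output.

def irf_ar1(rho, horizon, impulse=1.0):
    """Return the response path of an AR(1) process to a unit shock at t=0."""
    path = []
    x = impulse
    for _ in range(horizon):
        path.append(x)
        x = rho * x
    return path

oil_shock = irf_ar1(rho=0.9, horizon=20)        # persistent oil revenue shock
output_response = [0.3 * x for x in oil_shock]  # hypothetical pass-through

print(round(oil_shock[0], 3), round(oil_shock[10], 3))  # 1.0 0.349
```

In a full DSGE exercise, each IRF is instead computed from the log-linearized policy functions, which is why investment and labor hours can show a zero response when they are not functions of the oil shock.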

Keywords: oil price, shocks, dynamic stochastic general equilibrium, Iran

Procedia PDF Downloads 417
26889 Derivation of Bathymetry from High-Resolution Satellite Images: Comparison of Empirical Methods through Geographical Error Analysis

Authors: Anusha P. Wijesundara, Dulap I. Rathnayake, Nihal D. Perera

Abstract:

Bathymetric information is of fundamental importance to coastal and marine planning and management, nautical navigation, and scientific studies of marine environments. Satellite-derived bathymetry provides detailed information in areas where conventional sounding data are lacking and conventional surveys are inaccessible. Two empirical approaches, a log-linear bathymetric inversion model and a non-linear bathymetric inversion model, are applied to derive bathymetry from high-resolution multispectral satellite imagery. This study compares the two approaches by means of geographical error analysis for the Kankesanturai site using WorldView-2 satellite imagery. The Levenberg-Marquardt method was used to calibrate the parameters of the non-linear inversion model, and multiple linear regression was applied to calibrate the log-linear inversion model. To calibrate both models, Single Beam Echo Sounding (SBES) data in the study area were used as reference points. Residuals were calculated as the difference between the derived depth values and the validation echo-sounder bathymetry, and the geographical distribution of the model residuals was mapped. Spatial autocorrelation was calculated to compare the performance of the bathymetric models, and the results show the geographic errors for both models. A spatial error model was constructed from the initial bathymetry estimates and the estimates of autocorrelation. This spatial error model is used to generate more reliable bathymetry estimates by quantifying the autocorrelation of the model error and incorporating it into an improved regression model. The log-linear model (R²=0.846) performs better than the non-linear model (R²=0.692). Finally, the spatial error models improved the bathymetric estimates derived from the linear and non-linear models up to R²=0.854 and R²=0.704, respectively. The Root Mean Square Error (RMSE) was calculated for all reference points in various depth ranges.
The magnitude of the prediction error increases with depth for both the log-linear and the non-linear inversion models. The overall RMSE for the log-linear and non-linear inversion models was ±1.532 m and ±2.089 m, respectively.
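
The log-linear calibration step described above amounts to a multiple linear regression of reference depths on the log of band reflectances. The sketch below uses synthetic reflectances and coefficients purely for illustration; the study used WorldView-2 bands with single-beam echo-sounder depths as the reference points.

```python
# Minimal sketch of a log-linear (Lyzenga-type) inversion calibrated by
# multiple linear regression: depth = a0 + a1*ln(R_blue) + a2*ln(R_green).
# All reflectances and coefficients here are synthetic.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic "truth": depth is exactly linear in the log reflectances.
n = 200
r_blue = rng.uniform(0.02, 0.2, n)
r_green = rng.uniform(0.02, 0.2, n)
depth = 2.0 - 3.0 * np.log(r_blue) + 1.5 * np.log(r_green)

# Design matrix [1, ln(R_blue), ln(R_green)] and least-squares calibration.
X = np.column_stack([np.ones(n), np.log(r_blue), np.log(r_green)])
coef, *_ = np.linalg.lstsq(X, depth, rcond=None)

predicted = X @ coef
rmse = float(np.sqrt(np.mean((predicted - depth) ** 2)))
print(np.round(coef, 3), round(rmse, 6))  # recovers (2.0, -3.0, 1.5), rmse ~ 0
```

With real imagery the residuals are not zero, and mapping them geographically (as the study does) is what reveals the spatial autocorrelation that the spatial error model then exploits.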

Keywords: log-linear model, multi spectral, residuals, spatial error model

Procedia PDF Downloads 281
26888 Laser-Dicing Modeling: Implementation of a High Accuracy Tool for Laser-Grooving and Cutting Application

Authors: Jeff Moussodji, Dominique Drouin

Abstract:

The highly complex technology requirements of today's integrated circuits (ICs) lead to the increased use of several material types, such as metal structures and brittle, porous low-k materials, in both front-end-of-line (FEOL) and back-end-of-line (BEOL) wafer manufacturing processes. In order to singulate chips from the wafer, a critical laser-grooving process, prior to blade dicing, is used to remove these layers of material from the dicing street. The combination of laser-grooving and blade dicing reduces the potential risk of induced mechanical defects, such as micro-cracks and chipping, on the wafer top surface where the circuitry is located. It seems, therefore, essential to have a fundamental understanding of the physics involved in laser-dicing in order to maximize control of these critical processes and reduce their undesirable effects on process efficiency, quality, and reliability. In this paper, the study is based on the convergence of numerical and experimental approaches, which allowed us to investigate the interaction of a nanosecond pulsed laser with BEOL wafer materials. To evaluate this interaction, several laser-grooved samples were compared with finite element modeling, in which three different aspects were considered: phase change, thermo-mechanical behavior, and optically sensitive parameters. The mathematical model makes it possible to predict the groove profile (depth, width, etc.) of a single pulse or multiple pulses on BEOL wafer material. Moreover, the heat-affected zone and thermo-mechanical stress can also be predicted as functions of the laser operating parameters (power, frequency, spot size, defocus, speed, etc.). After model validation and calibration, a satisfying correlation between experimental and modeling results was observed in terms of groove depth, width, and heat-affected zone.
The study proposed in this work is a first step toward implementing a quick assessment tool for the design and debugging of multiple laser-grooving conditions with limited experiments on hardware in industrial applications. More correlations and validation tests are in progress and will be included in the full paper.
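
The thermal part of such a model can be illustrated with a deliberately simplified 1D explicit finite-difference march of the heat equation under a constant surface heat flux. The material constants, flux, and time steps below are placeholders, not the BEOL stack data or the multiphysics (phase-change, stress, optics) treatment of the actual study.

```python
# Toy 1D explicit finite-difference sketch of single-pulse surface heating.
# All constants are invented for illustration.

def heat_1d(steps, nodes=50, alpha=1e-5, dx=1e-7, dt=1e-10, surface_flux=1.0e9):
    """March the 1D heat equation with a constant surface heat flux."""
    k = 150.0                       # thermal conductivity (placeholder)
    r = alpha * dt / dx ** 2        # must be <= 0.5 for explicit stability
    assert r <= 0.5, "explicit scheme unstable"
    T = [300.0] * nodes             # initial temperature (K)
    for _ in range(steps):
        Tn = T[:]
        for i in range(1, nodes - 1):
            Tn[i] = T[i] + r * (T[i + 1] - 2 * T[i] + T[i - 1])
        Tn[0] = Tn[1] + surface_flux * dx / k   # flux boundary at the surface
        Tn[-1] = 300.0                          # far field stays cold
        T = Tn
    return T

T = heat_1d(steps=200)              # ~20 ns of heating in this toy setup
print(round(T[0], 1), round(T[-1], 1))
```

A real groove model adds melting/vaporization (phase change), temperature-dependent optical absorption, and the resulting thermo-mechanical stress, which is why the paper relies on finite element multiphysics rather than a 1D march.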

Keywords: laser-dicing, nano-second pulsed laser, wafer multi-stack, multiphysics modeling

Procedia PDF Downloads 190
26887 Effects of Particle Size Distribution on Mechanical Strength and Physical Properties in Engineered Quartz Stone

Authors: Esra Arici, Duygu Olmez, Murat Ozkan, Nurcan Topcu, Furkan Capraz, Gokhan Deniz, Arman Altinyay

Abstract:

Engineered quartz stone is a composite material comprising approximately 90 wt.% fine quartz aggregate, with a variety of particle size ranges, and ~10 wt.% unsaturated polyester resin (UPR). In this study, the objective is to investigate the influence of particle size distribution on the mechanical strength and physical properties of engineered stone slabs. For this purpose, granular quartz with two particle size ranges, 63-200 µm and 100-300 µm, was used individually and in mixtures at different mixing ratios. The void volume of each granular packing was measured in order to define the amount of filler (quartz powder finer than 38 µm) and UPR required to fill the inter-particle spaces. Test slabs were prepared using vibration-compression under vacuum. The study reports that both the impact strength and flexural strength of the samples increased as the mix ratio of the 63-200 µm particle size range increased. On the other hand, the values of water absorption rate, apparent density, and abrasion resistance were not affected by the particle size distribution, owing to vacuum compaction. It was found that increasing the mix ratio of the 63-200 µm particle size range resulted in higher porosity, which in turn increased the amount of binder paste needed. It was also observed that homogeneity in the slabs improved with the 63-200 µm particle size range.
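
The void-volume step above is a simple mass balance: the measured inter-particle void sets how much filler and resin the formulation needs. The void fractions and the powder/resin split below are invented for illustration, not measured data from the study.

```python
# Back-of-the-envelope sketch of sizing filler + resin from a measured
# packing void volume. All numbers are hypothetical.

def binder_demand(bulk_volume_cm3, void_fraction, filler_share=0.6):
    """Split the inter-particle void between quartz powder and UPR."""
    void = bulk_volume_cm3 * void_fraction
    powder = void * filler_share          # quartz powder (< 38 um)
    resin = void - powder                 # unsaturated polyester resin
    return powder, resin

# Suppose the finer 63-200 um packing leaves a larger void fraction than
# the 100-300 um packing, as the study's porosity result suggests:
powder_fine, resin_fine = binder_demand(100.0, 0.42)
powder_coarse, resin_coarse = binder_demand(100.0, 0.38)
print(round(resin_fine, 1), round(resin_coarse, 1))
```

The comparison mirrors the paper's observation: higher porosity in the finer packing translates directly into a higher binder paste demand.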

Keywords: engineered quartz stone, fine quartz aggregate, granular packing, mechanical strength, particle size distribution, physical properties

Procedia PDF Downloads 125
26886 Implementation of a Photo-Curable 3D Additive Manufacturing Technology with Grey Capability by Using Piezo Ink-jets

Authors: Ming-Jong Tsai, Y. L. Cheng, Y. L. Kuo, S. Y. Hsiao, J. W. Chen, P. H. Liu, D. H. Chen

Abstract:

3D printing is a combination of digital technology, materials science, intelligent manufacturing, and opto-mechatronic systems control; it has been called the third industrial revolution by The Economist. A color 3D printing machine can provide the necessary support for high-value-added industrial and commercial design, architectural design, personal boutiques, and 3D artists' creations. The main goal of this paper is to develop photo-curable color 3D manufacturing technology and its system implementation. The key technologies include (1) development of photo-curable color 3D additive manufacturing processes and materials research, and (2) piezo-type ink-jet head control and the opto-mechatronic integration technique of the photo-curable color 3D laminated manufacturing system. The proposed system integrates a single piezo-type ink-jet head with two individual channels for two primary UV-curable color resins, which can provide for future full-color 3D printing solutions. The main research results are 16 grey levels and a grey resolution of 75 dpi.
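
A binary (drop/no-drop) ink-jet can render multiple grey levels by halftoning: varying the droplet coverage within a small cell. The sketch below uses a generic 4x4 ordered (Bayer) dither purely as an illustration of how 16 grey levels can arise from an on/off print head; it is not necessarily the method used by the system described above.

```python
# Generic halftoning illustration: a 4x4 Bayer matrix maps a grey level
# in 0..15 to a binary droplet pattern whose coverage grows with the level.

BAYER_4X4 = [
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
]

def dither_cell(grey_level):
    """Map a grey level in 0..15 to a 4x4 binary droplet pattern."""
    return [[1 if BAYER_4X4[y][x] < grey_level else 0 for x in range(4)]
            for y in range(4)]

def droplet_count(cell):
    return sum(sum(row) for row in cell)

# Droplet coverage grows monotonically with the requested grey level,
# giving 16 distinguishable levels from a purely on/off head.
for g in (0, 4, 8, 15):
    print(g, droplet_count(dither_cell(g)))  # count equals g
```

Because each Bayer threshold 0..15 appears exactly once in the matrix, the droplet count equals the requested level, so grey resolution trades off against the native droplet resolution of the head.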

Keywords: 3D printing, additive manufacturing, color, photo-curable, Piezo type ink-jet, UV Resin

Procedia PDF Downloads 541
26885 Flexible and Integrated Transport System in India

Authors: Aayushi Patidar, Nishant Parihar

Abstract:

One of the principal causes of failure in existing vehicle brokerage solutions is that they require the introduction of a single trusted third party to whom transport offers and requirements are sent, and which solves the scheduling problem. Advances in planning and scheduling could be utilized to address the scalability issues inherent here, but such refinements do not address the key need to decentralize decision-making. This is not to say that matchmaking of potential transport suppliers to consumers is not essential, but information from such a service should inform rather than determine the transport options for customers. The proposed approach uses intelligent commuter agents that act within the system to identify the options open to users, weigh the evidence for the desirability of each option given a model of the user's priorities, and drive dialogue among commuters to aid users in solving their individual (or collective) transport goals. Existing research in commuter support for transport resource management has typically focused on the provider. Our vision is to explore both the efficient use of limited transport resources and support for passengers through flexibility and integration among the various transport modes in India.

Keywords: flexibility, integration, service design, technology

Procedia PDF Downloads 338
26884 Implementing Biogas Technology in Rural Areas of Limpopo: Analysis of Gawula, Mopani District in South Africa

Authors: Thilivhali E. Rasimphi, David Tinarwo

Abstract:

Access to energy is crucial in poverty alleviation, economic growth, education, and agricultural improvement. The best renewable energy source is one which is locally available, affordable, and can easily be used and managed by local communities. The usage of renewable energy technology has the potential to alleviate many of the current problems facing rural areas, and biogas technology has become an important part of addressing energy poverty. This study, therefore, examines the performance of digesters in Gawula village and identifies the factors contributing to the adoption and use of the technology. Data were collected from biogas users using an open-ended questionnaire. To evaluate the performance of the digesters, the non-parametric data envelopment analysis (DEA) technique was used, and to identify key factors affecting adoption, a logit model was applied. The critical barriers to biogas development in the area appear to be a poor institutional framework, poor infrastructure, and a lack of technical support and user training on maintenance and operation; as a result, the implemented plants have failed to make the desired impact, and most digesters were abandoned. To create awareness among rural communities, government involvement is key, and there is a need for national programs. Biogas technology does what few other renewable energy technologies do, which is to integrate waste management and energy. This creates a substantial opportunity for biogas generation and penetration, and a promising pathway towards achieving sustainable development through biogas technology.
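
The logit adoption model mentioned above can be sketched as a logistic regression fitted by gradient ascent. The two features (training received, technical support) and all coefficients below are invented for illustration; the study fitted its model to questionnaire responses from Gawula biogas users.

```python
# Illustrative logit sketch on synthetic household data. Features and
# coefficients are hypothetical, not the study's survey variables.

import math, random

random.seed(1)

# Synthetic households: x = (training_received, has_technical_support).
data = []
for _ in range(400):
    training = random.random()
    support = random.random()
    logit = -1.0 + 2.5 * training + 2.0 * support   # "true" model
    p = 1.0 / (1.0 + math.exp(-logit))
    data.append(((training, support), 1 if random.random() < p else 0))

# Fit w0 + w1*training + w2*support by gradient ascent on the log-likelihood.
w = [0.0, 0.0, 0.0]
lr = 0.1
for _ in range(2000):
    grad = [0.0, 0.0, 0.0]
    for (x1, x2), y in data:
        p = 1.0 / (1.0 + math.exp(-(w[0] + w[1] * x1 + w[2] * x2)))
        err = y - p
        grad[0] += err
        grad[1] += err * x1
        grad[2] += err * x2
    w = [wi + lr * gi / len(data) for wi, gi in zip(w, grad)]

print([round(wi, 2) for wi in w])  # roughly recovers the positive effects
```

Positive fitted coefficients on training and support would mirror the study's finding that lack of training and technical support are barriers to adoption.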

Keywords: domestic biogas technology, economic, sustainable, social, rural development

Procedia PDF Downloads 126
26883 Garden City in the Age of ICT: A Case Study of Dali

Authors: Luojie Tang, Libin Ouyang, Yihang Gao

Abstract:

The natural landscape and urban-rural structure of the Dali area around Erhai Lake, together with their attractiveness, exhibit striking similarities to Howard's Garden City. With the emergence in Dali of the first large-scale gathering of digital nomads in China, an analysis of Dali's natural, economic, and cultural representations and structures reveals that the Garden City model can no longer fully explain the current overall human living environment there. By interpreting the bottom-up local construction process in Dali based on landscape identity, the transformation of production and lifestyle under new technologies such as ICT (Information and Communication Technology), and the reshaping of values and lifestyles embodied in the 'reverse urbanization' of the middle class in Dali, the paper argues that Dali has moved towards a 'contemporary garden city influenced by new technology.' The article summarizes the characteristics and connotations of this Garden City and provides corresponding strategies for its continued healthy development.

Keywords: Dali, ICT, rural-urban relationship, garden city model

Procedia PDF Downloads 54
26882 A Novel Algorithm for Parsing IFC Models

Authors: Raninder Kaur Dhillon, Mayur Jethwa, Hardeep Singh Rai

Abstract:

Information technology has made pivotal progress across disparate disciplines, one of which is the AEC (Architecture, Engineering and Construction) industry. CAD is a form of computer-aided building modeling that architects, engineers, and contractors use to create and view two- and three-dimensional models. The AEC industry also uses building information modeling (BIM), a newer computerized modeling system that can create four-dimensional models; this software can greatly increase productivity in the AEC industry. BIM models generate open-source IFC (Industry Foundation Classes) files, which aim for interoperability in exchanging information throughout the project lifecycle among various disciplines. The methods developed in previous studies require either an IFC schema or a model view definition (MVD) together with software applications, such as an IFC model server or a BIM authoring tool, to extract a partial or complete IFC instance model. This paper proposes an efficient algorithm for extracting a partial or complete model from an IFC instance model without an IFC schema or a complete model view definition (MVD).
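
The extraction idea can be sketched without any schema: an IFC instance file is a STEP physical file, a flat list of numbered entity instances that reference each other by `#id`, so a partial model is obtainable by following those references transitively. The three-line sample and the regex below are a hand-written illustration, not the paper's algorithm.

```python
# Minimal sketch of schema-free IFC (STEP physical file) parsing and
# partial-model extraction by reference chasing. The sample data is invented.

import re

SAMPLE = """#1=IFCPROJECT('2oAkk',$,'Demo',$,$,$,$,(#2),$);
#2=IFCGEOMETRICREPRESENTATIONCONTEXT($,'Model',3,$,$,$);
#3=IFCWALL('1xYz',$,'Wall-01',$,$,$,$,#2,$);"""

LINE = re.compile(r"#(\d+)=([A-Z0-9]+)\((.*)\);")

def parse(step_text):
    """Map instance id -> (entity type, referenced ids)."""
    model = {}
    for m in LINE.finditer(step_text):
        eid, etype, args = int(m.group(1)), m.group(2), m.group(3)
        refs = [int(r) for r in re.findall(r"#(\d+)", args)]
        model[eid] = (etype, refs)
    return model

def extract_partial(model, root):
    """Collect the root instance plus everything it transitively references."""
    seen, stack = set(), [root]
    while stack:
        eid = stack.pop()
        if eid in seen or eid not in model:
            continue
        seen.add(eid)
        stack.extend(model[eid][1])
    return seen

model = parse(SAMPLE)
print(model[3][0], sorted(extract_partial(model, 3)))  # IFCWALL [2, 3]
```

Real IFC files complicate this with multi-line records, inverse relationships, and string escaping, which is where a robust algorithm (and the paper's contribution) comes in.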

Keywords: BIM, CAD, IFC, MVD

Procedia PDF Downloads 280
26881 Enhancer: An Effective Transformer Architecture for Single Image Super Resolution

Authors: Pitigalage Chamath Chandira Peiris

Abstract:

A widely researched domain in the field of image processing in recent times has been single image super-resolution, which tries to restore a high-resolution image from a single low-resolution image. Many single image super-resolution efforts have been completed utilizing both traditional and deep learning methodologies; deep learning-based super-resolution methods, in particular, have received significant interest. As of now, the most advanced image restoration approaches are based on convolutional neural networks; nevertheless, only a few efforts have used Transformers, which have demonstrated excellent performance on high-level vision tasks. The effectiveness of CNN-based algorithms in image super-resolution has been impressive; however, these methods cannot completely capture the non-local features of the data. Enhancer is a simple yet powerful Transformer-based approach for enhancing the resolution of images. A method for single image super-resolution was developed in this study, which utilizes an efficient and effective transformer design. The proposed architecture makes use of a locally enhanced window transformer block to alleviate the enormous computational load associated with non-overlapping window-based self-attention. Additionally, it incorporates depth-wise convolution in the feed-forward network to enhance its ability to capture local context. The study is assessed by comparing the results obtained on popular datasets with those obtained by other techniques in the domain.
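
The key efficiency idea above, non-overlapping window partitioning, can be shown in a few lines: attention cost is quadratic in the number of tokens, so attending within w×w windows instead of across the whole H×W feature map cuts the work dramatically. The sketch below covers shapes only; the real block adds learned projections, multiple heads, and the depth-wise-convolution feed-forward network described in the abstract.

```python
# Sketch of non-overlapping window partitioning plus a bare (unprojected)
# softmax self-attention inside each window. Shapes are the point here.

import numpy as np

def window_partition(x, w):
    """(H, W, C) feature map -> (num_windows, w*w, C) token groups."""
    H, W, C = x.shape
    x = x.reshape(H // w, w, W // w, w, C).transpose(0, 2, 1, 3, 4)
    return x.reshape(-1, w * w, C)

def self_attention(tokens):
    """Plain softmax attention inside each window (no learned weights)."""
    scores = tokens @ tokens.transpose(0, 2, 1) / np.sqrt(tokens.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)   # numerically stable softmax
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)
    return attn @ tokens

x = np.arange(8 * 8 * 4, dtype=float).reshape(8, 8, 4)
windows = window_partition(x, w=4)      # 4 windows of 16 tokens each
out = self_attention(windows)
print(windows.shape, out.shape)         # (4, 16, 4) (4, 16, 4)
```

For an H×W map, full attention scales as (HW)², while windowed attention scales as HW·w², which is why window-based designs make Transformers practical for super-resolution.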

Keywords: single image super resolution, computer vision, vision transformers, image restoration

Procedia PDF Downloads 90
26880 The Evolution of National Technological Capability Roles From the Perspective of Researcher’s Transfer: A Case Study of Artificial Intelligence

Authors: Yating Yang, Xue Zhang, Chengli Zhao

Abstract:

Technology capability refers to the comprehensive ability that influences all factors of technological development. Among them, researchers’ resources serve as the foundation and driving force for technology capability, representing a significant manifestation of a country/region's technological capability. Therefore, the cross-border transfer behavior of researchers to some extent reflects changes in technological capability between countries/regions, providing a unique research perspective for technological capability assessment. This paper proposes a technological capability assessment model based on personnel transfer networks, which consists of a researchers' transfer network model and a country/region role evolution model. It evaluates the changes in a country/region's technological capability roles from the perspective of researcher transfers and conducts an analysis using artificial intelligence as a case study based on literature data. The study reveals that the United States, China, and the European Union are core nodes, and identifies the role evolution characteristics of several major countries/regions.
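
The first component of the model above, the researchers' transfer network, can be sketched as a weighted directed graph: countries/regions are nodes, and an edge u → v counts researchers moving from u to v. The flow counts below are invented for illustration; the study derives them from affiliation changes in the literature data, and uses a richer role-evolution analysis than the crude strength ranking shown here.

```python
# Toy researchers' transfer network with a strength-based "core" ranking.
# All transfer counts are hypothetical.

from collections import defaultdict

transfers = [  # (from, to, researchers)
    ("US", "CN", 120), ("CN", "US", 150), ("EU", "US", 90),
    ("US", "EU", 80), ("CN", "EU", 40), ("IN", "US", 60),
]

in_strength = defaultdict(int)
out_strength = defaultdict(int)
for src, dst, n in transfers:
    out_strength[src] += n
    in_strength[dst] += n

# Total strength as a crude proxy for "core" status in a
# central-peripheral structure:
nodes = set(in_strength) | set(out_strength)
strength = {c: in_strength[c] + out_strength[c] for c in nodes}
core = sorted(nodes, key=lambda c: -strength[c])
print(core[:3])  # ['US', 'CN', 'EU']
```

Tracking how this ranking shifts over time windows is the essence of the role-evolution perspective: a node migrating from the periphery toward the core signals growing technological capability.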

Keywords: transfer network, technological capability assessment, central-peripheral structure, role evolution

Procedia PDF Downloads 65
26879 Using Large Databases and Interviews to Explore the Temporal Phases of Technology-Based Entrepreneurial Ecosystems

Authors: Elsie L. Echeverri-Carroll

Abstract:

Entrepreneurial ecosystems have become an important concept for explaining the birth and sustainability of technology-based entrepreneurship within regions. However, as a theoretical concept, the temporal evolution of entrepreneurial ecosystems remains underdeveloped, making it difficult to understand their dynamic contributions to entrepreneurs. This paper argues that successful technology-based ecosystems go through three cumulative spawning stages: corporate spawning, entrepreneurial spawning, and community spawning. The importance of corporate incubation in vibrant entrepreneurial ecosystems is well documented in the entrepreneurial literature. Similarly, entrepreneurial spawning processes for venture capital-backed startups are well documented in the financial literature. In contrast, there is little understanding of both the third stage of entrepreneurial spawning (when a community of entrepreneurs becomes a source of firm spawning) and the temporal sequence in which spawning effects occur in a region. We test this three-stage model of entrepreneurial spawning using data from two large databases on firm births, the Secretary of State (160,000 observations) and the National Establishment Time Series (NEST, 150,000 observations), and information collected from 60 1½-hour interviews with startup founders and representatives of key entrepreneurial organizations. The temporal model is illustrated with a case study of Austin, Texas, ranked by the Kauffman Foundation as the number one entrepreneurial city in the United States in 2015 and 2016. The 1½-year study funded by the Kauffman Foundation demonstrates the importance of taking into consideration the temporal contributions of both large and entrepreneurial firms in understanding the factors that contribute to the birth and growth of technology-based entrepreneurial regions. More importantly, these learnings could offer an important road map for regions that seek to advance their entrepreneurial ecosystems.

Keywords: entrepreneurial ecosystems, entrepreneurial industrial clusters, high-technology, temporal changes

Procedia PDF Downloads 254
26878 Transformational Entrepreneurship: Exploring Pedagogy in Tertiary Education

Authors: S. Karmokar

Abstract:

Over the last 20 years, there has been increasing interest in the topic of entrepreneurship education, as it is seen in many countries as a way of enhancing the enterprise culture and promoting capability building in the community. The rapid growth of emerging technologies across the globe has also forced entrepreneurs to search for a new model of economic growth. Two movements are dominating and creating waves: Technology Entrepreneurship and Social Entrepreneurship. An increasing number of entrepreneurs are awakening to the possibility of combining the scalable tools and methodology of Technology Entrepreneurship with the value system of Social Entrepreneurship: 'Transformational Entrepreneurship'. To do this, educational institutes need to figure out how to unite the scalable tools of Technology Entrepreneurship with the moral ethos of Social Entrepreneurship. The traditional entrepreneurship education model is wedded to the top-down instructive approaches widely used in management education, which have led to a passive educational model. Despite the effort, 'disruptive' pedagogies are rare in higher education; they remain underused and often marginalized. High-impact and transformational entrepreneurship education and training require universities to adopt new practices and revise current, traditional ways of working. This is a conceptual research paper exploring the potential and growth of transformational entrepreneurship and investigating its links with social entrepreneurship. Based on empirical studies and theoretical approaches, this paper outlines educational approaches for both academics and educational institutes to deliver emerging transformational entrepreneurship in tertiary education. The paper presents recommendations for tertiary educators to inform the design of teaching practices, revise current delivery methods, and encourage students to fulfill their potential as entrepreneurs.

Keywords: educational pedagogies, emerging technologies, social entrepreneurship, transformational entrepreneurship

Procedia PDF Downloads 174
26877 A Pattern Recognition Neural Network Model for Detection and Classification of SQL Injection Attacks

Authors: Naghmeh Moradpoor Sheykhkanloo

Abstract:

Structured Query Language Injection (SQLI) is a code injection technique in which malicious SQL statements are inserted into a given SQL database by simply using a web browser. Losing data, disclosing confidential information, or even changing the value of data are among the severe damages that an SQLI attack can cause on a given database. SQLI has also been rated as the number-one attack among the top ten web application threats by the Open Web Application Security Project (OWASP). OWASP is an open community dedicated to enabling organisations to conceive, develop, acquire, operate, and maintain applications that can be trusted. In this paper, we propose an effective pattern recognition neural network model for the detection and classification of SQLI attacks. The proposed model is built from three main elements: a Uniform Resource Locator (URL) generator, to generate thousands of malicious and benign URLs; a URL classifier, to 1) classify each generated URL as either benign or malicious and 2) classify the malicious URLs into different SQLI attack categories; and an NN model, to 1) detect whether a given URL is malicious or benign and 2) identify the type of SQLI attack for each malicious URL. The model is first trained and then evaluated by employing thousands of benign and malicious URLs. The results of the experiments are presented in order to demonstrate the effectiveness of the proposed approach.
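
The front end of such a pipeline can be sketched as turning a URL query string into numeric features a neural network could consume, with a toy rule standing in for the trained network. The feature set, token list, and sample URLs below are illustrative assumptions, not the ones used in the paper.

```python
# Hedged sketch of URL feature extraction for SQLI detection. The token
# list, features, and threshold rule are invented stand-ins for the NN.

import re
from urllib.parse import unquote

SQLI_TOKENS = ("union", "select", "or 1=1", "--", "' or", "sleep(", "xp_")

def url_features(url):
    """Simple count-based features over the percent-decoded URL."""
    decoded = unquote(url).lower()
    return [
        decoded.count("'"),                      # quote characters
        decoded.count("="),                      # key=value / tautologies
        sum(decoded.count(t) for t in SQLI_TOKENS),
        len(re.findall(r"\d+", decoded)),        # numeric literals
    ]

def looks_malicious(url, threshold=2):
    """Toy stand-in for the trained NN: flag token-heavy URLs."""
    f = url_features(url)
    return f[0] + f[2] >= threshold

benign = "http://shop.example/item?id=42&page=2"
attack = "http://shop.example/item?id=42%27%20OR%201=1--"
print(looks_malicious(benign), looks_malicious(attack))  # False True
```

In the actual model, such feature vectors (or richer encodings) feed a neural network that both flags malicious URLs and assigns them to SQLI attack categories, rather than applying a fixed threshold.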

Keywords: neural networks, pattern recognition, SQL injection attacks, SQL injection attack classification, SQL injection attack detection

Procedia PDF Downloads 449
26876 Multitasking Incentives and Employee Performance: Evidence from Call Center Field Experiments and Laboratory Experiments

Authors: Sung Ham, Chanho Song, Jiabin Wu

Abstract:

Employees are commonly incentivized on both quantity and quality performance, and much of the extant literature focuses on demonstrating that multitasking incentives lead to tradeoffs. Alternatively, we consider potential solutions to the tradeoff problem from both a theoretical and an experimental perspective. Across two field experiments at a call center, we find that tradeoffs can be mitigated when incentives are jointly enhanced across tasks, where previous research has suggested that incentives be reduced instead of enhanced. In addition, we propose and test, in a laboratory setting, the implications of revising the metric used to assess quality. Our results indicate that metrics can be adjusted to align quality and quantity more efficiently; this alignment has the potential to thwart the classic tradeoff problem. Finally, we validate our findings with an economic experiment that verifies that effort is largely consistent with our theoretical predictions.

Keywords: incentives, multitasking, field experiment, experimental economics

Procedia PDF Downloads 150
26875 New Machine Learning Optimization Approach Based on Input Variables Disposition Applied for Time Series Prediction

Authors: Hervice Roméo Fogno Fotsoa, Germaine Djuidje Kenmoe, Claude Vidal Aloyem Kazé

Abstract:

One of the main applications of machine learning is the prediction of time series, and a more accurate prediction requires a more optimal machine learning model. Several optimization techniques have been developed, but without considering the disposition (ordering) of the system's input variables. This work therefore presents a new machine learning architecture optimization technique based on the optimal disposition of the input variables. Validation is done on the prediction of wind time series, using data collected in Cameroon. With four input variables, the number of possible dispositions is 4! = 24. Each disposition is used to perform the prediction, with the training and prediction performances as the main criteria. The results obtained from a static and a dynamic neural network architecture show that these performances depend on the input variable disposition, and in a different way for each architecture. This analysis reveals that the input variable disposition must be taken into account to develop a more optimal neural network model. A new neural network training algorithm is therefore proposed by introducing the search for the optimal input variable disposition into the traditional back-propagation algorithm. The results of applying this new optimization approach to the two single neural network architectures are compared step by step with the previously obtained results. Moreover, the proposed approach is validated in a collaborative optimization method with a single-objective optimization technique, i.e., genetic algorithm back-propagation neural networks. From these comparisons, it is concluded that each proposed model outperforms its traditional counterpart in terms of training and prediction performance. The proposed optimization approach can thus be useful in improving the accuracy of machine-learning-based time series prediction.
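As a sketch of the search space, the 24 dispositions of four input variables can be enumerated exhaustively and scored; the variable names and the scoring hook below are hypothetical stand-ins for training and validating the network under each ordering:

```python
from itertools import permutations

# Hypothetical input variables for a wind time series model.
VARIABLES = ("wind_speed", "temperature", "pressure", "humidity")

def best_disposition(variables, score_fn):
    """Enumerate every input ordering and keep the one with the lowest
    validation error. score_fn stands in for training and evaluating the
    network with its inputs arranged in the given order."""
    candidates = list(permutations(variables))  # 4! = 24 orderings
    errors = {order: score_fn(order) for order in candidates}
    best = min(errors, key=errors.get)
    return best, len(candidates)
```

In the proposed algorithm this search is folded into back-propagation training itself rather than run as a separate outer loop, as sketched here.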

Keywords: input variable disposition, machine learning, optimization, performance, time series prediction

Procedia PDF Downloads 85
26874 Image Instance Segmentation Using Modified Mask R-CNN

Authors: Avatharam Ganivada, Krishna Shah

Abstract:

Mask R-CNN was recently introduced by the Facebook AI Research (FAIR) team and is mainly concerned with instance segmentation in images. Mask R-CNN is based on ResNet and a feature pyramid network (FPN), and employs a single dropout method. This paper provides a modified Mask R-CNN that adds multiple dropout methods to the network. The proposed model also utilizes the concepts of ResNet and FPN to extract stage-wise network feature maps, wherein a top-down network path with lateral connections is used to obtain semantically strong features. The proposed model produces three outputs for each object in the image: class label, bounding box coordinates, and object mask. The performance of the proposed network is evaluated on the segmentation of every instance in images from the COCO and Cityscapes datasets. The proposed model achieves better performance than the state-of-the-art networks on these datasets.
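Mask R-CNN itself is a large model, but the "multiple dropout" idea can be sketched in isolation with NumPy; the two-stage head below is a hypothetical stand-in for the real network layers, not the paper's architecture:

```python
import numpy as np

def dropout(x, rate, rng, training=True):
    """Inverted dropout: zero each activation with probability `rate` and
    scale the survivors by 1/(1-rate) so the expected activation is unchanged."""
    if not training or rate == 0.0:
        return x
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

def head_forward(features, rates, rng, training=True):
    """Apply a ReLU stage followed by dropout once per entry in `rates`:
    a single-element `rates` mimics the baseline's single dropout, while
    several elements give a multiple-dropout head."""
    x = features
    for rate in rates:
        x = np.maximum(x, 0.0)  # stand-in for a conv/fc stage
        x = dropout(x, rate, rng, training)
    return x
```

At inference time all dropout layers are inactive, so the modification only regularizes training.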

Keywords: instance segmentation, object detection, convolutional neural networks, deep learning, computer vision

Procedia PDF Downloads 58
26873 The Role of Technology in Transforming the Finance, Banking, and Insurance Sectors

Authors: Farid Fahami

Abstract:

This article explores the transformative role of technology in the finance, banking, and insurance sectors. It examines key technological trends such as AI, blockchain, data analytics, and digital platforms and their impact on operations, customer experiences, and business models. The article highlights the benefits of technology adoption, including improved efficiency, cost reduction, enhanced customer experiences, and expanded financial inclusion. It also addresses challenges like cybersecurity, data privacy, and the need for upskilling. Real-world case studies demonstrate successful technology integration, and recommendations for stakeholders emphasize embracing innovation and collaboration. The article concludes by emphasizing the importance of technology in shaping the future of these sectors.

Keywords: banking, finance, insurance, technology

Procedia PDF Downloads 54
26872 Recovery of Zn from Different Çinkur Leach Residues by Acidic Leaching

Authors: Mehmet Ali Topçu, Aydın Ruşen

Abstract:

Çinkur was the only plant in Turkey producing zinc from primary zinc carbonate ore, from its establishment until 1997. After that year, zinc concentrate coming from Iran was used in the plant. Therefore, there are two different leach residues, namely Turkish leach residue (TLR) and Iranian leach residue (ILR), in the Çinkur stockpiles. This paper describes zinc recovery by sulphuric acid (H2SO4) treatment for each leach residue and includes a comparison with a blend of TLR and ILR. Before the leaching experiments, chemical, mineralogical, and thermal analyses of the three different leach residues were carried out using atomic absorption spectrometry (AAS), X-ray diffraction (XRD), and differential thermal analysis (DTA), respectively. Leaching experiments were conducted at the optimum conditions: 100 °C, 150 g/L H2SO4, and 2 hours. In the experiments, the stirring rate was kept constant at 600 r/min, which ensures complete mixing of the leaching solution. The results show that the zinc recovery for the Iranian LR was higher than for the Turkish LR, due to the different chemical compositions of the two residues.
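Recovery in such tests is usually reported as the fraction of the residue's initial zinc content that dissolves into the leach solution; a minimal mass-balance sketch with hypothetical figures, not the paper's measured values:

```python
def zn_recovery(zn_in_residue_pct, residue_mass_g, zn_in_solution_g):
    """Percent of the zinc originally present in the residue that
    reports to the pregnant leach solution."""
    zn_initial_g = residue_mass_g * zn_in_residue_pct / 100.0
    return 100.0 * zn_in_solution_g / zn_initial_g
```

For example, a 500 g residue assaying 10% Zn that yields 35 g of dissolved Zn corresponds to 70% recovery.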

Keywords: hydrometallurgy, leaching, metal extraction, metal recovery

Procedia PDF Downloads 339
26871 Optimization of Lean Methodologies in the Textile Industry Using Design of Experiments

Authors: Ahmad Yame, Ahad Ali, Badih Jawad, Daw Al-Werfalli Mohamed Nasser, Sabah Abro

Abstract:

Industries in general produce a lot of waste. The wool textile company in Baniwalid, Libya, has many complex problems that have led to enormous waste generated by the lack of lean strategies, expertise, technical support, and commitment. To successfully address waste at the wool textile company, this study will attempt to develop a methodical approach that integrates lean manufacturing tools to optimize performance characteristics such as lead time and delivery. This methodology will utilize Value Stream Mapping (VSM) techniques to identify the process variables that affect production. Once these variables are identified, Design of Experiments (DOE) methodology will be used to determine the significantly influential process variables; these variables are then controlled and set at their optimal levels to achieve optimal productivity, quality, agility, efficiency, and delivery, and the outputs of the simulation model are analyzed for different lean configurations. The goal of this research is to investigate how the tools of lean manufacturing can be adapted from the discrete to the continuous manufacturing environment and to evaluate their benefits at a specific industrial site.
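Once VSM has narrowed the candidate variables, a two-level factorial DOE estimates which ones matter. The sketch below uses hypothetical factors and lead-time responses, not the company's data, to show how main effects are computed from coded runs:

```python
import numpy as np

# Hypothetical 2^2 factorial screening of two process variables flagged
# by the value stream map: batch size (A) and changeover policy (B).
# Coded levels: -1 = current practice, +1 = proposed lean setting.
design = np.array([
    [-1, -1],
    [+1, -1],
    [-1, +1],
    [+1, +1],
])
lead_time = np.array([12.0, 9.0, 10.0, 5.0])  # hypothetical response, days

def main_effects(design, response):
    """Average change in the response when a factor moves from -1 to +1."""
    return np.array([
        response[design[:, j] == +1].mean() - response[design[:, j] == -1].mean()
        for j in range(design.shape[1])
    ])
```

With these (made-up) numbers, factor A cuts lead time by 4 days on average and factor B by 3, so both would be flagged as influential and set at their +1 levels.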

Keywords: lean manufacturing, DOE, value stream mapping, textiles

Procedia PDF Downloads 437
26870 Effect of Composition on Work Hardening Coefficient of Bismuth-Lead Binary Alloy

Authors: K. A. Mistry, I. B. Patel, A. H. Prajapati

Abstract:

In the present work, alloys of bismuth and lead are prepared in molecular-weight percentage ratios of 9:1, 5:5, and 1:9 and grown by the zone-refining technique under a vacuum atmosphere. The EDAX analyses of these samples were done and the results are reported. The microhardness test has been used as an alternative test for measuring a material's tensile properties. The effect of temperature and load on the hardness of the grown alloys has been studied. Further, comparative studies of the work hardening coefficients are reported.
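The work hardening (Meyer) index is typically extracted from microhardness data via Meyer's law, P = a·dⁿ, i.e., as the slope of log P against log d. The load/diagonal values below are hypothetical, not the measured Bi-Pb data:

```python
import numpy as np

# Hypothetical indentation data: applied load P (g) and indentation
# diagonal d (um) for one alloy composition.
P = np.array([10.0, 25.0, 50.0, 100.0])
d = np.array([12.0, 19.0, 27.0, 38.0])

# Meyer's law P = a * d**n: fit a straight line in log-log space;
# the slope n is the Meyer (work hardening) index.
n, log_a = np.polyfit(np.log(d), np.log(P), 1)
```

An index of n = 2 corresponds to load-independent hardness; deviations from 2 indicate an indentation size effect that changes with composition.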

Keywords: EDAX, hardening coefficient, micro hardness, Bi-Pb alloy

Procedia PDF Downloads 294
26869 Simulation of Optimal Runoff Hydrograph Using Ensemble of Radar Rainfall and Blending of Runoffs Model

Authors: Myungjin Lee, Daegun Han, Jongsung Kim, Soojun Kim, Hung Soo Kim

Abstract:

Recently, localized heavy rainfall and typhoons have occurred frequently due to climate change, and the resulting damage is becoming greater. Therefore, more accurate prediction of rainfall and runoff may be needed. However, gauge rainfall has limited accuracy in space. Radar rainfall is better than gauge rainfall for describing the spatial variability of rainfall, but it is mostly underestimated and involves uncertainty. Therefore, an ensemble of radar rainfall was simulated using an error structure to overcome this uncertainty relative to gauge rainfall. The simulated ensemble was used as the input data of rainfall-runoff models to obtain an ensemble of runoff hydrographs. Previous studies have discussed the accuracy of rainfall-runoff models: even if the same input data, such as rainfall, are used for runoff analysis in the same basin, the models can give different results because of the uncertainty involved in the models themselves. Therefore, we used two models, the SSARR model, which is a lumped model, and the Vflo model, which is a distributed model, and tried to simulate the optimum runoff considering the uncertainty of each rainfall-runoff model. The study basin is located in the Han river basin, and we obtained one integrated, optimum runoff hydrograph using blending methods such as Multi-Model Super Ensemble (MMSE), Simple Model Average (SMA), and Mean Square Error (MSE) weighting. From this study, we could confirm the accuracy of the rainfall and the rainfall-runoff models using the ensemble scenario and the various rainfall-runoff models, and this result can be used to study flood control measures under climate change. Acknowledgements: This work is supported by the Korea Agency for Infrastructure Technology Advancement (KAIA) grant funded by the Ministry of Land, Infrastructure and Transport (Grant 18AWMP-B083066-05).
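The blending step can be sketched as follows; the short hydrographs and the inverse-MSE weighting are hypothetical illustrations of the SMA and MSE-based methods named above, not the study's actual data or exact formulation:

```python
import numpy as np

# Hypothetical runoff hydrographs (m^3/s) from two rainfall-runoff models
# (e.g., a lumped and a distributed model) and the observed flow.
q_model_a = np.array([5.0, 20.0, 55.0, 40.0, 15.0])
q_model_b = np.array([7.0, 25.0, 45.0, 35.0, 12.0])
q_obs     = np.array([6.0, 22.0, 50.0, 38.0, 14.0])

def simple_model_average(*members):
    """SMA: unweighted mean of the ensemble members."""
    return np.mean(members, axis=0)

def inverse_mse_blend(obs, *members):
    """Weight each member by the inverse of its mean-square error against
    observations (one possible flavour of MSE-based blending)."""
    weights = np.array([1.0 / np.mean((m - obs) ** 2) for m in members])
    weights /= weights.sum()
    return sum(w * m for w, m in zip(weights, members))
```

Because errors of different models are partly uncorrelated, the blended hydrograph typically fits the observations better than either member alone.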

Keywords: radar rainfall ensemble, rainfall-runoff models, blending method, optimum runoff hydrograph

Procedia PDF Downloads 260
26868 Knowledge Sharing in Virtual Community: Societal Culture Considerations

Authors: Shahnaz Bashir, Abel Usoro, Imran Khan

Abstract:

Hofstede’s culture model is an important model for studying culture across different societies. He collected data worldwide and performed a comprehensive study. Hofstede’s cultural model is widely accepted and has been used to study cross-cultural influences in different areas such as cross-cultural psychology, cross-cultural management, information technology, and intercultural communication. This study investigates the societal cultural aspects of knowledge sharing in virtual communities.

Keywords: knowledge management, knowledge sharing, societal culture, virtual communities

Procedia PDF Downloads 388
26867 An Improved Multiple Scattering Reflectance Model Based on Specular V-Cavity

Authors: Hongbin Yang, Mingxue Liao, Changwen Zheng, Mengyao Kong, Chaohui Liu

Abstract:

Microfacet-based reflection models are widely used to model light reflection from rough surfaces. Microfacet models have become the standard surface material building block for describing specular components of varying roughness; yet, while they possess many desirable properties and produce convincing results, their design ignores important sources of scattering, which can cause a significant loss of energy. Specifically, they only simulate single scattering on the microfacets and ignore the subsequent interactions. As the roughness increases, these interactions become more and more important, so a multiple-scattering microfacet model based on the specular V-cavity has been presented for this important open problem. However, that model spends much unnecessary rendering time because it sets the same number of scattering events for surfaces of different roughness. In this paper, we design a geometric attenuation term G to compute the BRDF (bidirectional reflectance distribution function) of multiple scattering on rough surfaces. Moreover, we determine the number of scattering events by deterministic heuristics for surfaces of different roughness. As a result, our model produces an appearance of objects similar to the state-of-the-art model, with significantly improved rendering efficiency. Finally, we derive a multiple-scattering BRDF based on the original microfacet framework.
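For reference, the classic single-scattering attenuation of a specular V-cavity (the Cook-Torrance form, whose discarded energy motivates the multiple-scattering extension) can be sketched as:

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def g_vcavity(n, v, l):
    """Single-scattering geometric attenuation of a specular V-cavity
    (Cook-Torrance form): accounts for masking of the viewer and
    shadowing of the light by the opposite cavity face."""
    n, v, l = map(normalize, (n, v, l))
    h = normalize(v + l)  # half vector between view and light directions
    nh, nv, nl, vh = (float(a @ b) for a, b in ((n, h), (n, v), (n, l), (v, h)))
    return min(1.0, 2.0 * nh * nv / vh, 2.0 * nh * nl / vh)
```

At normal incidence G = 1 (no occlusion), while at grazing angles it falls well below 1; a multiple-scattering term redistributes that occluded energy into further bounces inside the cavity instead of discarding it.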

Keywords: bidirectional reflection distribution function, BRDF, geometric attenuation term, multiple scattering, V-cavity model

Procedia PDF Downloads 101
26866 The Physics of Cold Spray Technology

Authors: Ionel Botef

Abstract:

Studies show that, for qualitative coatings, the knowledge of cold spray technology must focus on a variety of interdisciplinary fields and a framework for problem solving. The integrated disciplines include, but are not limited to, engineering, material sciences, and physics. Due to its importance, the purpose of this paper is to summarize the state of the art of this technology alongside its theoretical and experimental studies, and explore the role and impact of physics upon cold spraying technology.

Keywords: surface engineering, cold spray, physics, modelling

Procedia PDF Downloads 518
26865 Virtual Modelling of Turbulent Fibre Flow in a Low Consistency Refiner for a Sustainable and Energy Efficient Process

Authors: Simon Ingelsten, Anton Lundberg, Vijay Shankar, Lars-Olof Landström, Örjan Johansson

Abstract:

The flow in a low consistency disc refiner is simulated with the aim of identifying flow structures possibly of importance for a future study to optimise the energy efficiency of refining processes. A simplified flow geometry is used, in which a single groove of a refiner disc is modelled. Two different fibre models are used to simulate turbulent fibre suspension flow in the groove. The first is a Bingham viscoplastic fluid model, in which the fibre suspension is treated as a non-Newtonian fluid with a yield stress. The second is a new model proposed in a recent study, in which the effect of the suspended fibres on the flow is accounted for through a modelled orientation distribution function (ODF). Both models yielded similar results with small differences. Certain expected flow characteristics reported in the literature were identified. Some of these flow characteristics may be of importance in a future process to optimise the refiner geometry to increase energy efficiency. Further study and a more detailed flow model are, however, needed for the simulations to yield results valid for quantitative use in such an optimisation study. An outline of the next steps in such a study is proposed.
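The first (Bingham) model is commonly implemented in CFD codes through a regularised apparent viscosity; a minimal sketch with Papanastasiou regularisation and hypothetical suspension parameters follows:

```python
import numpy as np

# Bingham viscoplastic model with Papanastasiou regularisation
# (hypothetical parameter values for a fibre suspension).
TAU_Y = 5.0     # yield stress, Pa
MU_P  = 0.01    # plastic viscosity, Pa*s
M     = 1000.0  # regularisation parameter, s

def effective_viscosity(gamma_dot):
    """Apparent viscosity mu_eff such that tau = mu_eff * gamma_dot.
    The exponential term removes the singularity at zero shear rate,
    so the model can be used directly in a Navier-Stokes solver."""
    gamma_dot = np.asarray(gamma_dot, dtype=float)
    return MU_P + TAU_Y * (1.0 - np.exp(-M * gamma_dot)) / np.maximum(gamma_dot, 1e-12)
```

At high shear rates the apparent viscosity approaches the plastic viscosity plus tau_y/gamma_dot, while near-zero shear rates give a very large but finite viscosity that mimics the unyielded plug.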

Keywords: disc refiner, fibre flow, sustainability, turbulence modelling

Procedia PDF Downloads 391
26864 How Manufacturing Firm Manages Information Security: Need Pull and Technology Push Perspective

Authors: Geuna Kim, Sanghyun Kim

Abstract:

This study investigates various factors that may influence the information security management (ISM) process, including the organization’s internal needs (need pull, NP) and external technology pressure (technology push, TP), and examines the role of regulatory pressure in ISM development and performance. The 105 sets of data collected in a survey were tested against the research model using structural equation modeling (SEM). The results indicate that NP and TP had positive effects on the ISM process, except for perceived benefits. Regulatory pressure had a positive effect on the relationship between ISM awareness and ISM development and performance.

Keywords: information security management, need pull, technology push, regulatory pressure

Procedia PDF Downloads 276
26863 Measuring the Effect of Ventilation on Cooking in Indoor Air Quality by Low-Cost Air Sensors

Authors: Andres Gonzalez, Adam Boies, Jacob Swanson, David Kittelson

Abstract:

Concern about indoor air quality (IAQ) has been increasing due to its risk to human health. Smoking, sweeping, and stove and stovetop use are the activities that contribute most to indoor air pollution. Outdoor air pollution also affects IAQ. The most important factors for IAQ during cooking activities are the materials, fuels, foods, and ventilation. Low-cost, mobile air quality monitoring (LCMAQM) sensors are an accessible technology for assessing IAQ because of their lower cost compared to conventional instruments. The IAQ was assessed, using LCMAQM sensors, during cooking activities in University of Minnesota graduate housing, evaluating different ventilation systems. The gases measured are carbon monoxide (CO) and carbon dioxide (CO2). The particle metrics measured are particulate matter smaller than 2.5 µm (PM2.5) and lung deposited surface area (LDSA). The measurements were conducted during April 2019 in the Como Student Community Cooperative (CSCC), a graduate housing complex at the University of Minnesota, using an electric stove for cooking. The amount and type of food and oil used for cooking are the same for each measurement. There are six measurements: two experiments measure air quality without any ventilation, two use an extractor as mechanical ventilation, and two use the extractor with windows open as combined mechanical and natural ventilation. The results of the experiments show that natural ventilation is the most efficient system for controlling particles and CO2. Natural ventilation reduces the concentration by 79% for LDSA and 55% for PM2.5, compared to no ventilation, and likewise reduces the CO2 concentration by 35%. A well-mixed vessel model was implemented to assess the particle formation and decay rates. Removal rates by the extractor were significantly higher for LDSA, which is dominated by smaller particles, than for PM2.5, but in both cases much lower than with natural ventilation. There was significant day-to-day variation in particle concentrations under nominally identical conditions. This may be related to the fat content of the food. Further research is needed to assess the impact of the fat in food on particle generation.
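The well-mixed vessel model treats the room as a single zone, so after the cooking source stops, the concentration decays exponentially, C(t) = C0·exp(-kt), and the removal rate k (air exchange plus deposition) is recovered by a log-linear fit. The numbers below are synthetic, not the measured CSCC data:

```python
import numpy as np

# Synthetic post-cooking decay of a pollutant concentration in a
# well-mixed single zone: C(t) = C0 * exp(-k t).
t = np.array([0.0, 10.0, 20.0, 30.0, 40.0])  # minutes after source off
c = 120.0 * np.exp(-0.08 * t)                # ug/m^3, synthetic decay

# Fit log C against t; the negated slope is the total removal rate k.
k = -np.polyfit(t, np.log(c), 1)[0]          # per-minute removal rate
```

Comparing k across the no-ventilation, extractor, and open-window runs quantifies how much each ventilation mode adds to the background removal rate.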

Keywords: cooking, indoor air quality, low-cost sensor, ventilation

Procedia PDF Downloads 96