Search results for: corridor densification option model
13923 Safety Approach Highway Alignment Optimization
Authors: Seyed Abbas Tabatabaei, Marjan Naderan Tahan, Arman Kadkhodai
Abstract:
An efficient optimization approach, called feasible gate (FG), is developed to enhance the computational efficiency and solution quality of the previously developed highway alignment optimization (HAO) model. This approach seeks to realistically represent various user preferences and environmentally sensitive areas and to consider them along with geometric design constraints in the optimization process. This is done by avoiding the generation of infeasible solutions that violate various constraints and thus focusing the search on feasible solutions. The proposed method is simple but significantly improves the model's computation time and solution quality. On the other hand, highway alignment optimization through feasible gates yields a purely economic model that considers only minimum design constraints, namely the minimum radius of circular curves, the minimum length of vertical curves and the maximum road gradient. Such modelling can reduce passenger comfort and road safety. In most highway optimization models, a penalty function is added for each constraint, so the final result merely satisfies the minimum constraints. In this paper, we propose a safety-oriented solution by introducing a gift function.
Keywords: safety, highway geometry, optimization, alignment
Procedia PDF Downloads 413
13922 Modeling of Anisotropic Hardening Based on Crystal Plasticity Theory and Virtual Experiments
Authors: Bekim Berisha, Sebastian Hirsiger, Pavel Hora
Abstract:
Advanced material models involving several sets of model parameters require a large experimental effort. As models become more and more complex, like e.g. the so-called “Homogeneous Anisotropic Hardening” (HAH) model for the description of the yielding behavior in the 2D/3D stress space, the number and complexity of the required experiments also increase continuously. In the context of sheet metal forming, these requirements are even more pronounced because of the anisotropic behavior of sheet materials. In addition, some of the experiments are very difficult to perform, e.g. the plane stress biaxial compression test. Accordingly, tensile tests in at least three directions, biaxial tests and tension-compression or shear-reverse shear experiments are performed to determine the parameters of the macroscopic models. Therefore, determination of the macroscopic model parameters based on virtual experiments is a very promising strategy to overcome these difficulties. For this purpose, in the framework of multiscale material modeling, a dislocation density based crystal plasticity model in combination with an FFT-based spectral solver is applied to perform virtual experiments. Modeling of the plastic behavior of metals based on crystal plasticity theory is a well-established methodology. However, in general, the computation time is very high and therefore the computations are restricted to simplified microstructures as well as simple polycrystal models. In this study, a dislocation density based crystal plasticity model (including an implementation of the backstress) is used in a spectral solver framework to generate virtual experiments for three deep drawing materials: DC05 steel and the AA6111-T4 and AA4045 aluminum alloys. For this purpose, uniaxial as well as multiaxial loading cases, including various pre-strain histories, have been computed and validated against real experiments. These investigations showed that crystal plasticity modeling in the framework of Representative Volume Elements (RVEs) can be used to replace most of the expensive real experiments. Further, model parameters of advanced macroscopic models like the HAH model can be determined from virtual experiments, even for multiaxial deformation histories. It was also found that crystal plasticity modeling can be used to model anisotropic hardening more accurately by considering the backstress, similar to well-established macroscopic kinematic hardening models. It can be concluded that an efficient coupling of crystal plasticity models and the spectral solver leads to a significant reduction of the number of real experiments needed to calibrate macroscopic models. This advantage also leads to a significant reduction of the computational effort needed for the optimization of metal forming processes. Further, due to the time-efficient spectral solver used in the computation of the RVE models, detailed modeling of the microstructure is possible.
Keywords: anisotropic hardening, crystal plasticity, microstructure, spectral solver
Procedia PDF Downloads 318
13921 Artificial Intelligence Methods for Returns Expectations in Financial Markets
Authors: Yosra Mefteh Rekik, Younes Boujelbene
Abstract:
We introduce in this paper a new conceptual model representing stock market dynamics. This model is essentially based on the cognitive behavior of intelligent investors. In order to validate our model, we build an artificial stock market simulation based on agent-oriented methodologies. The proposed simulator is composed of a market supervisor agent, essentially responsible for executing transactions via an order book, and various kinds of investor agents according to their profiles. The purpose of this simulation is to understand the influence of the psychological character of an investor and of its neighborhood on its decision-making, and their impact on the market in terms of price fluctuations. The difficulty of prediction is due to several features: the complexity, non-linearity and dynamism of the financial market system, as well as investor psychology. Artificial neural network learning mechanisms take on the role of traders, who form their future return expectations and place orders based on those expectations. The results of an intensive analysis indicate that the existence of agents with heterogeneous beliefs and preferences provides a better understanding of price dynamics in the financial market.
Keywords: artificial intelligence methods, artificial stock market, behavioral modeling, multi-agent based simulation
Procedia PDF Downloads 449
13920 A Learning-Based EM Mixture Regression Algorithm
Authors: Yi-Cheng Tian, Miin-Shen Yang
Abstract:
The mixture likelihood approach to clustering is a popular clustering method, and the expectation-maximization (EM) algorithm is the most widely used mixture likelihood method. In the literature, the EM algorithm has been used for mixture regression models. However, these EM mixture regression algorithms are sensitive to initial values and require an a priori number of clusters. In this paper, to resolve these drawbacks, we construct a learning-based schema for the EM mixture regression algorithm such that it is free of initializations and can automatically obtain an approximately optimal number of clusters. Some numerical examples and comparisons demonstrate the superiority and usefulness of the proposed learning-based EM mixture regression algorithm.
Keywords: clustering, EM algorithm, Gaussian mixture model, mixture regression model
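As an illustration of the standard EM mixture regression that the learning-based schema above builds on, the following sketch fits a two-component Gaussian mixture of linear regressions to synthetic data; the random initialization, fixed component count and toy data are assumptions for demonstration only, not the authors' learning-based procedure.

```python
# Illustrative baseline: EM for a two-component Gaussian mixture of linear regressions.
import numpy as np

def em_mixture_regression(X, y, n_components=2, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    Xb = np.hstack([np.ones((n, 1)), X])              # add intercept column
    beta = rng.normal(size=(n_components, d + 1))      # regression coefficients
    sigma2 = np.ones(n_components)                     # component noise variances
    pi = np.full(n_components, 1.0 / n_components)     # mixing proportions

    for _ in range(n_iter):
        # E-step: responsibilities of each component for each observation
        resid = y[:, None] - Xb @ beta.T                # (n, K) residuals
        dens = np.exp(-0.5 * resid**2 / sigma2) / np.sqrt(2 * np.pi * sigma2)
        r = pi * dens
        r /= r.sum(axis=1, keepdims=True)

        # M-step: weighted least squares per component
        for k in range(n_components):
            W = r[:, k]
            XtWX = Xb.T @ (W[:, None] * Xb)
            beta[k] = np.linalg.solve(XtWX, Xb.T @ (W * y))
            sigma2[k] = (W * (y - Xb @ beta[k])**2).sum() / W.sum()
        pi = r.mean(axis=0)
    return beta, sigma2, pi

# Toy usage: two linear regimes with different slopes
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(200, 1))
y = np.where(rng.random(200) < 0.5, 2 * X[:, 0], -3 * X[:, 0]) + 0.1 * rng.normal(size=200)
print(em_mixture_regression(X, y)[0])
```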
Procedia PDF Downloads 513
13919 Orthogonal Metal Cutting Simulation of Steel AISI 1045 via Smoothed Particle Hydrodynamic Method
Authors: Seyed Hamed Hashemi Sohi, Gerald Jo Denoga
Abstract:
Machining or metal cutting is one of the most widely used production processes in industry. The quality of the process and the resulting machined product depend on parameters like tool geometry, material, and cutting conditions. However, the relationships of these parameters to the cutting process are often based mostly on empirical knowledge. In this study, computer modeling and simulation using LS-DYNA software and a Smoothed Particle Hydrodynamic (SPH) methodology was performed on the orthogonal metal cutting process to analyze three-dimensional deformation of AISI 1045 medium carbon steel during machining. The simulation was performed using the following constitutive models: the Power Law model, the Johnson-Cook model, and the Zerilli-Armstrong (Z-A) model. The outcomes were compared against the simulated results obtained by Cenk Kiliçaslan using the Finite Element Method (FEM) and the empirical results of Jaspers and Filice. The analysis shows that the SPH method combined with the Zerilli-Armstrong constitutive model is a viable alternative for simulating the metal cutting process. The tangential force was overestimated by 7%, and the normal force was underestimated by 16% when compared with empirical values. The simulation values for flow stress versus strain at various temperatures were also validated against empirical values. The SPH method using the Z-A model has also proven to be robust against issues of time-scaling. Experimental work was also done to investigate the effects of friction, rake angle and tool tip radius on the simulation.
Keywords: metal cutting, smoothed particle hydrodynamics, constitutive models, experimental, cutting force analyses
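For reference, the Johnson-Cook constitutive relation named above takes the form sigma = (A + B*eps^n)(1 + C*ln(rate/rate0))(1 - T*^m). The sketch below evaluates it with AISI 1045 parameter values that are assumptions taken as typical literature figures, not necessarily the ones used in the simulation.

```python
# Sketch of the Johnson-Cook flow-stress relation; the AISI 1045 parameters
# below are illustrative assumptions (published values vary between sources).
import numpy as np

def johnson_cook_stress(strain, strain_rate, T,
                        A=553.1e6, B=600.8e6, n=0.234,   # A, B in Pa (assumed values)
                        C=0.0134, eps_dot_0=1.0,
                        m=1.0, T_room=293.0, T_melt=1733.0):
    """Flow stress sigma = (A + B*eps^n) * (1 + C*ln(rate/rate0)) * (1 - T*^m)."""
    T_star = (T - T_room) / (T_melt - T_room)            # homologous temperature
    hardening = A + B * strain**n
    rate_term = 1.0 + C * np.log(strain_rate / eps_dot_0)
    thermal = 1.0 - np.clip(T_star, 0.0, 1.0)**m
    return hardening * rate_term * thermal

# Example: flow stress at 20% plastic strain, 1000 1/s strain rate, 500 K
print(johnson_cook_stress(0.2, 1.0e3, 500.0) / 1e6, "MPa")
```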
Procedia PDF Downloads 264
13918 Construct the Fur Input Mixed Model with Activity-Based Benefit Assessment Approach of Leather Industry
Authors: M. F. Wu, F. T. Cheng
Abstract:
The leather industry is the most important traditional industry, having provided leather products to the world for thousands of years. The fierce global competitive environment and the common awareness of global carbon reduction are causing livestock supply to fall, reducing the availability of salted and wet blue leather material and raising its price significantly. Exchange rate fluctuations have led to decreasing sales revenue, owing to differences in export exchange rates, and have compressed the overall profitability of the leather industry. This paper applies an activity-based benefit assessment approach to build a fitness fur input mixed model, where the fur input is wet blue, concerned with four key factors: the output rate of wet blue, the unit cost of wet blue, the yield rate and the grade level of wet blue, in order to achieve a low-cost strategy under the company's given unit price of the leather product. The research findings indicate that applying this model may improve the input cost structure, decrease the number of leather product inventories and raise the competitive advantages of the enterprise in the future.
Keywords: activity-based benefit assessment approach, input mixed, output rate, wet blue
Procedia PDF Downloads 379
13917 Anton Bruckner’s Requiem in Dm: The Reinterpretation of a Liturgical Genre in the Viennese Romantic Context
Authors: Sara Ramos Contioso
Abstract:
The premiere of Anton Bruckner's Requiem in D minor, in September 1849, represents a turning point in the composer's creative evolution. This Mass for the Dead, dedicated to the memory of his esteemed friend and mentor Franz Sailer, marks the beginning of a new creative aesthetic in the composer's production and links its liturgical development, contextualized in the monastery of St. Florian, to a range of musical possibilities that Bruckner projects onto an orchestral texture with choir and organ. Set on a strict Tridentine ritual model, this requiem exemplifies the religious aesthetics of a composer committed to the Catholic faith, and it also links to its structure the reinterpretation of a religious model that, despite being romantic, shows a strong influence derived from the Baroque and from the language of Viennese Classicism. Consequently, the study responds to the need to show the survival of the Requiem Mass within the romantic context of Vienna. Therefore, it draws on a detailed analysis of the score and of the composer's creative context, with the intention of linking the work to the tradition of the genre and also specifying the stylistic particularities of its musical model within a variety of possibilities, such as the contrasting precedents of the requiems of Mozart, Haydn, Cherubini or Berlioz. Tradition or modernity, liturgy or concert hall are aesthetic references that conditioned the development of the Requiem Mass in the middle of the nineteenth century. In this context, this paper tries to recover Bruckner's Requiem in D minor as a musical model of the romantic ritual of the deceased and as a stylistic reference for a creative composition that would condition the development of later liturgical works, such as those of Liszt or de Lange (1868).
Keywords: liturgy, religious symbolism, requiem, romanticism
Procedia PDF Downloads 340
13916 Enhancing Project Management Performance in Prefabricated Building Construction under Uncertainty: A Comprehensive Approach
Authors: Niyongabo Elyse
Abstract:
Prefabricated building construction is a pioneering approach that combines design, production, and assembly to attain energy efficiency, environmental sustainability, and economic feasibility. Despite continuous development of the industry in China, the low technical maturity of standardized design, factory production, and construction assembly introduces uncertainties affecting prefabricated component production and on-site assembly processes. This research focuses on enhancing project management performance under uncertainty to help enterprises navigate these challenges and optimize project resources. The study introduces a perspective on how uncertain factors influence the implementation of prefabricated building construction projects. It proposes a theoretical model considering project process management ability, adaptability to uncertain environments, and the collaboration ability of project participants. The impact of uncertain factors is demonstrated through case studies and quantitative analysis, revealing constraints on implementation time, cost, quality, and safety. To address uncertainties in prefabricated component production scheduling, a fuzzy model is presented, expressing processing times as interval values. The model utilizes a cooperative co-evolutionary algorithm (CCEA) to optimize scheduling, demonstrated through a real case study showcasing reduced project duration and minimized effects of processing time disturbances. Additionally, the research addresses on-site assembly construction scheduling, considering the relationship between task processing times and assigned resources. A multi-objective model with fuzzy activity durations is proposed, employing a hybrid cooperative co-evolutionary algorithm (HCCEA) to optimize project scheduling. Results from real case studies indicate improved project performance in terms of duration, cost, and resilience to processing time delays and resource changes. The study also introduces a multi-stage dynamic process control model, utilizing IoT technology for real-time monitoring during component production and construction assembly. This approach dynamically adjusts schedules when constraints arise, leading to enhanced project management performance, as demonstrated in a real prefabricated housing project. Key contributions include a fuzzy prefabricated component production scheduling model, a multi-objective multi-mode resource-constrained construction project scheduling model with fuzzy activity durations, a multi-stage dynamic process control model, and a cooperative co-evolutionary algorithm. The integrated mathematical model addresses the complexity of prefabricated building construction project management, providing a theoretical foundation for practical decision-making in the field.
Keywords: prefabricated construction, project management performance, uncertainty, fuzzy scheduling
Procedia PDF Downloads 55
13915 Re-identification Risk and Mitigation in Federated Learning: Human Activity Recognition Use Case
Authors: Besma Khalfoun
Abstract:
In many current Human Activity Recognition (HAR) applications, users' data is frequently shared and centrally stored by third parties, posing a significant privacy risk. This practice makes these entities attractive targets for extracting sensitive information about users, including their identity, health status, and location, thereby directly violating users' privacy. To tackle the issue of centralized data storage, a relatively recent paradigm known as federated learning (FL) has emerged. In this approach, users' raw data remains on their smartphones, where they train the HAR model locally. However, users still share updates of their local models originating from raw data. These updates are vulnerable to several attacks designed to extract sensitive information, such as determining whether a data sample was used in the training process, recovering the training data with inversion attacks, or inferring a specific attribute or property of the training data. In this paper, we first introduce PUR-Attack, a parameter-based user re-identification attack developed for HAR applications within a federated learning setting. It involves associating anonymous model updates (i.e., local models' weights or parameters) with the originating user's identity using background knowledge. PUR-Attack relies on a simple yet effective machine learning classifier and produces promising results. Specifically, we have found that by considering the weights of a given layer in a HAR model, we can uniquely re-identify users with an attack success rate of almost 100%. This result holds when considering a small attack training set and various data splitting strategies in the HAR model training. Thus, it is crucial to investigate protection methods to mitigate this privacy threat. Along this path, we propose SAFER, a privacy-preserving mechanism based on adaptive local differential privacy. Before sharing the model updates with the FL server, SAFER adds the optimal noise based on the re-identification risk assessment. Our approach can achieve a promising tradeoff between privacy, in terms of reducing re-identification risk, and utility, in terms of maintaining acceptable accuracy for the HAR model.
Keywords: federated learning, privacy risk assessment, re-identification risk, privacy preserving mechanisms, local differential privacy, human activity recognition
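A minimal sketch of the two ideas described above, re-identification treated as supervised classification over flattened layer weights, and Laplace perturbation of updates before sharing, is given below on synthetic data; the classifier choice, noise scale and data are assumptions and do not reproduce PUR-Attack or SAFER exactly.

```python
# Minimal sketch: (1) user re-identification as classification over flattened
# layer weights, (2) Laplace perturbation of updates before sharing.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, rounds_per_user, layer_dim = 20, 30, 128

# Synthetic "layer weights": each user's updates cluster around a user-specific mean
user_means = rng.normal(scale=1.0, size=(n_users, layer_dim))
updates = np.vstack([m + 0.3 * rng.normal(size=(rounds_per_user, layer_dim))
                     for m in user_means])
labels = np.repeat(np.arange(n_users), rounds_per_user)

# (1) Attacker trains on labelled updates (background knowledge), then re-identifies
X_tr, X_te, y_tr, y_te = train_test_split(updates, labels, test_size=0.3,
                                          stratify=labels, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("re-identification accuracy, no protection:", clf.score(X_te, y_te))

# (2) Defence sketch: add Laplace noise to the shared updates before release
def perturb(w, epsilon=1.0, sensitivity=1.0):
    return w + rng.laplace(scale=sensitivity / epsilon, size=w.shape)

X_te_noisy = perturb(X_te, epsilon=0.5)
print("re-identification accuracy, noisy updates:", clf.score(X_te_noisy, y_te))
```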
Procedia PDF Downloads 17
13914 System Identification and Quantitative Feedback Theory Design of a Lathe Spindle
Authors: M. Khairudin
Abstract:
This paper investigates system identification and quantitative feedback theory (QFT) design for the robust control of a lathe spindle. The dynamics of the lathe spindle are uncertain and time-varying due to variation of the depth of cut during the cutting process. System identification was used to obtain a dynamic model of the lathe spindle. In this work, real-time system identification is used to construct a linear model of the system from the nonlinear system. These linear models and their uncertainty bounds can then be used for controller synthesis. The real-time nonlinear system identification process yields a set of linear models of the lathe spindle that represents the operating ranges of the dynamic system. With a selected input signal, the output response data are acquired and nonlinear system identification is performed using Matlab to obtain a linear model of the system. Practical design steps are presented in which the QFT-based conditions are formulated to obtain a compensator and pre-filter to control the lathe spindle. The performance of the proposed controller is evaluated in terms of the velocity responses of the lathe machine spindle, incorporating the depth of cut in the cutting process.
Keywords: lathe spindle, QFT, robust control, system identification
Procedia PDF Downloads 545
13913 The Impacts of an Adapted Literature Circle Model on Reading Comprehension, Engagement, and Cooperation in an EFL Reading Course
Authors: Tiantian Feng
Abstract:
There is a dearth of research on the literature circle as a teaching strategy in English as a Foreign Language (EFL) classes in Chinese colleges and universities, and even fewer empirical studies on its impacts. In this one-quarter, design-based project, the researcher aims to increase students' engagement, cooperation, and, on top of that, reading comprehension performance by utilizing a researcher-developed, adapted reading circle model in an EFL reading course at a Chinese college. The model also integrated team-based learning and portfolio assessment, with an emphasis on the specialization of individual responsibilities, contributions, and outcomes in reading projects, with the goal of addressing current issues in EFL classes at Chinese colleges, such as passive learning, test orientation, ineffective and uncooperative teamwork, and lack of dynamics. In this quasi-experimental research, two groups of students enrolled in the course were invited to participate in four in-class team projects, with the intervention class following the adapted literature circle model and team members rotating as Leader, Coordinator, Brain trust, and Reporter. The researcher/instructor used a sequential explanatory mixed-methods approach to quantitatively analyze the final grades for the pre- and post-tests, as well as individual scores for team projects, and will code students' artifacts in the next step, with the results to be reported in subsequent papers. Initial analysis showed that both groups saw an increase in final grades, but the intervention group enjoyed a more significant boost, suggesting that the adapted reading circle model is effective in improving students' reading comprehension performance. This research not only closes the empirical research gap on literature circles in college EFL classes in China but also adds to the pool of effective ways to optimize reading comprehension performance and class performance in college EFL classes.
Keywords: literature circle, EFL teaching, college English reading, reading comprehension
Procedia PDF Downloads 104
13912 Financing Innovation: Differences across National Innovation Systems
Authors: Núria Arimany Serrat, Xavier Ferràs Hernández, Petra A. Nylund, Eric Viardot
Abstract:
Innovation is an increasingly important antecedent of firm competitiveness and growth. Successful innovation, however, requires a significant financial commitment, and the means of financing accessible to the firm may affect its ability to innovate. Access to equity financing such as venture capital has been connected to innovativeness for young firms. For established enterprises, debt financing of innovation may be a more realistic option. Continuous innovation and growth would otherwise require a constant increase of equity. We therefore investigate the relation between debt financing and innovation for large firms and hypothesize that firms that carry more debt will be more innovative. The need for debt financing of innovation may be reduced for very profitable firms, which can finance innovation with cash flow. We thus hypothesize a moderating effect of profitability on the relationship between debt financing and innovation. We carry out an empirical investigation using a longitudinal data set including 167 large European firms over five years, resulting in 835 firm-years. We apply generalized least squares (GLS) regression with fixed firm effects to control for firm heterogeneity. The findings support our hypotheses, and we conclude that access to debt financing is an important antecedent of innovation, with profitability as a moderating factor. The results do, however, differ across national innovation systems: we find a strong relationship for British, Dutch, French, and Italian firms but not for German and Spanish entities. We discuss differences in the national systems of innovation and financing which contextualize the variations in the findings and thus make a nuanced contribution to the research on innovation financing. The cross-country differences call for differentiated advice to managers, institutions, and researchers depending on the national context.
Keywords: innovation, R&D, national innovation systems, financing
Procedia PDF Downloads 534
13911 Improving Subjective Bias Detection Using Bidirectional Encoder Representations from Transformers and Bidirectional Long Short-Term Memory
Authors: Ebipatei Victoria Tunyan, T. A. Cao, Cheol Young Ock
Abstract:
Detecting subjectively biased statements is a vital task. This kind of bias, when present in text or in other forms of information dissemination media such as news, social media, scientific texts, and encyclopedias, can weaken trust in the information and stir conflicts amongst consumers. Subjective bias detection is also critical for many Natural Language Processing (NLP) tasks like sentiment analysis, opinion identification, and bias neutralization. Having a system that can adequately detect subjectivity in text will boost research in the above-mentioned areas significantly. It can also come in handy for platforms like Wikipedia, where the use of neutral language is of importance. The goal of this work is to identify subjectively biased language in text at the sentence level. With machine learning, we can solve complex AI problems, making it a good fit for the problem of subjective bias detection. A key step in this approach is to train a classifier based on BERT (Bidirectional Encoder Representations from Transformers) as the upstream model. BERT by itself can be used as a classifier; however, in this study, we use BERT as a data preprocessor as well as an embedding generator for a Bi-LSTM (Bidirectional Long Short-Term Memory) network incorporating an attention mechanism. This approach produces a deeper and better classifier. We evaluate the effectiveness of our model using the Wiki Neutrality Corpus (WNC) as a benchmark dataset, which was compiled from Wikipedia edits that removed various biased instances from sentences, and on which we also compare our model to existing approaches. Experimental analysis indicates an improved performance, as our model achieved state-of-the-art accuracy in detecting subjective bias. This study focuses on the English language, but the model can be fine-tuned to accommodate other languages.
Keywords: subjective bias detection, machine learning, BERT–BiLSTM–Attention, text classification, natural language processing
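A schematic of the BERT-to-BiLSTM-with-attention pipeline described above is sketched below in PyTorch; the checkpoint name, hidden sizes and the additive attention form are assumptions rather than the authors' exact configuration.

```python
# Schematic BERT -> BiLSTM -> attention -> classifier; sizes and model name assumed.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class BertBiLSTMAttention(nn.Module):
    def __init__(self, bert_name="bert-base-uncased", hidden=256, n_classes=2):
        super().__init__()
        self.bert = AutoModel.from_pretrained(bert_name)
        self.lstm = nn.LSTM(self.bert.config.hidden_size, hidden,
                            batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)      # additive attention scores
        self.out = nn.Linear(2 * hidden, n_classes)

    def forward(self, input_ids, attention_mask):
        # BERT acts as the contextual embedding generator (frozen or fine-tuned)
        emb = self.bert(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        h, _ = self.lstm(emb)                                  # (B, T, 2*hidden)
        scores = self.attn(h).squeeze(-1)                      # (B, T)
        scores = scores.masked_fill(attention_mask == 0, -1e9)
        weights = torch.softmax(scores, dim=-1).unsqueeze(-1)  # (B, T, 1)
        context = (weights * h).sum(dim=1)                     # attention-pooled sentence vector
        return self.out(context)                               # subjective vs. neutral logits

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
batch = tok(["The senator bravely defended the bill."], return_tensors="pt",
            padding=True, truncation=True)
model = BertBiLSTMAttention()
print(model(batch["input_ids"], batch["attention_mask"]).shape)  # torch.Size([1, 2])
```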
Procedia PDF Downloads 134
13910 Dynamic Response of Doubly Curved Composite Shell with Embedded Shape Memory Alloys Wires
Authors: Amin Ardali, Mohammadreza Khalili, Mohammadreza Rezai
Abstract:
In this paper, the dynamic response of a thin smart composite panel subjected to low-velocity transverse impact is investigated. Shape memory wires are used to reinforce the curved composite panel in a smart way. The one-dimensional thermodynamic constitutive model of Liang and Rogers is used for estimating the structural recovery stress. A two-degrees-of-freedom mass-spring model is used to evaluate the contact force between the curved composite panel and the impactor. This work benefits from the Hertzian contact model, which is linearized for the impact analysis of the curved composite panel. The governing equations of the curved panel are provided by first-order shear theory and solved by Fourier series corresponding to simply supported boundary conditions. For this purpose, the equation of motion of the doubly curved panel, including the uniform in-plane forces, is obtained. Using the present analysis, the behavior of the curved panel under low-velocity impact, as well as the effects of the impact parameters, the shape memory wires and the curved panel dimensions, is studied.
Keywords: doubly curved shell, SMA wire, impact response, smart material, shape memory alloy
Procedia PDF Downloads 409
13909 Unsupervised Learning and Similarity Comparison of Water Mass Characteristics with Gaussian Mixture Model for Visualizing Ocean Data
Authors: Jian-Heng Wu, Bor-Shen Lin
Abstract:
The temperature-salinity relationship is one of the most important characteristics used for identifying water masses in marine research. Temperature-salinity characteristics, however, may change dynamically with respect to the geographic location and are quite sensitive to depth at the same location. When depth is taken into consideration, it is not easy to compare the characteristics of different water masses efficiently over a wide range of areas of the ocean. In this paper, the Gaussian mixture model is proposed to analyze the temperature-salinity-depth characteristics of water masses, based on which comparisons between water masses may be conducted. A Gaussian mixture model can model the distribution of a random vector and is formulated as a weighted sum of a set of multivariate normal distributions. The temperature-salinity-depth data for different locations are first used to train a set of Gaussian mixture models individually. The distance between two Gaussian mixture models can then be defined as the weighted sum of pairwise Bhattacharyya distances among the Gaussian distributions. Consequently, the distance between two water masses may be measured quickly, which allows automatic and efficient comparison of water masses over a wide area. The proposed approach not only approximates the distribution of temperature, salinity, and depth directly, without prior knowledge of a regression family, but also restricts the model complexity by controlling the number of mixture components when the samples are unevenly distributed. In addition, it is critical for knowledge discovery in marine research to represent, manage and share the temperature-salinity-depth characteristics flexibly and responsively. The proposed approach has been applied to a real-time visualization system for ocean data, which may facilitate the comparison of water masses by aggregating the data without degrading the discriminating capabilities. This system provides an interface for querying geographic locations with similar temperature-salinity-depth characteristics interactively and for tracking specific patterns of water masses, such as the Kuroshio near Taiwan or those in the South China Sea.
Keywords: water mass, Gaussian mixture model, data visualization, system framework
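The comparison described above can be sketched as follows: fit one Gaussian mixture per location on (temperature, salinity, depth) samples and score the distance between locations as the weighted sum of pairwise Bhattacharyya distances between components; the synthetic samples and the number of components below are assumptions for illustration.

```python
# Sketch: per-location GMMs on (T, S, depth) and a weighted pairwise
# Bhattacharyya distance between the two mixture models.
import numpy as np
from sklearn.mixture import GaussianMixture

def bhattacharyya(mu1, cov1, mu2, cov2):
    cov = 0.5 * (cov1 + cov2)
    diff = mu1 - mu2
    term1 = 0.125 * diff @ np.linalg.solve(cov, diff)
    term2 = 0.5 * np.log(np.linalg.det(cov) /
                         np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return term1 + term2

def gmm_distance(gmm_a, gmm_b):
    # Weighted sum of pairwise component distances (one simple aggregation choice)
    d = 0.0
    for wa, ma, ca in zip(gmm_a.weights_, gmm_a.means_, gmm_a.covariances_):
        for wb, mb, cb in zip(gmm_b.weights_, gmm_b.means_, gmm_b.covariances_):
            d += wa * wb * bhattacharyya(ma, ca, mb, cb)
    return d

rng = np.random.default_rng(0)
site_a = rng.normal([15.0, 34.5, 200.0], [2.0, 0.3, 80.0], size=(500, 3))  # T, S, depth
site_b = rng.normal([22.0, 34.9, 100.0], [1.5, 0.2, 50.0], size=(500, 3))

gmm_a = GaussianMixture(n_components=3, covariance_type="full", random_state=0).fit(site_a)
gmm_b = GaussianMixture(n_components=3, covariance_type="full", random_state=0).fit(site_b)
print("distance between water-mass models:", gmm_distance(gmm_a, gmm_b))
```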
Procedia PDF Downloads 151
13908 Model Predictive Control of Turbocharged Diesel Engine with Exhaust Gas Recirculation
Authors: U. Yavas, M. Gokasan
Abstract:
Control of a diesel engine's air path has drawn a lot of attention due to its multi-input multi-output, closely coupled, non-linear dynamics. Today, precise control of the amount of air to be combusted is a must in order to meet tight emission limits and performance targets. In this study, a passenger-car-size diesel engine is modeled in AVL Boost RT and then simulated with standard, industry-level PID controllers. Finally, a linear model predictive controller is designed and simulated. This study shows the importance of modeling and control of diesel engines with flexible algorithm development in computer-based systems.
Keywords: predictive control, engine control, engine modeling, PID control, feedforward compensation
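As a minimal illustration of the kind of linear model predictive control applied above, the sketch below runs an unconstrained receding-horizon controller on a two-state surrogate system; the system matrices, horizon and weights are assumptions and do not represent the AVL Boost RT engine model.

```python
# Toy receding-horizon (unconstrained) linear MPC on an illustrative 2-state system.
import numpy as np

A = np.array([[0.9, 0.1], [0.0, 0.8]])   # assumed discrete-time dynamics
B = np.array([[0.0], [0.5]])
C = np.array([[1.0, 0.0]])
N = 10                                    # prediction horizon
Q, R = 10.0, 0.1                          # output-tracking vs. control-effort weights

# Build lifted prediction matrices: Y = F x0 + G U over the horizon
F = np.vstack([C @ np.linalg.matrix_power(A, i + 1) for i in range(N)])
G = np.zeros((N, N))
for i in range(N):
    for j in range(i + 1):
        G[i, j] = (C @ np.linalg.matrix_power(A, i - j) @ B).item()

def mpc_step(x, r):
    """Minimise sum Q*(y-r)^2 + R*u^2 over the horizon; apply the first move."""
    H = Q * G.T @ G + R * np.eye(N)
    f = Q * G.T @ (F @ x - r * np.ones(N))
    U = np.linalg.solve(H, -f)
    return U[0]

x = np.zeros(2)
for k in range(30):                       # closed-loop simulation toward setpoint r = 1
    u = mpc_step(x, r=1.0)
    x = A @ x + B.flatten() * u
print("output after 30 steps:", (C @ x).item())
```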
Procedia PDF Downloads 639
13907 Innovation Outputs from Higher Education Institutions: A Case Study of the University of Waterloo, Canada
Authors: Wendy De Gomez
Abstract:
The University of Waterloo is situated in central Canada, in the Province of Ontario, one hour from the metropolitan city of Toronto. For over 30 years, it has held Canada's top spot as the most innovative university and has been consistently ranked among the top 25 computer science and top 50 engineering schools in the world. Waterloo benefits from the federal government's more than 100 domestic innovation policies, which have assisted in the country's 15th-place global ranking in the World Intellectual Property Organization's (WIPO) 2022 Global Innovation Index. Yet undoubtedly, the University of Waterloo's unique characteristics are what propel its innovative creativity forward. This paper will provide a contextual definition of innovation in higher education and then demonstrate the five operational attributes that contribute to the University of Waterloo's innovative reputation. The methodology is based on statistical analyses obtained from ranking bodies such as the QS World University Rankings, a secondary literature review related to higher education innovation in Canada, and case studies that exhibit the operationalization of the attributes outlined below. The first attribute is geography. Specifically, the paper investigates the network structure effect of the Toronto-Waterloo high-tech corridor and the resultant industrial relationships built there. The second attribute is University Policy 73 (Intellectual Property Rights). This creator-owned policy grants all ownership to the creator/inventor regardless of the use of University of Waterloo property or funding. Essentially, by incentivizing IP ownership for all researchers, further commercialization and entrepreneurship are fostered. Third, this IP policy works hand in hand with world-renowned business incubators such as the Accelerator Centre in the dedicated research and technology park and Velocity, a 14-year-old facility that equips and guides founders to build and scale companies. Communitech, a 25-year-old provincially backed facility in the region, also works closely with the University of Waterloo to build strong teams, access capital, and commercialize products. Fourth, Waterloo's co-operative education program contributes 31% of all co-op participants to the Canadian economy. Waterloo is home to the world's largest co-operative education program, and data show that over 7,000 employers from around the world recruit Waterloo students for short- and long-term placements, directly contributing to students' ability to learn and refine essential employment skills before they graduate. Finally, the students themselves at Waterloo are exceptional. The entrance average ranges from the low 80s to the mid-90s depending on the program. In computer, electrical, mechanical, mechatronics, and systems design engineering, to have a 66% chance of acceptance, the applicant's average must be 95% or above. None of these five attributes alone could account for the university's outstanding track record of innovative creativity, but when bundled together on a 1,000-acre, 100-building main campus with 6 academic faculties, 40,000+ students, and over 1,300 world-class faculty members, the recipe for success becomes quite evident.
Keywords: IP policy, higher education, economy, innovation
Procedia PDF Downloads 72
13906 Effects of a Bioactive Subfraction of Strobilanthes Crispus on the Tumour Growth, Body Weight and Haematological Parameters in 4T1-Induced Breast Cancer Model
Authors: Yusha'u Shu'aibu Baraya, Kah Keng Wong, Nik Soriani Yaacob
Abstract:
Strobilanthes crispus (S. crispus) is a Malaysian herb, locally known as ‘Pecah kaca’ or ‘Jin batu’, which has demonstrated potent anticancer effects in both in vitro and in vivo models. In particular, an S. crispus subfraction (SCS) significantly reduced tumor growth in an N-methyl-N-nitrosourea-induced breast cancer rat model. However, there is a paucity of information on the effects of SCS on breast cancer metastasis. Thus, in this study, the antimetastatic effects of SCS (100 mg/kg) were investigated following 30 days of treatment in a 4T1-induced mammary tumor (n = 5) model. The response to treatment was assessed based on tumor growth, body weight and hematological parameters. The results demonstrated that tumor-bearing mice treated with SCS (TM-S) had significant (p<0.05) reductions in mean tumor number, tumor volume and tumor weight compared with the untreated tumor-bearing mice (TM). Also, there was no secondary tumor formation or tumor-associated lesions in the major organs of the TM-S group compared to the TM group. Similarly, comparable body weights were observed among the TM-S group, the normal (uninduced) mice treated with SCS and the normal (untreated/control) mice (NM), in contrast to the TM group (p<0.05). Furthermore, SCS administration did not cause significant changes in the hematological parameters compared with the NM group, indicating no sign of anemia or toxicity-related effects. In conclusion, SCS significantly inhibited overall tumor growth and metastasis in the 4T1-induced breast cancer mouse model, suggesting its promising potential as a therapeutic agent for breast cancer treatment.
Keywords: 4T1-cells, breast cancer, metastasis, Strobilanthes crispus
Procedia PDF Downloads 154
13905 Study on Horizontal Ecological Compensation Mechanism in Yangtze River Economic Belt Basin: Based on Evolutionary Game Analysis and Water Quality and Quantity Model
Authors: Tingyu Zhang
Abstract:
The horizontal ecological compensation (HEC) mechanism is the key to stimulating the active participation of the whole basin in ecological protection. In this paper, we construct an evolutionary game model for HEC in the Yangtze River Economic Belt (YREB) basin with the introduction of the central government constraint and incentive mechanism (CGCIM) and explore the conditions for the realization of a (Protection, Compensation) strategy that meets social expectations. Further, a water quality-water quantity model is utilized to measure the HEC amount using characteristic factual data of the YREB for 2020-2022. The results show that the stability of the evolutionary game model of the upstream and downstream governments in the YREB is closely related to the CGCIM. If (Protection, Compensation) is to be realized as the only evolutionarily stable strategy of the evolutionary game system composed of the upstream and downstream governments, the CGCIM must ensure that the sum of the incentives for the protection side and its unilateral or bilateral constraints is greater than twice the input cost of the active strategy, and that the sum of the incentives for the compensation side and its unilateral or bilateral constraints is greater than the amount of ecological compensation it needs to pay when it adopts the active strategy. Under these conditions, the total amount of HEC that the downstream government should give to the upstream government of the YREB is 2,856.7 million yuan in 2020, 5,782.1 million yuan in 2021, and 23,166.7 million yuan in 2022. The results of the study can provide a reference for promoting the improvement and refinement of the HEC mechanism in the YREB.
Keywords: horizontal ecological compensation, Yangtze River Economic Belt, evolutionary game analysis, water quality and quantity model
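A stylized sketch of the upstream-downstream evolutionary game with central incentives and constraints is given below using two-population replicator dynamics; all payoff values are placeholders chosen so that (Protection, Compensation) becomes the attractor, and they are not the paper's calibrated parameters.

```python
# Stylized two-population replicator dynamics for the upstream/downstream game
# under central-government incentives and fines; all numbers are placeholders.
import numpy as np
from scipy.integrate import solve_ivp

c, e = 4.0, 3.0        # protection cost, compensation transfer
s1, s2 = 2.0, 1.0      # central incentives to protector / compensator
f1, f2 = 3.0, 3.0      # central fines for not protecting / not compensating
w = 5.0                # downstream ecological benefit when upstream protects

# Rows: upstream {Protect, Not}; columns: downstream {Compensate, Not}
A = np.array([[e - c + s1, -c + s1],      # upstream payoffs
              [-f1,        -f1     ]])
B = np.array([[w - e + s2, w - f2  ],     # downstream payoffs
              [-e + s2,    -f2     ]])

def replicator(t, z):
    x, y = z                               # shares playing Protect / Compensate
    py = np.array([y, 1 - y])
    px = np.array([x, 1 - x])
    dx = x * (1 - x) * (A[0] - A[1]) @ py
    dy = y * (1 - y) * (B[:, 0] - B[:, 1]) @ px
    return [dx, dy]

sol = solve_ivp(replicator, (0, 50), [0.2, 0.2])
print("final (Protect, Compensate) shares:", sol.y[:, -1])  # approaches (1, 1)
```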
Procedia PDF Downloads 55
13904 Changes of Acute-phase Reactants in Systemic Sclerosis During Long-term Rituximab Therapy
Authors: Liudmila Garzanova, Lidia Ananyeva, Olga Koneva, Olga Ovsyannikova, Oxana Desinova, Mayya Starovoytova, Rushana Shayahmetova, Anna Khelkovskaya-Sergeeva
Abstract:
Objectives. C-reactive protein (CRP) and the erythrocyte sedimentation rate (ESR) are associated with a severe course, increased morbidity and mortality in systemic sclerosis (SSc). The aim of our study was to assess changes in CRP and ESR in SSc patients during long-term rituximab (RTX) therapy. Methods. This study included 113 patients with SSc. The mean age was 48.1±13 years, and 85% were female. The mean disease duration was 6±5 years. The diffuse cutaneous subset of the disease was present in 55% of patients. All patients had interstitial lung disease (ILD). All patients received prednisolone at a mean dose of 11.6±4.8 mg/day, and 53 of them were receiving immunosuppressants at inclusion. Patients received RTX due to the ineffectiveness of previous therapy for ILD. The parameters were evaluated at baseline (point 0) and at 13±2.3 months (point 1, n=113), 42±14 months (point 2, n=80) and 79±6.5 months (point 3, n=25) after initiation of RTX therapy. The cumulative mean dose of RTX was 1.7±0.6 g at point 1, 3±1.5 g at point 2, and 3.8±2.4 g at point 3. The results are presented as mean values; delta (Δ) denotes the difference between the baseline value and the follow-up point. Results. There was an improvement in the studied parameters on RTX therapy, with a significant decrease in ESR, CRP and the activity index (EScSG-AI) at all observation points (p=0.001). At point 1: ΔCRP was 6.7 mg/l, ΔESR = 7.4 mm/h, Δactivity index (EScSG-AI) = 1.7. At point 2: ΔCRP was 8.7 mg/l, ΔESR = 7.5 mm/h, Δactivity index (EScSG-AI) = 1.9. At point 3: ΔCRP was 16.1 mg/l, ΔESR = 11 mm/h, Δactivity index (EScSG-AI) = 2.1. Conclusion. There was a significant decrease in CRP and ESR during long-term RTX therapy, which correlated with a decrease in the disease activity index. RTX is an effective treatment option for SSc with elevated acute-phase reactants.
Keywords: C-reactive protein, interstitial lung disease, systemic sclerosis, rituximab
Procedia PDF Downloads 34
13903 Maintaining Energy Security in Natural Gas Pipeline Operations by Empowering Process Safety Principles Through Alarm Management Applications
Authors: Huseyin Sinan Gunesli
Abstract:
Process Safety Management is a disciplined framework for managing the integrity of systems and processes that handle hazardous substances. It relies on good design principles, well-implemented automation systems, and sound operating and maintenance practices. Alarm management systems play a critically important role in the safe and efficient operation of modern industrial plants. In that respect, alarm management is one of the critical factors supporting safe plant operation through the application of effective process safety principles. The Trans Anatolian Natural Gas Pipeline (TANAP) is part of the Southern Gas Corridor, which extends from the Caspian Sea to Italy. TANAP transports natural gas from the Shah Deniz gas field of Azerbaijan, and possibly from other neighboring countries, to Turkey and, through the Trans Adriatic Pipeline (TAP), to Europe. TANAP plays a crucial role in maintaining energy security for the region and for Europe. In that respect, the application of process safety principles is vital to ensure safe, reliable and efficient natural gas delivery to shippers both in the region and in Europe. Effective alarm management is one of the process safety principles that support safe operation of the TANAP pipeline. An alarm philosophy was designed and implemented for the TANAP pipeline according to the relevant standards. However, it is essential to manage the alarms received in the control room effectively to maintain safe operations. In that respect, TANAP commenced an alarm management and rationalization program in February 2022, after transitioning to the plateau regime and reaching the design parameters. When alarm rationalization started, more than circa 2,300 alarms per hour were being received from one of the compressor stations. After applying alarm management principles, such as the review and removal of bad actors and of standing, stale, chattering and fleeting alarms, the comprehensive review and revision of alarm set points through a change management process, and the conduct of alarm audits and design verification, the rate was reduced to circa 40 alarms per hour. After the successful implementation of the alarm management principles specified above, the number of alarms was reduced to industry standards. This significantly improved operator vigilance, allowing operators to focus mainly on important and critical alarms and to avoid excursions beyond safe operating limits that could lead to potential process safety events. Following the "What Gets Measured, Gets Managed" principle, TANAP has identified Key Performance Indicators (KPIs) to manage process safety principles effectively, with alarm management forming one of the key parameters of those KPIs. However, review and analysis of the alarms were initially performed manually, and without alarm management software, achieving full compliance with international standards is almost infeasible. In that respect, TANAP has started using one of the well-known industry alarm management applications to maintain full review and analysis of alarms and to define actions as required, which has significantly empowered TANAP's process safety principles in terms of alarm management.
Keywords: process safety principles, energy security, natural gas pipeline operations, alarm rationalization, alarm management, alarm management application
Procedia PDF Downloads 108
13902 Evaluation of Flexural Cracking Width of Steel Fibre Reinforced Concrete Beams
Authors: Touhami Tahenni
Abstract:
Excessively wide cracks are harmful to the serviceability of reinforced concrete (RC) beams and may lead to durability problems in the longer term. They also reduce the rigidity of RC sections, rendering the tensile concrete structurally ineffective. To reduce the negative effects of cracks, steel fibres are added to concrete mixes in the same manner as aggregates. In the present work, steel fibre reinforced concrete (SFRC) beams, made of normal-strength and high-strength concretes, were tested in four-point bending using a digital image correlation technique. The beams had different volume fractions of fibres and different aspect ratios (fibre length/fibre diameter). Flexural crack widths were evaluated using Gom-Aramis software. The experimental crack widths were compared with theoretical values predicted by the technical document of Rilem TC 162-TDF. The model proposed in this document seems to be the only one that considers the efficiency of steel fibres in restraining crack widths. However, the Rilem model takes into account only the aspect ratio of the steel fibres to predict the crack width of SFRC beams. It has been reported in several pieces of research that the contribution of steel fibres to the limitation of flexural crack widths depends on three essential parameters, namely the volume fraction, the orientation and the aspect ratio of the fibres. Referring to the literature on the flexural cracking behavior of SFRC beams and to the experimental observations of the present work, a correction of the Rilem model through the introduction of these parameters into the formula is proposed. The crack widths predicted by the new empirical model were compared with the experimental results and assessed against other test data on SFRC beams taken from the literature. The modified Rilem model gives better results and is found more satisfactory in predicting the crack widths of fibre concrete.
Keywords: steel fibres, reinforced concrete, flexural cracking, tensile strength, crack width
Procedia PDF Downloads 102
13901 Co-Integrated Commodity Forward Pricing Model
Authors: F. Boudet, V. Galano, D. Gmira, L. Munoz, A. Reina
Abstract:
Commodity pricing needs a specific approach, as commodities are often linked to each other, and so, expectedly, are their prices. They are called co-integrated when at least one stationary linear combination exists between them. Though widespread in the economic literature, and even though many equilibrium relations and co-movements exist in the economy, this principle of co-movement is not developed in the derivatives field. The present study focuses on the following problem: how can the price of a forward agreement on a commodity be simulated when it is co-integrated with other commodities? A theoretical analysis is developed from the Gibson-Schwartz model, and an analytical solution is given for short-maturity contracts under risk-neutral conditions. The approach has been applied to the crude oil and heating oil energy commodities, and the results confirm the applicability of the proposed method.
Keywords: co-integration, commodities, forward pricing, Gibson-Schwartz
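Before applying such a joint forward model, co-integration between the underlying price series can be checked with an Engle-Granger test, as sketched below on simulated series sharing a common stochastic trend; the data stand in for crude oil and heating oil prices and are not market data.

```python
# Sketch of an Engle-Granger cointegration check on two simulated price series
# that share a common stochastic trend.
import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(42)
n = 1000
common_trend = np.cumsum(rng.normal(size=n))            # shared random walk
crude = 50 + common_trend + rng.normal(scale=0.5, size=n)
heating = 10 + 0.8 * common_trend + rng.normal(scale=0.5, size=n)

t_stat, p_value, _ = coint(crude, heating)
print(f"Engle-Granger t-stat = {t_stat:.2f}, p-value = {p_value:.3f}")
# A small p-value indicates a stationary linear combination exists, i.e. the
# two series are co-integrated and the joint forward model is applicable.
```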
Procedia PDF Downloads 287
13900 Numerical Analysis of the Turbulent Flow around DTMB 4119 Marine Propeller
Authors: K. Boumediene, S. E. Belhenniche
Abstract:
This article presents a numerical analysis of the turbulent flow past the DTMB 4119 marine propeller by means of a RANS approach; the propeller was designed at the David Taylor Model Basin in the USA. The purpose of this study is to predict the hydrodynamic performance of the marine propeller and to compare the results with the experiments carried out in open water tests. A periodic computational domain was created to reduce the size of the unstructured mesh generated. The standard k-ω turbulence model was selected for the simulation, and the results were in good agreement with the experiments. The errors in KT and KQ were estimated at 1.3% and 5.9%, respectively.
Keywords: propeller flow, CFD simulation, RANS, hydrodynamic performance
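The hydrodynamic performance quantities reported above are the standard non-dimensional coefficients KT = T/(ρn²D⁴) and KQ = Q/(ρn²D⁵), together with the open-water efficiency η₀ = J·KT/(2πKQ); the short sketch below evaluates them with placeholder inputs.

```python
# Standard non-dimensional propeller coefficients; the numerical inputs are placeholders.
import math

def propeller_coefficients(thrust, torque, rho, n, D, Va):
    KT = thrust / (rho * n**2 * D**4)          # thrust coefficient
    KQ = torque / (rho * n**2 * D**5)          # torque coefficient
    J = Va / (n * D)                           # advance ratio
    eta0 = (J / (2 * math.pi)) * KT / KQ       # open-water efficiency
    return KT, KQ, J, eta0

# Example (placeholder values): thrust 1500 N, torque 120 N*m, fresh water,
# 15 rev/s, 0.3048 m model-scale diameter, advance speed 2.5 m/s
print(propeller_coefficients(1500.0, 120.0, 1000.0, 15.0, 0.3048, 2.5))
```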
Procedia PDF Downloads 505
13899 Hydrothermal Liquefaction for Astaxanthin Extraction from Wet Algae
Authors: Spandana Ramisetty, Mandan Chidambaram, Ramesh Bhujade
Abstract:
Algal biomass is a potential source not only of biocrude but also of high-value chemicals like carotenoids, fatty acids, proteins, polysaccharides, vitamins, etc. Astaxanthin is one such high-value, vital carotenoid, which has extensive applications in the pharmaceutical, aquaculture, poultry and cosmetic industries and is expanding as a dietary supplement for humans. The green microalga Haematococcus pluvialis is identified as the richest natural source of astaxanthin and is the key source of commercial astaxanthin. Several extraction processes from wet and dry Haematococcus pluvialis biomass have been explored by researchers. Extraction with supercritical CO₂ and various physical disruption techniques applied to dried algae, such as mortar and pestle, homogenization, ultrasonication and ball milling, are widely used extraction methods. However, these processes require energy-intensive drying of biomass, which escalates overall costs notably. From the process economics perspective, it is vital to utilize wet processing technology in order to eliminate drying costs. Hydrothermal liquefaction (HTL) is a thermo-chemical conversion process that converts wet biomass containing over 80% water to bio-products under high-temperature and high-pressure conditions. Astaxanthin is a lipid-soluble pigment and is usually extracted along with the lipid component. Mild HTL at 200°C and 60 bar has been demonstrated by researchers in a microfluidic platform, achieving near-complete extraction of astaxanthin from wet biomass. There is very limited work done in this field. An integrated approach of sequential HTL offers a cost-effective option to extract astaxanthin/lipids from wet algal biomass without drying the algae, while also recovering water, minerals and nutrients. This paper reviews past work and evaluates astaxanthin extraction processes with a focus on hydrothermal extraction.
Keywords: astaxanthin, extraction, high value chemicals, hydrothermal liquefaction
Procedia PDF Downloads 309
13898 Debris Flow Mapping Using Geographical Information System Based Model and Geospatial Data in Middle Himalayas
Authors: Anand Malik
Abstract:
The Himalayas, with their high tectonic activity, pose a great threat to human life and property. Climate change is another factor, triggering extreme events with manifold effects on the high mountain glacial environment: rock falls, landslides, debris flows, flash floods and snow avalanches. One such extreme event, a cloudburst together with the breach of the moraine-dammed Chorabari Lake, occurred from June 14 to June 17, 2013, and triggered flooding of the Saraswati and Mandakini rivers in the Kedarnath Valley of Rudraprayag district of Uttarakhand state, India. As a result, a huge volume of water moving at high velocity created a catastrophe of the century, which resulted in large losses of human and animal life and severe damage to pilgrimage, tourism, agriculture and property. A comprehensive assessment of debris flow hazards therefore requires GIS-based modeling using numerical methods. The aim of the present study is to analyze and map debris flow movements using geospatial data with Flow-R (developed by the team at IGAR, University of Lausanne). The model is based on combined probabilistic and energetic algorithms for the assessment of the spreading of the flow with maximum runout distances. The ASTER Digital Elevation Model (DEM) with 30 m x 30 m cell size (resolution) is used as the main geospatial data for preparing the runout assessment, while Landsat data are used to analyze land use and land cover change in the study area. The results show that the model can be applied with great accuracy, as it is very useful in determining debris flow areas. The results are compared with existing available landslide/debris flow maps. ArcGIS software is used for preparing runout susceptibility maps, which can be used in debris flow mitigation and future land use planning.
Keywords: debris flow, geospatial data, GIS based modeling, flow-R
Procedia PDF Downloads 277
13897 Developing a Translator Career Path: Based on the Dreyfus Model of Skills Acquisition
Authors: Noha A. Alowedi
Abstract:
This paper proposes a Translator Career Path (TCP) which is based on the Dreyfus Model of Skills Acquisition as the conceptual framework. In this qualitative study, the methodology for collecting and analyzing the data takes an inductive approach that draws upon the literature to form the criteria for the different steps in the TCP. The path is based on descriptors of expert translator performance and best employee practices documented in the literature. Each translator skill is graded as novice, advanced beginner, competent, proficient, or expert. Consequently, five levels of translator performance are identified in the TCP as five ranks. The first rank is the intern translator, which is equivalent to the novice level; the second rank is the assistant translator, which is equivalent to the advanced beginner level; the third rank is the associate translator, which is equivalent to the competent level; the fourth rank is the translator, which is equivalent to the proficient level; finally, the fifth rank is the expert translator, which is equivalent to the expert level. The main function of this career path is to guide the process of translator development in translation organizations. Although it is designed primarily for the needs of in-house translators' supervisors, the TCP can be used in academic settings by translation trainers and teachers.
Keywords: Dreyfus model, translation organization, translator career path, translator development, translator evaluation, translator promotion
Procedia PDF Downloads 377
13896 Analysis on Greenhouse Gas Emissions Potential by Deploying the Green Cars in Korean Road Transport Sector
Authors: Sungjun Hong, Yanghon Chung, Nyunbae Park, Sangyong Park
Abstract:
South Korea, the 7th largest greenhouse-gas-emitting country in 2011, announced that its national greenhouse gas reduction target was 30% below BAU (Business As Usual) by 2020. The reduction target for the transport sector is 34.3%, which is the highest figure among all sectors. This paper attempts to analyze the environmental effect of deploying green cars in the Korean road transport sector. In order to calculate the greenhouse gas emissions, the LEAP model is applied in this study.
Keywords: green car, greenhouse gas, LEAP model, road transport sector
Procedia PDF Downloads 619
13895 Book Recommendation Using Query Expansion and Information Retrieval Methods
Authors: Ritesh Kumar, Rajendra Pamula
Abstract:
In this paper, we present our contribution to book recommendation. In our experiment, we combine the results of the Sequential Dependence Model (SDM) with the exploitation of book information such as reviews, tags and ratings. This social information is assigned by users. For this, we used the CLEF-2016 Social Book Search Track Suggestion task. Our proposed method was extensively evaluated on the CLEF-2015 Social Book Search datasets and achieves better performance (nDCG@10) than other state-of-the-art systems. We also recently obtained good performance in CLEF-2016.
Keywords: sequential dependence model, social information, social book search, query expansion
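The evaluation metric quoted above, nDCG@10, can be computed as sketched below using the linear-gain formulation; the relevance labels are a made-up example.

```python
# Small sketch of the nDCG@10 metric used to compare retrieval systems.
import numpy as np

def dcg_at_k(relevances, k=10):
    rel = np.asarray(relevances, dtype=float)[:k]
    discounts = np.log2(np.arange(2, rel.size + 2))   # log2(rank + 1)
    return float((rel / discounts).sum())

def ndcg_at_k(ranked_relevances, k=10):
    ideal = sorted(ranked_relevances, reverse=True)
    ideal_dcg = dcg_at_k(ideal, k)
    return dcg_at_k(ranked_relevances, k) / ideal_dcg if ideal_dcg > 0 else 0.0

# Graded relevance of the top-10 books returned for one query (example labels)
print(ndcg_at_k([3, 2, 3, 0, 1, 2, 0, 0, 1, 0], k=10))
```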
Procedia PDF Downloads 291
13894 Aerodynamic Modeling Using Flight Data at High Angle of Attack
Authors: Rakesh Kumar, A. K. Ghosh
Abstract:
The paper presents the modeling of linear and nonlinear longitudinal aerodynamics using real flight data of the Hansa-3 aircraft gathered at low and high angles of attack. The Neural-Gauss-Newton (NGN) method has been applied to model the linear and nonlinear longitudinal dynamics and to estimate parameters from flight data. Unsteady aerodynamics due to flow separation at high angles of attack near stall has been included in the aerodynamic model using Kirchhoff's quasi-steady stall model. The NGN method is an algorithm that utilizes a Feed Forward Neural Network (FFNN) and Gauss-Newton optimization to estimate the parameters; it does not require any a priori postulation of a mathematical model or solving of the equations of motion. The NGN method was validated on real flight data generated at moderate angles of attack before application to the data at high angles of attack. The estimates obtained from compatible flight data using the NGN method were validated by comparison with wind tunnel values and maximum likelihood estimates. Validation was also carried out by comparing the response of the measured motion variables with the response generated using the estimates for a different control input. Next, the NGN method was applied to real flight data generated by executing a well-designed quasi-steady stall maneuver. The results obtained in terms of stall characteristics and aerodynamic parameters were encouraging and reasonably accurate, establishing NGN as a method for modeling nonlinear aerodynamics from real flight data at high angles of attack.
Keywords: parameter estimation, NGN method, linear and nonlinear, aerodynamic modeling
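The Kirchhoff quasi-steady stall model mentioned above scales the attached-flow lift by the flow-separation point X, i.e. CL = CL_alpha * alpha * ((1 + sqrt(X))/2)^2 with X = 0.5 * (1 - tanh(a1 * (alpha - alpha_star))); the sketch below evaluates this roll-off with illustrative parameter values, not the Hansa-3 estimates.

```python
# Sketch of the Kirchhoff quasi-steady stall model; parameter values are illustrative.
import numpy as np

def kirchhoff_lift(alpha, CL_alpha=5.0, a1=25.0, alpha_star=np.deg2rad(15.0)):
    """CL = CL_alpha * alpha * ((1 + sqrt(X)) / 2)**2, with
       X = 0.5 * (1 - tanh(a1 * (alpha - alpha_star))) the separation point."""
    X = 0.5 * (1.0 - np.tanh(a1 * (alpha - alpha_star)))
    return CL_alpha * alpha * ((1.0 + np.sqrt(X)) / 2.0) ** 2

alphas = np.deg2rad(np.arange(0, 31, 5))
for a, cl in zip(np.rad2deg(alphas), kirchhoff_lift(alphas)):
    print(f"alpha = {a:4.1f} deg  ->  CL = {cl:.3f}")  # lift breaks from the linear trend and drops past ~15 deg
```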
Procedia PDF Downloads 455