Search results for: generalised linear model
15921 Assessment of Personal Level Exposures to Particulate Matter among Children in Rural Preliminary Schools as an Indoor Air Pollution Monitoring
Authors: Seyedtaghi Mirmohammadi, J. Yazdani, S. M. Asadi, M. Rokni, A. Toosi
Abstract:
There are many indoor air quality studies with an emphasis on monitoring indoor particulate matter (PM2.5). However, there is a lack of data on indoor PM2.5 concentrations in rural-area schools (especially in classrooms), even though primary-school children are assumed to be more vulnerable to health hazards and spend a large part of their time in classrooms. The objective of this study was to assess indoor PM2.5 concentrations. Fifteen primary schools were selected by time-series sampling to evaluate indoor air quality in the rural district of Sari city, Iran. Indoor climate parameters (temperature, relative humidity and wind speed) were measured with a hygrometer and a thermometer. Particulate matter (PM2.5) was collected and assessed with a real-time dust monitor (MicroDust Pro, Casella, UK). The mean indoor PM2.5 concentration in the studied classrooms was 135 µg/m³. Multiple linear regression revealed a correlation between PM2.5 concentration and relative humidity, distance from the city center, and classroom size. Classroom size showed a clear negative relationship; the PM2.5 concentration ranged from 65 to 540 µg/m³ and was statistically significant at the 0.05 level, while relative humidity (70 to 85%) and dry-bulb temperature (28 to 29°C) were statistically significant at the 0.035 and 0.05 levels, respectively. A statistical predictive model was obtained from multiple regression modeling of PM2.5 and indoor psychrometric parameters.
Keywords: particulate matters, classrooms, regression, concentration, humidity
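The multiple linear regression the abstract describes can be sketched as follows. This is a hedged illustration on synthetic data: the coefficients, predictor ranges, and sample size are invented, not the study's measurements; it only shows how such a fit recovers a negative classroom-size coefficient of the kind the study reports.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 60  # illustrative sample size, not the study's

humidity = rng.uniform(70, 85, n)     # relative humidity, %
distance = rng.uniform(5, 40, n)      # distance from city center, km
room_size = rng.uniform(30, 80, n)    # classroom size, m^2

# Synthetic PM2.5 response with an assumed negative classroom-size effect
pm25 = 300.0 + 2.0*humidity - 1.5*distance - 3.0*room_size + rng.normal(0, 10, n)

# Ordinary least squares fit: beta = argmin ||X beta - y||^2
X = np.column_stack([np.ones(n), humidity, distance, room_size])
beta, *_ = np.linalg.lstsq(X, pm25, rcond=None)
fitted = X @ beta
r2 = 1 - np.sum((pm25 - fitted)**2) / np.sum((pm25 - pm25.mean())**2)
print("coefficients (intercept, humidity, distance, size):", np.round(beta, 2))
print("R^2:", round(float(r2), 3))
```

The sign pattern of `beta` (positive for humidity, negative for classroom size) is what the abstract's significance tests are about; the magnitudes here are arbitrary.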
Procedia PDF Downloads 311
15920 The Impact of the Composite Expanded Graphite PCM on the PV Panel Whole Year Electric Output: Case Study Milan
Authors: Hasan A Al-Asadi, Ali Samir, Afrah Turki Awad, Ali Basem
Abstract:
Integrating phase change material (PCM) with photovoltaic (PV) panels is one of the effective techniques to reduce PV panel temperature and increase electric output. In order to investigate the impact of the PCM on the electric output of PV panels over a whole year, a lumped-distributed parameter model for the PV-PCM module has been developed. The development accounts for the variation of PCM density between the solid and liquid phases, which increases the accuracy of the assessment of the electric output of the PV-PCM module. The second contribution is to assess the impact of the composite expanded graphite-PCM on the PV electric output in Milan over a whole year. The novel one-dimensional model has been solved using MATLAB software, and its results have been validated against experimental work from the literature. Weather and solar radiation data were collected, and the impact of expanded graphite-PCM on the electric output of the PV panel over a whole year was investigated. The results indicate an enhancement of 2.39% in the yearly electric output of the PV panel in Milan.
Keywords: PV panel efficiency, PCM, numerical model, solar energy
Procedia PDF Downloads 173
15919 Building Exoskeletons for Seismic Retrofitting
Authors: Giuliana Scuderi, Patrick Teuffel
Abstract:
The proven vulnerability of the existing social housing building heritage to natural or induced earthquakes requires the development of new design concepts and methods that preserve materials and objects while providing new performances. An integrated intervention between civil engineering, building physics and architecture can convert social housing districts from a critical part of the city into a strategic resource for revitalization. Referring to biomimicry principles, the present research proposes an analogy with the exoskeleton of the insect: an external, light and resistant armour whose role is to protect the internal organs from potentially dangerous external inputs. In the same way, a “building exoskeleton”, acting from the outside of the building as an enclosing cage, can restore, protect and support the existing building, assuming a complex set of roles, from the structural to the thermal, from the aesthetical to the functional. This study evaluates the structural efficiency of shape memory alloy devices (SMADs) connecting the “building exoskeleton” with the existing structure to be rehabilitated, in order to prevent the out-of-plane collapse of walls and to passively dissipate seismic energy, with an operability calibrated to the intensity of the horizontal loads. Two case studies are considered, a masonry structure and a masonry structure with a concrete frame; for each case, a theoretical social housing building is exposed to earthquake forces to evaluate its structural response with or without SMADs. The two typologies are modelled with the finite element program SAP2000, respectively through a “frame model” and a “diagonal strut model”.
In the same software, two types of SMADs, called the 00-10 SMAD and the 05-10 SMAD, are defined, and non-linear static and dynamic analyses, namely pushover analysis and time history analysis, are performed to evaluate the seismic response of the building. The effectiveness of the devices in limiting the control joint displacements proved higher in one direction, suggesting a possible calibrated use of the devices in the different walls of the building. The results also show a higher efficiency of the 00-10 SMADs in controlling the interstorey drift, but at the same time the necessity to improve the hysteretic behaviour in order to maximise the passive dissipation of the seismic energy.
Keywords: adaptive structure, biomimetic design, building exoskeleton, social housing, structural envelope, structural retrofitting
Procedia PDF Downloads 420
15918 Analytical Solution for Stellar Distance Based on Photon Dominated Cosmic Expansion Model
Authors: Xiaoyun Li, Suoang Longzhou
Abstract:
This paper derives the analytical solution for stellar distance as a function of redshift based on the photon-dominated universe expansion model. Firstly, it calculates the stellar separation speed and the farthest distance of observable stars via simulation. Then the analytical solution for stellar distance as a function of redshift is derived. It shows that when the redshift is large, the stellar distance (and its separation speed) is not proportional to the redshift, due to the relativity effect. It also reveals the relationship between stellar age and redshift. The correctness of the analytical solution is verified by the latest astronomical observations of Type Ia supernovae in 2020.
Keywords: redshift, cosmic expansion model, analytical solution, stellar distance
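The non-proportionality at large redshift that the abstract describes is consistent with the standard special-relativistic Doppler relation, quoted here as background (the paper's own derivation within its expansion model may differ):

```latex
1 + z = \sqrt{\frac{1 + v/c}{1 - v/c}}
\qquad\Longrightarrow\qquad
\frac{v}{c} = \frac{(1+z)^{2} - 1}{(1+z)^{2} + 1}
```

For z ≪ 1 this reduces to v ≈ cz, the linear regime in which distance inferred from recession speed is proportional to redshift, while v → c as z → ∞, so the proportionality necessarily breaks down at large z.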
Procedia PDF Downloads 161
15917 Knowledge Audit Model for Requirement Elicitation Process
Authors: Laleh Taheri, Noraini C. Pa, Rusli Abdullah, Salfarina Abdullah
Abstract:
Knowledge plays an important role in the success of any organization. Software development organizations are highly knowledge-intensive, especially in their Requirement Elicitation Process (REP). There are several problems in communicating and using knowledge in REP, such as misunderstanding, being out of scope, conflicting information, and changes of requirements, all of which occur while transmitting requirements knowledge during REP. Several studies have addressed REP in order to solve these requirement problems, and Knowledge Audit (KA) approaches have been proposed to manage knowledge in human resources, finance, and manufacturing; however, there is a lack of studies applying KA to the requirement elicitation process. Therefore, this paper proposes a KA model for REP to support the acquisition of good requirements.
Keywords: knowledge audit, requirement elicitation process, KA model, knowledge in requirement elicitation
Procedia PDF Downloads 345
15916 Preference for Housing Services and Rational House Price Bubbles
Authors: Stefanie Jeanette Huber
Abstract:
This paper explores the relevance and implications of preferences for housing services for house price fluctuations through the lens of an overlapping generations model. The model implies that an economy whose agents have lower preferences for housing services is characterized by lower expenditure shares on housing services and will tend to experience more frequent and more volatile housing bubbles. These model predictions are tested empirically in the companion paper, Housing Booms and Busts - Convergences and Divergences across OECD Countries. Between 1970 and 2013, countries that spent less on housing services as a share of total income experienced significantly more housing cycles, and the associated boom-bust cycles were more violent. Finally, the model is used to study the impact of rental subsidies and help-to-buy schemes on rational housing bubbles. Rental subsidies are found to contribute to the control of housing bubbles, whereas help-to-buy schemes make the economy more bubble-prone.
Keywords: housing bubbles, housing booms and busts, preference for housing services, expenditure shares for housing services, rental and purchase subsidies
Procedia PDF Downloads 299
15915 Autonomous Quantum Competitive Learning
Authors: Mohammed A. Zidan, Alaa Sagheer, Nasser Metwally
Abstract:
Real-time learning is an important goal that much artificial intelligence research tries to achieve. Many problems and applications require low-cost learning, such as teaching a robot to classify and recognize patterns in real time, and real-time recall. In this contribution, we suggest a model of quantum competitive learning based on a series of quantum gates and an additional operator. The proposed model is able to recognize incomplete patterns, and the probability of recognizing the desired pattern can be increased at the expense of the undesired ones; moreover, these undesired patterns could be utilized as new patterns for the system. The proposed model compares favourably with classical approaches and is more powerful than current quantum competitive learning approaches.
Keywords: competitive learning, quantum gates, winner-take-all
Procedia PDF Downloads 472
15914 Predicting Indonesia External Debt Crisis: An Artificial Neural Network Approach
Authors: Riznaldi Akbar
Abstract:
In this study, we compared the performance of an Artificial Neural Network (ANN) model with the back-propagation algorithm in correctly predicting in-sample and out-of-sample external debt crises in Indonesia. We found that the exchange rate, foreign reserves, and exports are the major determinants of experiencing an external debt crisis. The ANN's in-sample performance provides relatively superior results: the model correctly classifies 89.12 per cent of crises with a reasonably low false alarm rate of 7.01 per cent. Out of sample, the prediction performance deteriorates noticeably compared to the in-sample performance. This can be explained by the ANN model tending to over-fit the in-sample data while failing to fit the out-of-sample data well. Ten-fold cross-validation was used to improve the out-of-sample prediction accuracy. The results also offer policy implications. The out-of-sample performance can be very sensitive to sample size, as a small sample can yield a higher total misclassification error and lower prediction accuracy. The ANN model can identify past crisis episodes with some accuracy, but predicting crises outside the estimation sample is much more challenging because of the presence of uncertainty.
Keywords: debt crisis, external debt, artificial neural network, ANN
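The pipeline the abstract describes, a small back-propagation network evaluated with 10-fold cross-validation, can be sketched as below. This is a hedged, self-contained illustration: the "macro indicator" data are synthetic, the network size and learning rate are our assumptions, and nothing here reproduces the study's 89.12% figure.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic indicators (illustrative only): a crisis is more likely when
# the exchange rate is weak and reserves and exports are low.
n = 300
X = rng.normal(size=(n, 3))                      # exchange rate, reserves, exports
logit = 1.5*X[:, 0] - 2.0*X[:, 1] - 1.0*X[:, 2]
y = (logit + rng.normal(0, 0.5, n) > 0).astype(float)

def train_mlp(Xtr, ytr, hidden=8, lr=0.5, epochs=500, seed=0):
    """One-hidden-layer network trained with full-batch back-propagation."""
    r = np.random.default_rng(seed)
    W1 = r.normal(0, 0.5, (Xtr.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = r.normal(0, 0.5, hidden); b2 = 0.0
    for _ in range(epochs):
        H = np.tanh(Xtr @ W1 + b1)                 # hidden activations
        p = 1/(1 + np.exp(-(H @ W2 + b2)))         # crisis probability
        g = (p - ytr) / len(ytr)                   # cross-entropy output gradient
        gH = np.outer(g, W2) * (1 - H**2)          # back-propagate through tanh
        W2 -= lr * (H.T @ g); b2 -= lr * g.sum()
        W1 -= lr * (Xtr.T @ gH); b1 -= lr * gH.sum(axis=0)
    return lambda Xte: (np.tanh(Xte @ W1 + b1) @ W2 + b2 > 0).astype(float)

# 10-fold cross-validation, as the study uses for out-of-sample accuracy
folds = np.array_split(rng.permutation(n), 10)
accs = []
for k in range(10):
    train_idx = np.concatenate([folds[j] for j in range(10) if j != k])
    predict = train_mlp(X[train_idx], y[train_idx])
    accs.append((predict(X[folds[k]]) == y[folds[k]]).mean())
print("mean CV accuracy:", round(float(np.mean(accs)), 3))
```

The fold-averaged accuracy is the out-of-sample measure the abstract contrasts with the in-sample hit rate.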
Procedia PDF Downloads 443
15913 Failure Inference and Optimization for Step Stress Model Based on Bivariate Wiener Model
Authors: Soudabeh Shemehsavar
Abstract:
In this paper, we consider a life test in which the failure times of the test units are not related deterministically to an observable stochastic time-varying covariate. In such a case, the joint distribution of the failure time and a marker value is useful for modeling the step-stress life test. Accelerating such an experiment is the main aim of this paper. We present a step-stress accelerated model based on a bivariate Wiener process with one component as the latent (unobservable) degradation process, which determines the failure times, and the other as a marker process, whose degradation values are recorded at the times of failure. Parametric inference based on the proposed model is discussed, and the optimization procedure for obtaining the optimal time for changing the stress level is presented. The optimization criterion is to minimize the approximate variance of the maximum likelihood estimator of a percentile of the products’ lifetime distribution.
Keywords: bivariate normal, Fisher information matrix, inverse Gaussian distribution, Wiener process
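The degradation/marker setup can be sketched by simulating a bivariate Wiener process with drift and reading off the first-passage failure time and the marker value at failure. All numbers (drifts, volatilities, correlation, threshold) are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def simulate_bivariate_wiener(n_steps=2000, dt=0.01, drift=(1.0, 0.6),
                              sigma=(0.4, 0.3), rho=0.7, seed=3):
    """Correlated latent-degradation and marker paths: X_t = drift*t + B_t."""
    rng = np.random.default_rng(seed)
    cov = np.array([[sigma[0]**2, rho*sigma[0]*sigma[1]],
                    [rho*sigma[0]*sigma[1], sigma[1]**2]]) * dt
    inc = rng.multivariate_normal(np.array(drift) * dt, cov, size=n_steps)
    return np.vstack([np.zeros(2), np.cumsum(inc, axis=0)])

path = simulate_bivariate_wiener()
threshold = 5.0                                  # assumed failure threshold
crossed = int(np.argmax(path[:, 0] >= threshold))  # first-passage step = failure time
marker_at_failure = path[crossed, 1]             # marker recorded at failure
print("failure step:", crossed, "marker:", round(float(marker_at_failure), 3))
```

A step-stress version would simply switch to a larger drift for the degradation component after the stress-change time; the optimization in the paper chooses that time.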
Procedia PDF Downloads 317
15912 The Effects of Different Parameters of Wood Floating Debris on Scour Rate Around Bridge Piers
Authors: Muhanad Al-Jubouri
Abstract:
Local scour is the most important of the several scour types affecting bridge performance and safety. Even though scour is widespread at bridges, especially during flood seasons, experimental tests cannot be conducted for many standard highway bridges. A computational fluid dynamics numerical model was used to calculate local scouring and deposition for non-cohesive silt under clear-water conditions near single and double cylindrical piers with the effect of floating debris. FLOW-3D software was employed with the RNG turbulence model, the Nilsson bed-load transport equation, and a fine mesh size. The numerical findings for single cylindrical piers correspond well with the physical model's results. Furthermore, after a parameter-effectiveness investigation of the range of outcomes based on user inputs such as the bed-load equation, mesh cell size, and turbulence model, the final numerical predictions were compared to experimental data. The error for the deepest point of the scour is 3.8% for the single-pier example.
Keywords: local scouring, non-cohesive, clear water, computational fluid dynamics, turbulence model, bed-load equation, debris
Procedia PDF Downloads 69
15911 Powers of Class p-w A (s, t) Operators Associated with Generalized Aluthge Transformations
Authors: Mohammed Husein Mohammed Rashid
Abstract:
Let T = U|T| be the polar decomposition of a bounded linear operator T on a complex Hilbert space with ker U = ker |T|. T is said to be of class p-w A(s,t) if (|T*|ᵗ|T|²ˢ|T*|ᵗ)ᵗᵖ/⁽ˢ⁺ᵗ⁾ ≥ |T*|²ᵗᵖ and |T|²ˢᵖ ≥ (|T|ˢ|T*|²ᵗ|T|ˢ)ˢᵖ/⁽ˢ⁺ᵗ⁾ with s, t > 0 and 0 < p ≤ 1.
Keywords: class p-w A (s, t), normaloid, isoloid, finite, orthogonality
Procedia PDF Downloads 117
15910 The Role of Group Size, Public Employees’ Wages and Control Corruption Institutions in a Game-Theoretical Model of Public Corruption
Authors: Pablo J. Valverde, Jaime E. Fernandez
Abstract:
This paper shows under which conditions public corruption can emerge. The theoretical model includes variables such as the public employee wage (w), a corruption control parameter (c), and the group size (GS) of interactions between clusters of public officers and contractors. The system behavior is analyzed using phase diagrams based on combinations of these parameters (c, w, GS). Numerical simulations are implemented in order to contrast analytic results based on Nash equilibria of the theoretical model. Major findings include a functional relationship between wages and network topology that reduces the emergence of corrupt behavior.
Keywords: public corruption, game theory, complex systems, Nash equilibrium
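The Nash-equilibrium logic can be illustrated with a minimal 2x2 officer/contractor game. The payoff structure below is our own illustrative assumption (not the paper's model): an officer earns the wage w either way, plus a bribe minus the expected control penalty c if corrupt; the point is only that the equilibrium flips as c falls.

```python
import numpy as np

def pure_nash(A, B):
    """Pure-strategy Nash equilibria of a bimatrix game.

    A[i, j] is the row player's payoff, B[i, j] the column player's; (i, j)
    is an equilibrium when neither player can gain by deviating unilaterally.
    """
    eqs = []
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            if A[i, j] >= A[:, j].max() and B[i, j] >= B[i, :].max():
                eqs.append((i, j))
    return eqs

w, c, bribe = 1.0, 0.8, 0.5   # wage, control parameter, bribe size (assumed)
# Rows: officer honest(0)/corrupt(1); columns: contractor no-bribe(0)/bribe(1)
A = np.array([[w, w], [w, w + bribe - c]])        # officer payoffs
B = np.array([[0.0, -0.1], [0.0, bribe - 0.2]])   # contractor payoffs
print(pure_nash(A, B))   # strong control: only the honest/no-bribe outcome
```

Lowering c (weak control institutions) makes the corrupt/bribe cell a second equilibrium, which is the kind of phase transition the paper's (c, w, GS) diagrams trace.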
Procedia PDF Downloads 242
15909 Evaluating the Suitability and Performance of Dynamic Modulus Predictive Models for North Dakota’s Asphalt Mixtures
Authors: Duncan Oteki, Andebut Yeneneh, Daba Gedafa, Nabil Suleiman
Abstract:
Most agencies lack the equipment required to measure the dynamic modulus (|E*|) of asphalt mixtures, necessitating the use of predictive models. This study compared measured |E*| values for nine North Dakota asphalt mixes with predictions from the original Witczak, modified Witczak, and Hirsch models. The influence of temperature on the |E*| models was investigated, and Pavement ME simulations were conducted using measured |E*| and predictions from the most accurate |E*| model. The results revealed that the original Witczak model yielded the lowest Se/Sy and highest R² values, indicating the lowest bias and highest accuracy, while the Hirsch model exhibited the poorest overall performance. Using predicted |E*| as input in Pavement ME generated conservative distress predictions compared to using measured |E*|. The original Witczak model was recommended for predicting |E*| for low-reliability pavements in North Dakota.
Keywords: asphalt mixture, binder, dynamic modulus, MEPDG, pavement ME, performance, prediction
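The Se/Sy and R² statistics used to rank the models can be computed as below. This is a hedged sketch with synthetic |E*| values (the study's data are not reproduced), and the Se convention (division by n-1) is one common choice, not necessarily the study's exact formula.

```python
import numpy as np

def se_sy_and_r2(measured, predicted):
    """Se/Sy (standard error of estimate over std. dev.) and R^2."""
    m, p = np.asarray(measured, float), np.asarray(predicted, float)
    se = np.sqrt(np.sum((m - p)**2) / (m.size - 1))   # standard error of estimate
    sy = np.std(m, ddof=1)                            # std. dev. of measured values
    r2 = 1 - np.sum((m - p)**2) / np.sum((m - m.mean())**2)
    return se / sy, r2

# Synthetic |E*| values (MPa), illustrative only
measured = np.array([1200., 2500., 4800., 7600., 9900.])
good = measured * 1.02    # a low-bias predictive model
poor = measured * 0.70    # a strongly biased model
print("good model (Se/Sy, R^2):", se_sy_and_r2(measured, good))
print("poor model (Se/Sy, R^2):", se_sy_and_r2(measured, poor))
```

A lower Se/Sy together with a higher R² is exactly the criterion by which the original Witczak model wins in the abstract.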
Procedia PDF Downloads 48
15908 Efficiency of Secondary Schools by ICT Intervention in Sylhet Division of Bangladesh
Authors: Azizul Baten, Kamrul Hossain, Abdullah-Al-Zabir
Abstract:
The objective of this study is to develop an appropriate stochastic frontier model of secondary school efficiency under ICT intervention and to examine the impact of ICT challenges on secondary school efficiency in the Sylhet division of Bangladesh using stochastic frontier analysis. The Translog stochastic frontier model was found more appropriate than the Cobb-Douglas model. Based on the results of the Cobb-Douglas model, the coefficients of the number of teachers, the number of students, and teaching ability had positive effects on the level of efficiency, indicating that these are related to technical efficiency. In the inefficiency effects for both the Cobb-Douglas and Translog models, the coefficient of the ICT lab decreased secondary school inefficiency, but online classes in school were found to increase the level of inefficiency. Teachers' preference for ICT tools such as multimedia projectors contributed to decreasing secondary school inefficiency in the Sylhet division of Bangladesh. The interaction effects of the number of teachers and the number of classrooms, the number of students and the number of classrooms, the number of students and teaching ability, and the number of classrooms and teaching ability were positive and have a positive impact on secondary school efficiency. The overall mean efficiency of urban secondary schools was 84.66% for the Translog model and 83.63% for the Cobb-Douglas model; for rural secondary schools it was 80.98% for the Translog model and 81.24% for the Cobb-Douglas model. So, urban secondary schools performed better than rural secondary schools in the Sylhet division.
The Tobit model shows that the teacher-student ratio had a positive influence on secondary school efficiency, while teaching experience of 1 to 5 years or above 10 years, MPO-type schools, and the conventional teaching method had negative and significant influences. The estimated value of σ² (0.0625) was different from zero, indicating a good fit, and the positive value of γ (0.9872) indicates that 98.72 percent of the random variation in secondary school outcomes is due to inefficiency.
Keywords: efficiency, secondary schools, ICT, stochastic frontier analysis
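The Cobb-Douglas frontier idea can be illustrated with a corrected-OLS (COLS) approximation; this is a hedged stand-in for the stochastic frontier maximum-likelihood estimation actually used in the study (COLS fits a deterministic frontier, and the school data below are entirely synthetic with invented coefficients).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100  # hypothetical schools

teachers = rng.uniform(10, 50, n)
students = rng.uniform(200, 1000, n)
ineff = rng.exponential(0.1, n)   # one-sided inefficiency term

# Cobb-Douglas in logs: ln(output) = b0 + b1*ln(teachers) + b2*ln(students) - u
log_output = 0.5 + 0.4*np.log(teachers) + 0.6*np.log(students) - ineff

X = np.column_stack([np.ones(n), np.log(teachers), np.log(students)])
beta, *_ = np.linalg.lstsq(X, log_output, rcond=None)
resid = log_output - X @ beta

# COLS: shift the fitted line up to the best-performing unit, then read
# each school's technical efficiency off its distance from that frontier.
efficiency = np.exp(resid - resid.max())
print("mean technical efficiency:", round(float(efficiency.mean()), 3))
```

Unlike the study's stochastic frontier, COLS attributes all deviation to inefficiency (no noise term), which is why it is only a sketch of the efficiency-score concept.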
Procedia PDF Downloads 152
15907 Distangling Biological Noise in Cellular Images with a Focus on Explainability
Authors: Manik Sharma, Ganapathy Krishnamurthi
Abstract:
The cost of some drugs and medical treatments has risen so much in recent years that many patients are having to go without. One of the more surprising reasons behind the cost is how long it takes to bring new treatments to market: despite improvements in technology and science, research and development continue to lag, and finding a new treatment takes, on average, more than 10 years and costs hundreds of millions of dollars. A classification model could make researchers more efficient and, if successful, could dramatically improve the industry's ability to model cellular images according to their relevant biology, in turn greatly decreasing the cost of treatments and ensuring these treatments get to patients faster. This work aims at solving a part of this problem by creating a cellular image classification model which can decipher genetic perturbations in cells (occurring naturally or artificially). Another interesting question addressed is what makes the deep-learning model decide in a particular fashion, which can further help in demystifying the mechanism of action of certain perturbations and paves a way towards the explainability of the deep-learning model.
Keywords: cellular images, genetic perturbations, deep-learning, explainability
Procedia PDF Downloads 112
15906 Cognitive Model of Analogy Based on Operation of the Brain Cells: Glial, Axons and Neurons
Authors: Ozgu Hafizoglu
Abstract:
Analogy is an essential tool of human cognition that enables connecting diffuse and diverse systems with attributional, deep structural, causal relations that are essential to learning, to innovation in artificial worlds, and to discovery in science. The Cognitive Model of Analogy (CMA) leads and creates information-pattern transfer within and between domains and disciplines in science. This paper demonstrates the CMA as an evolutionary approach to scientific research. The model puts forward the challenges of deep uncertainty about the future, emphasizing the need for flexibility of the system in order to enable the reasoning methodology to adapt to changing conditions. In this paper, the model of analogical reasoning is created based on brain cells, their fractal and operational forms within the system itself. Visualization techniques are used to show correspondences. The distinct phases of the problem-solving process are divided thus: encoding, mapping, inference, and response. The system is shown to be relevant to brain activation in each of these phases, with an emphasis on achieving a better visualization of the brain cells (glial cells, axons, axon terminals, and neurons) relative to the matching conditions of analogical reasoning and relational information. It is found that the encoding, mapping, inference, and response processes in four-term analogical reasoning correspond with the fractal and operational forms of brain cells: glial cells, axons, and neurons.
Keywords: analogy, analogical reasoning, cognitive model, brain and glials
Procedia PDF Downloads 185
15905 Aeroelastic Stability Analysis in Turbomachinery Using Reduced Order Aeroelastic Model Tool
Authors: Chandra Shekhar Prasad, Ludek Pesek Prasad
Abstract:
In present-day aero engines, turboprop propellers, and gas or steam turbines, fan blades and low-pressure blades are getting bigger and lighter and thus become more flexible. Therefore, flutter, forced blade response, and vibration-related failure of high-aspect-ratio blades are of main concern for designers and need to be addressed properly in order to achieve a successful component design. At the preliminary design stage, a large number of design iterations is needed to achieve a flutter-free, safe design. Most numerical methods used for aeroelastic analysis are field-based, such as the finite difference method, finite element method, finite volume method, or coupled schemes. These numerical schemes solve the coupled fluid-flow and structural equations based on the full Navier-Stokes (NS) equations along with the equations of structural mechanics. Such schemes provide very accurate results if modeled properly; however, they are computationally very expensive and need large computing resources along with considerable personal expertise. Therefore, they are not the first choice for aeroelastic analysis during the preliminary design phase, where a reduced-order aeroelastic model (ROAM) with acceptable accuracy and fast execution is more in demand. Similar ROAMs are being used by other researchers for aeroelastic and forced-response analysis of turbomachinery. In the present paper, a new medium-fidelity ROAM is successfully developed and implemented in a numerical tool to simulate aeroelastic stability phenomena in turbomachinery as well as flexible wings. A hybrid flow solver based on a 3D viscous-inviscid coupled panel method (PM) and a 3D discrete vortex particle method (DVM) is developed, with viscous parameters estimated using a boundary layer (BL) approach. This method can simulate flow separation and is a good compromise between accuracy and speed compared to CFD.
In the second phase of the research work, the flow solver (PM) will be coupled with a ROM non-linear beam element method (BEM) based FEM structural solver (with multibody capabilities) to perform the complete aeroelastic simulation of steam turbine bladed disks, propellers, fan blades, aircraft wings, etc. A partitioned coupling approach is used for fluid-structure interaction (FSI). The numerical results are compared with experimental data for different test cases; for the blade cascade test case, experimental data are obtained from in-house lab experiments at IT CAS. Furthermore, the results from the new aeroelastic model will be compared with classical CFD-CSD based aeroelastic models. The proposed methodology for the aeroelastic stability analysis of gas or steam turbine blades, propellers, or fan blades will provide researchers and engineers a fast, cost-effective, and efficient tool for aeroelastic (classical flutter) analysis of different designs at the preliminary design stage, where large numbers of design iterations are required in a short time frame.
Keywords: aeroelasticity, beam element method (BEM), discrete vortex particle method (DVM), classical flutter, fluid-structure interaction (FSI), panel method, reduced-order aeroelastic model (ROAM), turbomachinery, viscous-inviscid coupling
Procedia PDF Downloads 266
15904 Obsession of Time and the New Musical Ontologies. The Concert for Saxophone, Daniel Kientzy and Orchestra by Myriam Marbe
Authors: Dutica Luminita
Abstract:
For the composer Myriam Marbe, musical time and memory represent two complementary phenomena with a conclusive impact on the settlement of new musical ontologies. Summarizing the most important achievements of contemporary techniques of composition, her vision of the microform presented in the Concert for Daniel Kientzy, saxophone and orchestra transcends linear and unidirectional time in favour of a flexible, multi-vectorial discourse with spiral developments, where the sound substance is auto(re)generated by analogy with the fundamental processes of memory. The conceptual model is of an archetypal essence, the composer being concerned with identifying the mechanisms of the creative process, especially those specific to collective creation (of oral tradition). Hence the spontaneity of expression, the tint of improvisation, free rhythm, micro-interval intonation, and a coloristic-timbral universe dominated by multiphonics and unique sound effects; hence the atmosphere of ritual, purged, however, of its primary connotations and reprojected into a wonderful spectacular space. The Concert is a work of artistic maturity that commands respect through, among other things, the timbral diversity of the three species of saxophone required by the composer (baritone, sopranino and alto); in Part III Daniel Kientzy performs on two saxophones concomitantly. Myriam Marbe's score contains deeply spiritualized music, full of archetypal symbols, music whose drama suggests a real cinematographic movement.
Keywords: archetype, chronogenesis, concert, multiphonics
Procedia PDF Downloads 543
15903 Uncertainty in Risk Modeling
Authors: Mueller Jann, Hoffmann Christian Hugo
Abstract:
Conventional quantitative risk management in banking is a risk factor of its own, because it rests on assumptions, such as independence and availability of data, which do not hold when rare events of extreme consequences are involved. There is a growing recognition of the need for alternative risk measures that do not make these assumptions. We propose a novel method for modeling the risk associated with investment products, in particular derivatives, by using a formal language for specifying financial contracts. Expressions in this language are interpreted in the category of values annotated with (a formal representation of) uncertainty. The choice of uncertainty formalism thus becomes a parameter of the model, so it can be adapted to the particular application and is not constrained to classical probabilities. We demonstrate our approach using a simple logic-based uncertainty model and a case study in which we assess the risk of counterparty default in a portfolio of collateralized loans.
Keywords: risk model, uncertainty monad, derivatives, contract algebra
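The idea of interpreting contract values in a type that carries uncertainty can be sketched minimally as below. The interval annotation, the `Interval` and `collateralized_loan` names, and the loan figures are all our own illustrative assumptions; the paper's abstract formalism is parameterized and is not limited to intervals.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    """A toy uncertainty annotation: worst-case / best-case value bounds."""
    lo: float
    hi: float
    def __add__(self, other):
        # Combining two uncertain values combines their bounds
        return Interval(self.lo + other.lo, self.hi + other.hi)

def collateralized_loan(principal, collateral_value):
    """Value of one loan under counterparty-default uncertainty.

    Worst case: default, and only the collateral (capped at the principal)
    is recovered; best case: full repayment.
    """
    return Interval(min(collateral_value, principal), principal)

# Hypothetical portfolio of three collateralized loans
portfolio = (collateralized_loan(100.0, 60.0)
             + collateralized_loan(250.0, 300.0)
             + collateralized_loan(80.0, 20.0))
print(portfolio)   # worst-case vs best-case portfolio value
```

Swapping `Interval` for a distribution, a possibility measure, or a logic-based annotation changes the uncertainty formalism without touching the contract expressions, which is the parameterization the abstract describes.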
Procedia PDF Downloads 576
15902 Delay-Dependent Passivity Analysis for Neural Networks with Time-Varying Delays
Authors: H. Y. Jung, Jing Wang, J. H. Park, Hao Shen
Abstract:
This brief addresses the passivity problem for neural networks with time-varying delays. The aim is to establish a passivity condition for the considered neural networks.
Keywords: neural networks, passivity analysis, time-varying delays, linear matrix inequality
Procedia PDF Downloads 570
15901 Comparison Analysis of CFD Turbulence Fluid Numerical Study for Quick Coupling
Authors: JoonHo Lee, KyoJin An, JunSu Kim, Young-Chul Park
Abstract:
This study focuses on predicting the fluid flow characteristics and performance of a non-split quick coupling for flow control in hydraulic system equipment for the aerospace sector through a CFD numerical study. Turbulence models were considered for the application of computational fluid dynamics to the CFD model of the non-split quick coupling, and the adequacy of the CFD model was verified by comparison with standard values. Based on this analysis, the fluid flow characteristics can be predicted accurately; the resulting design of the fluid flow characteristics therefore contributes to the reliability of the quick coupling required in industry.
Keywords: CFD, FEM, quick coupling, turbulence
Procedia PDF Downloads 384
15900 The Imminent Other in Anna Deavere Smith’s Performance
Authors: Joy Shihyi Huang
Abstract:
This paper discusses the concept of community in Anna Deavere Smith’s performance, one that challenges and explores existing notions of justice and the other. In contrast to the unwavering assumptions of essentialism that have helped to propel a discourse on moral agency within the black community, Smith employs postmodern ideas in which the theatrical attributes of doubling and repetition are conceptualized as part of what Marvin Carlson coined a ‘memory machine.’ Her dismissal of the need for linear time, such as that regulated by Aristotle’s Poetics and its concomitant ethics, values, and emotions as a primary ontological and epistemological construct produced by the existing African American historiography, demonstrates an urgency to produce an alternative communal self to override metanarratives in which African Americans’ lives are contained and sublated by specific historical confines. Drawing on Emmanuel Levinas’ theories in ethics, specifically his notions of ‘proximity’ and ‘the third,’ the paper argues that Smith enacts a new model of ethics by launching an acting method that eliminates the boundary between self and other. Defying psychological realism, Smith conceptualizes an approach to acting that surpasses the mere mimetic value of invoking a ‘likeness’ of an actor to a character, which as such resembles the mere attribution of various racial or sexual attributes in identity politics. Such acting, she contends, reduces the other to a representation of, at best, an ultimate rendering of me/my experience. She instead appreciates ‘unlikeness,’ recognizing the unavoidable actor/character gap as a power that humbles the self, whose irreversible journey to the other carves out its own image.
Keywords: Anna Deavere Smith, Emmanuel Levinas, other, performance
Procedia PDF Downloads 155
15899 Deepfake Detection for Compressed Media
Authors: Sushil Kumar Gupta, Atharva Joshi, Ayush Sonawale, Sachin Naik, Rajshree Khande
Abstract:
The use of artificially created videos and audio produced by deep learning is a major problem in the current media landscape, as it serves the goals of misinformation and distrust. The objective of this work is therefore to build a reliable deepfake detection model using deep learning that helps detect forged videos accurately. In this work, CelebDF v1, one of the largest deepfake benchmark datasets in the literature, is adopted to train and test the proposed models. The data include authentic and synthetic videos of high quality, allowing an assessment of the model’s performance against realistic distortions.
Keywords: deepfake detection, CelebDF v1, convolutional neural network (CNN), xception model, data augmentation, media manipulation
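Detection pipelines of this kind typically score individual frames with a CNN (the abstract's keywords suggest an Xception-style backbone) and then combine the frame scores into one video-level decision. The minimal sketch below shows only that aggregation step; the function name, averaging rule, and threshold are illustrative assumptions, not details from the paper:

```python
def video_fake_score(frame_probs, threshold=0.5):
    """Aggregate per-frame 'fake' probabilities (e.g. from an Xception-style
    CNN classifier) into one video-level score by simple averaging, then
    compare the mean against a decision threshold."""
    if not frame_probs:
        raise ValueError("need at least one frame probability")
    score = sum(frame_probs) / len(frame_probs)
    return score, score >= threshold

# Hypothetical per-frame CNN outputs for one compressed video clip.
score, is_fake = video_fake_score([0.91, 0.84, 0.77, 0.95])
```

Averaging is the simplest choice; max-pooling or majority voting over frames are common alternatives when compression artifacts make individual frames unreliable.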
Procedia PDF Downloads 10
15898 Designing a Model for Preparing Reports on the Automatic Earned Value Management Progress by the Integration of Primavera P6, SQL Database, and Power BI: A Case Study of a Six-Storey Concrete Building in Mashhad, Iran
Authors: Hamed Zolfaghari, Mojtaba Kord
Abstract:
Project planners and controllers frequently face the challenge of inadequate software for preparing automatic project progress reports based on actual project information updates. They usually build dashboards in Microsoft Excel, which is local and not accessible online, is not linked to planning software such as Microsoft Project, and lacks the database required for data storage. This study proposes a model for preparing automatic online project progress reports, based on actual project information updates, by integrating Primavera P6, an SQL database, and Power BI for a construction project. The designed model enables project planners and controllers to prepare project reports automatically, immediately after updating the project schedule with actual information. To develop the model, the data were entered into P6, and the information was stored in the SQL database. The proposed model can prepare a wide range of reports, such as earned value management, HR, financial, physical, and risk reports, automatically in the Power BI application. Furthermore, the reports can be published and shared online.
Keywords: Primavera P6, SQL, Power BI, EVM, integration management
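The earned value management indicators that such a dashboard would report are computed from three inputs: planned value (PV), earned value (EV), and actual cost (AC). The sketch below shows the standard formulas; the monetary figures are hypothetical, and in the described pipeline these values would come from the SQL database rather than literals:

```python
def evm_metrics(pv, ev, ac):
    """Core earned value management indicators from planned value (PV),
    earned value (EV), and actual cost (AC)."""
    return {
        "SV": ev - pv,    # schedule variance: negative means behind schedule
        "CV": ev - ac,    # cost variance: negative means over budget
        "SPI": ev / pv,   # schedule performance index (< 1 is behind)
        "CPI": ev / ac,   # cost performance index (< 1 is over budget)
    }

# Hypothetical snapshot of a project update, currency units arbitrary.
m = evm_metrics(pv=120_000, ev=100_000, ac=110_000)
```

Power BI would then simply visualize these derived columns, which is why storing PV/EV/AC per update period in the database is enough to drive the whole report.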
Procedia PDF Downloads 108
15897 Artificial Neural Network Based Parameter Prediction of Miniaturized Solid Rocket Motor
Authors: Hao Yan, Xiaobing Zhang
Abstract:
The working mechanism of miniaturized solid rocket motors (SRMs) is not yet fully understood. It is imperative to explore its unique features. However, there are many disadvantages to using common multi-objective evolutionary algorithms (MOEAs) in predicting the parameters of the miniaturized SRM during its conceptual design phase. Initially, the design variables and objectives are constrained in a lumped parameter model (LPM) of this SRM, which leads to local optima in MOEAs. In addition, MOEAs require a large number of calculations due to their population strategy. Although the calculation time for simulating an LPM just once is usually less than that of a CFD simulation, the number of function evaluations (NFEs) is usually large in MOEAs, which makes the total time cost unacceptably long. Moreover, the accuracy of the LPM is relatively low compared to that of a CFD model due to its assumptions. CFD simulations or experiments are required for comparison and verification of the optimal results obtained by MOEAs with an LPM. The conceptual design phase based on MOEAs is a lengthy process, and its results are not precise enough due to the above shortcomings. An artificial neural network (ANN) based parameter prediction is proposed as a way to reduce time costs and improve prediction accuracy. In this method, an ANN is used to build a surrogate model that is trained with a 3D numerical simulation. In design, the original LPM is replaced by a surrogate model. Each case uses the same MOEAs, in which the calculation time of the two models is compared, and their optimization results are compared with 3D simulation results. Using the surrogate model for the parameter prediction process of the miniaturized SRMs results in a significant increase in computational efficiency and an improvement in prediction accuracy. Thus, the ANN-based surrogate model does provide faster and more accurate parameter prediction for an initial design scheme. 
Moreover, even when the MOEAs converge to local optima, the time cost of the ANN-based surrogate model is much lower than that of the simplified physical model LPM. This means that designers can save a lot of time during code debugging and parameter tuning in a complex design process. Designers can reduce repeated calculation costs and obtain accurate optimal solutions by combining an ANN-based surrogate model with MOEAs.
Keywords: artificial neural network, solid rocket motor, multi-objective evolutionary algorithm, surrogate model
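The core idea above is that an expensive simulation is queried only to build a training set, after which a cheap surrogate answers the optimizer's many function evaluations. As a minimal stand-in for the trained ANN, the sketch below uses inverse-distance weighting over precomputed samples; this is not the paper's network, just the simplest surrogate that illustrates the substitution:

```python
def idw_surrogate(samples, query, power=2.0):
    """Inverse-distance-weighted surrogate: predict the objective at `query`
    from precomputed (design_point, objective) simulation samples. An ANN
    trained on the same samples would play this role in the actual method."""
    num = den = 0.0
    for x, y in samples:
        d2 = sum((a - b) ** 2 for a, b in zip(x, query))
        if d2 == 0.0:
            return y                      # query coincides with a sample
        w = 1.0 / d2 ** (power / 2.0)
        num += w * y
        den += w
    return num / den

# Two hypothetical 1-D simulation results; the surrogate interpolates between.
samples = [((0.0,), 1.0), ((1.0,), 3.0)]
mid = idw_surrogate(samples, (0.5,))   # midpoint -> 2.0
```

Inside an MOEA loop, each of the thousands of evaluations would call this function instead of the 3D simulation, which is where the reported time savings come from.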
Procedia PDF Downloads 90
15896 Examination of State of Repair of Buildings in Private Housing Estates in Enugu Metropolis, Enugu State Nigeria
Authors: Umeora Chukwunonso Obiefuna
Abstract:
The private sector in housing provision continually takes steps toward cushioning the effect of the housing shortage in Nigeria by establishing housing estates, since the government alone cannot provide housing for everyone. This research examined and reported findings on the state of repair of buildings in private housing estates in Enugu metropolis, Enugu State, Nigeria. The objectives of the study were to examine the physical condition of the building fabrics and to appraise the performance of the infrastructural services provided in the buildings. A questionnaire was used as the research instrument to elicit data from respondents. Stratified sampling of the estates based on building type was adopted as the sampling method for this study. Findings from the research show that most buildings require minor repairs to make them fit for habitation and sound enough to ensure the well-being of the residents. In addition, four of the nine independent variables investigated significantly explained residual variation in the dependent variable, the state of repair of the buildings in the study area: Average Monthly Income of Residents (AMIR), Length of Stay of the Residents in the estates (LSY), Type of Wall Finishes on the buildings (TWF), and Time Taken to Respond to Residents’ complaints by the estate managers (TTRC). With these, a linear model was established for predicting the state of repair of buildings in private housing estates in the study area, assisting in identifying the variables most informative for predicting the state of repair of the buildings.
Keywords: building, housing estate, private, repair
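The linear model described above relates predictors such as AMIR to a repair-state score by ordinary least squares. The single-predictor sketch below shows the building block of such a fit; the data points are hypothetical and the predictor pairing is illustrative, not the study's actual dataset:

```python
def fit_linear(xs, ys):
    """Ordinary least squares fit of y = a + b*x via the closed-form
    slope b = S_xy / S_xx and intercept a = mean(y) - b * mean(x)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    s_xy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    s_xx = sum((x - mx) ** 2 for x in xs)
    b = s_xy / s_xx
    return my - b * mx, b

# Hypothetical repair-state scores against a scaled income predictor (AMIR).
a, b = fit_linear([1.0, 2.0, 3.0, 4.0], [2.1, 2.9, 4.2, 4.8])
```

The multi-variable model in the abstract extends this to four predictors (AMIR, LSY, TWF, TTRC) via the matrix normal equations, but the fitting principle is the same.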
Procedia PDF Downloads 141
15895 Verification of a Simple Model for Rolling Isolation System Response
Authors: Aarthi Sridhar, Henri Gavin, Karah Kelly
Abstract:
Rolling Isolation Systems (RISs) are simple and effective means to mitigate earthquake hazards to equipment in critical and precious facilities, such as hospitals, network collocation facilities, supercomputer centers, and museums. The RIS works by isolating components from floor motion, reducing the inertial forces felt by the subsystem. The RIS consists of two platforms with counter-facing concave surfaces (dishes) in each corner. Steel balls lie inside the dishes and allow relative motion between the top and bottom platforms. Previously, a mathematical model for the dynamics of RISs was developed using Lagrange’s equations (LE) and experimentally validated. A new mathematical model was developed using Gauss’s Principle of Least Constraint (GPLC) and verified by comparing impulse response trajectories of the GPLC model and the LE model in terms of the peak displacements and accelerations of the top platform. Mathematical models for the RIS are tedious to derive because of the non-holonomic rolling constraints imposed on the system. However, using Gauss’s Principle of Least Constraint to find the equations of motion removes some of the obscurity and yields a system that can be easily extended. Though the GPLC model requires more state variables, the equations of motion are far simpler. The non-holonomic constraint is enforced in terms of accelerations and therefore requires additional constraint stabilization methods to avoid the possibility that numerical integration causes the system to become unstable. The GPLC model allows the incorporation of more physical aspects of the RIS, such as the contribution of the vertical velocity of the platform to the kinetic energy and the mass of the balls. This mathematical model for the RIS is a tool to predict the motion of the isolation platform. The ability to statistically quantify the expected responses of the RIS is critical to the implementation of earthquake hazard mitigation.
Keywords: earthquake hazard mitigation, earthquake isolation, Gauss’s Principle of Least Constraint, nonlinear dynamics, rolling isolation system
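When a constraint g(q) = 0 is enforced at the acceleration level, as in the GPLC formulation above, numerical drift in g and its rate grows unless it is actively damped. One common stabilization method (Baumgarte's, offered here as an illustrative choice, since the abstract does not name the method it uses) replaces the target g_ddot = 0 with a damped feedback law; the gains below are arbitrary:

```python
def baumgarte_accel(g, g_dot, alpha=5.0, beta=5.0):
    """Baumgarte constraint stabilization: instead of demanding
    g_ddot = 0 for a constraint g(q) = 0 (which lets integration drift
    accumulate), target g_ddot = -2*alpha*g_dot - beta**2 * g so any
    drift decays like a damped oscillator."""
    return -2.0 * alpha * g_dot - beta ** 2 * g

# A small constraint violation g = 0.01 produces a restoring acceleration.
corr = baumgarte_accel(g=0.01, g_dot=0.0)   # -0.25
```

The stabilizing term is added to the constraint equation before solving for the system accelerations at each step, so the integrator is continuously pulled back onto the constraint manifold.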
Procedia PDF Downloads 251
15894 Adequacy of Advanced Earthquake Intensity Measures for Estimation of Damage under Seismic Excitation with Arbitrary Orientation
Authors: Konstantinos G. Kostinakis, Manthos K. Papadopoulos, Asimina M. Athanatopoulou
Abstract:
An important area of research in seismic risk analysis is the evaluation of the expected seismic damage of structures under a specific earthquake ground motion. Several conventional intensity measures of ground motion have been used to estimate its damage potential to structures. Yet, none of them has proved able to adequately predict the seismic damage of any structural system. Therefore, alternative advanced intensity measures, which take into account not only ground motion characteristics but also structural information, have been proposed. The adequacy of a number of advanced earthquake intensity measures for predicting the structural damage of 3D R/C buildings under seismic excitation striking the building at an arbitrary incident angle is investigated in the present paper. To achieve this purpose, a symmetric-in-plan and an asymmetric 5-story R/C building are studied. The two buildings are subjected to 20 bidirectional earthquake ground motions. The two horizontal accelerograms of each ground motion are applied along horizontal orthogonal axes forming 72 different angles with the structural axes. The response is computed by non-linear time history analysis. The structural damage is expressed in terms of the maximum interstory drift as well as the overall structural damage index. The values of these seismic damage measures, determined for an incident angle of 0° as well as their maxima over all incident angles, are correlated with 9 structure-specific ground motion intensity measures. The research identified certain intensity measures that exhibited strong correlation with the seismic damage of the two buildings. However, their adequacy for estimating the structural damage depends on the response parameter adopted. Furthermore, it was confirmed that the widely used spectral acceleration at the fundamental period of the structure is a good indicator of the expected earthquake damage level.
Keywords: damage indices, non-linear response, seismic excitation angle, structure-specific intensity measures
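The correlation step described above, relating an intensity measure to a damage measure over a suite of records, is typically a Pearson coefficient. The sketch below shows that computation; the IM and drift values are hypothetical placeholders for the 20-record suite in the study:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between an intensity measure and a damage
    measure (e.g. maximum interstory drift) over a set of ground motions."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    s_xy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    s_xx = sum((x - mx) ** 2 for x in xs)
    s_yy = sum((y - my) ** 2 for y in ys)
    return s_xy / math.sqrt(s_xx * s_yy)

# Hypothetical spectral accelerations (g) vs. peak drifts (%) for 4 records.
r = pearson_r([0.2, 0.4, 0.6, 0.8], [0.5, 0.9, 1.6, 1.9])
```

Ranking candidate intensity measures by |r| against each response parameter is how one measure is judged a better damage indicator than another, which is exactly the comparison the paper reports for its 9 structure-specific measures.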
Procedia PDF Downloads 493
15893 Assessment of Modern RANS Models for the C3X Vane Film Cooling Prediction
Authors: Mikhail Gritskevich, Sebastian Hohenstein
Abstract:
The paper presents the results of a detailed assessment of several modern Reynolds-Averaged Navier-Stokes (RANS) turbulence models for the prediction of C3X vane film cooling at various injection regimes. Three models are considered, namely the Shear Stress Transport (SST) model, a modification of the SST model accounting for streamline curvature (SST-CC), and the Explicit Algebraic Reynolds Stress Model (EARSM). It is shown that all the considered models face a problem in predicting the adiabatic effectiveness in the vicinity of the cooling holes; however, accounting for the Reynolds stress anisotropy within the EARSM noticeably increases the solution accuracy. On the other hand, further downstream all the models provide reasonable agreement with the experimental data for the adiabatic effectiveness, and among the considered models the most accurate results are obtained with the EARSM.
Keywords: discrete holes film cooling, Reynolds Averaged Navier-Stokes (RANS), Reynolds stress tensor anisotropy, turbulent heat transfer
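The quantity being compared across the turbulence models above is the adiabatic film-cooling effectiveness, a standard non-dimensional wall-temperature ratio. The sketch below shows its definition; the temperatures are illustrative, not values from the C3X measurements:

```python
def adiabatic_effectiveness(t_aw, t_inf, t_c):
    """Film-cooling adiabatic effectiveness
    eta = (T_inf - T_aw) / (T_inf - T_c):
    1 when the adiabatic wall sits at coolant temperature (perfect film),
    0 when it sits at mainstream temperature (no cooling)."""
    return (t_inf - t_aw) / (t_inf - t_c)

# Illustrative temperatures in kelvin: hot gas 800, coolant 400, wall 600.
eta = adiabatic_effectiveness(t_aw=600.0, t_inf=800.0, t_c=400.0)
```

Both experiment and RANS prediction reduce to fields of this ratio along the vane surface, so model accuracy near the holes versus downstream can be compared point by point.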
Procedia PDF Downloads 420
15892 Effect of Acid-Basic Treatments of Lingocellulosic Material Forest Wastes Wild Carob on Ethyl Violet Dye Adsorption
Authors: Abdallah Bouguettoucha, Derradji Chebli, Tariq Yahyaoui, Hichem Attout
Abstract:
The effect of acid and base treatment of a lignocellulosic material (forest waste wild carob) on Ethyl Violet adsorption was investigated. Surface chemistry was found to play an important role in Ethyl Violet (EV) adsorption. HCl treatment produces more active acidic surface groups, such as carboxylic and lactone groups, resulting in increased adsorption of the EV dye. The adsorption efficiency was higher for the HCl-treated lignocellulosic material than for the KOH-treated one: the maximum biosorption capacities at pH 6 were 170 and 130 mg/g, respectively. It was also found that the time to reach equilibrium was less than 25 min for both treated materials. The adsorption of the basic dye (ethyl violet, or Basic Violet 4) was carried out while varying process parameters such as initial concentration, pH, and temperature. The adsorption process is well described by a pseudo-second-order reaction model, showing that boundary layer resistance was not the rate-limiting step, as confirmed by intraparticle diffusion analysis, since the linear plot of Qt versus t^0.5 did not pass through the origin. In addition, the experimental data were more accurately described by the Sips equation than by the Langmuir and Freundlich isotherms. The values of ΔG° and ΔH° confirmed that the adsorption of EV on the acid- and base-treated forest waste wild carob was spontaneous and endothermic in nature. The positive values of ΔS° suggest an increase in randomness at the treated lignocellulosic material/solution interface during the adsorption process.
Keywords: adsorption, isotherm models, thermodynamic parameters, wild carob
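The kinetic and equilibrium models named above have standard closed forms, sketched below. The functional forms are the usual ones from the adsorption literature; the rate constant, Sips parameters, and concentrations are illustrative assumptions, not the study's fitted values (except qe = 170 mg/g, the reported HCl-treated capacity):

```python
def pseudo_second_order(t, qe, k2):
    """Pseudo-second-order uptake: q_t = k2*qe^2*t / (1 + k2*qe*t),
    approaching the equilibrium capacity qe as t grows."""
    return k2 * qe ** 2 * t / (1.0 + k2 * qe * t)

def sips_isotherm(ce, qmax, ks, n):
    """Sips isotherm: q_e = qmax*(Ks*Ce)^n / (1 + (Ks*Ce)^n).
    With n = 1 it reduces to the Langmuir isotherm."""
    x = (ks * ce) ** n
    return qmax * x / (1.0 + x)

# Illustrative parameters: uptake after 25 min is already near qe = 170 mg/g,
# consistent with the fast equilibrium reported above.
qt = pseudo_second_order(t=25.0, qe=170.0, k2=0.01)
qe_pred = sips_isotherm(ce=50.0, qmax=170.0, ks=0.1, n=1.0)
```

Fitting k2 and the Sips parameters to the measured Qt-versus-t and Qe-versus-Ce data is what lets the isotherm comparison (Sips versus Langmuir and Freundlich) be made quantitatively.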
Procedia PDF Downloads 277