Search results for: equation model
10629 Hydrological Analysis for Urban Water Management
Authors: Ranjit Kumar Sahu, Ramakar Jha
Abstract:
Urban Water Management is the practice of managing freshwater, wastewater, and storm water as components of a basin-wide management plan. It builds on existing water supply and sanitation considerations within an urban settlement by incorporating urban water management within the scope of the entire river basin. The pervasive problems generated by urban development prompted the present work to study the spatial extent of urbanization in the Golden Triangle of Odisha connecting the cities of Bhubaneswar (20.2700° N, 85.8400° E), Puri (19.8106° N, 85.8314° E) and Konark (19.9000° N, 86.1200° E), and the patterns of periodic changes in urban development (systematic/random), in order to develop future plans for (i) urbanization promotion areas and (ii) urbanization control areas. Using remote sensing with USGS (U.S. Geological Survey) Landsat 8 maps, supervised classification of the urban sprawl was carried out for 1980-2014, with a focus on the period after 2000. This work presents the following: (i) time series analysis of hydrological data (groundwater and rainfall), (ii) application of SWMM (Storm Water Management Model) and other soft computing techniques for urban water management, and (iii) uncertainty analysis of model parameters (urban sprawl and correlation analysis). The outcome of the study shows drastic growth in urbanization and depletion of groundwater levels in the area, which is discussed briefly. Other related outcomes, such as a declining trend of rainfall and a rise in sand mining in the local vicinity, are also discussed.
Research on this kind of work will (i) improve water supply and consumption efficiency, (ii) upgrade drinking water quality and wastewater treatment, (iii) increase the economic efficiency of services to sustain operations and investments for water, wastewater, and storm water management, and (iv) engage communities to reflect their needs and knowledge for water management.
Keywords: Storm Water Management Model (SWMM), uncertainty analysis, urban sprawl, land use change
Procedia PDF Downloads 425
10628 Designing Sustainable Building Based on Iranian Windmills
Authors: Negar Sartipzadeh
Abstract:
Energy-conscious design, which coordinates with the Earth's ecological systems throughout a building's life cycle, has the least negative impact on the environment and the least waste of resources. Due to the increase in world population as well as the consumption of fossil fuels, which causes the production of greenhouse gases and environmental pollution, mankind is looking for renewable and sustainable energies. Iranian native construction is clear evidence of energy-aware design. Our predecessors were forced to rely on natural resources and sustainable energies, as well as the environmental considerations that have come to the fore in the recent world. One of these endless energies is wind energy. The foundations of Iranian traditional architecture are an appropriate model for solving the contemporary environmental and energy crisis. This paper is an effort to recognize and introduce the unique characteristics of Iranian architecture in the application of aerodynamic and hydraulic energies derived from the wind, which are the most common and major types of sustainable energy use in the traditional architecture of Iran. Therefore, this research attempts to offer hybrid system suggestions for the design of new constructions in a region such as Nashtifan, which has wind potential, by reviewing windmills and how they deal with sustainable energy sources, as a model of Iranian native construction.
Keywords: renewable energy, sustainable building, windmill, Iranian architecture
Procedia PDF Downloads 422
10627 Resisting Adversarial Assaults: A Model-Agnostic Autoencoder Solution
Authors: Massimo Miccoli, Luca Marangoni, Alberto Aniello Scaringi, Alessandro Marceddu, Alessandro Amicone
Abstract:
The susceptibility of deep neural networks (DNNs) to adversarial manipulation is a recognized challenge within the computer vision domain. Adversarial examples, crafted by adding subtle yet malicious alterations to benign images, exploit this vulnerability. Various defense strategies have been proposed to safeguard DNNs against such attacks, stemming from diverse research hypotheses. Building upon prior work, our approach utilizes autoencoder models. Autoencoders, a type of neural network, are trained to learn representations of training data and to reconstruct inputs from these representations, typically minimizing a reconstruction error such as the mean squared error (MSE). Our autoencoder was trained on a dataset of benign examples, learning features specific to them. Consequently, when presented with significantly perturbed adversarial examples, the autoencoder exhibited high reconstruction errors. The architecture of the autoencoder was tailored to the dimensions of the images under evaluation: we considered various image sizes, constructing models differently for 256x256 and 512x512 images. Moreover, the choice of the computer vision model is crucial, as most adversarial attacks are designed with specific AI structures in mind. To mitigate this, we proposed a method to replace image-specific dimensions with a structure independent of both the dimensions and the neural network model, thereby enhancing robustness. Our multi-modal autoencoder reconstructs the spectral representation of images across the red-green-blue (RGB) color channels. To validate our approach, we conducted experiments using diverse datasets and subjected them to adversarial attacks using models such as ResNet50 and ViT_L_16 from the torchvision library.
The features extracted by the autoencoder were used in a classification model, resulting in an MSE (RGB) of 0.014, a classification accuracy of 97.33%, and a precision of 99%.
Keywords: adversarial attacks, malicious images detector, binary classifier, multimodal transformer autoencoder
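The detection principle described above, flagging inputs whose reconstruction error is abnormally high, can be sketched independently of any particular network. This is a minimal illustration, not the authors' code: the default threshold of 0.014 simply echoes the MSE reported in the abstract, and the helper names are hypothetical.

```python
def mse(pixels, reconstruction):
    """Mean squared error between a flattened image and its reconstruction."""
    n = len(pixels)
    return sum((p - r) ** 2 for p, r in zip(pixels, reconstruction)) / n

def is_adversarial(pixels, reconstruction, threshold=0.014):
    """Flag an input as adversarial when its reconstruction error exceeds a
    threshold calibrated on benign data. The default echoes the MSE reported
    in the abstract; a deployed detector would tune it on a held-out
    benign validation set."""
    return mse(pixels, reconstruction) > threshold
```

Because the autoencoder is trained only on benign data, benign inputs reconstruct with low error while heavily perturbed inputs do not, which is what the threshold test exploits.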
Procedia PDF Downloads 112
10626 Use of FWD in Determination of Bonding Condition of Semi-Rigid Asphalt Pavement
Authors: Nonde Lushinga, Jiang Xin, Danstan Chiponde, Lawrence P. Mutale
Abstract:
In this paper, the falling weight deflectometer (FWD) was used to determine the bonding condition of a newly constructed semi-rigid base pavement. Using the Evercalc back-calculation computer programme, it was possible to quickly and accurately determine the structural condition of the pavement system from FWD test data. The bonding condition of the pavement layers was determined from shear stresses and strains (relative horizontal displacements) at the interfaces of the pavement layers, calculated with the BISAR 3.0 pavement computer programme. Thus, by using non-linear layered elastic theory, a pavement structure is analysed in the same way as other civil engineering structures. From non-destructive FWD testing, the bonding condition of the pavement layers was quantified from the soundly based principles of Goodman's constitutive model, thereby producing the shear reaction modulus (Ks), which gives an indication of the bonding state of the pavement layers. Furthermore, the tack coat failure ratio (TFR), which has long been used in pavement evaluation in the USA, was also used in the study to give validity to the results. According to research [39], the interface condition between two asphalt layers is characterised by the tack coat failure ratio (TFR), the ratio of the stiffness of the top asphalt layer to the stiffness of the second asphalt layer (E1/E2) in a slipped pavement. TFR gives an indication of the strength of the tack coat, which is the main determinant of interlayer slipping. The criterion is that a TFR greater than or equal to 1 indicates a state of full bond, while a TFR of 0 means full slip. The calculations gave a TFR value of 1.81, which re-affirmed that the pavement under study was in a state of full bond because the value was greater than 1.
It was concluded that the FWD can be used to determine the bonding condition of existing and newly constructed pavements.
Keywords: falling weight deflectometer (FWD), back-calculation, semi-rigid base pavement, shear reaction modulus
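The two interface measures the abstract relies on reduce to simple ratios, sketched below under the interpretation given in the text (tau = Ks * delta_u for Goodman's interface model, and TFR = E1/E2 with full bond at TFR >= 1). The function names and units are illustrative, not from the paper.

```python
def shear_reaction_modulus(shear_stress_kpa, relative_displacement_mm):
    """Goodman's constitutive interface model tau = Ks * delta_u,
    so Ks is back-calculated as interface shear stress over the
    relative horizontal displacement of the two layers."""
    return shear_stress_kpa / relative_displacement_mm

def tack_coat_failure_ratio(e_top, e_bottom):
    """TFR = E1 / E2: stiffness of the upper asphalt layer over the lower."""
    return e_top / e_bottom

def interface_state(tfr):
    """TFR >= 1 indicates full bond; TFR of 0 indicates full slip."""
    if tfr >= 1.0:
        return "full bond"
    if tfr == 0.0:
        return "full slip"
    return "partial bond"
```

With the stiffness ratio 1.81 reported in the abstract, `interface_state` returns "full bond", matching the study's conclusion.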
Procedia PDF Downloads 515
10625 Application of Hydrological Engineering Centre – River Analysis System (HEC-RAS) to Estuarine Hydraulics
Authors: Julia Zimmerman, Gaurav Savant
Abstract:
This study aims to evaluate the efficacy of applying the U.S. Army Corps of Engineers' River Analysis System (HEC-RAS) to modeling the hydraulics of estuaries. HEC-RAS has been broadly used for a variety of riverine applications but has not been widely applied to the study of circulation in estuaries. This report details the model development and validation of a combined 1D/2D unsteady flow hydraulic model using HEC-RAS for estuaries and their associated tidally influenced rivers. Two estuaries, Galveston Bay and Delaware Bay, were used as case studies. Galveston Bay, a bar-built, vertically mixed estuary, was modeled for the 2005 calendar year. Delaware Bay, a drowned river valley estuary, was modeled from October 22, 2019, to November 5, 2019. Water surface elevation was used to validate both models by comparing simulation results to NOAA's Center for Operational Oceanographic Products and Services (CO-OPS) gauge data. Simulations were run using the Diffusion Wave equations (DW), the Shallow Water equations with the Eulerian-Lagrangian Method (SWE-ELM), and the Shallow Water equations with the Eulerian Method (SWE-EM), and compared for both accuracy and the computational resources required. In general, the Diffusion Wave results were comparable to those of the two Shallow Water equation sets while requiring less computational power. The 1D/2D combined approach was valid for study areas within the 2D flow area, with the 1D flow serving mainly as an inflow boundary condition. Within the Delaware Bay estuary, the HEC-RAS DW model ran in 22 minutes and had an average R² value of 0.94 within the 2D mesh. The Galveston Bay HEC-RAS DW model ran in 6 hours and 47 minutes and had an average R² value of 0.83 within the 2D mesh. The longer run time and lower R² for Galveston Bay can be attributed to the longer time frame modeled and the greater complexity of the estuarine system.
The models did not accurately capture tidal effects within the 1D flow area.
Keywords: Delaware Bay, estuarine hydraulics, Galveston Bay, HEC-RAS, one-dimensional modeling, two-dimensional modeling
Procedia PDF Downloads 199
10624 A Calibration Method of Portable Coordinate Measuring Arm Using Bar Gauge with Cone Holes
Authors: Rim Chang Hyon, Song Hak Jin, Song Kwang Hyok, Jong Ki Hun
Abstract:
The calibration of the articulated arm coordinate measuring machine (AACMM) is key to improving calibration accuracy and saving calibration time. To reduce the time consumed by calibration, we should choose proper calibration gauges and develop a reasonable calibration method. In addition, we should obtain the exact optimal solution by accurately removing gross errors from the experimental data. In this paper, we present a calibration method for the portable coordinate measuring arm (PCMA) using a 1.2 m long bar gauge with cone holes. First, we determine the locations of the bar gauge and establish an optimal objective function for identifying the structural parameter errors. Next, we build a mathematical model of the calibration algorithm and present a new mathematical method to remove gross errors from the calibration data. Finally, we find the optimal solution that identifies the kinematic parameter errors by using the Levenberg-Marquardt algorithm. The experimental results show that our calibration method is very effective in saving calibration time and improving calibration accuracy.
Keywords: AACMM, kinematic model, parameter identification, measurement accuracy, calibration
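The Levenberg-Marquardt step used for the parameter identification can be sketched for a generic two-parameter least-squares problem. This is a minimal damped Gauss-Newton loop under stated assumptions, not the authors' kinematic calibration code; the straight-line test problem below is purely illustrative.

```python
def levenberg_marquardt(residual_fn, jacobian_fn, a, b, n_iter=50, lam=1e-3):
    """Minimal Levenberg-Marquardt loop for a two-parameter least-squares
    problem: solve the damped normal equations (J^T J + lam*diag) dp = -J^T r,
    accept the step only if the cost decreases, and adapt the damping lam."""
    def cost(p, q):
        return sum(r * r for r in residual_fn(p, q))
    for _ in range(n_iter):
        r = residual_fn(a, b)
        J = jacobian_fn(a, b)  # rows of (dr_i/da, dr_i/db)
        h11 = sum(ja * ja for ja, jb in J)
        h12 = sum(ja * jb for ja, jb in J)
        h22 = sum(jb * jb for ja, jb in J)
        g1 = sum(ja * ri for (ja, jb), ri in zip(J, r))
        g2 = sum(jb * ri for (ja, jb), ri in zip(J, r))
        A11, A22 = h11 * (1.0 + lam), h22 * (1.0 + lam)  # damped diagonal
        det = A11 * A22 - h12 * h12
        da = (-g1 * A22 + g2 * h12) / det  # 2x2 solve by Cramer's rule
        db = (-g2 * A11 + g1 * h12) / det
        if cost(a + da, b + db) < cost(a, b):
            a, b, lam = a + da, b + db, lam * 0.5  # accept step, relax damping
        else:
            lam *= 2.0  # reject step, increase damping
    return a, b
```

The real calibration problem has many more parameters, so the 2x2 solve would be replaced by a general linear solve, but the accept/reject damping logic is the same.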
Procedia PDF Downloads 83
10623 Exploring Syntactic and Semantic Features for Text-Based Authorship Attribution
Authors: Haiyan Wu, Ying Liu, Shaoyun Shi
Abstract:
Authorship attribution is the task of extracting features to identify the authors of anonymous documents. Many previous works on authorship attribution focus on statistical style features (e.g., sentence/word length) and content features (e.g., frequent words, n-grams). Modeling these features by regression or by transparent machine learning methods gives a portrait of an author's writing style, but these methods do not capture syntactic (e.g., dependency relationships) or semantic (e.g., topic) information. In recent years, some researchers have modeled syntactic trees or latent semantic information with neural networks. However, few works take them together. Besides, predictions by neural networks are difficult to explain, which is vital in authorship attribution tasks. In this paper, we not only utilize statistical style and content features but also take advantage of both syntactic and semantic features. Unlike an end-to-end neural model, feature selection and prediction are two separate steps in our method. An attentive n-gram network is utilized to select useful features, and logistic regression is applied to give predictions and an understandable representation of writing style. Experiments show that our extracted features can improve the state-of-the-art methods on three benchmark datasets.
Keywords: authorship attribution, attention mechanism, syntactic feature, feature extraction
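The attentive n-gram network itself is not reproducible from the abstract, but the underlying idea, profiling authors by n-gram frequencies and attributing a document to the closest profile, can be sketched as a simple frequency baseline. The toy corpus and function names below are hypothetical.

```python
import math
from collections import Counter

def char_ngrams(text, n=3):
    """Character n-gram frequency profile of a text."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(p, q):
    """Cosine similarity between two frequency profiles."""
    dot = sum(p[g] * q[g] for g in set(p) & set(q))
    norm = (math.sqrt(sum(v * v for v in p.values()))
            * math.sqrt(sum(v * v for v in q.values())))
    return dot / norm if norm else 0.0

def attribute(document, corpus):
    """Return the candidate author whose known writing is most similar
    to the anonymous document under the n-gram profile."""
    profile = char_ngrams(document)
    return max(corpus, key=lambda author: cosine(profile, char_ngrams(corpus[author])))
```

A learned model, as in the paper, would weight the n-grams (attention) and fit a logistic regression over them rather than rank by raw cosine similarity.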
Procedia PDF Downloads 136
10622 Using 3D Satellite Imagery to Generate a High Precision Canopy Height Model
Authors: M. Varin, A. M. Dubois, R. Gadbois-Langevin, B. Chalghaf
Abstract:
Good knowledge of the physical environment is essential for integrated forest planning. This information enables better forecasting of operating costs, determination of cutting volumes, and preservation of ecologically sensitive areas. The use of satellite images in stereoscopic pairs gives the capacity to generate high precision 3D models, which are scale-adapted for harvesting operations. These models could represent an alternative to 3D LiDAR data, thanks to their advantageous cost of acquisition. The objective of the study was to assess the quality of stereo-derived canopy height models (CHM) in comparison to a traditional LiDAR CHM and ground tree-height samples. Two study sites harboring two different forest stand types (broadleaf and conifer) were analyzed using stereo pairs and tri-stereo images from the WorldView-3 satellite to calculate CHM. Acquisition of multispectral images from an unmanned aerial vehicle (UAV) was also carried out on a smaller part of the broadleaf study site. Different algorithms using two software packages (PCI Geomatica and Correlator3D) with various spatial resolutions and band selections were tested to select the 3D modeling technique that offered the best performance compared with LiDAR. In the conifer study site, the CHM produced with Correlator3D using only the 50-cm resolution panchromatic band was the one with the smallest root-mean-square error (RMSE: 1.31 m). In the broadleaf study site, the tri-stereo model provided slightly better performance, with an RMSE of 1.2 m. The tri-stereo model was also compared to the UAV, which resulted in an RMSE of 1.3 m. At the individual tree level, when ground samples were compared to the satellite, LiDAR, and UAV CHMs, RMSEs were 2.8, 2.0, and 2.0 m, respectively.
Advanced analysis was done for all of these cases, and it was noted that RMSE is reduced when the canopy cover is higher, when shadows and slopes are smaller, and when clouds are distant from the analyzed site.
Keywords: very high spatial resolution, satellite imagery, WorldView-3, canopy height models, CHM, LiDAR, unmanned aerial vehicle, UAV
Procedia PDF Downloads 126
10621 Mathematical Modeling of the Fouling Phenomenon in Ultrafiltration of Latex Effluent
Authors: Amira Abdelrasoul, Huu Doan, Ali Lohi
Abstract:
An efficient and well-planned ultrafiltration process is becoming a necessity for monetary returns in industrial settings. The aim of the present study was to develop a mathematical model for accurate prediction of ultrafiltration membrane fouling by latex effluent, applied to homogeneous and heterogeneous membranes with uniform and non-uniform pore sizes, respectively. Models were also developed for accurate prediction of power consumption at large scale. The models incorporated the fouling attachments as well as the chemical and physical factors in membrane fouling for accurate prediction and scale-up application. Polycarbonate and polysulfone flat membranes, with a pore size of 0.05 µm and a molecular weight cut-off of 60,000, respectively, were used under a constant feed flow rate and a cross-flow mode in ultrafiltration of the simulated paint effluent. Furthermore, hydrophilic Ultrafilic and hydrophobic PVDF membranes with a MWCO of 100,000 were used to test the reliability of the models. Monodisperse particles of 50 nm and 100 nm in diameter, and a latex effluent with a wide range of particle size distributions, were utilized to validate the models. The aggregation and the sphericity of the particles had a significant effect on membrane fouling.
Keywords: membrane fouling, mathematical modeling, power consumption, attachments, ultrafiltration
Procedia PDF Downloads 470
10620 Enhanced COVID-19 Pharmaceuticals and Microplastics Removal from Wastewater Using Hybrid Reactor System
Authors: Reda Dzingelevičienė, Vytautas Abromaitis, Nerijus Dzingelevičius, Kęstutis Baranauskis, Saulius Raugelė, Malgorzata Mlynska-Szultka, Sergej Suzdalev, Reza Pashaei, Sajjad Abbasi, Boguslaw Buszewski
Abstract:
A unique hybrid technology was developed for the removal of COVID-19-specific contaminants from wastewater. Reactor testing was performed using model water samples contaminated with COVID-19 pharmaceuticals and microplastics. Different hydraulic retention times, pollutant concentrations, and dissolved ozone concentrations were tested. Liquid chromatography-mass spectrometry, solid-phase extraction, and surface area and porosity analysis were used to monitor the treatment efficiency and the remaining sorption capacity of the spent adsorbent. The combination of advanced oxidation and adsorption processes was found to be the most effective, with removal efficiencies from the model wastewater of 90-99% for molnupiravir and 89-95% for microplastics. The research received funding from the European Regional Development Fund (project No 13.1.1-LMT-K-718-05-0014) under a grant agreement with the Research Council of Lithuania (LMTLT), and it was funded as part of the European Union's measures in response to the COVID-19 pandemic.
Keywords: adsorption, hybrid reactor system, pharmaceuticals-microplastics, wastewater
Procedia PDF Downloads 85
10619 Nonstationary Modeling of Extreme Precipitation in the Wei River Basin, China
Authors: Yiyuan Tao
Abstract:
Under the impact of global warming and the intensification of human activities, hydrological regimes may be altered, and the traditional stationarity assumption is no longer satisfied. However, most current design standards for water infrastructure are still based on the hypothesis of stationarity, which may inevitably result in severe biases. Many critical impacts of climate on ecosystems, society, and the economy are controlled by extreme events rather than mean values. Therefore, it is of great significance to identify the non-stationarity of precipitation extremes and to model precipitation extremes in a nonstationary framework. The Wei River Basin (WRB), located in a continental monsoon climate zone in China, is selected as a case study. Six extreme precipitation indices were employed to investigate the changing patterns and stationarity of precipitation extremes in the WRB. To identify whether precipitation extremes are stationary, the Mann-Kendall trend test and the Pettitt test, which examines the occurrence of abrupt changes, are adopted in this study. The extreme precipitation index series are fitted with non-stationary distributions selected from six widely used distribution functions (Gumbel, lognormal, Weibull, gamma, generalized gamma, and exponential) by means of the time-varying moments model, generalized additive models for location, scale and shape (GAMLSS), in which the distribution parameters are defined as functions of time.
The results indicate that: (1) trends were not significant for the whole WRB, but significant positive/negative trends were still observed at some stations; abrupt changes in consecutive wet days (CWD) mainly occurred in 1985, and the assumption of stationarity is invalid for some stations; (2) for the nonstationary extreme precipitation index series with significant positive/negative trends, the GAMLSS models capture the temporal variations of the indices well and perform better than the stationary model. Finally, the differences between the quantiles of the nonstationary and stationary models are analyzed, which highlights the importance of nonstationary modeling of precipitation extremes in the WRB.
Keywords: extreme precipitation, GAMLSS, non-stationary, Wei River Basin
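The Mann-Kendall trend test applied to the index series has a compact closed form. A minimal sketch follows; it omits the tie correction and the serial-correlation adjustment that a full hydrological analysis would need.

```python
import math

def mann_kendall(series):
    """Mann-Kendall trend test: the S statistic counts concordant minus
    discordant pairs; Z uses the normal approximation with the standard
    continuity correction. No tie correction is applied."""
    n = len(series)
    s = sum((series[j] > series[i]) - (series[j] < series[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z  # |z| > 1.96 indicates a significant trend at the 5% level
```

For an annual index series, a |Z| above 1.96 would mark the station as nonstationary in the sense used by the abstract, motivating the time-varying GAMLSS fit.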
Procedia PDF Downloads 124
10618 In silico Repopulation Model of Various Tumour Cells during Treatment Breaks in Head and Neck Cancer Radiotherapy
Authors: Loredana G. Marcu, David Marcu, Sanda M. Filip
Abstract:
Advanced head and neck cancers are aggressive tumours which require aggressive treatment. Treatment efficiency is often hindered by cancer cell repopulation during radiotherapy, which is due to various mechanisms triggered by the loss of tumour cells and involves both stem and differentiated cells. The aim of the current paper is to present in silico simulations of radiotherapy schedules on a virtual head and neck tumour grown with biologically realistic kinetic parameters. Using the linear quadratic formalism of cell survival after radiotherapy, altered fractionation schedules employing various treatment breaks for normal tissue recovery are simulated, and repopulation mechanisms are implemented in order to evaluate the impact of the various cancer cell contributions on tumour behaviour during irradiation. The model has shown that the timing of treatment breaks is an important factor influencing tumour control in rapidly proliferating tissues such as squamous cell carcinomas of the head and neck. Furthermore, not only stem cells but also differentiated cells, via the mechanism of abortive division, can contribute to malignant cell repopulation during treatment.
Keywords: radiation, tumour repopulation, squamous cell carcinoma, stem cell
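The linear quadratic formalism the simulations are built on reduces, for a single fraction of dose d, to a surviving fraction SF = exp(-(alpha*d + beta*d^2)). The sketch below uses a generic alpha/beta of 10 Gy, a common textbook assumption for head and neck squamous carcinoma rather than a value quoted in the abstract, and it deliberately omits the repopulation term that the paper adds on top of this formalism.

```python
import math

def lq_surviving_fraction(dose_gy, alpha=0.3, beta=0.03):
    """Linear-quadratic cell survival after a single dose d:
    SF = exp(-(alpha*d + beta*d^2)). alpha = 0.3 /Gy and beta = 0.03 /Gy^2
    (alpha/beta = 10 Gy) are illustrative textbook values, not the paper's."""
    return math.exp(-(alpha * dose_gy + beta * dose_gy ** 2))

def survival_after_fractions(n_fractions, dose_per_fraction_gy, **kw):
    """Fractionated schedule: survival multiplies across fractions,
    assuming full repair between fractions and no repopulation."""
    return lq_surviving_fraction(dose_per_fraction_gy, **kw) ** n_fractions
```

Altered fractionation schedules with treatment breaks would modify the second function: repopulation during a break multiplies the surviving cell number back up, which is exactly the effect the paper simulates.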
Procedia PDF Downloads 267
10617 Comparison between the Efficiency of Heterojunction Thin Film InGaP/GaAs/Ge and InGaP/GaAs Solar Cell
Authors: F. Djaafar, B. Hadri, G. Bachir
Abstract:
This paper presents the design parameters for a thin film 3J InGaP/GaAs/Ge solar cell with a simulated maximum efficiency of 32.11% using TCAD Silvaco. Design parameters include the doping concentration, molar fraction, layer thicknesses, and tunnel junction characteristics. An initial dual junction InGaP/GaAs model of a previously published heterojunction cell was simulated in TCAD Silvaco to accurately predict solar cell performance. To improve the solar cell's performance, we fixed the meshing, material properties, models, and numerical methods, while the thickness and layer doping concentration were taken as variables. We first simulated the InGaP/GaAs dual junction cell, changing the doping concentrations and thicknesses, which showed an increase in efficiency. Next, a triple junction InGaP/GaAs/Ge cell was modeled by adding a Ge layer to the previous dual junction InGaP/GaAs model with an InGaP/GaAs tunnel junction.
Keywords: heterojunction, modeling, simulation, thin film, TCAD Silvaco
Procedia PDF Downloads 369
10616 A Probabilistic Theory of the Buy-Low and Sell-High for Algorithmic Trading
Authors: Peter Shi
Abstract:
Algorithmic trading is a rapidly expanding domain within quantitative finance, constituting a substantial portion of trading volumes in the US financial market. The demand for rigorous and robust mathematical theories underpinning these trading algorithms is ever-growing. In this study, the author establishes a new stock market model that integrates the Efficient Market Hypothesis and statistical arbitrage. The model, for the first time, finds probabilistic relations between the rational price and the market price in terms of the conditional expectation. The theory consequently leads to a mathematical justification of the old market adage: buy low and sell high. The thresholds for "low" and "high" are precisely derived using a max-min operation on Bayes' error. This explicit connection harmonizes the Efficient Market Hypothesis and statistical arbitrage, demonstrating their compatibility in explaining market dynamics. The amalgamation represents a pioneering contribution to quantitative finance. The study culminates in comprehensive numerical tests using historical market data, affirming that the buy-low and sell-high algorithm derived from this theory significantly outperforms the general market over the long term in four out of six distinct market environments.
Keywords: efficient market hypothesis, behavioral finance, Bayes' decision, algorithmic trading, risk control, stock market
Procedia PDF Downloads 72
10615 Least Squares Solution for Linear Quadratic Gaussian Problem with Stochastic Approximation Approach
Authors: Sie Long Kek, Wah June Leong, Kok Lay Teo
Abstract:
The linear quadratic Gaussian model is a standard mathematical model for the stochastic optimal control problem. The combination of linear quadratic estimation and the linear quadratic regulator allows the state estimation and the optimal control policy to be designed separately; this is known as the separation principle. In this paper, an efficient computational method is proposed to solve the linear quadratic Gaussian problem. In our approach, the Hamiltonian function is defined, and the necessary conditions are derived. In addition, the output error is defined, and the least-squares optimization problem is introduced. By determining the first-order necessary condition, the gradient of the sum of squares of the output error is established. From this point of view, the stochastic approximation approach is employed to update the optimal control policy. Within a given tolerance, the iteration procedure is stopped and the optimal solution of the linear quadratic Gaussian problem is obtained. For illustration, an example of the linear quadratic Gaussian problem is studied. The results show the efficiency of the proposed approach. In conclusion, the applicability of the proposed approach for solving the linear quadratic Gaussian problem is clearly demonstrated.
Keywords: iteration procedure, least squares solution, linear quadratic Gaussian, output error, stochastic approximation
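The stochastic approximation update at the heart of the method, moving the parameters against a noisy gradient estimate with a decaying step size, can be sketched in a scalar toy setting. The quadratic cost and step-size schedule below are illustrative, not the paper's LQG formulation.

```python
import random

def stochastic_approximation(noisy_grad, theta, n_iter=2000, a=0.5):
    """Robbins-Monro iteration theta_{k+1} = theta_k - a_k * g_k, where g_k
    is a noisy gradient estimate and the steps a_k = a/(k+1) satisfy the
    classical conditions sum(a_k) = inf and sum(a_k^2) < inf."""
    for k in range(n_iter):
        theta -= (a / (k + 1)) * noisy_grad(theta)
    return theta

# Noisy gradient of the illustrative quadratic cost 0.5 * (theta - 3)^2;
# the Gaussian noise stands in for the stochastic output error.
rng = random.Random(0)
def noisy_quadratic_grad(theta):
    return (theta - 3.0) + rng.gauss(0.0, 0.1)
```

In the paper's setting, theta would be the control policy parameters and the noisy gradient would come from the first-order necessary condition on the sum of squares of the output error.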
Procedia PDF Downloads 187
10614 Multi-Robotic Partial Disassembly Line Balancing with Robotic Efficiency Difference via HNSGA-II
Authors: Tao Yin, Zeqiang Zhang, Wei Liang, Yanqing Zeng, Yu Zhang
Abstract:
To accelerate the remanufacturing of electronic waste products, this study designs a partial disassembly line with multi-robotic stations to effectively dispose of excess waste. The multi-robotic partial disassembly line is a technical upgrade of the existing manual disassembly line. Balancing optimization can make the disassembly line smoother and more efficient. For partial disassembly line balancing with multi-robotic stations (PDLBMRS), a mixed-integer programming model (MIPM) considering robotic efficiency differences is established to minimize cycle time, energy consumption, and hazard index, and to calculate their optimal global values. Besides, an enhanced NSGA-II algorithm (HNSGA-II) is proposed to optimize PDLBMRS efficiently. Finally, MIPM and HNSGA-II are applied to an actual mixed disassembly case of two types of computers; the comparison of the results obtained by GUROBI and HNSGA-II verifies the correctness of the model and the excellent performance of the algorithm, and the obtained Pareto solution set provides multiple options for decision-makers.
Keywords: waste disposal, disassembly line balancing, multi-robot station, robotic efficiency difference, HNSGA-II
Procedia PDF Downloads 237
10613 Modelling of Organic Rankine Cycle for Waste Heat Recovery Process in Supercritical Condition
Authors: Jahedul Islam Chowdhury, Bao Kha Nguyen, David Thornhill, Roy Douglas, Stephen Glover
Abstract:
The Organic Rankine Cycle (ORC) is the most commonly used method for recovering energy from small heat sources. The investigation of the ORC in supercritical conditions is a new research area, as it has the potential to generate high power and thermal efficiency in a waste heat recovery system. This paper presents a steady state ORC model in supercritical conditions and its simulation with a real engine's exhaust data. The key component of the ORC, the evaporator, is modelled using the finite volume method; the modelling of all other components of the waste heat recovery system, such as the pump, expander, and condenser, is also presented. The aim of this paper is to investigate the effects of mass flow rate and evaporator outlet temperature on the efficiency of the waste heat recovery process. Additionally, the necessity of maintaining an optimum evaporator outlet temperature is investigated. Simulation results show that modification of the mass flow rate is the key to changing the operating temperature at the evaporator outlet.
Keywords: Organic Rankine Cycle, supercritical condition, steady state model, waste heat recovery
Procedia PDF Downloads 405
10612 Evaluation of Reliability of Flood Control System Based on Uncertainty of Flood Discharge: Case Study of Wulan River, Central Java, Indonesia
Authors: Anik Sarminingsih, Krishna V. Pradana
Abstract:
The failure of a flood control system can be caused by various factors, such as not considering the uncertainty of the design flood, causing the capacity of the flood control system to be exceeded. The presence of the uncertainty factor is recognized as a serious issue in hydrological studies. Uncertainty in hydrological analysis is influenced by many factors, from the reading of water elevation data and rainfall data to the selection of the method of analysis. In hydrological modeling, the selection of models and parameters corresponding to the watershed conditions should be evaluated with a hydraulic model of the river as a drainage channel. River cross-section capacity is the first line of defense in assessing the reliability of the flood control system, and the reliability of river capacity describes the potential magnitude of flood risk. The case study in this research is the Wulan River in Central Java. This river floods almost every year despite flood control efforts such as levees, a floodway, and a diversion. The flood-affected areas include several sub-districts, mainly in Kabupaten Kudus and Kabupaten Demak. The first step is frequency analysis of discharge observations from the Klambu weir, which has time series data from 1951 to 2013. Frequency analysis is performed using several frequency distribution models: Gumbel, normal, log-normal, Pearson Type III, and Log-Pearson. The results of these models overlap within one standard deviation, so the maximum flood discharge for lower return periods may exceed the average discharge for larger return periods. The next step is a hydraulic analysis to evaluate the reliability of river capacity based on the flood discharges obtained from the several methods. The design flood discharge of the flood control system is selected from the method that comes closest to the bankfull capacity of the river.
Keywords: design flood, hydrological model, reliability, uncertainty, Wulan River
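Of the frequency distributions listed, the Gumbel (EV1) fit has a particularly compact closed form. A method-of-moments sketch follows; any discharge numbers used with it are illustrative, not Klambu weir records.

```python
import math

def gumbel_fit(annual_maxima):
    """Method-of-moments fit of a Gumbel (EV1) distribution:
    scale beta = sqrt(6)*std/pi, location mu = mean - 0.5772*beta
    (0.5772 is the Euler-Mascheroni constant)."""
    n = len(annual_maxima)
    mean = sum(annual_maxima) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in annual_maxima) / (n - 1))
    beta = math.sqrt(6) * std / math.pi
    mu = mean - 0.5772 * beta
    return mu, beta

def design_flood(mu, beta, return_period):
    """Quantile for return period T: x_T = mu - beta * ln(-ln(1 - 1/T))."""
    return mu - beta * math.log(-math.log(1.0 - 1.0 / return_period))
```

The overlap the abstract describes is visible here: the confidence band around a fitted quantile grows with the return period, so quantiles from different distributions can cross within one standard deviation of each other.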
Procedia PDF Downloads 294
10611 Evaluation of Settlement of Coastal Embankments Using Finite Elements Method
Authors: Sina Fadaie, Seyed Abolhassan Naeini
Abstract:
Coastal embankments play an important role in coastal structures by reducing the effect of wave forces and controlling the movement of sediments. Many coastal areas are underlain by weak and compressible soils. Estimating the settlement of coastal embankments during construction is highly important in the design and safety control of embankments and appurtenant structures. Accordingly, selecting and establishing an appropriate model with a reasonable level of complexity is one of the challenges for engineers. Although there are advanced models in the literature regarding the design of embankments, there is not enough information on the prediction of their associated settlement, particularly in coastal areas with considerable soft soils. Marine engineering study in Iran is important due to the existence of two major coastal areas located in the northern and southern parts of the country. In the present study, the validity of Terzaghi’s consolidation theory has been investigated. In addition, the settlement of these coastal embankments during construction is predicted using PLAXIS software with appropriate boundary conditions and soil layering. The results indicate that, for the existing soil condition at the site, certain parameters are important to consider in the analysis. Consequently, a model is introduced to estimate the settlement of embankments in such geotechnical conditions. Keywords: consolidation, settlement, coastal embankments, numerical methods, finite elements method
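The Terzaghi theory whose validity the abstract investigates can be sketched in a few lines (this is the classical 1-D series solution, not the paper's PLAXIS model): the average degree of consolidation U as a function of the dimensionless time factor Tv, with assumed values for the consolidation coefficient, drainage path and final settlement.

```python
# Classical Terzaghi 1-D consolidation: U(Tv) = 1 - sum 2/M^2 exp(-M^2 Tv),
# M = pi(2m+1)/2. The cv, H and s_final values are assumptions for the sketch.
import math

def degree_of_consolidation(Tv, n_terms=100):
    """Average degree of consolidation for dimensionless time factor Tv."""
    U = 1.0
    for m in range(n_terms):
        M = math.pi * (2 * m + 1) / 2.0
        U -= (2.0 / M**2) * math.exp(-M**2 * Tv)
    return U

def settlement(t_days, cv=0.01, H=5.0, s_final=0.30):
    """Settlement during construction: s(t) = U(Tv) * s_final, Tv = cv*t/H^2
    (cv in m^2/day, drainage path H in m, final settlement s_final in m)."""
    Tv = cv * t_days / H**2
    return degree_of_consolidation(Tv) * s_final
```

Comparing such closed-form estimates against the finite element prediction is one way the theory's validity can be checked for a given soil profile.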
Procedia PDF Downloads 157
10610 Determinants of Psychological Distress in Teenagers and Young Adults Affected by Cancer: A Systematic Review
Authors: Anna Bak-Klimek, Emily Spencer, Siew Lee, Karen Campbell, Wendy McInally
Abstract:
Background & Significance: Over half of Teenagers and Young Adults (TYAs) say that they experience psychological distress after a cancer diagnosis, and TYAs with cancer are at higher risk of developing distress than other age groups. Despite this, there are no age-appropriate interventions to help TYAs manage distress, and there is a lack of conceptual understanding of what causes distress in this population group, making it difficult to design a targeted, developmentally appropriate intervention. This review aims to identify the key determinants of distress in TYAs affected by cancer and to propose an integrative model of cancer-related distress for TYAs. Method: A literature search was performed in the Cochrane Database of Systematic Reviews, MEDLINE, PsycINFO, CINAHL, EMBASE and PsycArticles in May-June 2022. Quantitative literature was systematically reviewed on the relationship between the psychological distress experienced by TYAs affected by cancer and a wide range of factors, i.e. individual (demographic, psychological, developmental, and clinical) and contextual (social/environmental) factors. Evidence was synthesized and correlates were categorized using the Biopsychosocial Model. The full protocol is available from PROSPERO (CRD42022322069). Results: Thirty eligible quantitative studies met the criteria for the review: twenty-six were cross-sectional, three were longitudinal and one was a case-control study. The evidence on the relationship between socio-demographic, illness- and treatment-related factors and psychological distress is inconsistent and unclear. There is, however, consistent evidence of a link between psychological factors and psychological distress. For instance, the use of cognitive and defence coping, negative meta-cognitive beliefs, less optimism, a lack of sense of meaning and lower resilience levels were significantly associated with higher psychological distress.
Furthermore, developmental factors such as poor self-image, identity issues and perceived conflict were strongly associated with higher distress levels. Conclusions: The current review suggests that psychological and developmental factors such as ineffective coping strategies, poor self-image and identity issues may play a key role in the development of psychological distress in TYAs affected by cancer. The review proposes a Positive Developmental Psychology Model of Distress for teenagers and young adults affected by cancer, and highlights that implementing psychological interventions that foster optimism, improve resilience and address self-image may reduce distress in TYAs with cancer. Keywords: cancer, determinant, psychological distress, teenager and young adult, theoretical model
Procedia PDF Downloads 94
10609 Modelling the Long Run of Aggregate Import Demand in Libya
Authors: Said Yousif Khairi
Abstract:
For a developing economy, imports of capital goods, raw materials and manufactured goods are vital for sustainable economic growth. In 2006, Libya imported LD 8 billion (US$ 6.25 billion), composed mainly of machinery and transport equipment (49.3%), raw materials (18%), and food products and live animals (13%). This represented about 10% of GDP. It is therefore pertinent to investigate the factors affecting the volume of Libyan imports. An econometric model representing the aggregate import demand for Libya was developed and estimated using the bounds test procedure, which is based on an unrestricted error correction model (UECM). The data employed for the estimation covered 1970-2010. The results of the bounds test revealed that the volume of imports and its determinants, namely real income, the consumer price index and the exchange rate, are cointegrated. The findings indicate that in the short run the demand for imports is inelastic with respect to income and the price level, while the exchange rate variable is statistically significant. In the long run, the income elasticity is elastic while the price and exchange rate elasticities remain inelastic. This indicates that imports are important elements of Libyan economic growth in the long run. Keywords: import demand, UECM, bounds test, Libya
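The long-run relation the abstract estimates is, in essence, a log-log equation whose slopes are elasticities (|income elasticity| > 1 means income-elastic demand). A minimal sketch with synthetic data, using ordinary least squares rather than the bounds-test/UECM procedure and invented variable names, shows how such elasticities are read off:

```python
# Synthetic data: independent log-regressors, true elasticities 1.3 (income),
# -0.5 (price), -0.2 (exchange rate); OLS stands in for the bounds test here.
import numpy as np

rng = np.random.default_rng(0)
n = 41                                    # 1970-2010 gives 41 annual points
lnY = rng.normal(9.5, 0.5, n)             # log real income
lnP = rng.normal(4.4, 0.5, n)             # log consumer price index
lnE = rng.normal(0.25, 0.5, n)            # log exchange rate
lnM = 1.0 + 1.3 * lnY - 0.5 * lnP - 0.2 * lnE + rng.normal(0, 0.05, n)

# Least squares on ln M = a + b lnY + c lnP + d lnE: slopes are elasticities.
X = np.column_stack([np.ones(n), lnY, lnP, lnE])
(a, b, c, d), *_ = np.linalg.lstsq(X, lnM, rcond=None)
print(f"income ≈ {b:.2f}, price ≈ {c:.2f}, exchange rate ≈ {d:.2f}")
```

In the actual study the UECM additionally separates short-run dynamics from this long-run equilibrium; the sketch only illustrates how the long-run slopes are interpreted.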
Procedia PDF Downloads 361
10608 Experimental and Numerical Investigation on Deformation Behaviour of Single Crystal Copper
Authors: Suman Paik, P. V. Durgaprasad, Bijan K. Dutta
Abstract:
A study combining experimental and numerical investigation of the deformation behaviour of single crystals of copper is presented in this paper. Cylindrical samples were cut in specific orientations from a high-purity copper single crystal and subjected to uniaxial compression loading at a quasi-static strain rate. The stress-strain curves along two different crystallographic orientations were then extracted. In order to study and compare the deformation responses, a single crystal plasticity model incorporating non-Schmid effects was developed, assuming that cross-slip plays an important role in the orientation dependence of the material. Using the crystal plasticity finite element method, the model was applied to investigate the orientation dependence of the stress-strain behaviour of the two crystallographic orientations. Finally, details of the slip activity of the deformed crystals were investigated by linking the orientation of slip lines with the theoretical traces of possible crystallographic planes. The experimentally determined active slip modes matched those determined by the simulations. Keywords: crystal plasticity, modelling, non-Schmid effects, finite elements, finite strain
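The baseline that a non-Schmid model extends is Schmid's law: the resolved shear stress on a slip system is σ·cosφ·cosλ, and the Schmid factor m = cosφ·cosλ predicts which system activates first. A small sketch (illustrative, not the authors' crystal plasticity FE code) for an FCC {111}<110> system under [001] loading:

```python
# Schmid's law sketch: m = |cos(phi)| * |cos(lambda)| between the loading
# axis, the slip-plane normal and the slip direction.
import numpy as np

def schmid_factor(load_dir, plane_normal, slip_dir):
    """Schmid factor for a slip system under uniaxial loading."""
    l = np.asarray(load_dir, float)
    n = np.asarray(plane_normal, float)
    s = np.asarray(slip_dir, float)
    l, n, s = (v / np.linalg.norm(v) for v in (l, n, s))
    return abs(l @ n) * abs(l @ s)

# FCC copper: a {111}<110> system under [001] compression
m = schmid_factor([0, 0, 1], [1, 1, 1], [1, 0, -1])
print(f"Schmid factor = {m:.3f}")
```

Non-Schmid models add stress components beyond this resolved shear stress (e.g. terms promoting or hindering cross-slip) to the activation criterion, which is what makes the response orientation-dependent beyond what m alone predicts.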
Procedia PDF Downloads 213
10607 Simulation Model for Optimizing Energy in Supply Chain Management
Authors: Nazli Akhlaghinia, Ali Rajabzadeh Ghatari
Abstract:
In today's world, with increasing environmental awareness, firms face severe pressure from various stakeholders, including the government and customers, to reduce their harmful effects on the environment. Over the past few decades, the growing effects of global warming, climate change, waste, and air pollution have drawn expert attention worldwide to the green supply chain and to the search for optimal solutions for greening it. Green supply chain management (GSCM) plays an important role in motivating the sustainability of the organization. Given these environmental concerns, the main objective of this research is to use systems thinking methodology and Vensim software to design a dynamic system model of a green supply chain and observe its behavior. Using this methodology, we look for the effects of a green supply chain structure on the behavioral dynamics of output variables. We simulate the complexity of GSCM over a period of 30 months and observe the behavior of variables including sustainability, the provision of green products, and reduced energy consumption and, consequently, reduced pollution. Keywords: supply chain management, green supply chain management, system dynamics, energy consumption
Procedia PDF Downloads 138
10606 Motivation and Multiglossia: Exploring the Diversity of Interests, Attitudes, and Engagement of Arabic Learners
Authors: Anna-Maria Ramezanzadeh
Abstract:
Demand for the Arabic language is growing worldwide, driven by increased interest in the multifarious purposes the language serves, both for heritage learners and for those studying Arabic as a foreign language. The diglossic, or indeed multiglossic, nature of the language as used in Arabic-speaking communities, however, is seldom represented in the content of classroom courses. This disjoint between the nature of provision and students’ expectations can severely affect their engagement with course material and their motivation to either commence or continue learning the language. The relationship between motivation and multiglossia is sparsely explored in the current literature on Arabic. The theoretical framework proposed here aims to address this gap by presenting a model and instruments for measuring Arabic learners’ motivation in relation to the multiple strands of the language. It adopts and develops the L2 Motivational Self System (L2MSS), originally proposed by Zoltán Dörnyei, which measures motivation as the desire to reduce the discrepancy between learners’ current and future self-concepts in terms of the second language (L2). The tripartite structure incorporates measures of the Current L2 Self, the Future L2 Self (consisting of an Ideal L2 Self and an Ought-To L2 Self), and the L2 Learning Experience. The strength of the self-concepts is measured across three different domains of Arabic: Classical, Modern Standard and Colloquial. The focus on learners’ self-concepts allows for an exploration of the effect of multiple factors on motivation towards Arabic, including religion. The relationship between Islam and Arabic is often given as a prominent reason behind some students’ desire to learn the language; exactly how and why this factor features in learners’ L2 self-concepts has not yet been explored. Specifically designed surveys and interview protocols are proposed to facilitate the exploration of these constructs.
The L2 Learning Experience component of the model is operationalized as learners’ task-based engagement. Engagement is conceptualised as multi-dimensional and malleable. In this model, situation-specific measures of the cognitive, behavioural, and affective components of engagement are collected via specially designed, repeated post-task self-report surveys on personal digital assistants over multiple Arabic lessons. Tasks are categorised according to language learning skill. Given the domain-specific uses of the different varieties of Arabic, the relationship between learners’ engagement with different types of tasks and their overall motivational profiles will be examined to determine the extent of the interaction between the two constructs. A framework for this data analysis is proposed and hypotheses are discussed. The unique combination of situation-specific measures of engagement and a person-oriented approach to measuring motivation allows for a macro- and micro-analysis of the interaction between learners and the Arabic learning process. By combining cross-sectional and longitudinal elements with a mixed-methods design, the proposed model offers the potential to capture a comprehensive and detailed picture of the motivation and engagement of Arabic learners. The application of this framework carries a number of potential pedagogical and research implications, which will also be discussed. Keywords: Arabic, diglossia, engagement, motivation, multiglossia, sociolinguistics
Procedia PDF Downloads 166
10605 The Search for an Alternative to Tabarru` in Takaful Models
Authors: Abu Umar Faruq Ahmad, Muhammad Ayub
Abstract:
Tabarru` (unilateral gratuitous contribution) is thought to be the basic concept that distinguishes Takaful from conventional, non-Sharīʿah-compliant insurance. The Sharīʿah compliance of its current practice has been questioned on the premise that a) it is a form of commutative contract, and b) it is akin to the commercial corporate structure of insurance companies, since Takaful companies follow the same marketing strategies, allocate to reserves, share the underwriting surplus one way or another, provide loans to the Takaful funds, and as a result absorb the underwriting losses. Sharīʿah scholars are of the view that the relationship between participants in Takaful should take the form of a commitment to donate, under which a contributor commits himself to donate a sum of money for mutual help and cooperation, on the condition that the balance, if any, is returned to him. With the aim of finding solutions to the above-mentioned concerns and other Sharīʿah-related issues, the study investigates whether Takaful companies are functioning in accordance with the Islamic principles of brotherhood, solidarity, and cooperative risk sharing. To this end, it discusses the cooperative model of Takaful to address current and future Sharīʿah-related and legal concerns. The study proposes an alternative model and considers it to best serve the objectives of Takaful, operating on the basis of ta`awun, or mutual cooperation. Keywords: hibah, musharakah ta`awuniyyah, Tabarru`, Takaful
Procedia PDF Downloads 445
10604 Design of IMC-PID Controller Cascaded Filter for Simplified Decoupling Control System
Authors: Le Linh, Truong Nguyen Luan Vu, Le Hieu Giang
Abstract:
In this work, an IMC-PID controller cascaded with a filter, based on the Internal Model Control (IMC) scheme, is systematically proposed for a simplified decoupling control system. The simplified decoupling is first introduced for multivariable processes by using coefficient matching to obtain a stable, proper, and causal simplified decoupler. Accordingly, the transfer functions of the decoupled apparent processes can be expressed as a set of n equivalent independent processes and then derived as the ratio of the original open-loop transfer function to the diagonal element of the dynamic relative gain array. The IMC-PID controller in series with a filter is then directly employed to enhance the overall performance of the decoupling control system while avoiding the difficulties arising from properties inherent to simplified decoupling. Simulation studies demonstrate the simplicity and effectiveness of the proposed method; simulations were conducted by tuning various controllers of multivariable processes with multiple time delays. The results indicate that the proposed method consistently performs well, with fast and well-balanced closed-loop time responses. Keywords: coefficient matching method, internal model control (IMC) scheme, PID controller cascaded filter, simplified decoupler
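As a hedged illustration of the IMC idea, where a single filter time constant λ is the tuning knob, the sketch below applies one common IMC-PID rule for a scalar first-order-plus-dead-time model. The paper's multivariable, decoupled formulas differ, and the numbers here are illustrative.

```python
# One common IMC-PID rule for G(s) = K e^{-theta s} / (tau s + 1), using a
# first-order Pade approximation of the delay; lambda (lam) is the IMC
# filter time constant. Numbers are illustrative.
def imc_pid_fopdt(K, tau, theta, lam):
    """Return (Kc, tau_I, tau_D) for an FOPDT model."""
    Kc = (tau + theta / 2.0) / (K * (lam + theta / 2.0))
    tau_i = tau + theta / 2.0
    tau_d = (tau * theta) / (2.0 * tau + theta)
    return Kc, tau_i, tau_d

Kc, ti, td = imc_pid_fopdt(K=2.0, tau=10.0, theta=2.0, lam=3.0)
print(f"Kc = {Kc:.3f}, tau_I = {ti:.1f}, tau_D = {td:.3f}")
```

Increasing λ detunes the loop (smaller Kc) in exchange for robustness; in the decoupled system, each apparent process gets its own such controller.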
Procedia PDF Downloads 442
10603 A Theoretical Model for a Humidification Dehumidification (HD) Solar Desalination Unit
Authors: Yasser El-Henawy, M. Abd El-Kader, Gamal H. Moustafa
Abstract:
A theoretical study of a humidification-dehumidification solar desalination unit has been carried out to improve understanding of the effect of weather conditions on the unit's productivity. A humidification-dehumidification (HD) solar desalination unit has been designed to provide fresh water to populations in remote arid areas. It consists of a solar water collector and an air collector, which provide the hot water and air to the desalination chamber. The desalination chamber is divided into humidification and dehumidification towers, and the circulation of air between the two towers is maintained by forced convection. A mathematical model has been formulated in which thermodynamic relations are used to study the flow and the heat and mass transfer inside the humidifier and dehumidifier. The present technique is performed in order to increase the unit's performance. A heat and mass balance has been carried out, and the set of governing equations has been solved using the finite difference technique. The unit's productivity has been calculated over the working day during the summer and winter seasons and compared with the available experimental results. The average accumulative productivity of the system in winter ranged between 2.5 and 4 kg/m² per day, while the average summer productivity was between 8 and 12 kg/m² per day. Keywords: solar desalination, solar collector, humidification and dehumidification, simulation, finite difference, water productivity
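As a hedged illustration of the finite difference technique the abstract uses (not the unit's coupled heat-and-mass-balance model), the sketch below marches the simplest explicit scheme for 1-D transient conduction, with the usual stability limit r = αΔt/Δx² ≤ 1/2. Geometry and values are invented.

```python
# Explicit FD march for u_t = alpha * u_xx with fixed end temperatures;
# the grid, alpha and boundary values are illustrative only.
import numpy as np

def heat_explicit(u0, alpha, dx, dt, steps):
    """Forward-Euler / central-difference update; needs r = alpha*dt/dx^2 <= 1/2."""
    r = alpha * dt / dx**2
    assert r <= 0.5, "explicit scheme unstable for r > 1/2"
    u = u0.copy()
    for _ in range(steps):
        u[1:-1] = u[1:-1] + r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return u

u0 = np.zeros(21)
u0[0] = u0[-1] = 80.0      # hot ends, e.g. collector outlet temperature
u = heat_explicit(u0, alpha=1e-4, dx=0.01, dt=0.4, steps=500)  # r = 0.4
```

The governing equations of the humidifier and dehumidifier couple temperature and humidity ratio, but the discretize-and-march pattern is the same.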
Procedia PDF Downloads 412
10602 AI-Based Autonomous Plant Health Monitoring and Control System with Visual Health-Scoring Models
Authors: Uvais Qidwai, Amor Moursi, Mohamed Tahar, Malek Hamad, Hamad Alansi
Abstract:
This paper focuses on the development and implementation of an advanced plant health monitoring system with an AI backbone and an IoT sensor network. Our approach addresses the critical environmental factors essential for preserving a plant’s well-being, including air temperature, soil moisture, soil temperature, soil conductivity, pH, water levels, and humidity, as well as the presence of essential nutrients like nitrogen, phosphorus, and potassium. Central to our methodology is the use of computer vision technology, particularly a night-vision camera. The captured data is compared against a reference database containing different health statuses. This comparative analysis is implemented using an AI deep learning model, which enables us to generate accurate assessments of plant health status. By combining this with the AI-based decision-making approach, our system aims to provide precise and timely insights into the overall health and well-being of plants, offering a valuable tool for effective plant care and management. Keywords: deep learning image model, IoT sensing, cloud-based analysis, remote monitoring app, computer vision, fuzzy control
Procedia PDF Downloads 54
10601 Optimization Model for Identification of Assembly Alternatives of Large-Scale, Make-to-Order Products
Authors: Henrik Prinzhorn, Peter Nyhuis, Johannes Wagner, Peter Burggräf, Torben Schmitz, Christina Reuter
Abstract:
Assembling large-scale products, such as airplanes, locomotives, or wind turbines, involves frequent process interruptions induced by, e.g., delayed material deliveries or the unavailability of resources. This has a negative impact on the logistical performance of a producer of XXL products. In industrial practice, when interruptions occur, the identification, evaluation and eventual selection of an alternative order of assembly activities (an 'assembly alternative') poses an enormous challenge, especially if an optimized logistical decision is to be reached. Therefore, this paper presents an innovative optimization model for the identification of assembly alternatives that addresses this problem. It describes make-to-order, large-scale product assembly processes as a resource-constrained project scheduling (RCPS) problem, subject to the restrictions encountered in practice. For the evaluation of assembly alternatives, a cost-based definition of the logistical objectives (delivery reliability, inventory, makespan and workload) is presented. Keywords: assembly scheduling, large-scale products, make-to-order, optimization, rescheduling
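As a hedged toy illustration of the RCPS view taken above (not the authors' optimization model), the sketch below runs a serial schedule-generation scheme with one renewable resource; job names, durations, demands and the precedence graph are invented.

```python
# Serial schedule-generation for a toy RCPSP with one renewable resource
# (capacity = crew size); jobs must be listed in a precedence-feasible order.
def serial_sgs(jobs, capacity, horizon=1000):
    """jobs: {name: (duration, demand, [predecessors])} -> {name: start}."""
    usage = [0] * horizon
    start = {}
    for name, (dur, dem, preds) in jobs.items():
        # earliest precedence-feasible start
        t = max((start[p] + jobs[p][0] for p in preds), default=0)
        # shift right until the resource profile admits the job
        while any(usage[u] + dem > capacity for u in range(t, t + dur)):
            t += 1
        start[name] = t
        for u in range(t, t + dur):
            usage[u] += dem
    return start

jobs = {                       # (duration, crew demand, predecessors)
    "frame":   (3, 2, []),
    "rotor":   (2, 2, []),
    "nacelle": (2, 1, ["frame"]),
    "blades":  (4, 2, ["rotor"]),
}
schedule = serial_sgs(jobs, capacity=3)
print(schedule)
```

Each assembly alternative corresponds to a different activity ordering fed into such a scheme; the cost-based objectives mentioned above are then evaluated on the resulting schedules.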
Procedia PDF Downloads 459
10600 Value Index, a Novel Decision Making Approach for Waste Load Allocation
Authors: E. Feizi Ashtiani, S. Jamshidi, M.H Niksokhan, A. Feizi Ashtiani
Abstract:
Waste load allocation (WLA) policies may use multi-objective optimization methods to find the most appropriate and sustainable solutions. These usually aim to simultaneously minimize two criteria: total abatement costs (TC) and environmental violations (EV). If other criteria, such as inequity, need to be minimized as well, additional pairwise optimizations across different scenarios must be introduced. In order to reduce the calculation steps, this study presents the value index as an innovative decision-making approach. Since the value index contains both the environmental violations and the treatment costs, it can be maximized simultaneously with the equity index. This means that defining different scenarios for environmental violations is no longer required; furthermore, the solution is not necessarily the point with minimized total costs or environmental violations. This idea is tested on the Haraz River, in the north of Iran. Here, the dissolved oxygen (DO) level of the river is simulated with the Streeter-Phelps equation in MATLAB. The WLA is determined for fish farms using multi-objective particle swarm optimization (MOPSO) in two scenarios. In the first, the trade-off curves of TC-EV and TC-Inequity are plotted separately, as in the conventional approach. In the second, the Value-Equity curve is derived. The comparative results show that the solutions lie in a similar range of inequity but with lower total costs, owing to the flexibility in environmental violations embedded in the value index. As a result, the conventional approach can well be replaced by the value index, particularly for problems optimizing these objectives. This shortens the process of reaching the best solutions and may yield a better classification for scenario definition.
It is also concluded that decision makers would do better to focus on the value index, weighting its components to find the most sustainable alternatives based on their requirements. Keywords: waste load allocation (WLA), value index, multi-objective particle swarm optimization (MOPSO), Haraz River, equity
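As a hedged sketch of the simulation step, the Streeter-Phelps deficit equation the abstract solves in MATLAB can be written directly; the rate constants and initial values below are illustrative, not the Haraz River calibration.

```python
# D(t) = kd*L0/(ka-kd) * (e^{-kd t} - e^{-ka t}) + D0 * e^{-ka t}; requires
# ka != kd. Rate constants (1/day) and loads (mg/L) are illustrative.
import math

def deficit(t, L0=12.0, D0=1.0, kd=0.3, ka=0.6):
    """Streeter-Phelps oxygen deficit at travel time t (days)."""
    return (kd * L0 / (ka - kd)) * (math.exp(-kd * t) - math.exp(-ka * t)) \
        + D0 * math.exp(-ka * t)

def do_profile(times, do_sat=9.0, **kwargs):
    """Dissolved oxygen along the river: DO = DO_sat - D(t)."""
    return [do_sat - deficit(t, **kwargs) for t in times]

profile = do_profile([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])
```

The DO sag this produces (a minimum a day or two downstream, then recovery) is what the environmental-violation term in the value index is measured against at each fish farm's discharge point.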
Procedia PDF Downloads 422