Search results for: linear static analysis
28451 On Fourier Type Integral Transform for a Class of Generalized Quotients
Authors: A. S. Issa, S. K. Q. AL-Omari
Abstract:
In this paper, we investigate certain spaces of generalized functions for the Fourier and Fourier type integral transforms. We discuss convolution theorems and establish certain spaces of distributions for the considered integrals. The new Fourier type integral is well-defined, linear, one-to-one and continuous with respect to certain types of convergences. Many properties and an inverse problem are also discussed in some detail. Keywords: Boehmian, Fourier integral, Fourier type integral, generalized quotient
Procedia PDF Downloads 365
28450 Modeling and Optimization of Algae Oil Extraction Using Response Surface Methodology
Authors: I. F. Ejim, F. L. Kamen
Abstract:
Aims: In this experiment, algae oil extraction with a combination of n-hexane and ethanol was investigated. The effects of extraction solvent concentration, extraction time and temperature on the yield and quality of oil were studied using Response Surface Methodology (RSM). Experimental Design: A Box-Behnken design was used to generate 17 experimental runs in a three-factor-three-level design where oil yield, specific gravity, acid value and saponification value were evaluated as the responses. Result: A minimum oil yield of 17% and a maximum of 44% were realized. The optimum values for yield, specific gravity, acid value and saponification value from the overlay plot were 40.79%, 0.8788, 0.5056 mg KOH/g and 180.78 mg KOH/g respectively, with a desirability of 0.801. The maximum point prediction was a yield of 40.79% at a solvent concentration of 66.68 n-hexane, a temperature of 40.0°C and an extraction time of 4 hrs. Analysis of Variance (ANOVA) results showed that the linear and quadratic coefficients were all significant at p<0.05. The experiment was validated and the results obtained agreed with the predicted values. Conclusion: Algae oil extraction was successfully optimized using RSM and its quality indicated it is suitable for many industrial uses. Keywords: algae oil, response surface methodology, optimization, Box-Behnken, extraction
Procedia PDF Downloads 338
28449 Homeless Population Modeling and Trend Prediction Through Identifying Key Factors and Machine Learning
Authors: Shayla He
Abstract:
Background and Purpose: According to Chamie (2017), it’s estimated that no less than 150 million people, or about 2 percent of the world’s population, are homeless. The homeless population in the United States has grown rapidly in the past four decades. In New York City, the sheltered homeless population has increased from 12,830 in 1983 to 62,679 in 2020. Knowing the trend of the homeless population is crucial in helping the states and the cities make affordable housing plans and other community service plans ahead of time to better prepare for the situation. This study utilized the data from New York City, examined the key factors associated with homelessness, and developed systematic modeling to predict homeless populations of the future. Using the best model developed, named HP-RNN, an analysis of the homeless population change during the months of 2020 and 2021, which were impacted by the COVID-19 pandemic, was conducted. Moreover, HP-RNN was tested on the data from Seattle. Methods: The methodology involves four phases in developing robust prediction methods. Phase 1 gathered and analyzed raw data of homeless population and demographic conditions from five urban centers. Phase 2 identified the key factors that contribute to the rate of homelessness. In Phase 3, three models were built using Linear Regression, Random Forest, and Recurrent Neural Network (RNN), respectively, to predict the future trend of society's homeless population. Each model was trained and tuned based on the dataset from New York City for its accuracy measured by Mean Squared Error (MSE). In Phase 4, the final phase, the best model from Phase 3 was evaluated using the data from Seattle that was not part of the model training and tuning process in Phase 3. Results: Compared to the Linear Regression based model used by HUD et al (2019), HP-RNN significantly improved the prediction metrics of Coefficient of Determination (R2) from -11.73 to 0.88 and MSE by 99%. HP-RNN was then validated on the data from Seattle, WA, which showed a peak %error of 14.5% between the actual and the predicted count. Finally, the modeling results were collected to predict the trend during the COVID-19 pandemic. It shows a good correlation between the actual and the predicted homeless population, with the peak %error less than 8.6%. Conclusions and Implications: This is the first work to apply RNN to model the time series of homeless-related data. The model shows a close correlation between the actual and the predicted homeless population. There are two major implications of this result. First, the model can be used to predict the homeless population for the next several years, and the prediction can help the states and the cities plan ahead on affordable housing allocation and other community service to better prepare for the future. Moreover, this prediction can serve as a reference to policy makers and legislators as they seek to make changes that may impact the factors closely associated with the future homeless population trend. Keywords: homeless, prediction, model, RNN
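As a hedged illustration of the recurrent-network idea behind HP-RNN (the paper's actual architecture, features, and data are not reproduced here), a generic one-step-ahead monthly forecast could look like the sketch below; the window length, layer sizes, and file name are assumptions, not the authors' values.

```python
# Illustrative sketch only: a generic recurrent model for monthly homeless-count prediction.
import numpy as np
import tensorflow as tf

def make_windows(series, window=12):
    """Turn a 1-D monthly count series into (samples, window, 1) inputs and next-month targets."""
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])
        y.append(series[i + window])
    X = np.array(X, dtype="float32")[..., None]
    return X, np.array(y, dtype="float32")

counts = np.loadtxt("nyc_sheltered_homeless_monthly.csv", delimiter=",")  # hypothetical file
X, y = make_windows(counts, window=12)

model = tf.keras.Sequential([
    tf.keras.layers.SimpleRNN(32, input_shape=(12, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")   # MSE, the tuning metric named in the abstract
model.fit(X, y, epochs=200, verbose=0)

pred = model.predict(X[-1:], verbose=0)       # one-step-ahead forecast
print("next-month prediction:", float(pred[0, 0]))
```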
Procedia PDF Downloads 121
28448 An Exploratory of the Use of English in Contemporary Society
Authors: Saksit Saengboon
Abstract:
The study of English in Thailand receives comparatively little attention in World Englishes scholarship despite a complex and dynamic linguistic landscape. As in many countries in the region, English is used in predictable contexts, such as schools and at work. However, English is being increasingly used as a contact language among Thais and non-Thais, requiring much-needed empirical attention. This study aims to address this neglected issue by examining how Thais perceive and use English in contemporary Thai society. It explored the ways in which English has been used in public signage and mass media, especially about Thai food, and the perceptions of Thais (N = 80) regarding English. Findings indicate that English in Thailand is used in a complicated manner portraying both standard and non-standard English. Thais still hold a static or traditional view of English, making it impractical, if not impossible, to have Thai English as an established variety. Keywords: Thai English, Thainess in English, public signage, mass media, Thai food, Thai linguistic landscape
Procedia PDF Downloads 122
28447 The Application of Raman Spectroscopy in Olive Oil Analysis
Authors: Silvia Portarena, Chiara Anselmi, Chiara Baldacchini, Enrico Brugnoli
Abstract:
Extra virgin olive oil (EVOO) is a complex matrix mainly composed of fatty acids and other minor compounds, among which carotenoids are well known for their antioxidative function that is a key mechanism of protection against cancer, cardiovascular diseases, and macular degeneration in humans. EVOO composition in terms of such constituents is generally the result of a complex combination of genetic, agronomical and environmental factors. To selectively improve the quality of EVOOs, the role of each factor on its biochemical composition needs to be investigated. By selecting fruits from four different cultivars similarly grown and harvested, it was demonstrated that Raman spectroscopy, combined with chemometric analysis, is able to discriminate the different cultivars, also as a function of the harvest date, based on the relative content and composition of fatty acids and carotenoids. In particular, a correct classification of up to 94.4% of samples, according to the cultivar and the maturation stage, was obtained. Moreover, by using gas chromatography and high-performance liquid chromatography as reference techniques, the Raman spectral features further allowed to build models, based on partial least squares regression, that were able to predict the relative amount of the main fatty acids and the main carotenoids in EVOO, with high coefficients of determination. Besides genetic factors, climatic parameters, such as light exposure, distance from the sea, temperature, and amount of precipitation, could have a strong influence on EVOO composition of both major and minor compounds. This suggests that the Raman spectra could act as a specific fingerprint for the geographical discrimination and authentication of EVOO. To understand the influence of environment on EVOO Raman spectra, samples from seven regions along the Italian coasts were selected and analyzed. In particular, a dual approach combining Raman spectroscopy and isotope ratio mass spectrometry (IRMS) with principal component and linear discriminant analysis was used. A correct classification of 82% of EVOOs based on their regional geographical origin was obtained. Raman spectra were obtained with a Super Labram spectrometer equipped with an Argon laser (514.5 nm wavelength). Analyses of stable isotope content ratio were performed using an isotope ratio mass spectrometer connected to an elemental analyzer and to a pyrolysis system. These studies demonstrate that RR spectroscopy is a valuable and useful technique for the analysis of EVOO. In combination with statistical analysis, it makes possible the assessment of specific samples’ content and allows for classifying oils according to their geographical and varietal origin. Keywords: authentication, chemometrics, olive oil, Raman spectroscopy
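A minimal sketch of the chemometric step described above (partial least squares regression from Raman spectra to a GC-measured fatty-acid reference), assuming hypothetical data files and an arbitrary component count; it is not the authors' pipeline.

```python
# Sketch only: predicting a fatty-acid fraction from Raman spectra with PLS regression.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import r2_score

spectra = np.load("evoo_raman_spectra.npy")   # shape (n_samples, n_wavenumbers), hypothetical file
oleic   = np.load("evoo_oleic_acid_gc.npy")   # GC reference values, hypothetical file

pls = PLSRegression(n_components=5)           # component count would be chosen by cross-validation
pls.fit(spectra, oleic)
print("R2 =", r2_score(oleic, pls.predict(spectra).ravel()))
```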
Procedia PDF Downloads 332
28446 A Tutorial on Model Predictive Control for Spacecraft Maneuvering Problem with Theory, Experimentation and Applications
Authors: O. B. Iskender, K. V. Ling, V. Dubanchet, L. Simonini
Abstract:
This paper discusses the recent advances and future prospects of spacecraft position and attitude control using Model Predictive Control (MPC). First, the challenges of space missions are summarized, in particular taking into account the errors, uncertainties, and constraints imposed by the mission, the spacecraft, and onboard processing capabilities. The space mission errors and uncertainties are summarized in categories: initial condition errors, unmodeled disturbances, and sensor and actuator errors. The constraints are classified into two categories: physical and geometric constraints. Last, real-time implementation capability is discussed regarding the required computation time and the impact of sensor and actuator errors based on the Hardware-In-The-Loop (HIL) experiments. The rationales behind the scenarios are also presented in the scope of space applications such as formation flying, attitude control, rendezvous and docking, rover steering, and precision landing. The objectives of these missions are explained, and the generic constrained MPC problem formulations are summarized. Three key design elements used in MPC design are discussed: the prediction model, the constraints formulation and the objective cost function. The prediction models can be linear time-invariant or time-varying depending on the geometry of the orbit, whether it is circular or elliptic. The constraints can be given as linear inequalities for input or output constraints, which can be written in the same form. Moreover, the recent convexification techniques for the non-convex geometrical constraints (i.e., plume impingement, Field-of-View (FOV)) are presented in detail. Next, different objectives are provided in a mathematical framework and explained accordingly. Thirdly, because MPC implementation relies on finding in real time the solution to constrained optimization problems, computational aspects are also examined. In particular, high-speed implementation capabilities and HIL challenges are presented towards representative space avionics. This covers an analysis of future space processors as well as the requirements of sensors and actuators on the HIL experiment outputs. The HIL tests are investigated for kinematic and dynamic tests where robotic arms and floating robots are used, respectively. Eventually, the proposed algorithms and experimental setups are introduced and compared with the authors' previous work and future plans. The paper concludes with a conjecture that the MPC paradigm is a promising framework at the crossroads of space applications, while it could be further advanced based on the challenges mentioned throughout the paper and the unaddressed gaps. Keywords: convex optimization, model predictive control, rendezvous and docking, spacecraft autonomy
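As a hedged illustration of the generic constrained MPC formulation summarized above (linear prediction model, linear input constraints, quadratic cost), the following sketch poses a toy finite-horizon problem with CVXPY; the dynamics matrices, horizon, and bounds are invented placeholders, not mission values.

```python
# Toy finite-horizon MPC problem: minimize a quadratic cost subject to LTI dynamics
# and a box input constraint, then apply only the first control move.
import numpy as np
import cvxpy as cp

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # double-integrator-like dynamics (illustrative)
B = np.array([[0.005], [0.1]])
N, Q, R = 20, np.diag([10.0, 1.0]), np.array([[0.1]])
u_max, x0 = 0.5, np.array([1.0, 0.0])

x = cp.Variable((2, N + 1))
u = cp.Variable((1, N))
cost, constraints = 0, [x[:, 0] == x0]
for k in range(N):
    cost += cp.quad_form(x[:, k], Q) + cp.quad_form(u[:, k], R)
    constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                    cp.abs(u[:, k]) <= u_max]          # input constraint as linear inequalities
cp.Problem(cp.Minimize(cost), constraints).solve()
print("first control move:", u.value[:, 0])            # receding horizon: re-solve at the next step
```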
Procedia PDF Downloads 110
28445 Oxidosqualene Cyclase: A Novel Inhibitor
Authors: Devadrita Dey Sarkar
Abstract:
Oxidosqualene cyclase is a membrane-bound enzyme which helps in the formation of the steroid scaffold in higher organisms. In a highly selective cyclization reaction, oxidosqualene cyclase forms LANOSTEROL with seven chiral centres starting from the linear substrate 2,3-oxidosqualene. In human cholesterol biosynthesis, OSC represents a target for the discovery of novel anticholesteraemic drugs that could complement the widely used statins. The enzyme oxidosqualene: lanosterol cyclase (OSC) represents a novel target for the treatment of hypercholesterolemia. OSC catalyzes the cyclization of the linear 2,3-monoepoxysqualene to lanosterol, the initial four-ringed sterol intermediate in the cholesterol biosynthetic pathway. OSC also catalyzes the formation of 24(S), 25-epoxycholesterol, a ligand activator of the liver X receptor. Inhibition of OSC reduces cholesterol biosynthesis and selectively enhances 24(S),25-epoxycholesterol synthesis. Through this dual mechanism, OSC inhibition decreases plasma levels of low-density lipoprotein (LDL)-cholesterol and prevents cholesterol deposition within macrophages. The recent crystallization of OSC identifies the mechanism of action for this complex enzyme, setting the stage for the design of OSC inhibitors with improved pharmacological properties for cholesterol lowering and treatment of atherosclerosis. While studying and designing the inhibitor of oxidosqualene cyclase, I worked on the PDB ID 1W6K, which was the most worked-on PDB entry, and used several methods, techniques and software tools to identify and validate the topmost molecules which could act as inhibitors of oxidosqualene cyclase. Thus, by partial blockage of this enzyme, both an inhibition of lanosterol and subsequently cholesterol formation as well as a concomitant effect on HMG-CoA reductase can be achieved. Both effects complement each other and lead to an effective control of cholesterol biosynthesis. It is therefore concluded that 2,3-oxidosqualene cyclase plays a crucial role in the regulation of intracellular cholesterol homeostasis. 2,3-Oxidosqualene cyclase inhibitors offer an attractive approach for novel lipid-lowering agents. Keywords: anticholesteraemic, crystallization, statins, homeostasis
Procedia PDF Downloads 351
28444 Effects of Matrix Properties on Surfactant Enhanced Oil Recovery in Fractured Reservoirs
Authors: Xiaoqian Cheng, Jon Kleppe, Ole Torsæter
Abstract:
The properties of rocks have effects on the efficiency of surfactants. One objective of this study is to analyze the effects of rock properties (permeability, porosity, initial water saturation) on surfactant spontaneous imbibition at laboratory scale. The other objective is to evaluate existing upscaling methods and establish a modified upscaling method. A core is put in a container that is full of surfactant solution. It is assumed that there is no space between the bottom of the core and the container. The core is modelled as a cuboid matrix with a length of 3.5 cm, a width of 3.5 cm, and a height of 5 cm. The initial matrix, brine and oil properties are set as the properties of Ekofisk Field. The simulation results of matrix permeability show that the oil recovery rate has a strong positive linear relationship with matrix permeability. Higher oil recovery is obtained from the matrix with higher permeability. One existing upscaling method is verified by this model. The study on matrix porosity shows that the relationship between oil recovery rate and matrix porosity is a negative power function. However, the relationship between ultimate oil recovery and matrix porosity is a positive power function. The initial water saturation of the matrix has negative linear relationships with ultimate oil recovery and enhanced oil recovery. However, the relationship between oil recovery and initial water saturation is more complicated with the imbibition time because of the transition of the dominating force from capillary force to gravity force. Modified upscaling methods are established. The work here could be used as a reference for surfactant application in fractured reservoirs. The description of the relationships between matrix properties and the oil recovery rate and ultimate oil recovery helps to improve upscaling methods. Keywords: initial water saturation, permeability, porosity, surfactant EOR
Procedia PDF Downloads 162
28443 Removal of Basic Dyes from Aqueous Solutions with a Treated Spent Bleaching Earth
Authors: M. Mana, M. S. Ouali, L. C. de Menorval
Abstract:
A spent bleaching earth from an edible oil refinery has been treated by impregnation with a normal sodium hydroxide solution followed by mild thermal treatment (100°C). The obtained material (TSBE) was washed, dried and characterized by X-ray diffraction, FTIR, SEM, BET, and thermal analysis. The clay structure was not apparently affected by the treatment and the impregnated organic matter was quantitatively removed. We have investigated the comparative sorption of safranine and methylene blue on this material, the spent bleaching earth (SBE) and the virgin bleaching earth (VBE). The kinetic results fit the pseudo-second-order kinetic model and the Weber & Morris intra-particle diffusion model. The pH had no effect on the sorption efficiency. The sorption isotherms followed the Langmuir model for various sorbent concentrations with good values of the determination coefficient. A linear relationship was found between the calculated maximum removal capacity and the solid/solution ratio. A comparison between the results obtained with this material and those of the literature highlighted the low cost and the good removal capacity of the treated spent bleaching earth. Keywords: basic dyes, isotherms, sorption, spent bleaching earth
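For readers who want to reproduce the two fits named above, a hedged sketch with invented data points is shown below: the pseudo-second-order kinetic model q(t) = k·qe²·t/(1 + k·qe·t) and the Langmuir isotherm qe = qmax·KL·Ce/(1 + KL·Ce), both fitted by non-linear least squares.

```python
# Sketch only: the arrays below are illustrative, not the study's measurements.
import numpy as np
from scipy.optimize import curve_fit

def pseudo_second_order(t, qe, k):
    return (k * qe**2 * t) / (1.0 + k * qe * t)

def langmuir(Ce, qmax, KL):
    return qmax * KL * Ce / (1.0 + KL * Ce)

t  = np.array([5, 10, 20, 40, 60, 120.0])   # contact time, min (illustrative)
qt = np.array([12, 19, 26, 31, 33, 35.0])   # dye uptake, mg/g (illustrative)
Ce = np.array([2, 5, 10, 20, 40, 80.0])     # equilibrium concentration, mg/L (illustrative)
qe = np.array([15, 28, 40, 52, 60, 65.0])   # equilibrium uptake, mg/g (illustrative)

(p_qe, p_k), _ = curve_fit(pseudo_second_order, t, qt, p0=[35, 0.01])
(qmax, KL), _  = curve_fit(langmuir, Ce, qe, p0=[70, 0.05])
print(f"pseudo-2nd-order: qe={p_qe:.1f} mg/g, k={p_k:.4f}; Langmuir: qmax={qmax:.1f} mg/g, KL={KL:.3f}")
```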
Procedia PDF Downloads 249
28442 Comparison of Developed Statokinesigram and Marker Data Signals by Model Approach
Authors: Boris Barbolyas, Kristina Buckova, Tomas Volensky, Cyril Belavy, Ladislav Dedik
Abstract:
Background: Human balance control is often studied based on the statokinesigram. In this study, the approach to human postural reaction analysis is based on a combination of the stabilometry output signal with retroreflective marker data signal processing, analysis, and understanding. The study also shows another original application of the Method of Developed Statokinesigram Trajectory (MDST). Methods: In this study, the participants maintained quiet bipedal standing for 10 s on a stabilometry platform. Subsequently, bilateral vibration stimuli were applied to the Achilles tendons over a 20 s interval. The vibration stimuli caused the human postural system to assume a new pseudo-steady state. Vibration frequencies were 20, 60 and 80 Hz. Participants' body segments - head, shoulders, hips, knees, ankles and little fingers - were marked by 12 retroreflective markers. Marker positions were scanned by the six-camera system BTS SMART DX. Registration of their postural reaction lasted 60 s. Sampling frequency was 100 Hz. The Method of Developed Statokinesigram Trajectory was used for processing the measured data. Regression analysis of developed statokinesigram trajectory (DST) data and retroreflective marker developed trajectory (DMT) data was used to find out which marker trajectories most correlate with the stabilometry platform output signals. Scaling coefficients (λ) between DST and DMT were also evaluated by linear regression analysis. Results: Scaling coefficients for marker trajectories were identified for all body segments. Head marker trajectories reached the maximal value and ankle marker trajectories had the minimal value of the scaling coefficient. Hip, knee and ankle markers were approximately symmetrical in terms of the scaling coefficient. Notable differences in the scaling coefficient were detected in head and shoulder marker trajectories, which were not symmetrical. The model of postural system behavior was identified by MDST. Conclusion: The value of the scaling factor identifies which body segment is predisposed to postural instability. Hypothetically, if the statokinesigram represents the overall human postural system response to vibration stimuli, then the marker data represent particular postural responses. It can be assumed that the cumulative sum of particular marker postural responses is equal to the statokinesigram. Keywords: center of pressure (CoP), method of developed statokinesigram trajectory (MDST), model of postural system behavior, retroreflective marker data
Procedia PDF Downloads 350
28441 The Effect of Cinnamaldehyde on Escherichia coli Survival during Low Temperature Long Time Cooking
Authors: Fuji Astuti, Helen Onyeaka
Abstract:
The aim of the study was to investigate the combined effects of cinnamaldehyde (0.25 and 0.45% v/v) on the thermal resistance of pathogenic Escherichia coli during low temperature long time (LT-LT) cooking below 60℃. Three different static temperatures (48, 53 and 50℃) were tested, and the number of viable cells was studied. The starting concentrations of cells were 10⁸ CFU/ml. In this experiment, heat treatment efficiency for safe reduction was indicated by the decimal logarithm reduction of viable recovered cells, which was monitored over 6 hours of heating. Thermal inactivation was measured by means of establishing the death curves between the mean log surviving cells (log₁₀ CFU/ml) and designated time points (minutes) for each temperature test. The findings showed that the addition of cinnamaldehyde elevated the thermal sensitivity of E. coli. However, the injured cells were found to be well-adapted to all temperature tests after a certain time point of cooking, at which they grew to more than 10⁵ CFU/ml. Keywords: cinnamaldehyde, decimal logarithm reduction, Escherichia coli, LT-LT cooking
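A hedged sketch of how the decimal logarithm reduction (and the related D-value) can be read off a survival curve like those described above; the counts and times are invented, not the study's measurements.

```python
# Illustration only: fit the linear part of a log10 survival curve and report the D-value.
import numpy as np

time_min = np.array([0, 60, 120, 180, 240, 300, 360.0])        # heating time over 6 h
cfu_ml   = np.array([1e8, 4e7, 1e7, 3e6, 8e5, 2e5, 6e4])        # viable counts, illustrative

log_n = np.log10(cfu_ml)
slope, intercept = np.polyfit(time_min, log_n, 1)                # death-curve slope (log10 per min)
d_value = -1.0 / slope                                           # minutes for a 1-log10 reduction
total_reduction = log_n[0] - log_n[-1]
print(f"D-value ≈ {d_value:.0f} min, total reduction ≈ {total_reduction:.1f} log10 over 6 h")
```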
Procedia PDF Downloads 358
28440 Multi-Impairment Compensation Based Deep Neural Networks for 16-QAM Coherent Optical Orthogonal Frequency Division Multiplexing System
Authors: Ying Han, Yuanxiang Chen, Yongtao Huang, Jia Fu, Kaile Li, Shangjing Lin, Jianguo Yu
Abstract:
In long-haul and high-speed optical transmission systems, the orthogonal frequency division multiplexing (OFDM) signal suffers from various linear and non-linear impairments. In recent years, researchers have proposed compensation schemes for specific impairments, and the effects are remarkable. However, different impairment compensation algorithms have caused an increase in transmission delay. With the widespread application of deep neural networks (DNN) in communication, multi-impairment compensation based on DNN will be a promising scheme. In this paper, we propose and apply a DNN to compensate for multi-impairment of the 16-QAM coherent optical OFDM signal, thereby improving the performance of the transmission system. The trained DNN models are applied in the offline digital signal processing (DSP) module of the transmission system. The models can optimize the constellation mapping signals at the transmitter and compensate for multi-impairment of the decoded OFDM signal at the receiver. Furthermore, the models reduce the peak to average power ratio (PAPR) of the transmitted OFDM signal and the bit error rate (BER) of the received signal. We verify the effectiveness of the proposed scheme for the 16-QAM coherent optical OFDM signal and demonstrate and analyze the transmission performance in different transmission scenarios. The experimental results show that the PAPR and BER of the transmission system are significantly reduced after using the trained DNN. This shows that a DNN with a specific loss function and network structure can optimize the transmitted signal, learn the channel features, and compensate for multi-impairment in fiber transmission effectively. Keywords: coherent optical OFDM, deep neural network, multi-impairment compensation, optical transmission
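As a small illustration of the PAPR metric mentioned above, the following sketch computes the peak-to-average power ratio of one randomly generated 16-QAM OFDM symbol; the subcarrier count and oversampling factor are arbitrary assumptions, not the system parameters.

```python
# Illustration only: PAPR of a single oversampled 16-QAM OFDM symbol.
import numpy as np

rng = np.random.default_rng(0)
levels = np.array([-3, -1, 1, 3])
n_sc, oversample = 256, 4
symbols = rng.choice(levels, n_sc) + 1j * rng.choice(levels, n_sc)   # 16-QAM constellation points

half = n_sc // 2                                                      # zero-pad in mid-spectrum for interpolation
padded = np.concatenate([symbols[:half], np.zeros((oversample - 1) * n_sc), symbols[half:]])
x = np.fft.ifft(padded)                                               # time-domain OFDM symbol
papr_db = 10 * np.log10(np.max(np.abs(x)**2) / np.mean(np.abs(x)**2))
print(f"PAPR ≈ {papr_db:.1f} dB")
```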
Procedia PDF Downloads 143
28439 Correlation of SPT N-Value and Equipment Drilling Parameters in Deep Soil Mixing
Authors: John Eric C. Bargas, Maria Cecilia M. Marcos
Abstract:
One of the most common ground improvement techniques is Deep Soil Mixing (DSM). As the technique progresses, there is still a lack of development when it comes to depth control. This was the issue experienced during the installation of DSM in one of the national projects in the Philippines. This study assesses the feasibility of using equipment drilling parameters such as hydraulic pressure, drilling speed and rotational speed in determining the Standard Penetration Test N-value of a specific soil. Hydraulic pressure and drilling speed with a constant rotational speed of 30 rpm have a positive correlation with the SPT N-value for cohesive soil and sand. A linear trend was observed for cohesive soil. The correlation of SPT N-value and hydraulic pressure yielded an R²=0.5377, while the correlation of SPT N-value and drilling speed yielded an R²=0.6355. The best-fitted model for sand is a polynomial trend. The correlation of SPT N-value and hydraulic pressure yielded an R²=0.7088, while the correlation of SPT N-value and drilling speed yielded an R²=0.4354. The low correlation may be attributed to the behavior of sand when the auger penetrates. Sand tends to follow the rotation of the auger rather than resisting it, which was observed for very loose to medium dense sand. Specific energy and the product of hydraulic pressure and drilling speed yielded the same R² with a positive correlation. A linear trend was observed for cohesive soil, while a polynomial trend was observed for sand. Cohesive soil yielded an R²=0.7320, which indicates a strong relationship. Sand also yielded a strong relationship, with a coefficient of determination R²=0.7203. It is feasible to use hydraulic pressure and drilling speed to estimate the SPT N-value of the soil. Also, the product of hydraulic pressure and drilling speed can be a substitute for specific energy when estimating the SPT N-value of a soil. However, additional considerations are necessary to account for other influencing factors like groundwater and the physical and mechanical properties of the soil. Keywords: ground improvement, equipment drilling parameters, standard penetration test, deep soil mixing
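A hedged sketch of the two trend fits reported above (a linear trend for cohesive soil and a polynomial trend for sand) with invented drilling-speed/N-value pairs; R² is computed from the residuals in the usual way.

```python
# Illustration only: compare a linear and a polynomial fit by their coefficient of determination.
import numpy as np

def r_squared(y, y_hat):
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

speed = np.array([2, 4, 6, 8, 10, 12.0])     # drilling speed (illustrative units)
n_val = np.array([4, 7, 11, 13, 18, 21.0])   # SPT N-values (illustrative)

lin  = np.polyval(np.polyfit(speed, n_val, 1), speed)   # linear trend (cohesive-soil case)
quad = np.polyval(np.polyfit(speed, n_val, 2), speed)   # polynomial trend (sand case)
print(f"linear R2 = {r_squared(n_val, lin):.3f}, polynomial R2 = {r_squared(n_val, quad):.3f}")
```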
Procedia PDF Downloads 47
28438 Dynamic Voltage Restorer Control Strategies: An Overview
Authors: Arvind Dhingra, Ashwani Kumar Sharma
Abstract:
Power quality is an important parameter for today’s consumers. Various custom power devices are in use to maintain proper power quality. The Dynamic Voltage Restorer (DVR) is one such custom power device. The DVR is a static VAR device which is used for series compensation. It is a power electronic device that is used to inject a voltage in series and in synchronism to compensate for the sag in voltage. Inductive loads are a major source of power quality distortion. The induction furnace is one such typical load. A typical induction furnace is used for melting scrap or iron. At the time of starting the melting process, the power quality is distorted to a large extent, especially with the induction of harmonics. The DVR is one such approach to mitigate these harmonics. This paper is an attempt to review the various control strategies being followed for the control of power quality by using a DVR. An overview of the control of harmonics using a DVR is also presented. Keywords: DVR, power quality, harmonics, harmonic mitigation
Procedia PDF Downloads 378
28437 Engineering Topology of Ecological Model for Orientation Impact of Sustainability Urban Environments: The Spatial-Economic Modeling
Authors: Moustafa Osman Mohammed
Abstract:
The modeling of a spatial-economic database is crucial in recitation economic network structure to social development. Sustainability within the spatial-economic model gives attention to green businesses to comply with Earth’s Systems. The natural exchange patterns of ecosystems have consistent and periodic cycles to preserve energy and materials flow in systems ecology. When network topology influences formal and informal communication to function in systems ecology, ecosystems are postulated to valence the basic level of spatial sustainable outcome (i.e., project compatibility success). These referred instrumentalities impact various aspects of the second level of spatial sustainable outcomes (i.e., participant social security satisfaction). The sustainability outcomes are modeling composite structure based on a network analysis model to calculate the prosperity of panel databases for efficiency value, from 2005 to 2025. The database is modeling spatial structure to represent state-of-the-art value-orientation impact and corresponding complexity of sustainability issues (e.g., build a consistent database necessary to approach spatial structure; construct the spatial-economic-ecological model; develop a set of sustainability indicators associated with the model; allow quantification of social, economic and environmental impact; use the value-orientation as a set of important sustainability policy measures), and demonstrate spatial structure reliability. The structure of spatial-ecological model is established for management schemes from the perspective pollutants of multiple sources through the input–output criteria. These criteria evaluate the spillover effect to conduct Monte Carlo simulations and sensitivity analysis in a unique spatial structure. The balance within “equilibrium patterns,” such as collective biosphere features, has a composite index of many distributed feedback flows. The following have a dynamic structure related to physical and chemical properties for gradual prolong to incremental patterns. While these spatial structures argue from ecological modeling of resource savings, static loads are not decisive from an artistic/architectural perspective. The model attempts to unify analytic and analogical spatial structure for the development of urban environments in a relational database setting, using optimization software to integrate spatial structure where the process is based on the engineering topology of systems ecology.Keywords: ecological modeling, spatial structure, orientation impact, composite index, industrial ecology
Procedia PDF Downloads 68
28436 Vibrations of Springboards: Mode Shape and Time Domain Analysis
Authors: Stefano Frassinelli, Alessandro Niccolai, Riccardo E. Zich
Abstract:
Diving is an important Olympic sport. In this sport, the effective performance of the athlete is related to his capability to interact correctly with the springboard. In fact, the elevation of the jump and the correctness of the dive are influenced by the vibrations of the board. In this paper, the vibrations of the springboard will be analyzed by means of typical tools for vibration analysis: firstly, a modal analysis will be done on two different models of the springboard; then, these two models and another one will be analyzed with a time analysis, done by integrating the equations of motion of deformable bodies. All these analyses will be compared with experimental data measured on a real springboard by means of a 6-axis accelerometer; these measurements are aimed at assessing the proposed models. The acquired data will be analyzed both in the frequency domain and in the time domain. Keywords: springboard analysis, modal analysis, time domain analysis, vibrations
Procedia PDF Downloads 460
28435 Effect of Different Ground Motion Scaling Methods on Behavior of 40 Story RC Core Wall Building
Authors: Muhammad Usman, Munir Ahmed
Abstract:
The demand for high-rise buildings has grown fast during the past decades. The design of these buildings using RC core walls has become widespread nowadays in many countries. RC core wall (RCCW) buildings encompass a central core wall and boundary columns joined through post-tensioned slabs at different floor levels. The core wall often provides greater stiffness as compared to the collective stiffness of the boundary columns. Hence, the core wall dominantly resists lateral loading, i.e., wind or earthquake load. The non-linear response history analysis (NLRHA) procedure is currently the finest seismic design procedure for designing high-rise buildings. Modern design tools for nonlinear response history analysis and performance-based design have provided more confidence in designing these high-rise structures. NLRHA requires selection and scaling of ground motions to match the design spectrum for site-specific conditions. Designers use several techniques for scaling ground motion records (time series). Time domain and frequency domain scaling are most commonly used, each with its own benefits and drawbacks. Due to the lengthy process of NLRHA, the application of only one technique is conceivable. To the best of the authors' knowledge, no consensus on the best procedures for the selection and scaling of the ground motions is available in the literature. This research aims to provide the finest ground motion scaling technique specifically for designing 40-story high-rise RCCW buildings. The seismic response of a 40-story RCCW building is checked by applying both frequency domain and time domain scaling. Variable sites are selected in three critical seismic zones of Pakistan. The results indicate that there is extensive variation in the seismic response of the building for these scalings. There is still a need to build a consensus on the subject by investigating variable sites and building heights. Keywords: 40-storied RC core wall building, nonlinear response history analysis, ground motions, time domain scaling, frequency domain scaling
Procedia PDF Downloads 131
28434 Internet of Things: Route Search Optimization Applying Ant Colony Algorithm and Theory of Computer Science
Authors: Tushar Bhardwaj
Abstract:
Internet of Things (IoT) possesses a dynamic network where the network nodes (mobile devices) are added and removed constantly and randomly; hence, the traffic distribution in the network is quite variable and irregular. A basic but very important part of any network is route searching. There are many conventional route searching algorithms, like link-state and distance vector algorithms, but they are restricted to static point-to-point network topologies. In this paper, we propose a model that uses the Ant Colony Algorithm for route searching. It is dynamic in nature and has a positive feedback mechanism that conforms to route searching. We have also embedded the concept of Non-Deterministic Finite Automata [NDFA] minimization to reduce the network and increase performance. Results show that the Ant Colony Algorithm gives the shortest path from the source to the destination node and NDFA minimization effectively reduces the broadcasting storm. Keywords: routing, ant colony algorithm, NDFA, IoT
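To make the positive-feedback idea concrete, here is a compact, self-contained ant colony sketch on a small invented graph; the parameters (number of ants, evaporation rate, α, β) are arbitrary, and the snippet illustrates the general algorithm rather than the paper's IoT model.

```python
# Illustration only: ant colony search for the shortest path on a tiny weighted graph.
import random

graph = {  # adjacency: node -> {neighbor: distance}
    "A": {"B": 2, "C": 5}, "B": {"C": 2, "D": 4},
    "C": {"D": 1}, "D": {},
}
pheromone = {(u, v): 1.0 for u in graph for v in graph[u]}
alpha, beta, rho, n_ants, n_iters = 1.0, 2.0, 0.5, 20, 50

def walk(src, dst):
    """One ant builds a path probabilistically from pheromone and inverse distance."""
    path, node = [src], src
    while node != dst:
        nbrs = [v for v in graph[node] if v not in path]
        if not nbrs:
            return None, float("inf")
        weights = [pheromone[(node, v)] ** alpha * (1.0 / graph[node][v]) ** beta for v in nbrs]
        node = random.choices(nbrs, weights=weights)[0]
        path.append(node)
    length = sum(graph[path[i]][path[i + 1]] for i in range(len(path) - 1))
    return path, length

best_path, best_len = None, float("inf")
for _ in range(n_iters):
    tours = [walk("A", "D") for _ in range(n_ants)]
    for key in pheromone:                       # evaporation
        pheromone[key] *= (1.0 - rho)
    for path, length in tours:
        if path is None:
            continue
        for i in range(len(path) - 1):          # positive feedback: shorter tours deposit more pheromone
            pheromone[(path[i], path[i + 1])] += 1.0 / length
        if length < best_len:
            best_path, best_len = path, length
print(best_path, best_len)   # expected shortest route A-B-C-D with length 5
```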
Procedia PDF Downloads 444
28433 Comparative Analysis of Simulation-Based and Mixed-Integer Linear Programming Approaches for Optimizing Building Modernization Pathways Towards Decarbonization
Authors: Nico Fuchs, Fabian Wüllhorst, Laura Maier, Dirk Müller
Abstract:
The decarbonization of building stocks necessitates the modernization of existing buildings. Key measures for this include reducing energy demands through insulation of the building envelope, replacing heat generators, and installing solar systems. Given limited financial resources, it is impractical to modernize all buildings in a portfolio simultaneously; instead, prioritization of buildings and modernization measures for a given planning horizon is essential. Optimization models for modernization pathways can assist portfolio managers in this prioritization. However, modeling and solving these large-scale optimization problems, often represented as mixed-integer problems (MIP), necessitates simplifying the operation of building energy systems particularly with respect to system dynamics and transient behavior. This raises the question of which level of simplification remains sufficient to accurately account for realistic costs and emissions of building energy systems, ensuring a fair comparison of different modernization measures. This study addresses this issue by comparing a two-stage simulation-based optimization approach with a single-stage mathematical optimization in a mixed-integer linear programming (MILP) formulation. The simulation-based approach serves as a benchmark for realistic energy system operation but requires a restriction of the solution space to discrete choices of modernization measures, such as the sizing of heating systems. After calculating the operation of different energy systems in terms of the resulting final energy demands in simulation models on a first stage, the results serve as input for a second stage MILP optimization, where the design of each building in the portfolio is optimized. In contrast to the simulation-based approach, the MILP-based approach can capture a broader variety of modernization measures due to the efficiency of MILP solvers but necessitates simplifying the building energy system operation. Both approaches are employed to determine the cost-optimal design and dimensioning of several buildings in a portfolio to meet climate targets within limited yearly budgets, resulting in a modernization pathway for the entire portfolio. The comparison reveals that the MILP formulation successfully captures design decisions of building energy systems, such as the selection of heating systems and the modernization of building envelopes. However, the results regarding the optimal dimensioning of heating technologies differ from the results of the two-stage simulation-based approach, as the MILP model tends to overestimate operational efficiency, highlighting the limitations of the MILP approach.Keywords: building energy system optimization, model accuracy in optimization, modernization pathways, building stock decarbonization
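To make the single-stage MILP idea above concrete, here is a hedged toy sketch using PuLP: binary variables select modernization measures per building, a budget constraint limits investment, and the objective maximizes yearly CO2 savings. All buildings, measures, costs, and savings figures are invented placeholders, not the study's data.

```python
# Toy modernization-selection MILP; not the authors' model.
import pulp

buildings = ["B1", "B2"]
measures = {  # measure: (investment cost in EUR, yearly CO2 saving in tonnes) - invented numbers
    "heat_pump": (60000, 15.0),
    "gas_boiler": (20000, 3.0),
    "insulation": (40000, 6.0),
}
budget = 100000

x = {(b, m): pulp.LpVariable(f"x_{b}_{m}", cat=pulp.LpBinary)
     for b in buildings for m in measures}

prob = pulp.LpProblem("modernization_pathway", pulp.LpMaximize)
prob += pulp.lpSum(measures[m][1] * x[b, m] for b in buildings for m in measures)            # CO2 savings
prob += pulp.lpSum(measures[m][0] * x[b, m] for b in buildings for m in measures) <= budget  # budget limit
for b in buildings:  # at most one heat generator per building
    prob += x[b, "heat_pump"] + x[b, "gas_boiler"] <= 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
chosen = [(b, m) for (b, m), var in x.items() if var.value() == 1]
print(chosen)  # e.g. one heat pump plus one envelope insulation within the budget
```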
Procedia PDF Downloads 34
28432 Direct Electrical Communication of Redox Enzyme Based on 3-Dimensional Cross-Linked Redox Enzyme/Nanomaterials
Authors: A. K. M. Kafi, S. N. Nina, Mashitah M. Yusoff
Abstract:
In this work, we have described a new 3-dimensional (3D) network of cross-linked Horseradish Peroxidase/Carbon Nanotube (HRP/CNT) on a thiol-modified Au surface in order to build up the effective electrical wiring of the enzyme units with the electrode. This was achieved by the electropolymerization of aniline-functionalized carbon nanotubes (CNTs) and 4-aminothiophenol-modified HRP on a 4-aminothiophenol monolayer-modified Au electrode. The synthesized 3D HRP/CNT networks were characterized with cyclic voltammetry and amperometry, resulting in the establishment of direct electron transfer between the redox-active unit of HRP and the Au surface. Electrochemical measurements reveal that the immobilized HRP exhibits high biological activity and stability, and a quasi-reversible redox peak of the redox center of HRP was observed at about −0.355 and −0.275 V vs. Ag/AgCl. The electron transfer rate constant, KS, and electron transfer coefficient were found to be 0.57 s-1 and 0.42, respectively. Based on the electrocatalytic process by direct electrochemistry of HRP, a biosensor for detecting H2O2 was developed. The developed biosensor exhibits excellent electrocatalytic activity for the reduction of H2O2. The proposed biosensor modified with the HRP/CNT 3D network displays a broader linear range and a lower detection limit for H2O2 determination. The linear range is from 1.0×10−7 to 1.2×10−4 M with a detection limit of 2.2×10−8 M at 3σ. Moreover, this biosensor exhibits very high sensitivity, good reproducibility and long-term stability. In summary, ease of fabrication, a low cost, fast response and high sensitivity are the main advantages of the new biosensor proposed in this study. These obvious advantages would really help the real analytical applicability of the proposed biosensor. Keywords: redox enzyme, nanomaterials, biosensors, electrical communication
Procedia PDF Downloads 454
28431 Modified Genome-Scale Metabolic Model of Escherichia coli by Adding Hyaluronic Acid Biosynthesis-Related Enzymes (GLMU2 and HYAD) from Pasteurella multocida
Authors: P. Pasomboon, P. Chumnanpuen, T. E-kobon
Abstract:
Hyaluronic acid (HA) is a linear heteropolysaccharide consisting of repeating units of D-glucuronic acid and N-acetyl-D-glucosamine. HA has various useful properties to maintain skin elasticity and moisture, reduce inflammation, and lubricate the movement of various body parts without causing immunogenic allergy. HA can be found in several animal tissues as well as in the capsule component of some bacteria including Pasteurella multocida. This study aimed to modify a genome-scale metabolic model of Escherichia coli using computational simulation and flux analysis methods to predict HA productivity under different carbon sources and nitrogen supplements, by the addition of two enzymes (GLMU2 and HYAD) from P. multocida, to improve HA production under the specified amounts of carbon sources and nitrogen supplements. Results revealed that threonine and aspartate supplementation raised HA production by 12.186%. Our analyses propose that the genome-scale metabolic model is useful for improving HA production and narrows down the number of conditions to be tested further. Keywords: Pasteurella multocida, Escherichia coli, hyaluronic acid, genome-scale metabolic model, bioinformatics
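The flux-balance reasoning behind such genome-scale predictions can be illustrated with a hedged toy example: a three-reaction network solved as a linear program (maximize the HA-producing flux subject to the steady-state condition S·v = 0 and flux bounds). The reactions and bounds are invented and bear no relation to the actual E. coli model or the GLMU2/HYAD additions.

```python
# Toy flux balance analysis: maximize the HA-related flux v2 subject to S·v = 0 and bounds.
import numpy as np
from scipy.optimize import linprog

# Columns: v0 glucose uptake, v1 precursor synthesis, v2 HA export (hypothetical reactions)
S = np.array([
    [1, -1,  0],   # intracellular glucose: produced by uptake v0, consumed by v1
    [0,  1, -1],   # HA precursor: produced by v1, consumed by the export flux v2
])
bounds = [(0, 10), (0, 10), (0, 10)]   # illustrative flux bounds
c = np.array([0, 0, -1.0])             # linprog minimizes, so maximize v2 by minimizing -v2

res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds, method="highs")
print("predicted HA flux:", -res.fun)
```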
Procedia PDF Downloads 123
28430 Optimization of Water Pipeline Routes Using a GIS-Based Multi-Criteria Decision Analysis and a Geometric Search Algorithm
Authors: Leon Mortari
Abstract:
The Metropolitan East region of Rio de Janeiro state, Brazil, faces a historic water scarcity. Among the alternatives studied to solve this situation, the possibility of adduction of the available water in the reservoir Lagoa de Juturnaíba to supply the region's municipalities stands out. The allocation of a linear engineering project must occur through an evaluation of different aspects, such as altitude, slope, proximity to roads, distance from watercourses, land use and occupation, and physical and chemical features of the soil. This work aims to apply a multi-criteria model that combines geoprocessing techniques, decision-making, and geometric search algorithm to optimize a hypothetical adductor system in the scenario of expanding the water supply system that serves this region, known as Imunana-Laranjal, using the Lagoa de Juturnaíba as the source. It is proposed in this study, the construction of a spatial database related to the presented evaluation criteria, treatment and rasterization of these data, and standardization and reclassification of this information in a Geographic Information System (GIS) platform. The methodology involves the integrated analysis of these criteria, using their relative importance defined by weighting them based on expert consultations and the Analytic Hierarchy Process (AHP) method. Three approaches are defined for weighting the criteria by AHP: the first treats all criteria as equally important, the second considers weighting based on a pairwise comparison matrix, and the third establishes a hierarchy based on the priority of the criteria. For each approach, a distinct group of weightings is defined. In the next step, map algebra tools are used to overlay the layers and generate cost surfaces, that indicates the resistance to the passage of the adductor route, using the three groups of weightings. The Dijkstra algorithm, a geometric search algorithm, is then applied to these cost surfaces to find an optimized path within the geographical space, aiming to minimize resources, time, investment, maintenance, and environmental and social impacts.Keywords: geometric search algorithm, GIS, pipeline, route optimization, spatial multi-criteria analysis model
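As a hedged sketch of the AHP weighting step described above, the snippet below derives criterion weights from the principal eigenvector of a pairwise comparison matrix and checks consistency; the 3×3 matrix, the chosen criteria, and their judgments are invented for illustration and are not the study's expert judgments.

```python
# Illustration only: AHP weights (slope, land use, distance to watercourses - hypothetical criteria).
import numpy as np

A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                      # normalized criterion weights

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)          # consistency index
cr = ci / 0.58                                # Saaty random index for n = 3
print("weights:", np.round(weights, 3), "CR:", round(cr, 3))   # CR < 0.1 is usually acceptable
```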
Procedia PDF Downloads 31
28429 Stable Isotope Analysis of Faunal Remains of Ancient Kythnos Island for Paleoenvironmental Reconstruction
Authors: M. Tassi, E. Dotsika, P. Karalis, A. Trantalidou, A. Mazarakis Ainian
Abstract:
The Kythnos Island in Greece is of particular archaeological interest, as it has been inhabited from the 12th century BC until the 7th century AD. From island excavations, numerous faunal and human skeletal remains have been recovered. This work is the first attempt at the paleoenvironmental reconstruction of the island via stable isotope analysis. Specifically, we perform 13C and 18O isotope analysis in faunal bone apatite in order to investigate the climate conditions that prevailed in the area. Additionally, we conduct 13C and 15N isotope analysis in faunal bone collagen, which will constitute the baseline for the subsequent diet reconstruction of the ancient Kythnos population. Keywords: stable isotope analysis, bone collagen stable isotope analysis, bone apatite stable isotope analysis, paleodiet, palaeoclimate
Procedia PDF Downloads 144
28428 On the System of Split Equilibrium and Fixed Point Problems in Real Hilbert Spaces
Authors: Francis O. Nwawuru, Jeremiah N. Ezeora
Abstract:
In this paper, a new algorithm for solving the system of split equilibrium and fixed point problems in real Hilbert spaces is considered. The equilibrium bifunction involves a finite family of pseudo-monotone mappings, which is an improvement over monotone operators. More so, it turns out that the solution of the finite family of nonexpansive mappings. The regularized parameters do not depend on Lipschitz constants. Also, the computations of the stepsize, which plays a crucial role in the convergence analysis of the proposed method, do not require prior knowledge of the norm of the involved bounded linear map. Furthermore, to speed up the rate of convergence, an inertial term technique is introduced in the proposed method. Under standard assumptions on the operators and the control sequences, using a modified Halpern iteration method, we establish strong convergence, a desired result in applications. Finally, the proposed scheme is applied to solve some optimization problems. The result obtained improves numerous results announced earlier in this direction. Keywords: equilibrium, Hilbert spaces, fixed point, nonexpansive mapping, extragradient method, regularized equilibrium
Procedia PDF Downloads 48
28427 Direct Electrical Communication of Redox Enzyme Based on 3-Dimensional Crosslinked Redox Enzyme/Carbon Nanotube on a Thiol-Modified Au Surface
Authors: A. K. M. Kafi, S. N. Nina, Mashitah M. Yusoff
Abstract:
In this work, we have described a new 3-dimensional (3D) network of crosslinked Horseradish Peroxidase/Carbon Nanotube (HRP/CNT) on a thiol-modified Au surface in order to build up the effective electrical wiring of the enzyme units with the electrode. This was achieved by the electropolymerization of aniline-functionalized carbon nanotubes (CNTs) and 4-aminothiophenol-modified HRP on a 4-aminothiophenol monolayer-modified Au electrode. The synthesized 3D HRP/CNT networks were characterized with cyclic voltammetry and amperometry, resulting in the establishment of direct electron transfer between the redox-active unit of HRP and the Au surface. Electrochemical measurements reveal that the immobilized HRP exhibits high biological activity and stability, and a quasi-reversible redox peak of the redox center of HRP was observed at about −0.355 and −0.275 V vs. Ag/AgCl. The electron transfer rate constant, KS, and electron transfer coefficient were found to be 0.57 s-1 and 0.42, respectively. Based on the electrocatalytic process by direct electrochemistry of HRP, a biosensor for detecting H2O2 was developed. The developed biosensor exhibits excellent electrocatalytic activity for the reduction of H2O2. The proposed biosensor modified with the HRP/CNT 3D network displays a broader linear range and a lower detection limit for H2O2 determination. The linear range is from 1.0×10−7 to 1.2×10−4 M with a detection limit of 2.2×10−8 M at 3σ. Moreover, this biosensor exhibits very high sensitivity, good reproducibility and long-term stability. In summary, ease of fabrication, a low cost, fast response and high sensitivity are the main advantages of the new biosensor proposed in this study. These obvious advantages would really help the real analytical applicability of the proposed biosensor. Keywords: biosensor, nanomaterials, redox enzyme, thiol-modified Au surface
Procedia PDF Downloads 329
28426 Improvement of Performance for R. C. Beams Made from Recycled Aggregate by Using Non-Traditional Admixture
Authors: A. H. Yehia, M. M. Rashwan, K. A. Assaf, K. Abd el Samee
Abstract:
The aim of this work is to use an environmentally friendly, cheap, organic non-traditional admixture to improve the structural behavior of sustainable reinforced concrete beams containing different ratios of recycled concrete aggregate. The admixture used was prepared from wastes of the vegetable oil industry. Under- and over-reinforced concrete beams made from natural aggregate and different ratios of recycled concrete aggregate were tested under static load until failure. Eight beams were tested to investigate the performance and the mechanism by which the admixture improves the deformation characteristics, modulus of elasticity and toughness of the tested beams. Test results show the efficiency of the organic admixture in improving the flexural behavior of beams containing 20% recycled concrete aggregate more than the other ratios. Keywords: deflection, modulus of elasticity, non-traditional admixture, recycled concrete aggregate, strain, toughness, under and over reinforcement
Procedia PDF Downloads 464
28425 Geospatial Curve Fitting Methods for Disease Mapping of Tuberculosis in Eastern Cape Province, South Africa
Authors: Davies Obaromi, Qin Yongsong, James Ndege
Abstract:
To interpolate scattered or regularly distributed data, there are imprecise or exact methods. However, there are some of these methods that could be used for interpolating data in a regular grid and others in an irregular grid. In spatial epidemiology, it is important to examine how a disease prevalence rates are distributed in space, and how they relate with each other within a defined distance and direction. In this study, for the geographic and graphic representation of the disease prevalence, linear and biharmonic spline methods were implemented in MATLAB, and used to identify, localize and compare for smoothing in the distribution patterns of tuberculosis (TB) in Eastern Cape Province. The aim of this study is to produce a more “smooth” graphical disease map for TB prevalence patterns by a 3-D curve fitting techniques, especially the biharmonic splines that can suppress noise easily, by seeking a least-squares fit rather than exact interpolation. The datasets are represented generally as a 3D or XYZ triplets, where X and Y are the spatial coordinates and Z is the variable of interest and in this case, TB counts in the province. This smoothing spline is a method of fitting a smooth curve to a set of noisy observations using a spline function, and it has also become the conventional method for its high precision, simplicity and flexibility. Surface and contour plots are produced for the TB prevalence at the provincial level for 2012 – 2015. From the results, the general outlook of all the fittings showed a systematic pattern in the distribution of TB cases in the province and this is consistent with some spatial statistical analyses carried out in the province. This new method is rarely used in disease mapping applications, but it has a superior advantage to be assessed at subjective locations rather than only on a rectangular grid as seen in most traditional GIS methods of geospatial analyses.Keywords: linear, biharmonic splines, tuberculosis, South Africa
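A rough Python analogue of the MATLAB fitting described above (not the authors' code): a thin-plate spline, the two-dimensional biharmonic smoothing spline, fitted to scattered (X, Y, TB-count) triplets and evaluated on a grid; the coordinates, counts, and smoothing parameter are invented.

```python
# Illustration only: smooth a scattered prevalence surface with a thin-plate (biharmonic-type) spline.
import numpy as np
from scipy.interpolate import RBFInterpolator

xy = np.array([[26.5, -32.0], [27.0, -32.5], [27.9, -32.9], [26.2, -33.6], [28.4, -32.1]])
tb = np.array([120.0, 340.0, 510.0, 90.0, 260.0])            # TB counts at those locations (invented)

spline = RBFInterpolator(xy, tb, kernel="thin_plate_spline", smoothing=1.0)  # smoothing > 0 suppresses noise
gx, gy = np.meshgrid(np.linspace(26, 29, 50), np.linspace(-34, -31.5, 50))
surface = spline(np.column_stack([gx.ravel(), gy.ravel()])).reshape(gx.shape)
print(surface.shape)   # 50x50 smoothed prevalence surface ready for contour plotting
```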
Procedia PDF Downloads 238
28424 Tc-99m MIBI Scintigraphy to Differentiate Malignant from Benign Lesions, Detected on Planar Bone Scan
Authors: Aniqa Jabeen
Abstract:
The aim of this study was to evaluate the effectiveness of Tc-99m MIBI (Technetium 99-methoxy-iso-butyl-isonitrile) scintigraphy to differentiate malignancies from benign lesions, which were detected on planar bone scans. Materials and Methods: 59 patients with bone lesions were enrolled in the study. The scintigraphic findings were compared with the clinical, radiological and histological findings. Each patient initially underwent a three-phase bone scan with Tc-99m MDP (Methylene Diphosphonate) and, if evidence of a lesion was found, the patient then underwent dynamic and static MIBI scintigraphy after three to four days. The MDP and MIBI scans were evaluated visually and quantitatively. For quantitative analysis, count ratios of the lesion and the contralateral normal side (L/C) were taken from regions of interest drawn on the scans. The Student's t-test was applied to assess the significance of the difference between benign and malignant lesions; a p-value < 0.05 was considered significant. Result: The MDP scans showed increased tracer uptake, but there was no significant difference between benign and malignant uptake of the radiotracer. However, a significant difference in uptake (p-value 0.015) was seen between malignant (L/C = 3.51 ± 1.02) and benign lesions (L/C = 2.50 ± 0.42) on the MIBI scan. Three of thirty benign lesions did not show significant MIBI uptake. Seven malignant lesions appeared as false negatives. The specificity of the scan was 86.66%, and its Negative Predictive Value (NPV) was 81.25%, whereas the sensitivity of the scan was 79.31%. When axial metastases were excluded from the lesions, the sensitivity of the MIBI scan increased to 91.66% and the NPV also increased to 92.85%. Conclusion: MIBI scintigraphy proves its usefulness by distinguishing malignant from benign lesions. MIBI also correctly identifies metastatic lesions. The negative predictive value of the scan points towards its ability to accurately diagnose the normal (benign) cases. However, biopsy remains the gold standard and a definitive diagnostic modality in musculoskeletal tumors. The MIBI scan provides useful information in preoperative assessment and in distinguishing between malignant and benign lesions. Keywords: benign, malignancies, MDP bone scan, MIBI scintigraphy
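As a hedged check of how the reported metrics relate, the snippet below computes sensitivity, specificity, and NPV from a 2×2 confusion matrix; the counts are one reconstruction consistent with the reported percentages, not the study's actual tabulated counts.

```python
# Illustration only: diagnostic metrics from hypothetical true/false positive and negative counts.
def diagnostics(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    npv = tn / (tn + fn)
    return sensitivity, specificity, npv

sens, spec, npv = diagnostics(tp=23, fn=6, tn=26, fp=4)   # reconstructed counts, 59 lesions total
print(f"sensitivity {sens:.2%}, specificity {spec:.2%}, NPV {npv:.2%}")
```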
Procedia PDF Downloads 404
28423 Neural Synchronization - The Brain’s Transfer of Sensory Data
Authors: David Edgar
Abstract:
To understand how the brain’s subconscious and conscious functions, we must conquer the physics of Unity, which leads to duality’s algorithm. Where the subconscious (bottom-up) and conscious (top-down) processes function together to produce and consume intelligence, we use terms like ‘time is relative,’ but we really do understand the meaning. In the brain, there are different processes and, therefore, different observers. These different processes experience time at different rates. A sensory system such as the eyes cycles measurement around 33 milliseconds, the conscious process of the frontal lobe cycles at 300 milliseconds, and the subconscious process of the thalamus cycle at 5 milliseconds. Three different observers experience time differently. To bridge observers, the thalamus, which is the fastest of the processes, maintains a synchronous state and entangles the different components of the brain’s physical process. The entanglements form a synchronous cohesion between the brain components allowing them to share the same state and execute in the same measurement cycle. The thalamus uses the shared state to control the firing sequence of the brain’s linear subconscious process. Sharing state also allows the brain to cheat on the amount of sensory data that must be exchanged between components. Only unpredictable motion is transferred through the synchronous state because predictable motion already exists in the shared framework. The brain’s synchronous subconscious process is entirely based on energy conservation, where prediction regulates energy usage. So, the eyes every 33 milliseconds dump their sensory data into the thalamus every day. The thalamus is going to perform a motion measurement to identify the unpredictable motion in the sensory data. Here is the trick. The thalamus conducts its measurement based on the original observation time of the sensory system (33 ms), not its own process time (5 ms). This creates a data payload of synchronous motion that preserves the original sensory observation. Basically, a frozen moment in time (Flat 4D). The single moment in time can then be processed through the single state maintained by the synchronous process. Other processes, such as consciousness (300 ms), can interface with the synchronous state to generate awareness of that moment. Now, synchronous data traveling through a separate faster synchronous process creates a theoretical time tunnel where observation time is tunneled through the synchronous process and is reproduced on the other side in the original time-relativity. The synchronous process eliminates time dilation by simply removing itself from the equation so that its own process time does not alter the experience. To the original observer, the measurement appears to be instantaneous, but in the thalamus, a linear subconscious process generating sensory perception and thought production is being executed. It is all just occurring in the time available because other observation times are slower than thalamic measurement time. For life to exist in the physical universe requires a linear measurement process, it just hides by operating at a faster time relativity. What’s interesting is time dilation is not the problem; it’s the solution. Einstein said there was no universal time.Keywords: neural synchronization, natural intelligence, 99.95% IoT data transmission savings, artificial subconscious intelligence (ASI)
Procedia PDF Downloads 126
28422 Relevancy Measures of Errors in Displacements of Finite Elements Analysis Results
Authors: A. B. Bolkhir, A. Elshafie, T. K. Yousif
Abstract:
This paper highlights the methods of error estimation in finite element analysis (FEA) results. It indicates that the discretization error could be eliminated by performing finite element analysis with successively finer meshes or by extrapolating response predictions from an orderly sequence of relatively low-degree-of-freedom analysis results. In addition, the paper eliminates the round-off error by running the code at a higher precision. The paper provides applications to finite element analysis results. It draws conclusions based on the results of applying the error estimation methods. Keywords: finite element analysis (FEA), discretization error, round-off error, mesh refinement, Richardson extrapolation, monotonic convergence
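A hedged worked sketch of the Richardson extrapolation idea referenced above: combining displacement predictions from two meshes (element size h and h/2) to estimate the converged value and the discretization error; the displacement values and the assumed convergence order are illustrative only.

```python
# Illustration only: Richardson extrapolation from two mesh-refinement levels (refinement ratio 2).
def richardson(u_h, u_h2, p=2.0):
    """Extrapolate from solution u_h (mesh size h) and u_h2 (mesh size h/2), assumed order p."""
    return u_h2 + (u_h2 - u_h) / (2.0**p - 1.0)

u_coarse, u_fine = 1.902, 1.948          # tip displacements (mm) from two meshes, illustrative
print("extrapolated displacement:", round(richardson(u_coarse, u_fine), 4))
print("estimated fine-mesh discretization error:", round((u_fine - u_coarse) / (2.0**2 - 1.0), 4))
```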
Procedia PDF Downloads 495