Search results for: linear FM chirp
2546 Planktivorous Fish Schooling Responses to Current at Natural and Artificial Reefs
Authors: Matthew Holland, Jason Everett, Martin Cox, Iain Suthers
Abstract:
High spatial-resolution distribution of planktivorous reef fish can reveal behavioural adaptations to optimise the balance between feeding success and predator avoidance. We used a multi-beam echosounder to record bathymetry and the three-dimensional distribution of fish schools associated with natural and artificial reefs. We utilised generalised linear models to assess the distribution, orientation, and aggregation of fish schools relative to the structure, vertical relief, and currents. At artificial reefs, fish schooled more closely to the structure and demonstrated a preference for the windward side, particularly when exposed to strong currents. Similarly, at natural reefs fish demonstrated a preference for windward aspects of bathymetry, particularly when associated with high vertical relief. Our findings suggest that under conditions with stronger current velocity, fish can exercise their preference to remain close to structure for predator avoidance, while still receiving an adequate supply of zooplankton delivered by the current. Similarly, when current velocity is low, fish tend to disperse for better access to zooplankton. As artificial reefs are generally deployed with the goal of creating productivity rather than simply attracting fish from elsewhere, we advise that future artificial reefs be designed as semi-linear arrays perpendicular to the prevailing current, with multiple tall towers. This will facilitate the conversion of dispersed zooplankton into energy for higher trophic levels, enhancing reef productivity and fisheries.
Keywords: artificial reef, current, forage fish, multi-beam, planktivorous fish, reef fish, schooling
Procedia PDF Downloads 158
2545 On the Existence of Homotopic Mapping Between Knowledge Graphs and Graph Embeddings
Authors: Jude K. Safo
Abstract:
Knowledge Graphs (KG) and their relation to Graph Embeddings (GE) represent a unique data structure in the landscape of machine learning (relative to image, text and acoustic data). Unlike the latter, GEs are the only data structure sufficient for representing hierarchically dense, semantic information needed for use-cases like supply chain data and protein folding, where the search space exceeds the limits of traditional search methods (e.g. PageRank, Dijkstra, etc.). While GEs are effective for compressing low-rank tensor data, at scale they begin to introduce a new problem of 'data retrieval', which we observe in Large Language Models. Notable attempts by TransE, TransR and other prominent industry standards have shown a peak performance just north of 57% on the WN18 and FB15K benchmarks, insufficient for practical industry applications. They are also limited, in scope, to next node/link predictions. Traditional linear methods like Tucker, CP, PARAFAC and CANDECOMP quickly hit memory limits on tensors exceeding 6.4 million nodes. This paper outlines a topological framework for linear mapping between concepts in KG space and GE space that preserves cardinality. Most importantly, we introduce a traceable framework for composing dense linguistic structures, and we demonstrate the performance this model achieves on the WN18 benchmark. This model does not rely on Large Language Models (LLM), though the applications are certainly relevant there as well.
Keywords: representation theory, large language models, graph embeddings, applied algebraic topology, applied knot theory, combinatorics
Procedia PDF Downloads 68
2544 Nonlinear Finite Element Analysis of Optimally Designed Steel Angelina™ Beams
Authors: Ferhat Erdal, Osman Tunca, Serkan Tas, Serdar Carbas
Abstract:
Web-expanded steel beams provide an easy and economical solution for systems with longer structural members. The main goal of manufacturing these beams is to increase the moment of inertia and section modulus, which results in greater strength and rigidity. Until the recent introduction of sinusoidal web-expanded beams, there were two common types of open web-expanded beams: beams with hexagonal openings, also called castellated beams, and beams with circular openings, referred to as cellular beams. In the present research, the optimum design of this new generation of beams, namely sinusoidal web-expanded beams, is carried out, and the design results are compared with castellated and cellular beam solutions. Thanks to a reduced fabrication process and substantial material savings, the web-expanded beam with sinusoidal holes (Angelina™ Beam) meets the economic requirements of steel design problems while ensuring optimum safety. The objective of this research is to carry out non-linear finite element analysis (FEA) of the web-expanded beam with sinusoidal holes. The FE method has been used to predict the beams' entire response to increasing values of external loading until they lose their load-carrying capacity. An FE model of each specimen utilized in the experimental studies is built. These models are used to simulate the experimental work to verify the test results and to investigate the non-linear behavior of failure modes such as web-post buckling, shear buckling and Vierendeel bending of the beams.
Keywords: steel structures, web-expanded beams, Angelina beam, optimum design, failure modes, finite element analysis
Procedia PDF Downloads 281
2543 Intelligent Control of Bioprocesses: A Software Application
Authors: Mihai Caramihai, Dan Vasilescu
Abstract:
The main research objective of the experimental bioprocess analyzed in this paper was to obtain large biomass quantities. The bioprocess is performed in a 100 L Bioengineering bioreactor with 42 L of cultivation medium made of peptone, meat extract and sodium chloride. The reactor was equipped with pH, temperature, dissolved oxygen, and agitation controllers. The operating parameters were 37 °C, 1.2 atm, 250 rpm and an air flow rate of 15 L/min. The main objective of this paper is to present a case study demonstrating that intelligent control, which describes the complexity of the biological process in the qualitative and subjective manner perceived by a human operator, is an efficient control strategy for this kind of bioprocess. In order to simulate the bioprocess evolution, an intelligent control structure based on fuzzy logic has been designed. The specific objective is to present a fuzzy control approach based on human expert rules versus a modeling approach of cell growth based on bioprocess experimental data. Kinetic modeling may represent only a small number of bioprocesses in terms of overall biosystem behavior, while a fuzzy control system (FCS) can manipulate incomplete and uncertain information about the process, assuring high control performance, and provides an alternative solution to non-linear control as it is closer to the real world. Due to the high degree of non-linearity and time variance of bioprocesses, the need for a control mechanism arises. BIOSIM, an originally developed software package, implements such a control structure. The simulation study has shown that the fuzzy technique is quite appropriate for this non-linear, time-varying system versus the classical control method based on an a priori model.
Keywords: intelligent control, fuzzy model, bioprocess optimization
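The fuzzy control strategy described above can be illustrated with a minimal Mamdani-style sketch. The linguistic terms, membership breakpoints, variable names and rule outputs below are hypothetical stand-ins, not the BIOSIM rule base (which the abstract does not publish): a dissolved-oxygen error is fuzzified over three triangular terms and defuzzified into a feed-rate command by a weighted average.

```python
# Minimal Mamdani-style fuzzy controller sketch; all terms and rules are
# illustrative assumptions, not the actual BIOSIM rule base.

def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_feed_rate(do_error):
    """Map a dissolved-oxygen error (% of setpoint) to a feed-rate command (L/h)."""
    # Fuzzification over three linguistic terms.
    low = tri(do_error, -100.0, -50.0, 0.0)
    ok = tri(do_error, -50.0, 0.0, 50.0)
    high = tri(do_error, 0.0, 50.0, 100.0)
    # Rule base: low DO error -> cut feed; OK -> hold; high -> raise feed.
    rules = [(low, 0.2), (ok, 1.0), (high, 1.8)]
    # Weighted-average (centroid-like) defuzzification.
    den = sum(w for w, _ in rules)
    return sum(w * u for w, u in rules) / den if den else 1.0
```

The appeal, as the abstract argues, is that such rules encode the operator's qualitative knowledge directly, without requiring a kinetic model of cell growth.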
Procedia PDF Downloads 327
2542 Vibration Absorption Strategy for Multi-Frequency Excitation
Authors: Der Chyan Lin
Abstract:
Since its early introduction by Ormondroyd and Den Hartog, the vibration absorber (VA) has become one of the most commonly used vibration mitigation strategies. The strategy is most effective for a primary plant subjected to a single-frequency excitation. For continuous systems, notable advances in vibration absorption in multi-frequency systems were made. However, the efficacy of the VA strategy for systems under multi-frequency excitation is not well understood. For example, for an N degrees-of-freedom (DOF) primary-absorber system, there are N 'peak' frequencies of large-amplitude vibration per every new excitation frequency. In general, the usable range for vibration absorption can be greatly reduced as a result. Frequency-modulated harmonic excitation is a commonly seen multi-frequency excitation example: f(t) = cos(ϖ(t)t), where ϖ(t) = ω(1 + α sin(δt)). It is known that f(t) has a series expansion given by the Bessel function of the first kind, which implies an infinity of forcing frequencies in the frequency-modulated harmonic excitation. For an SDOF system of natural frequency ωₙ subjected to f(t), it can be shown that amplitude peaks emerge at ω₍ₚ,ₖ₎ = (ωₙ ± 2kδ)/(α ∓ 1), k∈Z; i.e., there is an infinity of resonant frequencies ω₍ₚ,ₖ₎, k∈Z, making the VA strategy ineffective. In this work, we propose an absorber frequency placement strategy for SDOF vibration systems subjected to frequency-modulated excitation. An SDOF linear mass-spring system coupled to lateral absorber systems is used to demonstrate the ideas. Although the mechanical components are linear, the governing equations for the coupled system are nonlinear. We show, using N identical absorbers for N ≫ 1, that (a) there is a cluster of N+1 natural frequencies around every natural absorber frequency, and (b) the absorber frequencies can be moved away from the plant's resonance frequency (ω₀) as N increases. Moreover, we show that the bandwidth of the VA performance increases with N.
The derivations of the clustering and bandwidth-widening effects will be given, and the superiority of the proposed strategy will be demonstrated via numerical experiments.
Keywords: Bessel function, bandwidth, frequency modulated excitation, vibration absorber
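The Bessel-series expansion invoked above can be checked numerically. The sketch below uses the standard phase-modulated form cos(ω_c t + β sin δt), a simplification of the abstract's f(t), whose spectrum is a comb of lines at ω_c + kδ with amplitudes J_k(β); all parameter values are illustrative.

```python
# Sideband comb of a phase-modulated cosine via the Bessel expansion
# cos(w_c*t + beta*sin(d*t)) = sum_k J_k(beta) * cos((w_c + k*d)*t).
from math import factorial

def bessel_j(k, x, terms=30):
    """Bessel function of the first kind J_k(x) via its power series (k >= 0)."""
    return sum((-1) ** m / (factorial(m) * factorial(m + k)) * (x / 2) ** (2 * m + k)
               for m in range(terms))

def sidebands(omega_c, delta, beta, kmax=5):
    """(frequency, amplitude) pairs of the spectral lines up to the kmax-th sideband."""
    lines = []
    for k in range(-kmax, kmax + 1):
        # J_{-k}(x) = (-1)^k J_k(x)
        amp = ((-1) ** abs(k)) * bessel_j(abs(k), beta) if k < 0 else bessel_j(k, beta)
        lines.append((omega_c + k * delta, amp))
    return lines
```

The identity Σ_k J_k(β)² = 1 gives a quick sanity check that a truncated comb carries essentially all of the signal energy, i.e., that the forcing really is spread over infinitely many frequencies with rapidly decaying amplitudes.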
Procedia PDF Downloads 157
2541 The Relationship between Land Use Factors and Feeling of Happiness at the Neighbourhood Level
Authors: M. Moeinaddini, Z. Asadi-Shekari, Z. Sultan, M. Zaly Shah
Abstract:
Happiness can be related to everything that can provide a feeling of satisfaction or pleasure. This study considers the relationship between land use factors and the feeling of happiness at the neighbourhood level. Land use variables (beautiful and attractive neighbourhood design, availability and quality of shopping centres, sufficient recreational spaces and facilities, and sufficient daily service centres) are used as independent variables, and the happiness score is used as the dependent variable in this study. In addition to the land use variables, socio-economic factors (gender, race, marital status, employment status, education, and income) are also considered as independent variables. This study uses the Oxford happiness questionnaire to estimate the happiness score of more than 300 people living in six neighbourhoods. The neighbourhoods were selected randomly from Skudai neighbourhoods in Johor, Malaysia. The land use data were obtained by adding related questions to the Oxford happiness questionnaire. The strength of the relationship in this study is found using generalised linear modelling (GLM). The findings of this research indicate that an increase in the feeling of happiness is correlated with increasing income, more beautiful and attractive neighbourhood design, and sufficient shopping centres, recreational spaces, and daily service centres. The results show that all land use factors in this study have a significant relationship with happiness, but only income, among the socio-economic factors, affects happiness significantly. Therefore, land use factors can affect happiness in Skudai more than socio-economic factors.
Keywords: neighbourhood land use, neighbourhood design, happiness, socio-economic factors, generalised linear modelling
Procedia PDF Downloads 149
2540 Thermoluminescence Characteristic of Nanocrystalline BaSO4 Doped with Europium
Authors: Kanika S. Raheja, A. Pandey, Shaila Bahl, Pratik Kumar, S. P. Lochab
Abstract:
The subject of this paper is the study of a BaSO₄ nanophosphor doped with europium, in which mainly the concentration of the rare earth impurity Eu (0.05, 0.1, 0.2, 0.5, and 1 mol%) has been varied. A comparative study of the thermoluminescence (TL) properties of the given nanophosphor has also been done using a well-known standard dosimetry material, TLD-100. First, a number of samples were prepared successfully by the chemical co-precipitation method. The whole lot was then compared to the standard material (TLD-100) for its TL sensitivity. BaSO₄:Eu (0.2 mol%) showed the highest sensitivity of the lot. It was also found that, when compared to the standard TLD-100, BaSO₄:Eu (0.2 mol%) showed surprisingly high sensitivity over a large range of doses. The TL response curve for all prepared samples has also been studied over a wide range of doses, i.e. 10 Gy to 2 kGy, for gamma radiation. Almost all the samples of BaSO₄:Eu showed remarkable linearity over a broad range of doses, which is a characteristic feature of a fine TL dosimeter. The graph remained linear even beyond 1 kGy for gamma radiation. Thus, the given nanophosphor has been successfully optimised for the concentration of the dopant material to achieve its highest TL sensitivity. Further, the comparative study with the standard material revealed that the optimised sample shows markedly better TL sensitivity and a linear response curve over an exceptionally wide range of doses for gamma radiation (Co-60) as compared to the standard TLD-100, which makes the optimised BaSO₄:Eu quite promising as an efficient gamma radiation dosimeter. Lastly, the present phosphor has been optimised for its annealing temperature to acquire the best results, while also studying its fading and reusability properties.
Keywords: gamma radiation, nanoparticles, radiation dosimetry, thermoluminescence
Procedia PDF Downloads 430
2539 Development of a Sensitive Electrochemical Sensor Based on Carbon Dots and Graphitic Carbon Nitride for the Detection of 2-Chlorophenol and Arsenic
Authors: Theo H. G. Moundzounga
Abstract:
Arsenic and 2-chlorophenol are priority pollutants that pose serious health threats to humans and ecology. An electrochemical sensor, based on graphitic carbon nitride (g-C₃N₄) and carbon dots (CDs), was fabricated and used for the determination of arsenic and 2-chlorophenol. The g-C₃N₄/CDs nanocomposite was prepared via a microwave irradiation heating method and was drop-dried on the surface of a glassy carbon electrode (GCE). Transmission electron microscopy (TEM), X-ray diffraction (XRD), photoluminescence (PL), Fourier transform infrared spectroscopy (FTIR), and UV-Vis diffuse reflectance spectroscopy (UV-Vis DRS) were used for the characterization of the structure and morphology of the nanocomposite. Electrochemical characterization was done by electrochemical impedance spectroscopy (EIS) and cyclic voltammetry (CV). The electrochemical behavior of arsenic and 2-chlorophenol on different electrodes (GCE, CDs/GCE, and g-C₃N₄/CDs/GCE) was investigated by differential pulse voltammetry (DPV). The results demonstrated that the g-C₃N₄/CDs/GCE significantly enhanced the oxidation peak current of both analytes. The detection sensitivity for the analytes was greatly improved, suggesting that this new modified electrode has great potential in the determination of trace levels of arsenic and 2-chlorophenol. Experimental conditions which affect the electrochemical response of arsenic and 2-chlorophenol were studied; the oxidation peak currents displayed a good linear relationship to concentration for 2-chlorophenol (R²=0.948, n=5) and arsenic (R²=0.9524, n=5), with a linear range from 0.5 to 2.5 μM for both analytes and detection limits of 2.15 μM and 0.39 μM, respectively. The modified electrode was used to determine arsenic and 2-chlorophenol in spiked tap and effluent water samples by the standard addition method, and the results were satisfactory.
According to the measurements, the new modified electrode is a good alternative as a chemical sensor for the determination of other phenols.
Keywords: electrochemistry, electrode, limit of detection, sensor
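The calibration figures quoted above (slope-based linearity and detection limits) follow from an ordinary least-squares fit of peak current against concentration. A sketch with hypothetical calibration points (not the paper's measured currents) and one common detection-limit convention is:

```python
# OLS calibration line and an IUPAC-style detection limit
# (LOD = k * sd(blank) / slope, k = 3.3). Data are illustrative.

def linear_fit(x, y):
    """OLS fit y = a + b*x; returns (intercept, slope, r_squared)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1.0 - ss_res / ss_tot

def detection_limit(slope, sd_blank, k=3.3):
    """One common convention: LOD = k * sd(blank) / calibration slope."""
    return k * sd_blank / slope

conc = [0.5, 1.0, 1.5, 2.0, 2.5]   # uM, the linear range reported above
curr = [1.1, 2.0, 3.1, 3.9, 5.0]   # uA, hypothetical peak currents
a, b, r2 = linear_fit(conc, curr)
```

The R² values reported in the abstract come from exactly such five-point (n = 5) fits, one per analyte.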
Procedia PDF Downloads 145
2538 Comparison of the Existing Damage Indices in Steel Moment-Resisting Frame Structures
Authors: Hamid Kazemi, Abbasali Sadeghi
Abstract:
Assessment of the seismic behavior of frame structures is carried out to evaluate loss of life and financial damage. New structural seismic behavior assessment methods have been proposed, so it is necessary to define a formulation as a damage index by which the damage amount can be quantified and qualified. In this paper, four new steel moment-resisting frames with intermediate ductility and different heights (2, 5, 8, and 12 stories), with regular geometry and a simple rectangular plan, were assumed and designed. Three existing groups of damage indices were studied: local indices (Drift, Maximum Roof Displacement, Banon Failure, Kinematic, Banon Normalized Cumulative Rotation, Cumulative Plastic Rotation and Ductility), global indices (Roufaiel and Meyer, Papadopoulos, Sozen, Rosenblueth, Ductility and Base Shear), and story indices (Banon Failure and Inter-story Rotation). The necessary parameters for these damage indices have been calculated under the effect of far-fault ground motion records by non-linear dynamic time history analysis. Finally, a prioritization of the damage indices is defined based on more conservative values in terms of damageability rate. The results show that the selected damage index has an important effect on the estimation of the damage state. Also, the Failure, Drift, and Rosenblueth damage indices are the more conservative indices for the local, story, and global damage indices, respectively.
Keywords: damage index, far-fault ground motion records, non-linear time history analysis, SeismoStruct software, steel moment-resisting frame
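Of the indices listed, the drift-based local index is the simplest to state. The sketch below is a generic normalized-drift formulation with invented threshold rotations, purely for illustration; the Banon, Roufaiel-Meyer and other indices compared in the paper each have their own specific definitions.

```python
# Generic drift-based damage index: 0 below yield demand, 1 at ultimate
# capacity. Thresholds and displacements are illustrative, not the paper's.

def story_drifts(displacements, story_height):
    """Peak inter-story drift ratios from peak floor displacements
    (index 0 is the ground level, displacement 0.0)."""
    return [abs(displacements[i + 1] - displacements[i]) / story_height
            for i in range(len(displacements) - 1)]

def drift_damage_index(theta_max, theta_yield, theta_ult):
    """Normalized drift demand, clipped to [0, 1]."""
    di = (theta_max - theta_yield) / (theta_ult - theta_yield)
    return min(max(di, 0.0), 1.0)
```

Indices of this shape are what the non-linear time-history runs feed: the analysis supplies the peak demands, and the index normalizes them against capacity.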
Procedia PDF Downloads 292
2537 Comparison of Different Machine Learning Algorithms for Solubility Prediction
Authors: Muhammet Baldan, Emel Timuçin
Abstract:
Molecular solubility prediction plays a crucial role in various fields, such as drug discovery, environmental science, and material science. In this study, we compare the performance of five machine learning algorithms for predicting molecular solubility using the AqSolDB dataset: linear regression, support vector machines (SVM), random forests, gradient boosting machines (GBM), and neural networks. The dataset consists of 9981 data points with their corresponding solubility values. MACCS keys (166 bits), RDKit properties (20 properties), and structural properties (3) are extracted for every SMILES representation in the dataset, so a total of 189 features were used for training and testing for every molecule. Each algorithm is trained on a subset of the dataset and evaluated using accuracy scores. Additionally, computational time for training and testing is recorded to assess the efficiency of each algorithm. Our results demonstrate that the random forest model outperformed the other algorithms in terms of predictive accuracy, achieving a 0.93 accuracy score. Gradient boosting machines and neural networks also exhibit strong performance, closely followed by support vector machines. Linear regression, while simpler in nature, demonstrates competitive performance but with slightly higher errors compared to the ensemble methods. Overall, this study provides valuable insights into the performance of machine learning algorithms for molecular solubility prediction, highlighting the importance of algorithm selection in achieving accurate and efficient predictions in practical applications.
Keywords: random forest, machine learning, comparison, feature extraction
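The train/evaluate/compare workflow described above can be sketched in miniature. The snippet below substitutes synthetic 5-dimensional "descriptor" vectors and a 1-nearest-neighbour model for the paper's 189 RDKit/MACCS features and random forest; the point is only to illustrate the accuracy comparison against a naive majority-class baseline.

```python
# Miniature model-comparison workflow on synthetic descriptor data.
import random

def train_test_split(X, y, test_frac=0.2, seed=0):
    """Deterministic shuffled split into train and test portions."""
    idx = list(range(len(X)))
    random.Random(seed).shuffle(idx)
    cut = int(len(idx) * (1 - test_frac))
    tr, te = idx[:cut], idx[cut:]
    return ([X[i] for i in tr], [y[i] for i in tr],
            [X[i] for i in te], [y[i] for i in te])

def nn_predict(Xtr, ytr, x):
    """1-nearest-neighbour label under squared Euclidean distance."""
    d2 = lambda u, v: sum((a - b) ** 2 for a, b in zip(u, v))
    return ytr[min(range(len(Xtr)), key=lambda i: d2(Xtr[i], x))]

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

rng = random.Random(42)  # two well-separated classes of 100 "molecules" each
X = [[rng.gauss(c, 0.3) for _ in range(5)] for c in (0, 1) for _ in range(100)]
y = [c for c in (0, 1) for _ in range(100)]
Xtr, ytr, Xte, yte = train_test_split(X, y)
acc_nn = accuracy(yte, [nn_predict(Xtr, ytr, x) for x in Xte])
acc_base = accuracy(yte, [max(set(ytr), key=ytr.count)] * len(yte))
```

In the study proper, the same held-out-accuracy comparison (plus wall-clock timing) is what ranks random forest above the other four algorithms.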
Procedia PDF Downloads 41
2536 Efficient Utilization of Negative Half Wave of Regulator Rectifier Output to Drive Class D LED Headlamp
Authors: Lalit Ahuja, Nancy Das, Yashas Shetty
Abstract:
LED lighting has been increasingly adopted for vehicles in both domestic and foreign automotive markets. This miniaturized technology gives the best light output, low energy consumption, and cost-efficient solutions for driving, which is the need of the hour. In this paper, we present a methodology for driving the highest-class two-wheeler headlamp with regulator and rectifier (RR) output. Unlike usual LED headlamps, which are battery-driven, the proposed solution is a low-cost and highly efficient LED Driver Module (LDM) driven directly by the RR. The positive half of the magneto output is regulated and used to charge the battery that supplies various peripherals, while conventionally the negative half was used for operating bulb-based exterior lamps. With the advancement of battery-driven LED-based headlamps, this negative half pulse has remained unused in most vehicles. Our system uses the negative half-wave rectified DC output from the RR to provide constant light output at all RPMs of the vehicle. The negative rectified DC output of the RR gives us the advantage of a pulsating DC input which periodically goes to zero, helping us generate a constant DC output matched to the required LED load; with changes in RPM, an additional active thermal bypass circuit helps us maintain the efficiency and limit the thermal rise. The methodology uses the negative half-wave output of the RR along with a linear constant-current driver of significantly higher efficiency. Although the RR output has varied frequency and duty cycles at different engine RPMs, the driver is designed such that it provides constant current to the LEDs with minimal ripple. In LED headlamps, a DC-DC switching regulator is usually used, which is bulky. With linear regulators, we eliminate bulky components and improve the form factor; hence, this solution is both cost-efficient and compact.
Presently, output ripple-free amplitude drivers with fewer components and less complexity are limited to lower-power LED lamps, while the focus of current high-efficiency research is often on high LED power applications. This paper presents a method of driving the LED load at both high beam and low beam using the negative half-wave rectified pulsating DC from the RR with minimum components, maintaining high efficiency within the thermal limitations. Linear regulators are typically inefficient, with efficiencies of about 40% and as low as 14%, which leads to poor thermal performance; although they do not require complex and bulky circuitry, powering high-power devices with them is difficult to realise. But with the input being negative half-wave rectified pulsating DC, this efficiency can be improved, as it helps us generate a constant DC output matched to the LED load, minimising the voltage drop across the linear regulator. Hence, losses are significantly reduced, and efficiency as high as 75% is achieved. With a change in RPM, the DC voltage increases, which can be managed by the active thermal bypass circuitry, resulting in better thermal performance; the use of bulky and expensive heat sinks can therefore be avoided. In summary, the methodology utilizes the unused negative pulsating DC output of the RR to optimize the utilization of RR output power and provide a cost-efficient solution compared to costly DC-DC drivers.
Keywords: class D LED headlamp, regulator and rectifier, pulsating DC, low cost and highly efficient, LED driver module
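The efficiency argument above — that a pulsating input which periodically dips toward the LED voltage wastes less headroom than a constant rail — can be sanity-checked numerically. The voltages below are illustrative assumptions, not measurements from the paper.

```python
# Ideal linear constant-current driver: all headroom between the input and
# the LED string voltage is dissipated. Compare a half-sine input against a
# constant rail at the same peak voltage. Values are illustrative.
import math

def linear_driver_efficiency(v_peak, v_led, samples=100000):
    """Average efficiency over one half-sine period; the driver conducts a
    constant current only while the input exceeds the LED voltage, so the
    current factors out of the power ratio."""
    p_in = p_led = 0.0
    for n in range(samples):
        v = v_peak * math.sin(math.pi * n / samples)
        if v > v_led:
            p_in += v          # input power  ~ v * I
            p_led += v_led     # useful power ~ v_led * I
    return p_led / p_in

eff_pulsating = linear_driver_efficiency(v_peak=15.0, v_led=9.0)
eff_constant = 9.0 / 15.0      # same driver fed from a constant 15 V rail
```

Under these assumed numbers the pulsating input lands near 70% versus 60% for the constant rail; the exact figure (the abstract reports up to 75%) depends on the real waveform, the LED string voltage, and component drops.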
Procedia PDF Downloads 67
2535 Cessna Citation X Business Aircraft Stability Analysis Using Linear Fractional Representation LFRs Model
Authors: Yamina Boughari, Ruxandra Mihaela Botez, Florian Theel, Georges Ghazi
Abstract:
Clearance of the flight control laws of a civil aircraft is a long and expensive process in the aerospace industry. Thousands of flight combinations in terms of speeds, altitudes, gross weights, centers of gravity and angles of attack have to be investigated and proved to be safe. Nonetheless, in this method a worst-case flight condition can easily be missed, and missing it could lead to a critical situation. It would be impossible to analyze a model over the infinite number of cases contained within its flight envelope, which would require more time and therefore more design cost. Therefore, in industry, the technique of meshing the flight envelope is commonly used: for each point of the flight envelope, simulation of the associated model confirms whether or not the specifications are satisfied. In order to perform fast, comprehensive and effective analysis, varying-parameter models were developed by incorporating variations, or uncertainties, into the nominal models; these are known as Linear Fractional Representation (LFR) models and are able to describe the aircraft dynamics while taking into account uncertainties over the flight envelope. In this paper, the LFR models are developed using speeds and altitudes as the varying parameters; they were built from several flight conditions expressed in terms of speeds and altitudes. The use of such a method has gained great interest from aeronautical companies that see a promising future in this kind of modeling, particularly for the design and certification of control laws. In this research paper, we focus on the Cessna Citation X open-loop stability analysis. The data are provided by a research aircraft flight simulator of Level D, which corresponds to the highest level of flight dynamics certification; this simulator was developed by CAE Inc., and its development was based on the research requirements of the LARCASE laboratory.
These data were used to develop a linear model of the airplane in its longitudinal and lateral motions, and further to create the LFR models for 12 Xcg/weight conditions, and thus the whole flight envelope, using a friendly graphical user interface developed during this study. The LFR models are then analyzed using an interval analysis method based upon Lyapunov functions, as well as the 'stability and robustness analysis' toolbox. The results were presented in the form of graphs; they thus offered good readability and were easily exploitable. The weakness of this method lies in a relatively long calculation time, equal to about four hours for the entire flight envelope.
Keywords: flight control clearance, LFR, stability analysis, robustness analysis
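The envelope-mesh idea — building a state matrix at each gridded flight condition and testing its stability — can be sketched with a toy 2×2 model. The short-period-like coefficients below are invented for illustration only; they are in no way Cessna Citation X aerodynamic data.

```python
# Stability sweep over a gridded (speed, density) flight envelope with a
# hypothetical 2x2 short-period model. Coefficients are illustrative.

def short_period_A(V, rho):
    """Hypothetical [alpha, q] state matrix as a function of true airspeed V
    (m/s) and air density rho (kg/m^3)."""
    q_bar = 0.5 * rho * V * V       # dynamic pressure
    Z_a = -0.002 * q_bar            # heave damping term (illustrative)
    M_a = -0.0004 * q_bar           # pitch stiffness term (illustrative)
    M_q = -0.05 * V                 # pitch damping term (illustrative)
    return [[Z_a / V, 1.0], [M_a, M_q]]

def is_stable_2x2(A):
    """A 2x2 system is Hurwitz iff trace(A) < 0 and det(A) > 0."""
    tr = A[0][0] + A[1][1]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return tr < 0.0 and det > 0.0

def sweep(speeds, densities):
    """True only if every grid point of the envelope is stable."""
    return all(is_stable_2x2(short_period_A(V, rho))
               for V in speeds for rho in densities)
```

The LFR approach replaces exactly this point-by-point grid with a single parameter-dependent model, which is why it can certify regions rather than samples, at the cost of the longer computations noted above.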
Procedia PDF Downloads 352
2534 Hydromagnetic Linear Instability Analysis of Giesekus Fluids in Taylor-Couette Flow
Authors: K. Godazandeh, K. Sadeghy
Abstract:
In the present study, the effect of a magnetic field on the hydrodynamic instability of Taylor-Couette flow between two concentric rotating cylinders has been numerically investigated. First, the basic flow was solved using the continuity equation, the Cauchy equations (including the Lorentz force) and the constitutive equations of a viscoelastic model called the Giesekus model. Small perturbations, considered to be normal modes, were superimposed on the basic flow, and the unsteady perturbation equations were derived accordingly. Neglecting non-linear terms, the resulting general eigenvalue problem was solved using a pseudo-spectral method (a combination of Chebyshev polynomials). The objective of the calculations is to study the effect of magnetic fields on the onset of the first mode of instability (the axisymmetric mode) for different dimensionless parameters of the flow. The results show that the stability picture is highly influenced by the magnetic field. As the magnetic field increases, it first has a destabilizing effect, which changes to a stabilizing effect as the magnetic field increases further; there is therefore a critical magnetic number (Hartmann number) for the instability of Taylor-Couette flow. The effect of the magnetic field is more dominant in large gaps. The results also show that the magnetic field has a more considerable effect on the stability at higher Weissenberg numbers (higher elasticity), while changes in the 'mobility factor' play no dominant role in the intensity of the suction and injection effect on the flow's instability.
Keywords: magnetic field, Taylor-Couette flow, Giesekus model, pseudo spectral method, Chebyshev polynomials, Hartmann number, Weissenberg number, mobility factor
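The pseudo-spectral discretization mentioned above rests on the Chebyshev differentiation matrix, which turns the perturbation ODEs into the algebraic eigenvalue problem. A compact sketch of the standard Gauss-Lobatto construction (Trefethen's formulation, not the paper's specific implementation) is:

```python
# Chebyshev differentiation matrix on the Gauss-Lobatto points, the building
# block of Chebyshev pseudo-spectral methods.
import math

def cheb(N):
    """Differentiation matrix D and nodes x_j = cos(j*pi/N), j = 0..N."""
    x = [math.cos(math.pi * j / N) for j in range(N + 1)]
    c = [2.0 if j in (0, N) else 1.0 for j in range(N + 1)]
    D = [[0.0] * (N + 1) for _ in range(N + 1)]
    for i in range(N + 1):
        for j in range(N + 1):
            if i != j:
                D[i][j] = (c[i] / c[j]) * (-1) ** (i + j) / (x[i] - x[j])
    for i in range(N + 1):
        # Diagonal via negative row sums (D must annihilate constants).
        D[i][i] = -sum(D[i][j] for j in range(N + 1) if j != i)
    return D, x

def apply_matrix(D, u):
    """Spectral derivative of the grid values u."""
    return [sum(dij * uj for dij, uj in zip(row, u)) for row in D]
```

Differentiation with this matrix is exact (to rounding) for polynomials up to degree N, which is why a modest number of Chebyshev modes resolves the eigenvalue problem accurately.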
Procedia PDF Downloads 390
2533 Rule-Of-Mixtures: Predicting the Bending Modulus of Unidirectional Fiber Reinforced Dental Composites
Authors: Niloofar Bahramian, Mohammad Atai, Mohammad Reza Naimi-Jamal
Abstract:
The rule of mixtures is a simple analytical model used to predict various properties of composites before design. The aim of this study was to demonstrate the benefits and limitations of the rule of mixtures (ROM) for predicting the bending modulus of continuous, unidirectional fiber reinforced composites used in dental applications. The composites were fabricated from a light-curing resin (with and without silica nanoparticles) and modified and non-modified fibers. Composite samples were divided into eight groups with ten specimens in each group. The bending modulus (flexural modulus) of the samples was determined from the slope of the initial linear region of the stress-strain curve on 2 mm × 2 mm × 25 mm specimens with different designs: fiber corona treatment time (0 s, 5 s, 7 s), fiber silane treatment (0 wt%, 2 wt%), fiber volume fraction (41%, 33%, 25%) and nanoparticle incorporation in the resin (0 wt%, 10 wt%, 15 wt%). To study the fiber-matrix interface after fracture, the single edge notch beam (SENB) method and scanning electron microscopy (SEM) were used. SEM was also used to show the nanoparticle dispersion in the resin. Experimental bending moduli for composites made of both physically (corona) and chemically (silane) treated fibers were in reasonable agreement with linear ROM estimates, but untreated or non-optimally treated fibers and poor nanoparticle dispersion did not correlate as well with the ROM results. This study shows that the ROM is useful for predicting the mechanical behavior of unidirectional dental composites, but the fiber-resin interface and the quality of nanoparticle dispersion play an important role in accurate ROM predictions.
Keywords: bending modulus, fiber reinforced composite, fiber treatment, rule-of-mixtures
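The linear ROM estimate referred to above is the Voigt upper bound E_c = E_f·V_f + E_m·(1 − V_f). A sketch with illustrative, textbook-order moduli (glass fiber ~72 GPa, dental resin ~3 GPa — not the study's measured data) at the three volume fractions studied is:

```python
# Voigt (upper-bound) rule of mixtures for the longitudinal modulus of a
# unidirectional composite. Moduli are illustrative, not measured values.

def rom_longitudinal_modulus(E_f, E_m, V_f):
    """E_c = E_f * V_f + E_m * (1 - V_f), moduli in GPa."""
    if not 0.0 <= V_f <= 1.0:
        raise ValueError("fiber volume fraction must lie in [0, 1]")
    return E_f * V_f + E_m * (1.0 - V_f)

moduli = {V_f: rom_longitudinal_modulus(72.0, 3.0, V_f)
          for V_f in (0.25, 0.33, 0.41)}
```

The model's key assumption — equal strain in fiber and matrix, i.e. a perfectly bonded interface — is exactly what the corona and silane treatments aim to realise, which is why the untreated-fiber groups deviate from the ROM line.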
Procedia PDF Downloads 274
2532 Climate Changes in Albania and Their Effect on Cereal Yield
Authors: Lule Basha, Eralda Gjika
Abstract:
This study is focused on analyzing climate change in Albania and its potential effects on cereal yields. Initially, monthly temperature and rainfall in Albania were studied for the period 1960-2021. Climatic variables are important when trying to model cereal yield behavior, especially when significant changes in weather conditions are observed. For this purpose, in the second part of the study, linear and nonlinear models explaining cereal yield are constructed for the same period, 1960-2021. Multiple linear regression analysis and the lasso regression method are applied to the data between cereal yield and each independent variable: average temperature, average rainfall, fertilizer consumption, arable land, land under cereal production, and nitrous oxide emissions. In our regression model, heteroscedasticity is not observed, the data follow a normal distribution, and there is a low correlation between factors, so we do not have the problem of multicollinearity. Machine-learning methods, such as random forest, are used to predict cereal yield responses to climatic and other variables. Random forest showed high accuracy compared to the other statistical models in the prediction of cereal yield. We found that changes in average temperature negatively affect cereal yield. The coefficients of fertilizer consumption, arable land, and land under cereal production positively affect production. Our results show that the random forest method is an effective and versatile machine-learning method for cereal yield prediction compared to the other two methods.
Keywords: cereal yield, climate change, machine learning, multiple regression model, random forest
Procedia PDF Downloads 92
2531 Improved Performance of AlGaN/GaN HEMTs Using N₂/NH₃ Pretreatment before Passivation
Authors: Yifan Gao
Abstract:
Owing to their high breakdown field, high saturation drift velocity, and 2DEG with high density and mobility, AlGaN/GaN HEMTs have been widely used in high-frequency and high-power applications. Higher power generally requires higher breakdown voltage and higher drain current, and surface leakage current is usually the key issue limiting breakdown voltage and power performance. In this work, we performed an in-situ N₂/NH₃ pretreatment before passivation to suppress surface leakage and enhance device performance. The AlGaN/GaN HEMT used in this work was grown on a 3-in. SiC substrate, with an epitaxial structure consisting of a 3.5-nm GaN cap layer, a 25-nm Al₀.₂₅GaN barrier layer, a 1-nm AlN layer, a 400-nm i-GaN layer, and a buffer layer. To analyze the mechanism of the N-based pretreatment, the surface was characterized by XPS. The intensity of Ga-O bonds decreases while that of Ga-N bonds increases, indicating that with the supplied N, dangling bonds on the surface are reduced as Ga-N bonds form, reducing the surface states. Surface states strongly influence the leakage current, and improved surface states yield a better device off-state. After the N-based pretreatment, the breakdown voltage of the device with Lₛd = 6 μm increased from 93 V to 170 V, an increase of 82.8%. Moreover, for HEMTs with Lₛd of 6 μm, we obtained a peak output power (Pout) of 12.79 W/mm, a power-added efficiency (PAE) of 49.84%, and a linear gain of 20.2 dB at 60 V under 3.6 GHz. Compared with the reference 6-μm device, Pout increased by 16.5%, while PAE and the linear gain also increased slightly. The experimental results indicate that N₂/NH₃ pretreatment before passivation is an attractive approach to power performance enhancement.
Keywords: AlGaN/GaN HEMT, N-based pretreatment, output power, passivation
Procedia PDF Downloads 317
2530 Evaluation of Short-Term Load Forecasting Techniques Applied for Smart Micro-Grids
Authors: Xiaolei Hu, Enrico Ferrera, Riccardo Tomasi, Claudio Pastrone
Abstract:
Load forecasting plays a key role in making today's and tomorrow's smart energy grids sustainable and reliable. Accurate power consumption prediction allows utilities to organize their resources in advance or to execute demand response strategies more effectively, enabling higher sustainability, better quality of service, and affordable electricity tariffs. Load forecasting is comparatively easy yet effective at larger geographic scales; in smart micro-grids, the lower available grid flexibility makes accurate prediction more critical for demand response applications. This paper analyses the application of short-term load forecasting in a concrete scenario, proposed within the EU-funded GreenCom project, which collects load data from single loads and households belonging to a smart micro-grid. Three short-term load forecasting techniques, i.e., linear regression, artificial neural networks, and radial basis function networks, are considered, compared, and evaluated through absolute forecast errors and training time. The influence of weather conditions on load forecasting is also evaluated. A new definition of gain is introduced in this paper, serving as an indicator of short-term prediction capability over a consistent time span. Two models, 24- and 1-hour-ahead forecasting, are built to comprehensively compare the three techniques.
Keywords: short-term load forecasting, smart micro grid, linear regression, artificial neural networks, radial basis function network, gain
Procedia PDF Downloads 470
2529 Rehabilitation Team after Brain Damages as Complex System Integrating Consciousness
Authors: Olga Maksakova
Abstract:
Work with unconscious patients after acute brain damage requires, besides special knowledge and the practical skills of all participants, a very specific organization. Much has been said about the team approach in neurorehabilitation, usually for the outpatient mode, where rehabilitologists deal with fixed patient problems or deficits (motion, speech, cognitive or emotional disorders). Team-building in that setting follows the familiar paradigm of management psychology, and a linear mode of teamwork fits its causal relationships. Cases with deeply altered states of consciousness (vegetative states, coma, and confusion) require a non-linear mode of teamwork: recovery of consciousness might not be the explicit goal, due to the uncertainty of the phenomenon. The rehabilitation team, as a semi-open complex system, includes the patient as a part. The patient's response pattern is shaped not only by brain deficits but also by the questions-stimuli, the context, and the inquiring person. Teamwork becomes a source of phenomenological knowledge of the patient's processes as the third-person approach is replaced with second- and then first-person approaches. Here lies a chance for real-time change. The patient's contacts with his own body and outward things create a basis for the restoration of consciousness. The most important condition is systematic feedback to any minimal movement or vegetative signal of the patient. Up to now, recovery work with the most severely affected patients has been carried out in the mode of passive physical interventions, while an effective rehabilitation team should include specially trained psychologists and psychotherapists. It is they who are able to create a network of feedbacks with the patient, and the inter-professional feedbacks that build up the team. Characteristics of the ‘Team-Patient’ system (TPS) are energy, entropy, and complexity.
Impairment of consciousness, as the absence of linear contact, appears together with a loss of essential functions (low energy), vegetative-visceral fits (excessive energy and low order), motor agitation (excessive energy and excessive order), etc. Techniques of teamwork differ across these cases so as to optimize the condition of the system. Directed regulation of the system's complexity is one of the recovery tools. Different signs of awareness appear as a result of system self-organization. Joint meetings are an important part of teamwork: regular or event-related discussions form the language of inter-professional communication, as well as a shared mental model of the patient. Analysis of the complex communication process in the TPS may be useful for the creation of a general theory of consciousness.
Keywords: rehabilitation team, urgent rehabilitation, severe brain damage, consciousness disorders, complex system theory
Procedia PDF Downloads 146
2528 Assessing the Actual Status and Farmer’s Attitude towards Agroforestry in Chiniot, Pakistan
Authors: M. F. Nawaz, S. Gul, T. H. Farooq, M. T. Siddiqui, M. Asif, I. Ahmad, N. K. Niazi
Abstract:
In Pakistan, major demands for fuel wood and timber are met by agroforestry. However, information regarding the economic significance of agroforestry and its productivity in Pakistan is still insufficient and unreliable. Surveying field conditions to examine agroforestry status at the local level helps us to know future trends and to formulate policies for sustainable wood supply. The objectives of this research were to examine the actual status and potential of agroforestry and to point out the barriers that farmers face in adopting agroforestry. The research was carried out in Chiniot district, Pakistan, a city famous for its furniture industry, which is largely dependent on farm trees. A detailed survey of Chiniot district was carried out with 150 randomly selected farmer respondents using a multi-objective-oriented and pre-tested questionnaire. It was found that the linear tree planting method was more widely adopted (45%) than linear + interplanting (42%) or compact planting (12.6%). Chi-square values at P < 0.05 showed that age (11.35) and education (17.09) were more important factors in the quick adoption of agroforestry than land holdings (P-value of 0.7). The major reasons for agroforestry adoption were income, fodder, and fuelwood. The most dominant species on farmlands was shisham (Dalbergia sissoo), but over the last five years most farmers have been growing sufeida (Eucalyptus camaldulensis), kikar (Acacia nilotica), and poplar (Populus deltoides) on their fields due to the shisham die-back problem. It was found that agroforestry can be increased by providing good-quality planting material to farmers and improving wood markets.
Keywords: agroforestry, trees, services, agriculture, farmers
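The chi-square values quoted above come from tests of independence on contingency tables; the statistic itself is easy to compute by hand. The table below is hypothetical (adoption by age group), not the survey's data, and only the statistic is computed, not the p-value:

```python
def chi_square_statistic(table):
    """Pearson chi-square statistic for a 2-D contingency table of counts."""
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    grand = sum(row_tot)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_tot[i] * col_tot[j] / grand   # expected count under independence
            chi2 += (obs - exp) ** 2 / exp
    return chi2

# Hypothetical adoption-by-age table (counts):
#            adopted  not adopted
observed = [[60, 20],     # younger farmers
            [40, 30]]     # older farmers
print(f"chi-square = {chi_square_statistic(observed):.2f}")
```

A value this large on one degree of freedom would, like the study's age and education statistics, indicate a significant association with adoption.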
Procedia PDF Downloads 451
2527 Assessment of Work-Related Stress and Its Predictors in Ethiopian Federal Bureau of Investigation in Addis Ababa
Authors: Zelalem Markos Borko
Abstract:
Work-related stress is a reaction that occurs when work demands become excessive. Unless properly managed, stress leads to high employee turnover, decreased performance, illness, and absenteeism. Yet little has been reported regarding work-related stress and its predictors in the study area. Therefore, the objective of this study was to assess stress prevalence and its predictors in the study area. To that effect, a cross-sectional study was conducted on 281 employees of the Ethiopian Federal Bureau of Investigation, selected by stratified random sampling. Survey questionnaire scales were employed to collect data. Data were analyzed by percentages, Pearson correlation coefficients, simple linear regression, multiple linear regression, independent t-test, and one-way ANOVA. In the present study, 13.9% of participants faced high stress, 13.5% faced low stress, and the remaining 72.6% experienced moderate stress. There was no significant group difference among workers due to age, gender, marital status, educational level, years of service, or police rank. This study concludes that factors such as role conflict, performance over-utilization, role ambiguity, and qualitative and quantitative role overload together predict 39.6% of work-related stress. This indicates that 60.4% of the variation in stress is explained by other factors, so additional research should be done to identify further predictors of stress. To prevent occupational stress among police, the Ethiopian Federal Bureau of Investigation should develop strategies based on factors that help reduce and manage stress.
Keywords: work-related stress, Ethiopian federal bureau of investigation, predictors, Addis Ababa
Procedia PDF Downloads 70
2526 Comparing Test Equating by Item Response Theory and Raw Score Methods with Small Sample Sizes on a Study of the ARTé: Mecenas Learning Game
Authors: Steven W. Carruthers
Abstract:
The purpose of the present research is to equate two test forms as part of a study to evaluate the educational effectiveness of the ARTé: Mecenas art history learning game. The researcher applied Item Response Theory (IRT) procedures to calculate item, test, and mean-sigma equating parameters. With the sample size n=134, test parameters indicated “good” model fit but low Test Information Functions and more acute than expected equating parameters. Therefore, the researcher applied equipercentile equating and linear equating to raw scores and compared the equated form parameters and effect sizes from each method. Item scaling in IRT enables the researcher to select a subset of well-discriminating items. The mean-sigma step produces a mean-slope adjustment from the anchor items, which was used to scale the score on the new form (Form R) to the reference form (Form Q) scale. In equipercentile equating, scores are adjusted to align the proportion of scores in each quintile segment. Linear equating produces a mean-slope adjustment, which was applied to all core items on the new form. The study followed a quasi-experimental design with purposeful sampling of students enrolled in a college level art history course (n=134) and counterbalancing design to distribute both forms on the pre- and posttests. The Experimental Group (n=82) was asked to play ARTé: Mecenas online and complete Level 4 of the game within a two-week period; 37 participants completed Level 4. Over the same period, the Control Group (n=52) did not play the game. The researcher examined between group differences from post-test scores on test Form Q and Form R by full-factorial Two-Way ANOVA. The raw score analysis indicated a 1.29% direct effect of form, which was statistically non-significant but may be practically significant. The researcher repeated the between group differences analysis with all three equating methods. For the IRT mean-sigma adjusted scores, form had a direct effect of 8.39%. 
Mean-sigma equating with a small sample may have resulted in inaccurate equating parameters. Equipercentile equating aligned test means and standard deviations, but resultant skewness and kurtosis worsened compared to raw score parameters. Form had a 3.18% direct effect. Linear equating produced the lowest Form effect, approaching 0%. Using linearly equated scores, the researcher conducted an ANCOVA to examine the effect size in terms of prior knowledge. The between group effect size for the Control Group versus Experimental Group participants who completed the game was 14.39% with a 4.77% effect size attributed to pre-test score. Playing and completing the game increased art history knowledge, and individuals with low prior knowledge tended to gain more from pre- to post test. Ultimately, researchers should approach test equating based on their theoretical stance on Classical Test Theory and IRT and the respective assumptions. Regardless of the approach or method, test equating requires a representative sample of sufficient size. With small sample sizes, the application of a range of equating approaches can expose item and test features for review, inform interpretation, and identify paths for improving instruments for future study.Keywords: effectiveness, equipercentile equating, IRT, learning games, linear equating, mean-sigma equating
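The linear-equating step described above, mapping a raw score on the new form to the reference-form scale via a mean-slope adjustment, can be sketched in a few lines. The score distributions below are invented for illustration (standard library only):

```python
import statistics

def linear_equate(x, form_x_scores, form_y_scores):
    """Map raw score x on Form X to the Form Y scale:
    l(x) = (sd_Y / sd_X) * (x - mean_X) + mean_Y."""
    mu_x, mu_y = statistics.fmean(form_x_scores), statistics.fmean(form_y_scores)
    sd_x, sd_y = statistics.pstdev(form_x_scores), statistics.pstdev(form_y_scores)
    return (sd_y / sd_x) * (x - mu_x) + mu_y

# Hypothetical score distributions for two small forms:
form_r = [10, 12, 14, 16, 18]   # new form (mean 14)
form_q = [12, 15, 18, 21, 24]   # reference form (mean 18)
print(linear_equate(14, form_r, form_q))  # a mean score maps to the mean
```

Equipercentile equating instead aligns cumulative score proportions, and mean-sigma equating derives the same slope/intercept adjustment from IRT anchor-item parameters rather than from raw-score moments.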
Procedia PDF Downloads 192
2525 Trajectory Optimization of Re-Entry Vehicle Using Evolutionary Algorithm
Authors: Muhammad Umar Kiani, Muhammad Shahbaz
Abstract:
The performance of any vehicle can be predicted through design/modeling and optimization, and design optimization leads to efficient performance. Following horizontal launch, the air-launched re-entry vehicle undergoes a launch maneuver by introducing a carefully selected angle of attack profile. This angle of attack profile is the basic element for completing a specified mission. The flight program of the vehicle is optimized under constraints on the maximum allowed angle of attack and the lateral and axial loads, with the objective of reaching maximum altitude. The main focus of this study is the endo-atmospheric phase of the ascent trajectory. A three-degrees-of-freedom trajectory model is simulated in MATLAB. The optimization process uses an evolutionary algorithm because of its robustness and its efficient capacity to explore the design space in search of the global optimum. Evolutionary-algorithm-based trajectory optimization also offers the added benefit of being a generalized method that can work with continuous, discontinuous, linear, and non-linear performance metrics, and it eliminates the requirement of a starting solution. Optimization is particularly beneficial for achieving maximum advantage without increasing the computational cost or affecting the output of the system. For launch vehicles, we are especially keen to achieve maximum performance and efficiency under different constraints. In a launch vehicle, the flight program means the prescribed variation of the vehicle pitching angle during flight, which has a substantial influence on reachable altitude, accuracy of orbit insertion, and aerodynamic loading. Results reveal that the angle of attack profile significantly affects the performance of the vehicle.
Keywords: endo-atmospheric, evolutionary algorithm, efficient performance, optimization process
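The evolutionary loop described above (evaluate, select, mutate, repeat, no starting solution required) can be illustrated with a toy one-parameter problem. The "altitude" objective below is a made-up stand-in for the 3-DOF trajectory simulation, peaking at an angle of attack of 4 degrees by construction:

```python
import random

# Toy stand-in objective: "altitude" as a function of a single
# angle-of-attack parameter, peaking at alpha = 4 degrees (invented).
def altitude(alpha):
    return -(alpha - 4.0) ** 2 + 100.0

def evolve(generations=200, pop_size=20, sigma=0.5, seed=1):
    rng = random.Random(seed)
    # Random initial population: no starting solution is needed.
    pop = [rng.uniform(-10.0, 10.0) for _ in range(pop_size)]
    for _ in range(generations):
        # Select the fitter half, then refill with Gaussian-mutated offspring.
        pop.sort(key=altitude, reverse=True)
        parents = pop[: pop_size // 2]
        pop = parents + [p + rng.gauss(0.0, sigma) for p in parents]
    return max(pop, key=altitude)

best = evolve()
print(f"best alpha ≈ {best:.2f} deg")  # should approach 4.0
```

In the actual study, the "individual" is an entire angle-of-attack profile and fitness comes from running the MATLAB trajectory model, but the select-and-mutate loop is the same.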
Procedia PDF Downloads 405
2524 The Aesthetics of Time in Thus Spoke Zarathustra: A Reappraisal of the Eternal Recurrence of the Same
Authors: Melanie Tang
Abstract:
According to Nietzsche, the eternal recurrence is his most important idea, yet it is perhaps his most cryptic and difficult to interpret. Early readings considered it a cosmological hypothesis about the cyclic nature of time. However, following Nehamas’s ‘Life as Literature’ (1985), it has become a widespread interpretation that the eternal recurrence never really had any theoretical dimension and is not actually a philosophy of time, but a practical thought experiment intended to measure the extent to which we have mastered and perfected our lives. This paper endeavours to challenge this line of thought as it hardens into scholarly consensus, and to carry out a more complex analysis of the eternal recurrence as it is presented in Thus Spoke Zarathustra. In its wider scope, this research proposes that Thus Spoke Zarathustra, as opposed to The Birth of Tragedy, be taken as the primary source for a study of Nietzsche’s aesthetics, due to its more intrinsic aesthetic qualities and expressive devices. The eternal recurrence is the central philosophy of a work that communicates its ideas in unprecedentedly experimental and aesthetic terms, and a more in-depth understanding of why Nietzsche chooses to present his conception of time in aesthetic terms is warranted. Through hermeneutical analysis of Thus Spoke Zarathustra and engagement with secondary sources such as those by Nehamas, Karl Löwith, and Jill Marsden, the present analysis challenges the ethics of self-perfection upon which current interpretations of the recurrence are based, as well as their reliance upon a linear conception of time. Instead, it finds the recurrence to be a cyclic interplay between the self and the world, rather than a metric pertaining solely to the self.
In this interpretation, time is found to be composed of an intertemporal rather than linear multitude of will to power, which structures itself through tensional cycles into an experience of circular time that can be seen to have aesthetic dimensions. In putting forth this understanding of the eternal recurrence, this research hopes to reopen debate on this key concept in the field of Nietzsche studies.Keywords: Nietzsche, eternal recurrence, Zarathustra, aesthetics, time
Procedia PDF Downloads 150
2523 Machine Learning Techniques for Estimating Ground Motion Parameters
Authors: Farid Khosravikia, Patricia Clayton
Abstract:
The main objective of this study is to evaluate the advantages and disadvantages of various machine learning techniques in forecasting ground-motion intensity measures given source characteristics, source-to-site distance, and local site conditions. Intensity measures such as peak ground acceleration and velocity (PGA and PGV, respectively), as well as 5%-damped elastic pseudospectral accelerations at different periods (PSA), are indicators of the strength of shaking at the ground surface. Estimating these variables for future earthquake events is a key step in seismic hazard assessment and potentially in subsequent risk assessment of different types of structures. Typically, linear regression-based models with pre-defined equations and coefficients are used in ground motion prediction. However, due to the restrictions of linear regression methods, such models may not capture the more complex nonlinear behaviors that exist in the data. Thus, this study comparatively investigates the potential benefits of employing other machine learning techniques in ground motion prediction, namely Artificial Neural Networks, Random Forest, and Support Vector Machines. The algorithms are adjusted to quantify event-to-event and site-to-site variability of the ground motions by implementing these as random effects in the proposed models to reduce the aleatory uncertainty. All the algorithms are trained using a selected database of 4,528 ground motions, including 376 seismic events with magnitudes 3 to 5.8, recorded over a hypocentral distance range of 4 to 500 km in Oklahoma, Kansas, and Texas since 2005. This database was chosen because of the recent increase in the seismicity rate of these states, attributed to petroleum production and wastewater disposal activities, which necessitates further investigation of the ground motion models developed for them.
Accuracy of the models in predicting intensity measures, generalization capability for future data, and usability of the models are discussed in the evaluation process. The results indicate the algorithms satisfy physically sound characteristics, such as magnitude scaling and distance dependency, without requiring pre-defined equations or coefficients. Moreover, it is shown that, when sufficient data are available, all the alternative algorithms tend to provide more accurate estimates than the conventional linear regression-based method; in particular, Random Forest outperforms the other algorithms. However, the conventional method is a better tool when limited data are available.
Keywords: artificial neural network, ground-motion models, machine learning, random forest, support vector machine
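The "pre-defined equations" the machine-learning models are compared against typically take a functional form such as ln PGA = c₀ + c₁M + c₂ ln R (magnitude scaling plus geometric attenuation). The sketch below uses invented coefficients, not the study's, purely to show the two physical trends the abstract says the algorithms recover on their own:

```python
import math

# Illustrative ground-motion functional form (coefficients are assumed,
# not fitted to the Oklahoma/Kansas/Texas database):
#   ln PGA = c0 + c1 * M + c2 * ln(R),  PGA in g, R = hypocentral distance in km
C0, C1, C2 = -4.0, 1.2, -1.3

def predict_pga(magnitude, distance_km):
    return math.exp(C0 + C1 * magnitude + C2 * math.log(distance_km))

print(f"M5 at 10 km:  {predict_pga(5.0, 10.0):.4f} g")   # stronger than M4
print(f"M5 at 100 km: {predict_pga(5.0, 100.0):.4f} g")  # attenuates with distance
print(f"M4 at 10 km:  {predict_pga(4.0, 10.0):.4f} g")
```

A Random Forest trained on (M, R, site condition) triples learns these monotonic trends from data rather than from the fixed functional form.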
Procedia PDF Downloads 122
2522 Free Vibration Analysis of Timoshenko Beams at Higher Modes with Central Concentrated Mass Using Coupled Displacement Field Method
Authors: K. Meera Saheb, K. Krishna Bhaskar
Abstract:
Complex structures used in many fields of engineering are made up of simple structural elements like beams, plates, etc. These structural elements sometimes carry concentrated masses at discrete points, and when subjected to a severe dynamic environment they tend to vibrate with large amplitudes. The frequency-amplitude relationship is essential in determining the response of these structural elements subjected to dynamic loads. For Timoshenko beams, the effects of shear deformation and rotary inertia must be considered to evaluate the fundamental linear and nonlinear frequencies. A commonly used method for solving vibration problems is the energy method, or a finite element analogue of the same. In the present Coupled Displacement Field method, the number of undetermined coefficients is reduced to half of that in the well-known Rayleigh-Ritz method, which significantly simplifies the procedure for solving the vibration problem. This is accomplished by using a coupling equation derived from the static equilibrium of the shear-flexible structural element. The prime objective of the present paper is to study, in detail, the effect of a central concentrated mass on the large amplitude free vibrations of uniform shear-flexible beams. Accurate closed-form expressions for the linear frequency parameter of uniform shear-flexible beams with a central concentrated mass were developed, and the results are presented in digital form.
Keywords: coupled displacement field, coupling equation, large amplitude vibrations, moderately thick plates
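As a point of reference for the central-mass effect, the classical Rayleigh estimate for the slender (Euler-Bernoulli) limit, i.e., without the shear deformation and rotary inertia corrections that the Timoshenko/Coupled Displacement Field analysis adds, treats the beam as a midspan spring with an equivalent mass. The beam properties below are illustrative values, not from the paper:

```python
import math

# Rayleigh estimate for the fundamental frequency of a simply supported
# Euler-Bernoulli beam carrying a concentrated mass M at midspan:
#   omega^2 = (48 E I / L^3) / (M + (17/35) * m_beam)
# where 17/35 is the classical equivalent-mass factor from the static
# deflection shape of a simply supported beam.
def fundamental_freq_hz(E, I, L, m_beam, M):
    k = 48.0 * E * I / L**3             # midspan bending stiffness, N/m
    m_eff = M + (17.0 / 35.0) * m_beam  # equivalent midspan mass, kg
    return math.sqrt(k / m_eff) / (2.0 * math.pi)

# Illustrative 2 m steel beam (E = 210 GPa, I = 8.3e-8 m^4, 12 kg)
# carrying a 5 kg central mass:
print(f"{fundamental_freq_hz(210e9, 8.3e-8, 2.0, 12.0, 5.0):.1f} Hz")
```

Adding shear flexibility and rotary inertia lowers this estimate further, which is precisely the correction the paper's closed-form Timoshenko expressions quantify.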
Procedia PDF Downloads 226
2521 Predicting Growth of Eucalyptus Marginata in a Mediterranean Climate Using an Individual-Based Modelling Approach
Authors: S.K. Bhandari, E. Veneklaas, L. McCaw, R. Mazanec, K. Whitford, M. Renton
Abstract:
Eucalyptus marginata, E. diversicolor and Corymbia calophylla form widespread forests in south-west Western Australia (SWWA). These forests have economic and ecological importance, and therefore tree growth and sustainable management are of high priority. This paper aimed to analyse and model the growth of these species at both stand and individual levels, but this presentation will focus on predicting the growth of E. marginata at the individual tree level. More specifically, the study investigated how well individual E. marginata tree growth could be predicted from the diameter and height of the tree at the start of the growth period, and whether this prediction could be improved by also accounting for competition from neighbouring trees in different ways. The study also investigated how many neighbouring trees, or what neighbourhood distance, needed to be considered when accounting for competition. To achieve this aim, Pearson correlation coefficients were examined among competition indices (CIs) and between CIs and dbh growth, and the competition index that best predicts the diameter growth of individual trees was selected, for E. marginata forest managed under different thinning regimes at Inglehope in SWWA. Furthermore, individual tree growth models were developed using simple linear regression, multiple linear regression, and linear mixed effects modelling approaches, separately for thinned and unthinned stands. The developed models were validated using two approaches: in the first, models were validated using a subset of data that was not used in model fitting; in the second, the model of one growth period was validated with the data of another growth period. Tree size (diameter and height) was a significant predictor of growth, and the prediction was improved when competition was included in the model.
The fit statistic (coefficient of determination) of the models ranged from 0.31 to 0.68. Models with spatial competition indices validated as more accurate than those with non-spatial indices. Model prediction can be optimized if 10 to 15 competitors (by number), or competitors within ~10 m (by distance) of the base of the subject tree, are included in the model, which can reduce the time and cost of collecting information about competitors. As competition from neighbours was a significant predictor with a negative effect on growth, it is recommended to include neighbourhood competition when predicting growth and to consider thinning treatments to minimize the effect of competition on growth. These modelling approaches are likely to be useful tools for the conservation and sustainable management of E. marginata forests in SWWA. As a next step in optimizing the number and distance of competitors, further studies in larger plots, and with a larger number of plots than used in the present study, are recommended.
Keywords: competition, growth, model, thinning
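A common distance-dependent (spatial) competition index of the kind the abstract favours is the Hegyi index, which sums neighbour size relative to the subject, weighted by inverse distance, within a fixed radius. The sketch below uses the ~10 m neighbourhood recommended above; tree coordinates and diameters are hypothetical:

```python
import math

def hegyi_ci(subject, competitors, radius=10.0):
    """Distance-dependent Hegyi competition index for one subject tree.
    Trees are (x, y, dbh) tuples; only neighbours within `radius` m count.
        CI_i = sum_j (dbh_j / dbh_i) / dist_ij
    """
    xi, yi, di = subject
    ci = 0.0
    for xj, yj, dj in competitors:
        dist = math.hypot(xj - xi, yj - yi)
        if 0.0 < dist <= radius:
            ci += (dj / di) / dist
    return ci

# Hypothetical plot: a 30 cm dbh subject tree with three neighbours.
subject = (0.0, 0.0, 30.0)
neighbours = [(3.0, 4.0, 40.0),    # 5 m away, larger: strong competitor
              (6.0, 8.0, 25.0),    # 10 m away, smaller: weak competitor
              (20.0, 0.0, 50.0)]   # 20 m away: outside the radius, ignored
print(f"CI = {hegyi_ci(subject, neighbours):.3f}")
```

In a growth model, this CI enters as a predictor alongside initial diameter and height, with a negative coefficient as the abstract reports.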
Procedia PDF Downloads 128
2520 Assessment of Forest Above Ground Biomass Through Linear Modeling Technique Using SAR Data
Authors: Arjun G. Koppad
Abstract:
The study was conducted in Joida taluk of Uttara Kannada district, Karnataka, India, to assess land use land cover (LULC) and forest aboveground biomass using L-band SAR data. The study area contains dense, moderately dense, and sparse forests. The sampled area was 0.01 percent of the forest area, with 30 sampling plots selected randomly. The point-centred quarter (PCQ) method was used to select trees, and the tree growth parameters, viz., tree height, diameter at breast height (DBH), and diameter at the tree base, were collected. Tree crown density was measured with a densitometer. The biomass of each sample plot was estimated using the standard formula. In this study, LULC classification was done using the Freeman-Durden, Yamaguchi, and Pauli polarimetric decompositions; the Freeman-Durden decomposition gave the best LULC classification, with an accuracy of 88 percent. An attempt was made to estimate aboveground biomass from SAR backscatter, using ALOS-2 PALSAR-2 L-band fully polarimetric quad-pol data (HH, HV, VV and VH). A SAR backscatter-based regression model was implemented to retrieve forest aboveground biomass of the study area. Cross-polarization (HV) showed a good correlation with forest aboveground biomass. Multiple linear regression analysis was done to estimate the aboveground biomass of the natural forest areas of Joida taluk. Among the polarization combinations (HH & HV, VV & HH, HV & VH, VV & VH), the combination of HH and HV showed a good correlation between field and predicted biomass. The RMSE and R² values for HH & HV and HH & VV were 78 t/ha and 0.861, and 81 t/ha and 0.853, respectively. Hence, the model can be recommended for estimating AGB for dense, moderately dense, and sparse forest.
Keywords: forest, biomass, LULC, back scatter, SAR, regression
Procedia PDF Downloads 26
2519 On Fourier Type Integral Transform for a Class of Generalized Quotients
Authors: A. S. Issa, S. K. Q. AL-Omari
Abstract:
In this paper, we investigate certain spaces of generalized functions for the Fourier and Fourier-type integral transforms. We discuss convolution theorems and establish certain spaces of distributions for the considered integrals. The new Fourier-type integral is well-defined, linear, one-to-one, and continuous with respect to certain types of convergence. Many properties and an inverse problem are also discussed in some detail.
Keywords: Boehmian, Fourier integral, Fourier type integral, generalized quotient
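The abstract does not reproduce the transform itself; for orientation, the classical Fourier transform and the convolution theorem that such Fourier-type extensions generalize to Boehmian spaces are (conventions for the sign and the 2π factor vary):

```latex
\hat{f}(\omega) = \int_{-\infty}^{\infty} f(t)\, e^{-i\omega t}\, dt,
\qquad
(f * g)(t) = \int_{-\infty}^{\infty} f(\tau)\, g(t-\tau)\, d\tau,
\qquad
\widehat{f * g}(\omega) = \hat{f}(\omega)\,\hat{g}(\omega).
```

The generalized-quotient (Boehmian) construction extends the transform beyond classical distributions by working with equivalence classes of sequences compatible with this convolution.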
Procedia PDF Downloads 365
2518 Perceived Stigma, Perception of Burden and Psychological Distress among Parents of Intellectually Disabled Children: Role of Perceived Social Support
Authors: Saima Shafiq, Najma Iqbal Malik
Abstract:
This study aimed to explore the relationships among perceived stigma, perception of burden, and psychological distress in parents of intellectually disabled children, as well as the moderating role of perceived social support on all the study variables. The sample comprised (N = 250) parents of intellectually disabled children. The present study utilized a correlational research design and consisted of two phases. Phase I comprised two steps: the translation of the two scales used in the present study, and their try-out on a sample of parents (N = 70). The Affiliate Stigma Scale and the Caregiver Burden Inventory were translated into Urdu for the present study. Phase I revealed that the translated scales had satisfactory psychometric properties. Phase II of the study was carried out to test the hypotheses. Correlation, linear regression analysis, and t-tests were computed for hypothesis testing, and hierarchical regression analysis was applied to study the moderating effect of perceived social support. Findings revealed a positive relationship between perceived stigma and psychological distress, and between perception of burden and psychological distress. Linear regression analysis showed that perceived stigma and perception of burden were positive predictors of psychological distress. The study did not show a moderating role of perceived social support among the variables of the present study. The major limitation of the study is the sample size, and the major implication is awareness regarding the problems of parents of intellectually disabled children.
Keywords: perceived stigma, perception of burden, psychological distress, perceived social support
Procedia PDF Downloads 213
2517 Bi-Directional Impulse Turbine for Thermo-Acoustic Generator
Authors: A. I. Dovgjallo, A. B. Tsapkova, A. A. Shimanov
Abstract:
The paper is devoted to one type of externally heated engine: the thermoacoustic engine, in which heat energy is converted to acoustic energy. This acoustic energy of the oscillating gas flow must then be converted to mechanical energy, which in turn must be converted to electric energy. The most widely used way of transforming acoustic energy into electric energy is the application of a linear generator or a conventional generator with a crank mechanism. In both cases a piston is used, whose main disadvantages are friction losses, lubrication problems, and working fluid pollution, which decrease engine power and ecological efficiency. Using a bi-directional impulse turbine as the energy converter is suggested instead. The distinctive feature of this kind of turbine is that the shock wave of the oscillating gas flow passing through the turbine is reflected and passes through the turbine again in the opposite direction, while the direction of turbine rotation does not change. Different types of bi-directional impulse turbines for thermoacoustic engines are analyzed. The Wells turbine is the simplest and least efficient of them; a radial impulse turbine has a more complicated design and is more efficient than the Wells turbine. The most appropriate type was found to be an axial impulse turbine, which has a simpler design than the radial turbine and similar efficiency. The peculiarities of the method of calculating an impulse turbine are discussed, including the changes in gas pressure and velocity as functions of time during the generation of shock waves of the oscillating gas flow in a thermoacoustic system. In a thermoacoustic system, the pressure varies periodically due to the generation of acoustic waves, and the peak pressure values give the amplitude, which determines the acoustic power.
The gas flowing in the thermoacoustic system periodically changes direction; its mean velocity is zero, but its peak values can be used for bi-directional turbine rotation. In contrast to a conventionally fed turbine, the described turbine operates on unsteady oscillating flows with direction changes, which significantly influences the algorithm of its calculation. The calculated power output is 150 W at 12000 r/min with a pressure amplitude of 1.7 kPa. Then, 3-D modelling and a numerical study of the impulse turbine were carried out, yielding the main parameters of the working fluid in the turbine. On the basis of the theoretical and numerical data, a model of the impulse turbine was made on a 3D printer, and an experimental unit was designed to verify the numerical modelling results, with an acoustic speaker used as the acoustic wave generator. Analysis of the acquired data shows that use of the bi-directional impulse turbine is advisable: by its characteristics as a converter it is comparable with linear electric generators, but its life cycle will be longer and the engine itself smaller, owing to the rotary motion of the turbine.
Keywords: acoustic power, bi-directional pulse turbine, linear alternator, thermoacoustic generator
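For a rough sense of scale, the time-averaged acoustic power carried by a plane travelling wave of pressure amplitude p_a through cross-section A is P = p_a²/(2ρc) · A. The sketch below plugs in the quoted 1.7 kPa amplitude with assumed gas properties and duct size (the study's working-gas pressure, duct geometry, and standing-wave phasing are not given, so this is an order-of-magnitude illustration only, not a reproduction of the 150 W figure):

```python
import math

# Time-averaged acoustic power of a plane travelling wave:
#   P = p_a^2 / (2 * rho * c) * A
def acoustic_power(p_amp, rho, c, area):
    return p_amp**2 / (2.0 * rho * c) * area

rho, c = 1.2, 343.0           # assumed: air at roughly ambient conditions
area = math.pi * 0.05**2      # assumed: 10 cm diameter duct
print(f"P ≈ {acoustic_power(1.7e3, rho, c, area):.0f} W")
```

Thermoacoustic engines typically use pressurized helium or another gas with a much higher ρc product and area, which is one reason the device-level figure can be far larger than this ambient-air estimate.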
Procedia PDF Downloads 378