Search results for: absolute entropy
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 895

295 Embedded System of Signal Processing on FPGA: Underwater Application Architecture

Authors: Abdelkader Elhanaoui, Mhamed Hadji, Rachid Skouri, Said Agounad

Abstract:

The purpose of this paper is to study the phenomenon of acoustic scattering by using a new method. Signal processing (Fast Fourier Transform (FFT), inverse Fast Fourier Transform (iFFT) and Bessel functions) is widely applied to obtain information with high precision. Signal processing is most commonly implemented on general-purpose processors, but these are not efficient for this workload. Our interest therefore focused on the use of FPGAs (Field-Programmable Gate Arrays) to reduce the computational complexity of a single-processor architecture, accelerate the processing on the FPGA, and meet real-time and energy-efficiency requirements. We implemented the acoustic backscattered signal processing model on an Altera DE1-SoC board and compared it to an Odroid XU4. The computing latencies of the Odroid XU4 and the FPGA are 60 seconds and 3 seconds, respectively: the SoC FPGA-based system computes the acoustic spectra up to 20 times faster than the Odroid XU4 implementation. The FPGA-based implementation of the processing algorithms achieves an absolute error of about 10⁻³. This study underlines the increasing importance of embedded systems in underwater acoustics, especially in non-destructive testing, where information related to the detection and characterization of submerged cells can be obtained. We thus achieved good experimental results in terms of real-time operation and energy efficiency.
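
The FFT/iFFT stage of this processing chain can be sketched in a few lines. A minimal illustration, assuming a synthetic echo, an arbitrary sampling rate and a trivial normalization; none of these values come from the paper:

```python
import numpy as np

fs = 1e6                              # sampling frequency, Hz (assumed)
t = np.arange(0, 2e-3, 1 / fs)        # 2 ms record
# Synthetic backscattered echo: a Gaussian-windowed 150 kHz tone burst.
echo = np.exp(-((t - 1e-3) ** 2) / (2 * (5e-5) ** 2)) * np.sin(2 * np.pi * 150e3 * t)

spectrum = np.fft.rfft(echo)          # FFT of the backscattered signal
freqs = np.fft.rfftfreq(len(echo), 1 / fs)

# Normalizing the magnitude spectrum gives a crude form-function estimate;
# a real reference (incident) spectrum would be used in practice.
form_function = np.abs(spectrum) / np.max(np.abs(spectrum))
reconstructed = np.fft.irfft(spectrum, n=len(echo))   # iFFT back to time domain
assert np.allclose(reconstructed, echo, atol=1e-3)    # ~1e-3 absolute error budget
```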

Keywords: DE1 FPGA, acoustic scattering, form function, signal processing, non-destructive testing

Procedia PDF Downloads 62
294 Optical Variability of Faint Quasars

Authors: Kassa Endalamaw Rewnu

Abstract:

The variability properties of a quasar sample, spectroscopically complete to magnitude J = 22.0, are investigated on a time baseline of 2 years using three different photometric bands (U, J and F). The original sample was obtained using a combination of different selection criteria: colors, slitless spectroscopy and variability, based on a time baseline of 1 year. The main goals of this work are two-fold: first, to derive the percentage of variable quasars on a relatively short time baseline; secondly, to search for new quasar candidates missed by the other selection criteria; and, thus, to estimate the completeness of the spectroscopic sample. In order to achieve these goals, we have extracted all the candidate variable objects from a sample of about 1800 stellar or quasi-stellar objects with limiting magnitude J = 22.50 over an area of about 0.50 deg². We find that > 65% of all the objects selected as possible variables are either confirmed quasars or quasar candidates on the basis of their colors. This percentage increases even further if we exclude from our lists of variable candidates a number of objects equal to that expected on the basis of 'contamination' induced by our photometric errors. The percentage of variable quasars in the spectroscopic sample is also high, reaching about 50%. On the basis of these results, we can estimate that the incompleteness of the original spectroscopic sample is < 12%. We conclude that variability analysis of data with small photometric errors can be successfully used as an efficient and independent (or at least auxiliary) selection method in quasar surveys, even when the time baseline is relatively short. Finally, when corrected for the different intrinsic time lags corresponding to a fixed observed time baseline, our data do not show a statistically significant correlation between variability and either absolute luminosity or redshift.
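
The variability selection described here, flagging objects whose scatter exceeds the photometric-error expectation, can be sketched as follows; the light curves, error level and threshold factor are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_obj, n_epochs = 1800, 6                 # sample size from the abstract; epoch count assumed
phot_err = 0.05                           # typical photometric error in mag (assumed)

# Synthetic light curves: most objects constant, ~10% intrinsically variable.
mags = rng.normal(21.0, 0.5, n_obj)[:, None] + rng.normal(0, phot_err, (n_obj, n_epochs))
variable = rng.random(n_obj) < 0.1
mags[variable] += rng.normal(0, 0.15, (variable.sum(), n_epochs))

# Flag candidates whose rms scatter exceeds the error-only expectation.
rms = mags.std(axis=1, ddof=1)
candidates = rms > 2.5 * phot_err         # threshold factor is an assumption
print(f"{candidates.sum()} variability-selected candidates")
```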

Keywords: nuclear activity, galaxies, active quasars, variability

Procedia PDF Downloads 60
293 Conservation Planning of Paris Polyphylla Smith, an Important Medicinal Herb of the Indian Himalayan Region Using Predictive Distribution Modelling

Authors: Mohd Tariq, Shyamal K. Nandi, Indra D. Bhatt

Abstract:

Paris polyphylla Smith (family Liliaceae; English name: love apple; local name: Satuwa) is an important folk medicinal herb of the Indian subcontinent and a source of a number of bioactive compounds for drug formulation. The rhizomes are widely used as an anthelmintic, antispasmodic, digestive stomachic, expectorant and vermifuge, and as an antimicrobial, anti-inflammatory, anti-fertility and sedative agent and a remedy for heart and vascular maladies. In view of this, the species is being constantly removed from nature for trade and various pharmaceutical purposes; as a result, its availability in its natural habitat is decreasing. In this context, it is pertinent to conserve this species and reintroduce it into its natural habitat. Predictive distribution modelling of this species was therefore performed in the Western Himalayan Region. One such method is ecological niche modelling, also popularly known as species distribution modelling, which uses computer algorithms to generate predictive maps of species distributions in geographic space by correlating point distributional data with a set of environmental raster data. In the case of P. polyphylla, such modelling helps to identify potential distribution zones for setting up artificial introductions, selecting conservation sites, and conserving and managing its native habitat. Among the different districts of Uttarakhand (28°05′-31°25′ N and 77°45′-81°45′ E), the 'Maximum Entropy' (Maxent) model predicted the widest potential distribution of P. polyphylla in Uttarkashi, Rudraprayag, Chamoli, Pauri Garhwal and parts of Bageshwar. The distribution of P. polyphylla is governed mainly by Precipitation of Driest Quarter (27.08% contribution) and Mean Diurnal Range (18.99% contribution), which indicates that moisture availability and diurnal temperature range are the main controls on suitable habitat for the species.
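
As a rough illustration of presence-background habitat modelling, the sketch below uses penalized logistic regression, which is closely related to Maxent for presence-background data; the covariates and points are synthetic stand-ins, not the study's bioclimatic layers:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# Presence points vs. random background points, each with two covariates
# standing in for Precipitation of Driest Quarter and Mean Diurnal Range.
presence = rng.normal([60.0, 9.0], [8.0, 1.5], (120, 2))
background = rng.normal([40.0, 12.0], [15.0, 3.0], (1000, 2))
X = np.vstack([presence, background])
y = np.r_[np.ones(len(presence)), np.zeros(len(background))]

# Penalized logistic regression as a stand-in for the Maxent software.
model = LogisticRegression(C=1.0).fit(X, y)
suitability = model.predict_proba(X)[:, 1]   # relative habitat suitability
print(dict(zip(["precip_driest_q", "mean_diurnal_range"], model.coef_[0].round(3))))
```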

Keywords: biodiversity conservation, Indian Himalayan region, Paris polyphylla, predictive distribution modelling

Procedia PDF Downloads 314
292 Exergy Analysis of a Vapor Absorption Refrigeration System Using Carbon Dioxide as Refrigerant

Authors: Samsher Gautam, Apoorva Roy, Bhuvan Aggarwal

Abstract:

Vapor absorption refrigeration systems can replace vapor compression systems in many applications because they can operate on a low-grade heat source and are environment-friendly. Widely used refrigerants such as CFCs and HFCs cause significant global warming. Natural refrigerants are an alternative, among which carbon dioxide is promising for automotive air conditioning systems: its inherent safety, ability to withstand high pressure and high heat transfer coefficient, coupled with easy availability, make it a likely choice of refrigerant. Various properties of the ionic liquid [bmim][PF₆], such as non-toxicity, stability over a wide temperature range and ability to dissolve gases like carbon dioxide, make it a suitable absorbent for a vapor absorption refrigeration system. In this paper, an absorption chiller consisting of a generator, condenser, evaporator and absorber was studied at an operating temperature of 70 °C. A thermodynamic model was set up using the Peng-Robinson equation of state to predict the behavior of the refrigerant-absorbent pair at different points in the system, and a MATLAB code was used to obtain the values of enthalpy and entropy at selected points. The exergy destruction in each component and the exergetic coefficient of performance (ECOP) of the system were calculated by performing an exergy analysis based on the second law of thermodynamics. Graphs were plotted of the ECOP against varying operating conditions, and the effect of every component on the ECOP was examined. The exergetic coefficient of performance was found to be lower than the coefficient of performance based on the first law of thermodynamics.
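
The second-law bookkeeping behind the ECOP can be illustrated with round numbers; the temperatures and heat duties below are assumptions, except the 70 °C generator temperature taken from the abstract:

```python
# Illustrative exergy accounting for an absorption chiller.
T0 = 298.15          # dead-state (ambient) temperature, K
T_gen = 343.15       # generator at 70 °C, per the abstract
T_evap = 278.15      # evaporator temperature, K (assumed)
Q_gen, Q_evap, W_pump = 12.0, 10.0, 0.2   # kW (assumed)

# Exergy of a heat stream Q at temperature T is Q * (1 - T0/T); for cooling
# below ambient the useful exergy output is Q * (T0/T_evap - 1).
exergy_in = Q_gen * (1 - T0 / T_gen) + W_pump
exergy_out = Q_evap * (T0 / T_evap - 1)
ecop = exergy_out / exergy_in
cop = Q_evap / (Q_gen + W_pump)           # first-law COP for comparison
print(f"COP = {cop:.2f}, ECOP = {ecop:.2f}")   # ECOP < COP, as the abstract notes
```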

Keywords: [bmim][PF₆] as absorbent, carbon dioxide as refrigerant, exergy analysis, Peng-Robinson equations of state, vapor absorption refrigeration

Procedia PDF Downloads 269
291 Loss Function Optimization for CNN-Based Fingerprint Anti-Spoofing

Authors: Yehjune Heo

Abstract:

As biometric systems become widely deployed, identification systems can easily be attacked with various spoof materials. This paper contributes to finding a reliable and practical anti-spoofing method using Convolutional Neural Networks (CNNs), based on the types of loss functions and optimizers. The CNNs used in this paper include AlexNet, VGGNet, and ResNet. By using various loss functions, including Cross-Entropy, Center Loss, Cosine Proximity, and Hinge Loss, and various optimizers, including Adam, SGD, RMSProp, Adadelta, Adagrad, and Nadam, we obtained significant performance changes. We find that choosing the correct loss function for each model is crucial, since different loss functions lead to different errors on the same evaluation. We validate our approach on a subset of the LivDet 2017 database to compare generalization power. Importantly, the same subset is used across all training and testing for each model, so that performance on unseen data can be compared fairly across models. The best CNN (AlexNet), with the appropriate loss function and optimizer, yields a performance gain of more than 3% over the other CNN models with the default loss function and optimizer. In addition to the highest generalization performance, this paper also reports the models' accuracy together with parameter counts and mean average error rates, to find the model that consumes the least memory and computation time for training and testing. Although AlexNet has lower complexity than the other CNN models, it proves to be very efficient. For practical anti-spoofing systems, the deployed version should use a small amount of memory and run very fast with high anti-spoofing performance. For our deployed version on smartphones, additional processing steps, such as quantization and pruning algorithms, were applied to our final model.
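
A sketch of the loss/optimizer grid in Keras-style code, with a toy CNN standing in for the AlexNet/VGGNet/ResNet variants used in the paper; note that Center Loss is not a built-in Keras loss and would need a custom layer:

```python
from tensorflow.keras import layers, models

def make_cnn():
    # Toy stand-in for the AlexNet/VGGNet/ResNet variants used in the paper.
    return models.Sequential([
        layers.Input((64, 64, 1)),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(2, activation="softmax"),   # live vs. spoof
    ])

# Cross-Entropy and Hinge are built in; Cosine Proximity survives in Keras as
# "cosine_similarity", and Center Loss would require a custom layer.
losses = ["categorical_crossentropy", "hinge", "cosine_similarity"]
optimizers = ["adam", "sgd", "rmsprop", "adadelta", "adagrad", "nadam"]

for loss in losses:
    for opt in optimizers:
        model = make_cnn()
        model.compile(optimizer=opt, loss=loss, metrics=["accuracy"])
        # model.fit(x_train, y_train, validation_data=(x_val, y_val), ...)
```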

Keywords: anti-spoofing, CNN, fingerprint recognition, loss function, optimizer

Procedia PDF Downloads 120
290 Forecasting Equity Premium Out-of-Sample with Sophisticated Regression Training Techniques

Authors: Jonathan Iworiso

Abstract:

Forecasting the equity premium out-of-sample is a major concern for researchers in finance and emerging markets. The quest for a superior model that can forecast the equity premium with significant economic gains has resulted in several controversies over the choice of variables and suitable techniques among scholars. This research focuses on the application of Regression Training (RT) techniques to forecast the monthly equity premium out-of-sample recursively with an expanding-window method. A broad category of sophisticated regression models involving model complexity was employed: the RT models, including Ridge, Forward-Backward (FOBA) Ridge, Least Absolute Shrinkage and Selection Operator (LASSO), Relaxed LASSO, Elastic Net, and Least Angle Regression, were trained and used to forecast the equity premium out-of-sample. The empirical investigation of the RT models demonstrates significant evidence of equity premium predictability, both statistically and economically, relative to the benchmark historical average, delivering significant utility gains. The forecasts provide meaningful economic information on mean-variance portfolio investment for investors who are timing the market to earn future gains at minimal risk. Thus, the forecasting models appear to benefit an investor who optimally reallocates a monthly portfolio between equities and risk-free treasury bills using equity premium forecasts at minimal risk.
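
The recursive expanding-window scheme can be sketched with scikit-learn; the data are synthetic and a single LASSO stands in for the whole RT model suite:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
T, k = 240, 8                               # 20 years of monthly data, 8 predictors (synthetic)
X = rng.normal(size=(T, k))
y = X @ rng.normal(0, 0.1, k) + rng.normal(0, 1.0, T)   # equity premium proxy

# Recursive expanding-window out-of-sample forecasts: refit at each month
# using all data up to t, then predict month t+1.
start = 120                                 # initial estimation window (assumed)
model = Lasso(alpha=0.05)                   # Ridge, Elastic Net, etc. slot in here
forecasts, actuals = [], []
for t in range(start, T - 1):
    model.fit(X[: t + 1], y[: t + 1])
    forecasts.append(model.predict(X[t + 1 : t + 2])[0])
    actuals.append(y[t + 1])

# Out-of-sample R^2 against the historical-average benchmark.
bench = [y[: t + 1].mean() for t in range(start, T - 1)]
oos_r2 = 1 - np.sum((np.array(actuals) - forecasts) ** 2) / np.sum((np.array(actuals) - bench) ** 2)
print(f"out-of-sample R2 vs historical mean: {oos_r2:.3f}")
```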

Keywords: regression training, out-of-sample forecasts, expanding window, statistical predictability, economic significance, utility gains

Procedia PDF Downloads 87
289 A Two-Stage Bayesian Variable Selection Method with the Extension of Lasso for Geo-Referenced Data

Authors: Georgiana Onicescu, Yuqian Shen

Abstract:

Due to the complex nature of geo-referenced data, multicollinearity of the risk factors in public health spatial studies is a commonly encountered issue, which leads to low parameter estimation accuracy because it inflates the variance in the regression analysis. To address this issue, we propose a two-stage variable selection method that extends the least absolute shrinkage and selection operator (Lasso) to the Bayesian spatial setting, investigating the impact of risk factors on health outcomes. Specifically, in stage I, we perform variable selection using the Bayesian Lasso and several other variable selection approaches. Then, in stage II, we perform model selection with only the variables selected in stage I and compare the methods again. To evaluate the performance of the two-stage variable selection methods, we conducted a simulation study with different distributions for the risk factors, using geo-referenced count data as the outcome and Michigan as the research region. We considered cases where all candidate risk factors are independently normally distributed or follow a multivariate normal distribution with different correlation levels. Two other Bayesian variable selection methods, the binary indicator and the combination of binary indicator and Lasso, were compared as alternatives. The simulation results indicate that the proposed two-stage Bayesian Lasso variable selection method performs best in both the independent and dependent cases considered. Compared with the one-stage approach and the two alternative methods, the two-stage Bayesian Lasso approach provides the highest estimation accuracy in all scenarios considered.
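
A frequentist sketch of the two-stage idea (shrinkage-based selection, then refitting on the selected variables); the Bayesian Lasso and the spatial count-data structure of the paper are replaced here by their simplest non-spatial analogues:

```python
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression

rng = np.random.default_rng(3)
n, p = 300, 12
# Correlated risk factors, mimicking the multicollinearity scenario.
cov = 0.6 * np.ones((p, p)) + 0.4 * np.eye(p)
X = rng.multivariate_normal(np.zeros(p), cov, size=n)
beta = np.r_[1.5, -1.0, 0.8, np.zeros(p - 3)]
y = X @ beta + rng.normal(0, 1, n)

# Stage I: Lasso-based variable selection (stand-in for the Bayesian Lasso).
stage1 = LassoCV(cv=5).fit(X, y)
selected = np.flatnonzero(np.abs(stage1.coef_) > 1e-6)

# Stage II: refit the model using only the selected variables.
stage2 = LinearRegression().fit(X[:, selected], y)
print("selected:", selected, "coefs:", stage2.coef_.round(2))
```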

Keywords: Lasso, Bayesian analysis, spatial analysis, variable selection

Procedia PDF Downloads 123
288 InP Nanocrystals Core and Surface Electronic Structure from Ab Initio Calculations

Authors: Hamad R. Jappor, Zeyad Adnan Saleh, Mudar A. Abdulsattar

Abstract:

The ab initio restricted Hartree-Fock method is used to simulate the electronic structure of indium phosphide (InP) nanocrystals (NCs) (216-738 atoms) with sizes ranging up to about 2.5 nm in diameter. The calculations are divided into two parts, surface and core. The oxygenated (001)-(1×1) facet, which expands with larger nanocrystal sizes, is investigated to determine the role of the surface in the nanocrystal electronic structure. Results show that the lattice constant and ionicity of the core part decrease as the nanocrystals grow in size: the smallest investigated nanocrystal is 1.6% larger in lattice constant and 131.05% larger in ionicity than the converged values of the largest investigated nanocrystal. Increasing nanocrystal size also resulted in an increase of the core cohesive energy (absolute value), the core energy gap, and the core valence bandwidth. The surface states are found to be mostly non-degenerate because of the effects of surface discontinuity and oxygen atoms. The valence band is wider on the surface due to level splitting and oxygen atoms. The method also shows fluctuations in the converged energy gap, valence bandwidth and cohesive energy of the core part of the nanocrystals due to shape variation. The present work suggests adding ionicity and lattice constant to the quantities affected by the quantum confinement phenomenon. The present model is threefold in scope: it can be used to approach the electronic structure of bulk crystals, surfaces, and nanocrystals.

Keywords: InP, nanocrystals core, ionicity, Hartree-Fock method, large unit cell

Procedia PDF Downloads 384
287 Removal of Pb²⁺ from Waste Water Using Nano Silica Spheres Synthesized on CaCO₃ as a Template: Equilibrium and Thermodynamic Studies

Authors: Milton Manyangadze, Joseph Govha, T. Bala Narsaiah, Ch. Shilpa Chakra

Abstract:

Availability of and access to fresh water is today a serious global challenge, a direct result of factors such as rapid industrialization and industrial growth, persistent droughts in some parts of the world, especially sub-Saharan Africa, and population growth. Growth of the chemical processing industry has also increased the levels of pollutants in our water bodies, which include heavy metals among others. Heavy metals are dangerous to both human and aquatic life and have been linked to several diseases, mainly because they are highly toxic, bioaccumulative and non-biodegradable. Lead, for example, has been linked to a number of health problems, including damage to vital internal body systems such as the nervous and reproductive systems and the kidneys. Against this background, the removal of the toxic heavy metal Pb²⁺ from waste water was investigated using nano silica hollow spheres (NSHS) as the adsorbent. NSHS were synthesized in a three-stage process: CaCO₃ nanoparticles were first prepared as a template; the formed particles were then treated with Na₂SiO₃ to give a nanocomposite; finally, the template was destroyed with 2.0 M HCl to give the NSHS. The nanoparticles were characterized using analytical techniques such as XRD, SEM and TGA. For the adsorption process, both thermodynamic and equilibrium studies were carried out. The Gibbs free energy, enthalpy and entropy of the adsorption process were determined, revealing that adsorption is both endothermic and spontaneous. In the equilibrium studies, the Langmuir and Freundlich isotherms were tested, and the results showed that the Langmuir model best describes the adsorption equilibrium.
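
The isotherm fitting and van 't Hoff thermodynamics can be sketched as follows; all equilibrium data and constants are synthetic stand-ins, since the abstract reports no numbers:

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic equilibrium data (Ce in mg/L, qe in mg/g) standing in for the
# Pb2+/NSHS measurements.
Ce = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 80.0])
qe = np.array([8.1, 15.2, 22.9, 30.5, 36.8, 40.9])

def langmuir(Ce, qmax, KL):
    return qmax * KL * Ce / (1 + KL * Ce)

def freundlich(Ce, KF, n):
    return KF * Ce ** (1 / n)

(qmax, KL), _ = curve_fit(langmuir, Ce, qe, p0=[40, 0.1])
(KF, n), _ = curve_fit(freundlich, Ce, qe, p0=[5, 2])
print(f"Langmuir qmax={qmax:.1f} mg/g, KL={KL:.3f} L/mg; Freundlich KF={KF:.2f}, n={n:.2f}")

# Van 't Hoff analysis: ln K vs 1/T gives ΔH (slope) and ΔS (intercept);
# ΔG = -RT ln K. Negative ΔG with positive ΔH means spontaneous and
# endothermic, consistent with the abstract's finding.
R, T, K_eq = 8.314, np.array([298.0, 308.0, 318.0]), np.array([2.1, 2.9, 3.8])
slope, intercept = np.polyfit(1 / T, np.log(K_eq), 1)
dH, dS = -R * slope, R * intercept
dG = -R * T * np.log(K_eq)
print(f"ΔH={dH/1000:.1f} kJ/mol, ΔS={dS:.1f} J/mol·K, ΔG(298 K)={dG[0]/1000:.2f} kJ/mol")
```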

Keywords: characterization, endothermic, equilibrium studies, Freundlich, Langmuir, nanoparticles, thermodynamic studies

Procedia PDF Downloads 195
286 Quantifying Meaning in Biological Systems

Authors: Richard L. Summers

Abstract:

The advanced computational analysis of biological systems is becoming increasingly dependent upon an understanding of the information-theoretic structure of the materials, energy and interactive processes that comprise those systems. The stability and survival of these living systems are fundamentally contingent upon their ability to acquire and process the meaning of information concerning the physical state of their biological continuum (biocontinuum). The drive for adaptive system reconciliation of a divergence from steady state within this biocontinuum can be described by an information metric-based formulation of the process of actionable knowledge acquisition, incorporating the axiomatic inference of Kullback-Leibler information minimization driven by survival replicator dynamics. If the mathematical expression of this process is the Lagrangian integrand for any change within the biocontinuum, then it can also be considered an action functional for the living system. In the direct method of Lyapunov, such a summarizing mathematical formulation of global system behavior, based on the driving forces of energy currents and constraints within the system, can serve as a platform for the analysis of stability. As the system evolves in time in response to biocontinuum perturbations, the summarizing function conveys information about its overall stability. This stability information portends survival and therefore has absolute existential meaning for the living system. The first derivative of the Lyapunov energy information function will have a negative trajectory toward the system's steady state if the driving force is dissipative; by contrast, system instability leading to dissolution will have a positive trajectory. The direction and magnitude of the trajectory vector then serve as a quantifiable signature of the meaning associated with the living system's stability information, homeostasis and survival potential.
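
A toy numerical illustration of the stability criterion: the Kullback-Leibler divergence from steady state serves as the summarizing (Lyapunov-style) function, and dissipative dynamics give it a negative first derivative. The distributions and relaxation rate are assumptions:

```python
import numpy as np

steady = np.array([0.5, 0.3, 0.2])                 # steady-state distribution (assumed)
state = np.array([0.8, 0.15, 0.05])                # perturbed initial state

def kl(p, q):
    # Kullback-Leibler divergence D(p || q).
    return float(np.sum(p * np.log(p / q)))

traj = []
for _ in range(50):
    state = state + 0.1 * (steady - state)         # dissipative relaxation dynamics
    traj.append(kl(state, steady))

dV = np.diff(traj)                                 # discrete first derivative
print("monotonically decreasing:", bool(np.all(dV < 0)))   # stable => dV/dt < 0
```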

Keywords: meaning, information, Lyapunov, living systems

Procedia PDF Downloads 118
285 Utilization of Activated Carbon for the Extraction and Separation of Methylene Blue in the Presence of Acid Yellow 61 Using an Inclusion Polymer Membrane

Authors: Saâd Oukkass, Abderrahim Bouftou, Rachid Ouchn, L. Lebrun, Miloudi Hlaibi

Abstract:

We live in a world steeped in colors, whether in our clothing, food, cosmetics or medications. However, most of the dyes we use pose significant problems, being both harmful to the environment and resistant to degradation. Among these dyes, methylene blue and acid yellow 61 stand out; both are commonly used to dye materials such as cotton, wood and silk. Various methods have been developed to treat and remove these polluting dyes, among which membrane processes play a prominent role: they are praised for their low energy consumption, ease of operation and ability to achieve effective separation of components. Adsorption on activated carbon is also a widely employed and complementary technique, particularly effective in capturing and removing organic compounds from water owing to its large specific surface area. In this study, we examined two aspects. First, we explored the selective extraction of methylene blue from a mixture containing a second dye, acid yellow 61, using a polymer inclusion membrane (PIM) made of PVA. After characterizing the morphology and porosity of the membrane, we applied kinetic and thermodynamic models to determine the permeability (P), initial flux (J₀), association constant (K_ass) and apparent diffusion coefficient (D*), and then measured the activation parameters (activation energy (Ea), activation enthalpy (ΔH‡) and activation entropy (ΔS‡)). Finally, we studied the effect of activated carbon on the membrane process, demonstrating a clear improvement. These results make the membrane developed in this study a potentially pivotal player in the field of membrane separation.
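
Extraction of the transport parameters can be sketched assuming the usual first-order PIM transport law ln(C0/Ct) = (A·P/V)·t with J0 = P·C0; the geometry and concentrations below are illustrative, not the study's values:

```python
import numpy as np

A, V, C0 = 12.5e-4, 100e-6, 1.0e-4         # membrane area m^2, feed volume m^3, feed conc. mol/L
t = np.array([0, 30, 60, 90, 120]) * 60.0  # sampling times, s
Ct = C0 * np.exp(-2.0e-6 * t)              # synthetic feed-side concentration decay

# Fit ln(C0/Ct) vs t: the slope equals A*P/V for first-order permeation.
slope = np.polyfit(t, np.log(C0 / Ct), 1)[0]
P = slope * V / A                          # permeability, m/s
J0 = P * (C0 * 1000)                       # initial flux, mol m^-2 s^-1 (C0 converted to mol/m^3)
print(f"P = {P:.2e} m/s, J0 = {J0:.2e} mol m^-2 s^-1")
```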

Keywords: dyes, methylene blue, membrane, activated carbon

Procedia PDF Downloads 53
284 Electricity Price Forecasting: A Comparative Analysis with Shallow-ANN and DNN

Authors: Fazıl Gökgöz, Fahrettin Filiz

Abstract:

Electricity prices have sophisticated features such as high volatility, nonlinearity and high frequency that make forecasting quite difficult, yet the price series is not purely random, so patterns can be identified from historical data. Intelligent decision-making requires accurate price forecasting for market traders, retailers and generation companies. Many shallow-ANN (artificial neural network) models have been published in the literature and have shown adequate forecasting results. In recent years, neural networks with many hidden layers, referred to as DNNs (deep neural networks), have been adopted by the machine learning community. The goal of this study is to investigate the electricity price forecasting performance of shallow-ANN and DNN models for the Turkish day-ahead electricity market. The forecasting accuracy of the models is evaluated on publicly available data from this market. Historical load, price and weather temperature data are used as the input variables for the models. The data set includes power consumption measurements gathered between January 2016 and December 2017 at one-hour resolution. Forecasting studies were carried out comparatively with shallow-ANN and DNN models for the Turkish electricity market over this period. The main contribution of this study is the investigation of different shallow-ANN and DNN models in the field of electricity price forecasting. All models are compared on their MAE (Mean Absolute Error) and MSE (Mean Squared Error) results. The DNN models give better forecasting performance than the shallow-ANNs; the best five MAE results for the DNN models are 0.346, 0.372, 0.392, 0.402 and 0.409.
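
The MAE/MSE comparison reduces to a few lines; the price series and the two "model" error levels below are synthetic stand-ins (the study's best DNN MAE was 0.346):

```python
import numpy as np

def mae(y, yhat):
    return np.mean(np.abs(y - yhat))

def mse(y, yhat):
    return np.mean((y - yhat) ** 2)

# Toy hourly day-ahead price series and two competing forecasts.
rng = np.random.default_rng(4)
price = np.sin(np.linspace(0, 12 * np.pi, 24 * 30)) + rng.normal(0, 0.1, 24 * 30)
shallow_pred = price + rng.normal(0, 0.5, price.size)   # stand-in shallow-ANN error
dnn_pred = price + rng.normal(0, 0.35, price.size)      # stand-in DNN error

for name, pred in [("shallow-ANN", shallow_pred), ("DNN", dnn_pred)]:
    print(f"{name}: MAE={mae(price, pred):.3f}, MSE={mse(price, pred):.3f}")
```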

Keywords: deep learning, artificial neural networks, energy price forecasting, Turkey

Procedia PDF Downloads 276
283 Complicating Representations of Domestic Violence Perpetration through a Qualitative Content Analysis and Socio-Ecological Approach

Authors: Charlotte Lucke

Abstract:

This study contributes to the body of literature that analyzes and complicates oversimplified and sensationalized representations of trauma and violence, through a close examination of representations of perpetrators of domestic violence in the mass media. The study determines the ways the media frames perpetrators of domestic violence through a qualitative content analysis and a socio-ecological approach to the perpetration of violence. While the qualitative analysis has not yet been carried out, preliminary research leads this study to hypothesize that the media represents perpetrators through tropes such as the 'predator' or 'offender', or as a demonized 'other'. It is necessary to expose and work through such stereotypes because, as cultivation theory demonstrates, the mass media shapes societal beliefs about and perceptions of the world. Representations of domestic violence in the mass media can thus lead people to believe that perpetrators of violence are mere animals or criminals and to overlook the trauma that many perpetrators have experienced. When the media represents perpetrators as pure evil, monsters, or absolute 'others', it leaves out the complexities of what moves people to commit domestic violence. By placing media representations of perpetrators in conversation with the socio-ecological approach to violence perpetration, this study complicates domestic violence stereotypes. The socio-ecological model allows researchers to consider how the interplay between individuals and their families, friends, communities and cultures can move people to act violently. Using this model, along with psychological and psychoanalytic approaches to the etiology of domestic violence, this paper argues that media stereotypes conceal the way people's experiences of trauma, together with community and cultural norms, perpetuate the cycle of systemic trauma and violence in the home.

Keywords: domestic violence, media images, representing trauma, theorising trauma

Procedia PDF Downloads 216
282 Hedgerow Detection and Characterization Using Very High Spatial Resolution SAR DATA

Authors: Saeid Gharechelou, Stuart Green, Fiona Cawkwell

Abstract:

Hedgerows play an important role in a wide range of ecological habitats, in landscape and agricultural management, in carbon sequestration and in wood production. Detecting hedgerows accurately from satellite imagery is a challenging remote sensing problem: spatially, a hedgerow resembles a linear object such as a road, while spectrally it resembles a forest. Sensors with very high spatial resolution (VHR) have recently made automatic detection of hedges possible by acquiring images with sufficient spectral and spatial resolution. VHR remote sensing data allow hedgerows to be detected as line features, but difficulties remain in monitoring their characteristics at the landscape scale. This research uses TerraSAR-X Spotlight and Staring Spotlight data at 3-5 m resolution, acquired in the wet and dry seasons of 2014-2015 over a test site at Fermoy, Ireland. Both polarizations (HH/VV) of the dual-polarization Spotlight data are used for hedgerow detection. Several SAR image-processing approaches, integrating classification algorithms such as texture analysis, support vector machines, k-means and random forests, are tested iteratively to detect hedgerows and characterize them. We apply Shannon entropy (ShE) and backscattering analysis of single- and double-bounce contributions in a polarimetric analysis, process an object-oriented classification, and finally extract the hedgerow network. The work is still in progress, and further methods remain to be tested to find the best approach for the study area; the preliminary results presented here indicate that polarimetric TerraSAR-X imagery can potentially detect hedgerows.

Keywords: TerraSAR-X, hedgerow detection, high resolution SAR image, dual polarization, polarimetric analysis

Procedia PDF Downloads 222
281 Film Censorship and Female Chastity: Exploring State's Discourses and Patriarchal Values in Reconstructing Chinese Film Stardom of Tang Wei

Authors: Xinchen Zhu

Abstract:

The rapid rise to fame of the film star Tang Wei has made her a typical subject (or object) entangled with sensitive issues involving the official ideology, sexuality and patriarchal values of contemporary China. In 2008, Tang Wei's official ban triggered a wave of debates concerning state power and censorship, actors' rights, sexual ethics and feminism in the public sphere. Her ban implies that Chinese film censorship acts as a key factor in reconstructing Chinese film stardom. Following the ban, sensational media texts re-interpreting the official discourses also functioned as a crucial vehicle in reconstructing Tang's female image. The case study of Tang's film stardom therefore allows us to explore how female stardom has been entangled with official ideology, female sexual ethics and patriarchal values in contemporary China. This paper argues that Chinese female film stars shoulder the responsibility of film acting that conforms to official male-dominated values. However, with the development of the Internet, the state no longer retains absolute control over the new venues: netizens' discussion of her ban reshaped Tang's image as a victim and scapegoat under the unfair oppression of official authority. Additionally, this paper argues that, like the state's discourse, netizens' discourse did not reject patriarchal values, and in turn emphasized Tang Wei's female chastity.

Keywords: film censorship, Chinese female film stardom, party-state’s power, national discourses, Tang Wei

Procedia PDF Downloads 149
280 Role of Hyperbaric Oxygen Therapy in Management of Diabetic Foot

Authors: Magdy Al Shourbagi

Abstract:

Diabetes mellitus is the commonest cause of neuropathy. The common pattern is a distal symmetrical sensory polyneuropathy associated with autonomic disturbances; less often, diabetes mellitus is responsible for a focal or multifocal neuropathy. The common causes of non-healing of the diabetic foot are infection and ischemia. Diabetes mellitus is associated with defective cellular and humoral immunity: decreased phagocytosis, decreased chemotaxis, impaired bacterial killing and abnormal lymphocytic function, resulting in a reduced inflammatory reaction and defective wound healing. Hyperbaric oxygen therapy (HBOT) is defined by the Undersea and Hyperbaric Medical Society as a treatment in which a patient intermittently breathes 100% oxygen while the treatment chamber is pressurized to a pressure greater than sea level (1 atmosphere absolute). The pressure increase may be applied in monoplace (single-person) or multiplace chambers; multiplace chambers are pressurized with air, with oxygen given via face mask or endotracheal tube, while monoplace chambers are pressurized with oxygen. Oxygen plays an important role in the physiology of wound healing, and hyperbaric oxygen therapy can raise tissue oxygen tensions to levels where wound healing can be expected. HBOT increases the killing ability of leucocytes; it is also lethal for certain anaerobic bacteria and inhibits toxin formation in many other anaerobes. Multiple anecdotal reports and studies of HBO therapy in diabetic patients report that HBO can be an effective adjunct in the management of diabetic foot wounds and is associated with better functional outcomes.

Keywords: hyperbaric oxygen therapy, diabetic foot, neuropathy, multiplace chambers

Procedia PDF Downloads 274
279 Outcome Analysis of Surgical and Nonsurgical Treatment on Indicated Operative Chronic Subdural Hematoma: Serial Case in Cipto Mangunkusumo Hospital Indonesia

Authors: Novie Nuraini, Sari Hanifa, Yetty Ramli

Abstract:

Chronic subdural hematoma (cSDH) is a common condition after head trauma. Although the thickness of the cSDH plays an important role in the decision to operate, the thickness threshold is not absolute. In this serial case report, we evaluate three cases of cSDH with an indication for surgery based on neurologic deficits and neuroimaging findings of subfalcine herniation greater than 0.5 cm and hematoma thickness greater than 1 cm. In the first case, the patient underwent hematoma evacuation; in the second and third cases, nonsurgical treatment was given because the patients and their families refused surgery. Conservative treatment consisted of bed rest and mannitol. Serial radiologic evaluation was performed whenever the condition worsened, and radiologic examination was repeated two weeks after treatment. The first and second cases had good outcomes. In the third case the condition worsened; this patient had comorbid type 2 diabetes mellitus, pneumonia and chronic kidney disease. Conservative treatments such as bed rest, corticosteroids, mannitol or other hyperosmolar agents can give a good outcome in patients without neurologic deficits, with a small hematoma, or without comorbid disease. Hematoma evacuation remains the treatment of choice in cSDH with neurologic deficits; nevertheless, there are conditions in which surgery cannot be performed. Serial radiologic examination is needed after two weeks to evaluate the treatment, or earlier if the condition worsens.

Keywords: chronic subdural hematoma, traumatic brain injury, surgical treatment, nonsurgical treatment, outcome

Procedia PDF Downloads 315
278 Simplified Stress Gradient Method for Stress-Intensity Factor Determination

Authors: Jeries J. Abou-Hanna

Abstract:

Several techniques exist for determining stress-intensity factors in linear elastic fracture mechanics analysis. These techniques are based on analytical, numerical and empirical approaches that are well documented in the literature and in engineering handbooks. However, not all techniques share the same merit. Besides producing overly conservative results, numerical methods that require extensive computational effort or copious user parameters hinder practicing engineers from efficiently evaluating stress-intensity factors. This paper investigates the prospects of reducing the complexity and the number of required variables in determining stress-intensity factors through the use of the stress gradient and a weighting function. The heart of this work resides in the understanding that fracture emanating from stress concentration locations cannot be explained by a single maximum stress value, but requires a critical volume in which the crack exists. To assess the effectiveness of this technique, the study investigated components of different notch geometries and varying levels of stress gradient. Two forms of weighting function were employed to determine stress-intensity factors, and results were compared to exact analytical methods. The results indicated that the 'exponential' weighting function was superior to the 'absolute' weighting function: an error band of +/- 10% was met for cases ranging from the steep stress gradient of a sharp V-notch to the milder stress transitions of a large circular notch. The proposed method has shown itself to be a worthwhile consideration.
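
A sketch of the weighting-function idea: average the stress field ahead of the notch under an exponential kernel, then form a stress-intensity estimate. The kernel length scale, stress field and K expression below are illustrative assumptions, not the paper's calibrated forms:

```python
import numpy as np

a = 2e-3                                   # notch/crack depth, m (assumed)
rho = 0.4e-3                               # weighting length scale, m (assumed)
x = np.linspace(0, 5 * rho, 500)           # distance ahead of the notch root
sigma = 300e6 / (1 + x / 0.2e-3) ** 0.5    # synthetic decaying stress field, Pa

w = np.exp(-x / rho)                       # "exponential" weighting function
sigma_eff = np.trapz(sigma * w, x) / np.trapz(w, x)   # kernel-weighted mean stress
K = sigma_eff * np.sqrt(np.pi * a)                    # stress-intensity estimate
print(f"sigma_eff = {sigma_eff/1e6:.0f} MPa, K = {K/1e6:.1f} MPa·sqrt(m)")
```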

Keywords: fracture mechanics, finite element method, stress intensity factor, stress gradient

Procedia PDF Downloads 123
277 Evaluation of Synthesis and Structure Elucidation of Some Benzimidazoles as Antimicrobial Agents

Authors: Ozlem Temiz Arpaci, Meryem Tasci, Hakan Goker

Abstract:

Benzimidazole, a structural isostere of the indole and purine nuclei that can interact with biopolymers, can be regarded as a master key. Benzimidazole compounds are therefore important fragments in medicinal chemistry because of their wide range of biological activities, including antimicrobial activity. We planned to synthesize some benzimidazole compounds as candidates for new antimicrobial drugs. In this study, we placed various heterocyclic rings at the second position and an amidine group at the fifth position of the benzimidazole ring and synthesized the compounds using a multi-step procedure. In the first step, 4-chloro-3-nitrobenzonitrile was reacted with cyclohexylamine in dimethylformamide. Imidate esters (compound 2) were then prepared in absolute ethanol saturated with dry HCl gas. These imidate esters, which were not very stable, were converted to compound 3 by passing ammonia gas through ethanol. Over a Pd/C catalyst, the nitro group was reduced to an amine group (compound 4). Finally, various aldehyde derivatives were reacted as their sodium metabisulfite addition products to give compounds 5-20. Melting points were determined on a Buchi B-540 melting point apparatus in open capillary tubes and are uncorrected. Elemental analyses were performed on a Leco CHNS 932 elemental analyzer. ¹H-NMR and ¹³C-NMR spectra were recorded on a Varian Mercury 400 MHz spectrometer using DMSO-d6, and mass spectra were acquired on a Waters Micromass ZQ using the ESI(+) method. The structures of the compounds were supported by spectral data: the ¹H-NMR, ¹³C-NMR and mass spectra and the elemental analysis results agree with the proposed structures. Antimicrobial activity studies of the synthesized compounds are under investigation.

Keywords: benzimidazoles, synthesis, structure elucidation, antimicrobial

Procedia PDF Downloads 139
276 Forecasting Container Throughput: Using Aggregate or Terminal-Specific Data?

Authors: Gu Pang, Bartosz Gebka

Abstract:

We forecast the demand for total container throughput at Indonesia's largest seaport, Tanjung Priok Port, using four univariate forecasting models: SARIMA, additive Seasonal Holt-Winters, multiplicative Seasonal Holt-Winters and the Vector Error Correction Model. Our aim is to provide insight into whether forecasting the total container throughput from the historical aggregated port throughput time series is superior to summing the best individual terminal forecasts. We test monthly port and individual-terminal container throughput time series between 2003 and 2013. The performance of the forecasting models is evaluated using Mean Absolute Error and Root Mean Squared Error. Our results show that the multiplicative Seasonal Holt-Winters model produces the most accurate forecasts of total container throughput, whereas SARIMA generates the worst in-sample model fit. The Vector Error Correction Model provides the best model fits and forecasts for individual terminals. The total container throughput forecasts based on modelling the total throughput time series are consistently better than those obtained by combining the forecasts generated by terminal-specific models. The forecasts of total throughput until the end of 2018 provide essential insight for strategic decision-making on the expansion of the port's capacity and the construction of new container terminals at Tanjung Priok Port.
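
The winning model can be sketched with statsmodels; the monthly throughput series below is synthetic, standing in for the 2003-2013 port data:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Synthetic monthly throughput with trend and multiplicative seasonality.
idx = pd.date_range("2003-01", "2013-12", freq="MS")
t = np.arange(len(idx))
throughput = (200 + 2.5 * t) * (1 + 0.15 * np.sin(2 * np.pi * t / 12))
y = pd.Series(throughput, index=idx)

# Multiplicative Seasonal Holt-Winters (additive trend, multiplicative season).
model = ExponentialSmoothing(y, trend="add", seasonal="mul", seasonal_periods=12).fit()
forecast = model.forecast(60)              # five years ahead, to end-2018

mae = np.mean(np.abs(model.fittedvalues - y))
rmse = np.sqrt(np.mean((model.fittedvalues - y) ** 2))
print(f"in-sample MAE={mae:.1f}, RMSE={rmse:.1f}")
```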

Keywords: SARIMA, Seasonal Holt-Winters, Vector Error Correction Model, container throughput

Procedia PDF Downloads 489
275 Evaluating Accuracy of Foetal Weight Estimation by Clinicians in Christian Medical College Hospital, India and Its Correlation to Actual Birth Weight: A Clinical Audit

Authors: Aarati Susan Mathew, Radhika Narendra Patel, Jiji Mathew

Abstract:

A retrospective study was conducted at Christian Medical College (CMC) Teaching Hospital, Vellore, India on 14th August 2014 to assess the accuracy of clinically estimated foetal weight upon labour admission. Estimating foetal weight is a crucial factor in assessing maternal and foetal complications during and after labour. The medical notes of ninety-eight postnatal women who fulfilled the inclusion criteria were studied to evaluate the correlation between the Estimated Foetal Weight (EFW) recorded on admission and the actual birth weight (ABW) of the newborn after delivery. Data concerning maternal and foetal demographics were also noted. Accuracy was determined by the absolute percentage error and the proportion of estimates within 10% of ABW. Actual birth weights ranged from 950-4080 g. A strong positive correlation between EFW and ABW (r=0.904) was noted. Term deliveries (≥40 weeks) in the normal weight range (2500-4000 g) had a 59.5% estimation accuracy (n=74), compared with 0% (n=2) for pre-term deliveries (<40 weeks). Among the term deliveries, macrosomic babies (>4000 g) were underestimated by 25% (n=3) and low-birthweight (LBW) babies were overestimated by 12.7% (n=9). The registrars who estimated foetal weight were accurate for babies within the normal weight range, but prediction needs to improve for macrosomic and LBW foetuses. We suggest the use of an amended version of Johnson's formula for the Indian population, with a re-audit once implemented.
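
Johnson's formula, which the authors propose amending for the Indian population, is commonly quoted as EFW (g) = (symphysiofundal height in cm − n) × 155, with n depending on the station of the presenting part (conventions for n vary across sources). A sketch with illustrative values:

```python
def johnson_efw(sfh_cm: float, station: str) -> float:
    # n is commonly quoted as 13/12/11 for a presenting part above/at/below
    # the ischial spines; conventions vary across sources.
    n = {"above": 13, "at": 12, "below": 11}[station]
    return (sfh_cm - n) * 155.0

efw = johnson_efw(34.0, station="at")                # illustrative measurement
abw = 3250.0                                         # illustrative actual birth weight, g
pct_error = abs(efw - abw) / abw * 100
print(f"EFW={efw:.0f} g, ABW={abw:.0f} g, error={pct_error:.1f}% "
      f"({'within' if pct_error <= 10 else 'outside'} 10%)")
```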

Keywords: clinical palpation, estimated foetal weight, pregnancy, India, Johnson’s formula

Procedia PDF Downloads 354
274 Official Secrecy and Confidentiality in Tax Administration and Its Impact on Right to Access Information: Nigerian Perspectives

Authors: Kareem Adedokun

Abstract:

Official secrecy is one of the colonial vestiges which upholds the non-disclosure of essential information for public consumption. Information, though an indispensable tool in tax administration, is not to be divulged by any person in the official duty of the revenue agency. As a matter of fact, the Federal Inland Revenue Service (Establishment) Act, 2007 emphasizes secrecy and confidentiality in dealing with taxpayers' documents, information, returns and assessments, in a manner reminiscent of protecting taxpayers' privacy in all situations; it is so serious that any violation attracts criminal sanction. However, Nigeria, being a democratic and egalitarian state, recently enacted the Freedom of Information Act, which ushered in openness in governance and takes away the confidentiality associated with official secrets laws. Official secrecy no doubt contradicts the philosophy of freedom of information, but maintaining a proper balance between the protected rights of taxpayers and the public interest which the revenue agency upholds is an uphill task. Adopting the doctrinal method, the author probes the real nature of the relationship between taxpayers and revenue agencies, interfaces official secrecy with the doctrine of freedom of information, and consequently queries the retention of the non-disclosure clause under the Federal Inland Revenue Service (Establishment) Act (FIRSEA) 2007. The paper finds, among other things, that the non-disclosure provision in tax statutes, particularly as provided for in FIRSEA, is not absolute, and neither are constitutional rights and freedom of information; unless the non-disclosure clause finds justification under a recognized exemption in the Freedom of Information Act, its retention is antithetical to democratic ethos and beliefs, as it may hinder public interest and public order.

Keywords: confidentiality, information, official secrecy, tax administration

Procedia PDF Downloads 313
273 Physical Dynamics of Planet Earth and Their Implications for Global Climate Change and Mitigation: A Case Study of Sistan Plain, Balochistan Region, Southeastern Iran

Authors: Hamidoddin Yousefi, Ahmad Nikbakht

Abstract:

The Sistan Plain, situated in the Balochistan region of southeastern Iran, is renowned for its arid climate and the prevailing winds that persist for approximately 120 days annually. The region faces multiple challenges, including susceptibility to drought exacerbated by wind erosion, temperature fluctuations, and the influence of policies implemented by neighboring Afghanistan and by Iran. This study focuses on investigating the characteristics of jet streams over the Sistan Plain and their implications for global climate change. Various models are employed to analyze convective mass fluxes, horizontal moisture transport, temporal variance, and radiative-convective equilibrium in the atmosphere. Key considerations encompass the distributions of relative humidity, dry air and absolute humidity. Moreover, the research aims to predict the interplay between jet streams and human activities, particularly their environmental impacts and effect on water scarcity. The investigation encompasses both local and global environmental consequences, drawing upon historical climate change data and comprehensive field research. The anticipated outcomes hold substantial potential for mitigating global climate change and its associated environmental ramifications: by comprehending the dynamics of jet streams and their interconnections with human activities, effective strategies can be formulated to address water scarcity and minimize environmental degradation.

Keywords: Sistan plain, Balochistan, Hamoun lake, climate change, jet streams, environmental impact, water scarcity, mitigation

Procedia PDF Downloads 50
272 A Hierarchical Bayesian Calibration of Data-Driven Models for Composite Laminate Consolidation

Authors: Nikolaos Papadimas, Joanna Bennett, Amir Sakhaei, Timothy Dodwell

Abstract:

Modeling of composite consolidation processes plays an important role in process and part design by indicating the possible formation of unwanted defects prior to expensive experimental trial-and-development programs. Composite materials in their uncured state display complex constitutive behavior, which has received much academic interest, with a number of different models proposed. Modeling and statistical errors arising from fitting such models will propagate through any simulation in which the material model is used. A general hyperelastic polynomial representation is proposed here, which can be readily implemented in various nonlinear finite element packages; in our case, FEniCS was chosen. The coefficients are assumed uncertain, and the distribution of parameters is therefore learned using Markov Chain Monte Carlo (MCMC) methods. The approach often followed in engineering is to select a single set of model parameters which, on average, best fits a set of experiments; there are good statistical reasons why this is not a rigorous approach. To overcome these challenges, a hierarchical Bayesian framework is proposed in which the population distribution of model parameters is inferred from an ensemble of experimental tests. The resulting sampled distribution of hyperparameters is approximated using maximum entropy methods, so that the distribution can be readily sampled when embedded within a stochastic finite element simulation. The methodology is validated and demonstrated on a set of consolidation experiments on AS4/8852 with various stacking sequences. The resulting distributions are then applied to stochastic finite element simulations of the consolidation of curved parts, leading to a distribution of possible model outputs. With this, the paper, as far as the authors are aware, presents the first stochastic finite element implementation in composite process modelling.
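
The parameter-learning step can be illustrated with a minimal random-walk Metropolis sampler standing in for the paper's full hierarchical MCMC scheme; the data, likelihood and prior are all synthetic assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)
data = rng.normal(2.0, 0.5, 50)            # synthetic "experimental" observations

def log_post(theta):
    # Gaussian likelihood (known variance 0.25) plus a weak normal prior.
    return -0.5 * np.sum((data - theta) ** 2 / 0.25) - 0.5 * theta ** 2 / 100.0

samples, theta = [], 0.0
for _ in range(20000):
    prop = theta + rng.normal(0, 0.1)      # random-walk proposal
    if np.log(rng.random()) < log_post(prop) - log_post(theta):
        theta = prop                       # accept with Metropolis probability
    samples.append(theta)

posterior = np.array(samples[5000:])       # discard burn-in
print(f"posterior mean={posterior.mean():.3f}, sd={posterior.std():.3f}")
```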

Keywords: data-driven models, material consolidation, stochastic finite elements, surrogate models

Procedia PDF Downloads 132
271 The Structure and Composition of Plant Communities in Ajloun Forest Reserve in Jordan

Authors: Maher J. Tadros, Yaseen Ananbeh

Abstract:

The study area is located in Ajloun Forest Reserve in the northern part of Jordan and consists of Mediterranean hills dominated by open woodlands of oak and pistachio. The aims of the study were to investigate the positive and negative relationships between local communities and the protected area and how these can affect long-term forest conservation. The main research objectives are to review the impact of establishing Ajloun Forest Reserve on nature conservation and on the livelihoods of the local communities around the reserve. The reserve plays a fundamental role in the development of the Ajloun area: the existence of nature conservation initiatives supports various socio-economic activities around the reserve that contribute to the development of local communities. Part of this research was a survey of the impact of Ajloun Forest Reserve on biodiversity composition, with an emphasis on vegetation, to determine the economic impacts of the reserve on its surroundings. Several methods were used, including the point-centered quarter method, with 50 sample points selected randomly at the study site. The field data showed an absolute density of 1,031.24 plants per hectare. Relative density was highest for Quercus coccifera (73.7%), followed by Pistacia palaestina (10.5%), Arbutus andrachne (7.1%) and Crataegus azarolus (82.5 plants/ha; relative density 5.1%).
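
The point-centered quarter density estimate (after Cottam and Curtis) is density = unit area / (mean point-to-plant distance)². A sketch with synthetic distances and illustrative stem counts, not the study's raw data:

```python
import numpy as np

rng = np.random.default_rng(6)
# 50 sample points, four quarter distances each (synthetic, in metres).
distances_m = rng.gamma(shape=4.0, scale=0.8, size=50 * 4)

mean_d = distances_m.mean()
density_per_m2 = 1.0 / mean_d ** 2         # Cottam & Curtis estimator
density_per_ha = density_per_m2 * 10_000
print(f"mean distance = {mean_d:.2f} m, density = {density_per_ha:.0f} plants/ha")

# Relative density of a species = its stem count / total stems * 100.
counts = {"Quercus coccifera": 147, "Pistacia palaestina": 21,
          "Arbutus andrachne": 14, "Crataegus azarolus": 10}
total = sum(counts.values())
for sp, c in counts.items():
    print(f"{sp}: {100 * c / total:.1f}%")
```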

Keywords: composition, density, frequency, importance value, point-centered quarter, structure, tree cover

Procedia PDF Downloads 258
270 Research on the Spatio-Temporal Evolution Pattern of Traffic Dominance in Shaanxi Province

Authors: Leng Jian-Wei, Wang Lai-Jun, Li Ye

Abstract:

In order to measure and analyze the transportation situation within the counties of Shaanxi province over a certain period of time, and to inform the province's future transportation planning and development, this paper proposes a reasonable layout plan and compares model rationality. The study uses the entropy weight method to measure the transportation dominance of 107 counties in Shaanxi province in 2013 and 2021 along three dimensions: road network density, trunk-line influence and location advantage. Spatial autocorrelation analysis is applied to examine the spatial layout and development trend of county-level transportation, and ordinary least squares (OLS) regression is conducted on the transportation impact factors and other influencing factors; the regression fit of a geographically weighted regression (GWR) model is also compared with that of the OLS model. The results show that, spatially, transportation dominance in Shaanxi province generally decreases from the Weihe Plain outwards and mainly exhibits high-high clustering. Temporally, transportation dominance shows an overall upward trend, and spatial imbalance gradually decreases. People's travel demands have changed to some extent, with demand for rapid transportation increasing overall. The GWR model fit for transportation dominance is 0.74, higher than the OLS model fit of 0.64. Based on the evolution of transportation dominance, this trend is predicted to continue for some time. Increasing the coverage of rapid transportation can effectively enhance the transportation dominance of Shaanxi province, and geographic factors should be considered when analyzing spatial heterogeneity in order to establish a more reliable model.
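
The entropy weight method itself is mechanical and can be sketched directly; the county-by-indicator matrix below is a synthetic stand-in for the study's 107×3 data:

```python
import numpy as np

rng = np.random.default_rng(7)
# Rows: counties; columns: road network density, trunk-line influence,
# location advantage (synthetic values).
X = rng.random((107, 3))

# 1. Min-max normalize each indicator, then form proportions p_ij.
Xn = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0)) + 1e-12
P = Xn / Xn.sum(axis=0)

# 2. Information entropy per indicator; low entropy => high discriminating power.
e = -(P * np.log(P)).sum(axis=0) / np.log(X.shape[0])

# 3. Weights from the redundancy (1 - e), then a composite dominance score.
w = (1 - e) / (1 - e).sum()
score = Xn @ w
print("weights:", w.round(3), "| top county score:", score.max().round(3))
```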

Keywords: traffic dominance, GWR model, spatial autocorrelation analysis, temporal and spatial evolution

Procedia PDF Downloads 74
269 A Review of Kinematics and Joint Load Forces in Total Knee Replacements Influencing Surgical Outcomes

Authors: Samira K. Al-Nasser, Siamak Noroozi, Roya Haratian, Adrian Harvey

Abstract:

A total knee replacement (TKR) is a surgical procedure necessary when there is severe pain and/or loss of function in the knee. Surgeons balance the load in the knee and the surrounding soft tissue by feeling the tension at different ranges of motion. This method can be unreliable and lead to early failure of the joint. The ideal kinematics and load distribution have been debated extensively in biomechanical studies of both TKRs and normal knees. Intraoperative sensors such as VERASENSE and eLibra have provided a method for quantifying the load that indicates a balanced knee. A review of the literature on intraoperative sensors and knee tension/stability was carried out. Studies continue to debate the quantification of load in the medial and lateral compartments specifically; however, most research reports that, following a TKR, the medial compartment is loaded more heavily than the lateral compartment, and in several cases these results were shown to increase the success of the surgery because they mimic the kinematics of the normal knee. In conclusion, most research agrees that an intercompartmental load differential of between 10 and 20 pounds, with the medial load higher than the lateral, and an absolute load of less than 70 pounds are ideal. Further development of intraoperative sensors could improve the accuracy and understanding of the effect of load distribution on surgical outcomes in TKR. A reduction in early revision surgeries for TKRs would improve patients' quality of life and reduce the economic burden placed on both the National Health Service (NHS) and the patient.

Keywords: intraoperative sensors, joint load forces, kinematics, load balancing, and total knee replacement

Procedia PDF Downloads 120
268 Demonstration of Logical Inconsistency in the Discussion of the Problem of Evil

Authors: Mohammad Soltani Renani

Abstract:

The problem of evil is one of the heated battlegrounds between the idea of theism and its critics. Since time immemorial, and in various philosophical schools and religions, belief in an Omniscient, Omnipotent and Absolutely Good God has been considered inconsistent with the existence of evil in the universe. Theist thinkers have generally adopted one of four ways of answering this problem: denying the existence of evil or considering it relative; the privation theory of evil; attributing evil to something other than God; or depicting an alternative picture of God. Defense and criticism of these answers have given rise to an extensive and unending dispute. However, evaluating the presupposition and context upon and in which a question is raised precedes answering it. The point of paramount importance for both parties, questioners and answerers alike, is that attributes such as knowledge, power, love and good will can be supposed infinite only in the essence of the One to whom they are attributed, in the domain of potentiality; what is realized in the domain of actuality is always finite. The infinity of the Divine Attributes and the realization of evil therefore belong to two different spheres: the Divine Attributes are infinite (absolute) in the Divine Essence, but once created, each becomes bounded by the others. This boundedness results from the attributes delimiting one another in the finite world of possibility, and it is in this limited world that evil appears. This inconsistency leads to the collapse of the problem of evil from within: the locus of the infinity of the Divine Attributes, in the words of Muslim mystics, is the Holiest Manifestation (Feyze Aqdas), while evil emerges in the Holy Manifestation, where the Divine Attributes become bounded by each other. This idea is neither a new answer to the problem of evil nor a defense of theism; rather, it reveals a logical inconsistency in the discussion of the problem of evil.

Keywords: problem of evil, infinity of divine attributes, boundedness of divine attributes, holiest manifestation, holy manifestation

Procedia PDF Downloads 130
267 Body Mass Components in Young Soccer Players

Authors: Elizabeta Sivevska, Sunchica Petrovska, Vaska Antevska, Lidija Todorovska, Sanja Manchevska, Beti Dejanova, Ivanka Karagjozova, Jasmina Pluncevic Gligoroska

Abstract:

Introduction: Body composition plays an important role in the selection of young soccer players and is associated with successful performance. The most commonly used model of body composition divides the body into two compartments: the fat component and fat-free mass (muscular and bone components). The aims of the study were to determine the body composition parameters of young male soccer players and to show the differences between age groups. Material and methods: A sample of 52 young male soccer players aged 9 to 14 years was divided into two age groups (group 1: 9 to 12 years; group 2: 12 to 14 years). Anthropometric measurements were taken according to the method of Mateigka: body weight, body height, circumferences (arm, forearm, thigh and calf), diameters (elbow, knee, wrist and ankle) and skinfold thicknesses (biceps, triceps, thigh, leg, chest and abdomen). The measurements were used in Mateigka's equations. Results: Body mass components were analyzed as absolute values (in kilograms) and as percentages: the muscular component (MM kg and MM%), the bone component (BC kg and BC%) and body fat (BF kg and BF%). The group up to 12 years showed the following mean values: MM=21.5 kg, MM%=46.3%, BC=8.1 kg, BC%=19.1%, BF=6.3 kg, BF%=15.7%. The group aged 12-14 years had mean values of MM=25.6 kg, MM%=48.2%, BC=11.4 kg, BC%=21.6%, BF=8.5 kg, BF%=14.7%. Conclusions: The young soccer players aged 12 to 14 years, who are in the pre-pubertal phase of growth and development, had a higher bone component (p<0.05) than the younger players. There was no significant difference in the muscular and fat components between the two groups.

Keywords: body composition, young soccer players, body fat, fat-free mass

Procedia PDF Downloads 442
266 Strengthening Regulation and Supervision of Microfinance Sector for Development in Ethiopia

Authors: Megersa Dugasa Fite

Abstract:

This paper analyses regulatory and supervisory issues in the Ethiopian microfinance sector, which caters to the needs of those who have been excluded from the formal financial sector. Microfinance has received increased importance in development because of its grand goal of extending credit to the poor to raise their economic and social well-being and improve the quality of their lives. Microfinance at present is moving towards a credit-plus era, covering savings and insurance functions as well. It thus helps in reducing financial exclusion and social segregation, alleviating poverty and, consequently, stimulating development. Ethiopian microfinance policy has been generally positive and developmental, but major regulatory and supervisory limitations are disappointing: the absolute prohibition on NGOs participating in microcredit, higher risks for depositors of microfinance institutions, the lack of credit information services with research and development, the unmet demand, and the risk of market failure due to over-regulation. Therefore, to overcome the limited reach of, and the serious problems typical in, informal financial intermediation, and to deal with the failure of formal banks to provide basic financial services to a significant portion of the country's population, more needs to be done on microfinance. Certain key regulatory and supervisory revisions need to be made to strengthen the Ethiopian microfinance sector so that it can provide the majority poor with access to a range of high-quality financial services that help them work their way out of poverty and the incapacity it imposes.

Keywords: micro-finance, micro-finance regulation and supervision, micro-finance institutions, financial access, social segregation, poverty alleviation, development, Ethiopia

Procedia PDF Downloads 372