Search results for: absolute liability
287 Tool for Analysing the Sensitivity and Tolerance of Mechatronic Systems in Matlab GUI
Authors: Bohuslava Juhasova, Martin Juhas, Renata Masarova, Zuzana Sutova
Abstract:
The article presents a Matlab GUI tool designed to analyse the sensitivity and tolerance of a mechatronic system. In the analysed system, torque is transferred from the drive to the load through a coupling containing flexible elements. Different methods of control system design are used. The classic form of feedback control is proposed using the Naslin method, the modulus optimum criterion and the inverse dynamics method. The cascade form of control is proposed based on a combination of the modulus optimum criterion and the symmetric optimum criterion. Sensitivity is analysed on the basis of the absolute and relative sensitivity of the system function to a change in the value of a chosen parameter of the mechatronic system, as well as of the control subsystem. Tolerance is analysed by determining the range of allowed relative changes of selected system parameters within the region of system stability. The tool allows analysis of the influence of torsional stiffness, torsional damping, the moments of inertia of the motor and the load, and the controller parameters. Sensitivity and tolerance are monitored in terms of the impact of a parameter change on the system step response and the logarithmic frequency-response characteristics. The Symbolic Math Toolbox was used to express the final form of the analysed system functions. Sensitivity and tolerance are represented graphically as a 2D graph of the sensitivity or tolerance of the system function and as 3D/2D static/interactive graphs of the step/frequency response.
Keywords: mechatronic systems, Matlab GUI, sensitivity, tolerance
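The relative-sensitivity computation described above can be sketched symbolically. The following Python/SymPy fragment (used here in place of Matlab's Symbolic Math Toolbox; the second-order system function and parameter names are illustrative assumptions, not the paper's model) evaluates the relative sensitivity S = (k/T)·∂T/∂k of a system function T(s) to a stiffness parameter k:

```python
import sympy as sp

# Hypothetical second-order system function T(s) with inertia J,
# damping b and stiffness k (illustrative, not the paper's model)
s, k, b, J = sp.symbols('s k b J', positive=True)
T = k / (J * s**2 + b * s + k)

# Relative sensitivity of T to the stiffness k: S = (k/T) * dT/dk
S_rel = sp.simplify((k / T) * sp.diff(T, k))
print(S_rel)
```

Substituting b or J for k in the same expression gives the corresponding damping and inertia sensitivities.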
Procedia PDF Downloads 433
286 The Inversion of Helical Twist Sense in Liquid Crystal by Spectroscopy Methods
Authors: Anna Drzewicz, Marzena Tykarska
Abstract:
Chiral liquid crystal phases form a helicoidal structure, which is characterized by the helical pitch and the helical twist sense. In the anticlinic smectic phase with antiferroelectric properties, three types of temperature dependence of the helix have been obtained: helical pitch increasing with temperature with a right-handed helix, helical pitch decreasing with temperature with a left-handed helix, and the inversion of both. The change of helical twist sense may be observed during the transition from one liquid crystal phase to another, or within one phase of the same substance. According to the theory of Gray and McDonnell, the helical handedness depends on the absolute configuration of the asymmetric carbon atom and its position relative to the rigid core of the molecule. However, this theory does not explain the inversion of the helical twist sense. It is supposed that the inversion may be caused by the presence of different conformers with opposite handedness, whose concentrations may change with temperature. In this work, the inversion of the helical twist sense in chiral liquid crystals differing in the length of the alkyl chain, in the substitution of the benzene ring with fluorine atoms, and in the type of helix handedness was tested by vibrational spectroscopy (infrared and Raman spectroscopy) and by nuclear magnetic resonance spectroscopy. The results obtained from vibrational spectroscopy confirm the presence of different conformers. Moreover, the analysis of nuclear magnetic resonance spectra is very useful for checking in which structural fragments the conformational changes are important for the change of the helical twist sense.
Keywords: helical twist sense, liquid crystals, nuclear magnetic resonance spectroscopy, vibrational spectroscopy
Procedia PDF Downloads 282
285 Mechanism of pH Sensitive Flocculation for Organic Load and Colour Reduction in Landfill Leachate
Authors: Brayan Daniel Riascos Arteaga, Carlos Costa Perez
Abstract:
Landfill leachate contains an important fraction of humic substances, mainly humic acids (HAs), which often account for more than half of the COD value, especially in liquids derived from composting of the organic fraction of solid wastes. We propose in this article a new method of pH-sensitive flocculation for COD and colour reduction in landfill leachate based on the chemical properties of HAs. Landfill leachate with a high content of humic acids can be efficiently treated by pH-sensitive flocculation at pH 2.0, reducing the COD value by 86.1% and colour by 84.7%. The mechanism of pH-sensitive flocculation is based on protonation, first of the phenolic groups and later of the carboxylic acid groups in the HA molecules, resulting in a reduction of the Zeta potential value. At pH above neutrality, the carboxylic acid and phenolic groups are ionized and the Zeta potential increases in absolute value, keeping the HAs in suspension as colloids and obstructing flocculation. Ionized anionic groups (carboxylates) can interact electrostatically with the cations abundant in leachate (site binding), helping to keep HAs in suspension. A simulation of this situation and an idealized visualization of the Zeta potential behaviour are described in the paper, and aggregation of molecules by hydrogen bonds is proposed as the main step in the separation of HAs from leachate and the reduction of the COD value of this complex liquid. CHNS analysis, FT-IR spectrometry and UV-VIS spectrophotometry show a chemical element content in the range of natural and commercial HAs, clear aromaticity, and the presence of carboxylic acid and phenolic groups in the precipitate from landfill leachate.
Keywords: landfill leachate, humic acids, COD, chemical treatment, flocculation
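The protonation argument above can be made quantitative with the Henderson-Hasselbalch relation. A minimal Python sketch (the pKa values are typical literature figures for humic-acid carboxylic and phenolic groups, assumed here for illustration) shows why both group types are almost fully protonated at pH 2.0:

```python
# Henderson-Hasselbalch: fraction of an acid group still protonated at a given pH
def protonated_fraction(pH: float, pKa: float) -> float:
    return 1.0 / (1.0 + 10.0 ** (pH - pKa))

# Assumed typical pKa values for humic-acid functional groups
pKa_carboxyl, pKa_phenol = 4.5, 9.5

for pH in (2.0, 7.0, 10.0):
    print(f"pH {pH}: carboxyl {protonated_fraction(pH, pKa_carboxyl):.3f} "
          f"protonated, phenol {protonated_fraction(pH, pKa_phenol):.3f} protonated")
```

At pH 2.0 both fractions are close to 1, the Zeta potential magnitude is low and flocculation proceeds; above neutrality the carboxylates ionize and the colloids stay in suspension, matching the mechanism described above.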
Procedia PDF Downloads 71
284 Optical Emission Studies of Laser Produced Lead Plasma: Measurements of Transition Probabilities of the 6P7S → 6P2 Transitions Array
Authors: Javed Iqbal, R. Ahmed, M. A. Baig
Abstract:
We present new data on the optical emission spectra of laser-produced lead plasma using a pulsed Nd:YAG laser at 1064 nm (pulse energy 400 mJ, pulse width 5 ns, 10 Hz repetition rate) in conjunction with a set of miniature spectrometers covering the spectral range from 200 nm to 720 nm. Well-resolved structure due to the 6p7s → 6p² transition array of neutral lead and a few multiplets of singly ionized lead have been observed. The electron temperatures have been calculated in the range (9000 - 10800) ± 500 K using four methods: two-line ratio, Boltzmann plot, Saha-Boltzmann plot and the Morrata method, whereas the electron number densities have been determined in the range (2.0 - 8.0) ± 0.6 × 10¹⁶ cm⁻³ using the Stark-broadened line profiles of neutral lead lines, singly ionized lead lines and the hydrogen Hα line. The full width at half maximum (FWHM) of a number of neutral and singly ionized lead lines has been extracted by Lorentzian fits to the experimentally observed line profiles. Furthermore, branching fractions have been deduced for eleven lines of the 6p7s → 6p² transition array in lead, and the absolute values of the transition probabilities have been calculated by combining the experimental branching fractions with the lifetimes of the excited levels. The new results are compared with the existing data, showing good agreement.
Keywords: LIBS, plasma parameters, transition probabilities, branching fractions, Stark width
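Of the four temperature diagnostics listed, the Boltzmann plot is the most common: ln(Iλ/gA) plotted against the upper-level energy E falls on a line of slope −1/(k_B·T_e). The sketch below uses invented line data purely to illustrate the fit; none of the numbers are the paper's measurements:

```python
import numpy as np

k_B = 8.617e-5  # Boltzmann constant in eV/K

# Invented emission-line data: upper-level energy E (eV), intensity I (a.u.),
# wavelength lam (nm), statistical weight g, transition probability A (1/s)
E   = np.array([4.375, 4.334, 5.712, 5.744])
I   = np.array([1200.0, 950.0, 180.0, 160.0])
lam = np.array([368.3, 405.8, 280.2, 287.3])
g   = np.array([3.0, 5.0, 3.0, 5.0])
A   = np.array([1.5e8, 8.9e7, 1.6e8, 1.0e8])

# Boltzmann plot: ln(I*lam/(g*A)) vs E has slope -1/(k_B * T_e)
y = np.log(I * lam / (g * A))
slope, intercept = np.polyfit(E, y, 1)
T_e = -1.0 / (k_B * slope)
print(f"Electron temperature ~ {T_e:.0f} K")
```

The same fit, applied to measured lead-line intensities, is what yields temperatures in the (9000 - 10800) K range quoted above.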
Procedia PDF Downloads 283
283 Integrating Machine Learning and Rule-Based Decision Models for Enhanced B2B Sales Forecasting and Customer Prioritization
Authors: Wenqi Liu, Reginald Bailey
Abstract:
This study explores an advanced approach to enhancing B2B sales forecasting by integrating machine learning models with a rule-based decision framework. The methodology begins with the development of a machine learning classification model to predict conversion likelihood, aiming to improve accuracy over traditional methods such as logistic regression. The classification model's effectiveness is measured using metrics such as accuracy, precision, recall, and F1 score, alongside a feature importance analysis to identify key predictors. Following this, a machine learning regression model is used to forecast sales value, with the objective of reducing the mean absolute error (MAE) compared to linear regression techniques. The regression model's performance is assessed using the MAE, root mean square error (RMSE), and R-squared metrics, emphasizing each feature's contribution to the prediction. To bridge the gap between predictive analytics and decision-making, a rule-based decision model is introduced that prioritizes customers based on predefined thresholds for conversion probability and predicted sales value. This approach significantly enhances customer prioritization and improves overall sales performance by increasing conversion rates and optimizing revenue generation. The findings suggest that this combined framework offers a practical, data-driven solution for sales teams, facilitating more strategic decision-making in B2B environments.
Keywords: sales forecasting, machine learning, rule-based decision model, customer prioritization, predictive analytics
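The two-model-plus-rules pipeline described above can be sketched as follows; the synthetic data, the random-forest choice and the threshold values are illustrative assumptions, since the abstract does not specify the exact models used:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic B2B lead features (illustrative stand-in for real CRM data)
X = rng.normal(size=(500, 4))
converted = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)
sales = np.exp(2.0 + 0.8 * X[:, 2] + rng.normal(scale=0.3, size=500))

# Stage 1: classification model for conversion likelihood
clf = RandomForestClassifier(random_state=0).fit(X, converted)
# Stage 2: regression model for expected sales value
reg = RandomForestRegressor(random_state=0).fit(X, sales)

# Stage 3: rule-based prioritization on predicted probability and value
p_conv = clf.predict_proba(X)[:, 1]
pred_val = reg.predict(X)
priority = np.where((p_conv > 0.7) & (pred_val > np.median(pred_val)), "high",
                    np.where(p_conv > 0.4, "medium", "low"))
print(dict(zip(*np.unique(priority, return_counts=True))))
```

The threshold pair (0.7 on probability, median on value) is the "predefined threshold" step; in practice these cut-offs would be tuned against business targets.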
Procedia PDF Downloads 17
282 A Study of Predicting Judgments on Causes of Online Privacy Invasions: Based on U.S. Judicial Cases
Authors: Minjung Park, Sangmi Chai, Myoung Jun Lee
Abstract:
Since there are growing concerns about online privacy, enterprises can become involved in various personal privacy infringement cases with legal consequences. For companies doing business online, it is important to pay extra attention to protecting users' privacy. If firms are aware of the possible consequences of online privacy invasion cases, they can more actively prevent future online privacy infringements. This study attempts to predict the probability of ruling types caused by various invasion cases under the U.S. Privacy Act. More specifically, this research explores online privacy invasion cases that resulted in guilty verdicts to identify the types of criminal punishments, such as penalties, imprisonment and probation, as well as compensation in civil cases. Based on 853 U.S. judicial cases related to data privacy, ranging from January 2000 to May 2016, this research examines the relationship between personal information infringement cases and adjudications. Upon analysis of 41,724 words extracted from the 853 legal cases, this study examined online users' privacy invasion cases to predict the probability of conviction for a firm as an offender under both criminal and civil law. This research specifically examines the link between a cause of privacy infringement and a judgment type, i.e., whether it leads to civil or criminal liability, in U.S. courts. The study applies network text analysis (NTA) for data analysis, which is regarded as a useful method to discover social trends embedded within texts. According to our results, certain online privacy infringement cases caused by online spamming and adware carry a high probability that firms will be found liable. Our research results provide meaningful insights to academia as well as industry. First, our study provides a new insight by applying Big Data analytics to legal cases so that the cause of invasions and the legal consequences can be predicted. Since there is little research applying big data analytics to the domain of law, specifically online privacy, this study suggests a new area that future studies can explore. Second, by adopting the NTA method, this study reflects social influences on the analysis of judicial cases, such as the development of privacy-invading technologies and changes in users' level of awareness of online privacy. Our research results indicate that firms need to improve their technical and managerial systems to protect users' online privacy and avoid negative legal consequences.
Keywords: network text analysis, online privacy invasions, personal information infringements, predicting judgments
Procedia PDF Downloads 229
281 Embedded System of Signal Processing on FPGA: Underwater Application Architecture
Authors: Abdelkader Elhanaoui, Mhamed Hadji, Rachid Skouri, Said Agounad
Abstract:
The purpose of this paper is to study the phenomenon of acoustic scattering using a new method. Signal processing (the Fast Fourier Transform (FFT), the inverse Fast Fourier Transform (iFFT) and Bessel functions) is widely applied to obtain information with high accuracy, and is most commonly implemented on general-purpose processors. Our interest focused on the use of FPGAs (Field-Programmable Gate Arrays) in order to minimize the computational complexity of a single-processor architecture, accelerate the processing on the FPGA, and meet real-time and energy-efficiency requirements; general-purpose processors are not efficient for signal processing. We implemented the acoustic backscattered signal processing model on the Altera DE-SoC board and compared it to the Odroid XU4. By comparison, the computing latency of the Odroid XU4 and the FPGA is 60 seconds and 3 seconds, respectively: the detailed SoC FPGA-based system computes acoustic spectra up to 20 times faster than the Odroid XU4 implementation. The FPGA-based implementation of the processing algorithms achieves an absolute error of about 10⁻³. This study underlines the increasing importance of embedded systems in underwater acoustics, especially in non-destructive testing, where it is possible to obtain information related to the detection and characterization of submerged shells. We have thus achieved good experimental results in terms of real-time performance and energy efficiency.
Keywords: DE1 FPGA, acoustic scattering, form function, signal processing, non-destructive testing
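The core of the processing chain is the FFT of the backscattered signal. A host-side reference model of that step (plain NumPy standing in for the FPGA pipeline; the echo frequencies and sampling rate are invented for illustration) looks like:

```python
import numpy as np

fs = 1.0e6                       # sampling rate in Hz (assumed)
t = np.arange(4096) / fs

# Illustrative backscattered signal: two echo components plus noise
sig = (np.sin(2 * np.pi * 50e3 * t)
       + 0.5 * np.sin(2 * np.pi * 120e3 * t)
       + 0.1 * np.random.default_rng(1).normal(size=t.size))

# Magnitude spectrum via real FFT, as in the processing chain above
spec = np.abs(np.fft.rfft(sig))
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
peak = freqs[np.argmax(spec)]
print(f"Dominant echo component ~ {peak / 1e3:.1f} kHz")
```

A floating-point model like this is typically used as the golden reference against which the fixed-point FPGA output is compared, which is how an absolute error of about 10⁻³ would be established.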
Procedia PDF Downloads 78
280 Dual Challenges in Host State Regulation on Transnational Corporate Damages: China's Dilemma and Breakthrough
Authors: Xinchao Liu
Abstract:
Regulating environmental and human rights damages caused by transnational corporations (TNCs) in host States is a core issue in the business and human rights discourse. In current regulatory practice, host States, which are territorially based and should bear primary responsibility for regulation, face dual challenges at both the domestic and international levels, leading to their continued marginalization. Specifically, host States as regulators of TNC damages are constrained domestically by the limits of territorial jurisdiction and internationally by the neoliberal international economic order exemplified by investment protection mechanisms. Taking China as an example, it currently lacks a comprehensive system for regulating TNC damages; while the domestic constraints manifest as the marginalization of judicial regulation, the absence of a corporate duty of care, and the inadequate effectiveness of extraterritorial regulation, the international constraints are reflected in the absence of foreign investor obligations in investment agreements and the asymmetry of dispute resolution clauses, challenging regulatory sovereignty. As China continues to advance its policy of high-quality opening up, the risks of negative externalities from transnational capital will continue to increase, necessitating a focus on building and perfecting a mechanism for regulating TNC damages within the framework of international law. To address the domestic constraints, it is essential to clarify the division of regulatory responsibilities between judicial and administrative bodies, promote the normalization of judicial regulation, and enhance judicial oversight of governmental settlements. Improving the choice-of-law rules for cross-border torts and the standards for parent company liability for omissions, and enhancing extraterritorial judicial effectiveness through transnational judicial dialogue and cooperation mechanisms, are also crucial. To counteract the international constraints, specifying investor obligations in investment treaties and designing symmetrical dispute resolution clauses are indispensable to eliminate regulatory chill. Additionally, actively advancing the implementation of TNC obligations in business and human rights treaty negotiations will lay an international legal foundation for the regulatory sovereignty of host States.
Keywords: transnational corporate damages, home state litigation, optimization limit, investor-state dispute settlement
Procedia PDF Downloads 8
279 Optical Variability of Faint Quasars
Authors: Kassa Endalamaw Rewnu
Abstract:
The variability properties of a quasar sample, spectroscopically complete to magnitude J = 22.0, are investigated on a time baseline of 2 years using three different photometric bands (U, J and F). The original sample was obtained using a combination of different selection criteria: colors, slitless spectroscopy and variability, based on a time baseline of 1 yr. The main goals of this work are twofold: first, to derive the percentage of variable quasars on a relatively short time baseline; secondly, to search for new quasar candidates missed by the other selection criteria and thus to estimate the completeness of the spectroscopic sample. In order to achieve these goals, we extracted all the candidate variable objects from a sample of about 1800 stellar or quasi-stellar objects with limiting magnitude J = 22.50 over an area of about 0.50 deg². We find that > 65% of all the objects selected as possible variables are either confirmed quasars or quasar candidates on the basis of their colors. This percentage increases even further if we exclude from our lists of variable candidates a number of objects equal to that expected on the basis of 'contamination' induced by our photometric errors. The percentage of variable quasars in the spectroscopic sample is also high, reaching about 50%. On the basis of these results, we estimate that the incompleteness of the original spectroscopic sample is < 12%. We conclude that variability analysis of data with small photometric errors can be successfully used as an efficient and independent (or at least auxiliary) selection method in quasar surveys, even when the time baseline is relatively short. Finally, when corrected for the different intrinsic time lags corresponding to a fixed observed time baseline, our data do not show a statistically significant correlation between variability and either absolute luminosity or redshift.
Keywords: nuclear activity, galaxies, active quasars, variability
Procedia PDF Downloads 80
278 Forecasting Equity Premium Out-of-Sample with Sophisticated Regression Training Techniques
Authors: Jonathan Iworiso
Abstract:
Forecasting the equity premium out-of-sample is a major concern for researchers in finance and emerging markets. The quest for a superior model that can forecast the equity premium with significant economic gains has resulted in several controversies among scholars over the choice of variables and suitable techniques. This research focuses mainly on the application of Regression Training (RT) techniques to forecast the monthly equity premium out-of-sample recursively with an expanding-window method. A broad category of sophisticated regression models involving model complexity was employed. The RT models, which include Ridge, Forward-Backward (FOBA) Ridge, the Least Absolute Shrinkage and Selection Operator (LASSO), Relaxed LASSO, Elastic Net, and Least Angle Regression, were trained and used to forecast the equity premium out-of-sample. The empirical investigation of the RT models demonstrates significant evidence of equity premium predictability, both statistically and economically, relative to the historical-average benchmark, delivering significant utility gains. The forecasts provide meaningful economic information on mean-variance portfolio investment for investors who are timing the market to earn future gains at minimal risk. Thus, the forecasting models appear to benefit an investor who optimally reallocates a monthly portfolio between equities and risk-free treasury bills using equity premium forecasts at minimal risk.
Keywords: regression training, out-of-sample forecasts, expanding window, statistical predictability, economic significance, utility gains
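A minimal version of the recursive expanding-window exercise can be sketched with scikit-learn's LASSO; the synthetic predictors, the window split and the penalty value are assumptions for illustration and do not reproduce the study's data:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(42)

# Synthetic monthly predictors and equity premium (illustrative only)
n, p = 240, 8
X = rng.normal(size=(n, p))
y = 0.5 * X[:, 0] - 0.4 * X[:, 1] + rng.normal(scale=0.5, size=n)

# Recursive out-of-sample forecasts with an expanding window
start = 120
forecasts, actuals, bench = [], [], []
for t in range(start, n):
    model = Lasso(alpha=0.05).fit(X[:t], y[:t])   # refit on all data to date
    forecasts.append(model.predict(X[t:t + 1])[0])
    bench.append(y[:t].mean())                    # historical-average benchmark
    actuals.append(y[t])

mae_model = float(np.mean(np.abs(np.array(actuals) - np.array(forecasts))))
mae_bench = float(np.mean(np.abs(np.array(actuals) - np.array(bench))))
print(mae_model, mae_bench)
```

Each month the model is refit on all data up to that month (the expanding window) and compared against the prevailing-mean benchmark, mirroring the recursive out-of-sample design described above.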
Procedia PDF Downloads 107
277 A Two-Stage Bayesian Variable Selection Method with the Extension of Lasso for Geo-Referenced Data
Authors: Georgiana Onicescu, Yuqian Shen
Abstract:
Due to the complex nature of geo-referenced data, multicollinearity among the risk factors in public health spatial studies is a commonly encountered issue; it leads to low parameter estimation accuracy because it inflates the variance in the regression analysis. To address this issue, we propose a two-stage variable selection method that extends the least absolute shrinkage and selection operator (Lasso) to the Bayesian spatial setting, investigating the impact of risk factors on health outcomes. Specifically, in stage I, we perform variable selection using the Bayesian Lasso and several other variable selection approaches. Then, in stage II, we perform model selection with only the variables selected in stage I and again compare the methods. To evaluate the performance of the two-stage variable selection methods, we conducted a simulation study with different distributions for the risk factors, using geo-referenced count data as the outcome and Michigan as the research region. We considered the cases in which all candidate risk factors are independently normally distributed, or follow a multivariate normal distribution with different correlation levels. Two other Bayesian variable selection methods, a binary indicator and the combination of a binary indicator and the Lasso, were considered and compared as alternative methods. The simulation results indicate that the proposed two-stage Bayesian Lasso variable selection method has the best performance for both the independent and dependent cases considered. When compared with the one-stage approach and the two alternative methods, the two-stage Bayesian Lasso approach provides the highest estimation accuracy in all scenarios considered.
Keywords: Lasso, Bayesian analysis, spatial analysis, variable selection
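The two-stage idea can be illustrated with a frequentist stand-in (scikit-learn's Lasso followed by a least-squares refit); the Bayesian spatial machinery of the paper is beyond a short sketch, and the data below are simulated with correlated predictors merely to mimic the multicollinearity setting:

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(7)

# Correlated risk factors to mimic multicollinearity (illustrative data)
n, p = 300, 10
cov = 0.6 * np.ones((p, p)) + 0.4 * np.eye(p)
X = rng.multivariate_normal(np.zeros(p), cov, size=n)
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(size=n)

# Stage I: variable selection via the Lasso
sel = np.flatnonzero(Lasso(alpha=0.1).fit(X, y).coef_ != 0)

# Stage II: refit a model using only the variables selected in stage I
stage2 = LinearRegression().fit(X[:, sel], y)
print("selected:", sel, "coefficients:", stage2.coef_.round(2))
```

Stage II refits without the shrinkage penalty, so the coefficients of the truly active variables are estimated with less bias than the stage-I Lasso estimates.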
Procedia PDF Downloads 143
276 InP Nanocrystals Core and Surface Electronic Structure from Ab Initio Calculations
Authors: Hamad R. Jappor, Zeyad Adnan Saleh, Mudar A. Abdulsattar
Abstract:
The ab initio restricted Hartree-Fock method is used to simulate the electronic structure of indium phosphide (InP) nanocrystals (NCs) (216-738 atoms) with sizes ranging up to about 2.5 nm in diameter. The calculations are divided into two parts, surface and core. The oxygenated (001)-(1×1) facet, which expands with larger nanocrystal sizes, is investigated to determine the role of the surface in the electronic structure of the nanocrystals. Results show that the lattice constant and ionicity of the core part decrease as the nanocrystals grow in size. The smallest investigated nanocrystal is 1.6% larger in lattice constant and 131.05% larger in ionicity than the converged value of the largest investigated nanocrystal. Increasing nanocrystal size also results in an increase of the core cohesive energy (absolute value), an increase of the core energy gap, and an increase of the core valence bandwidth. The surface states are found to be mostly non-degenerate because of the effects of the surface discontinuity and the oxygen atoms. The valence bandwidth is wider on the surface due to splitting and the oxygen atoms. The method also shows fluctuations in the converged energy gap, valence bandwidth and cohesive energy of the core part of the nanocrystals due to shape variation. The present work suggests adding ionicity and lattice constant to the quantities affected by the quantum confinement phenomenon. The present method has threefold applicability: it can be used to approach the electronic structure of bulk crystals, surfaces, and nanocrystals.
Keywords: InP, nanocrystals core, ionicity, Hartree-Fock method, large unit cell
Procedia PDF Downloads 399
275 Quantifying Meaning in Biological Systems
Authors: Richard L. Summers
Abstract:
The advanced computational analysis of biological systems is becoming increasingly dependent upon an understanding of the information-theoretic structure of the materials, energy and interactive processes that comprise those systems. The stability and survival of these living systems are fundamentally contingent upon their ability to acquire and process the meaning of information concerning the physical state of their biological continuum (biocontinuum). The drive for adaptive reconciliation of a divergence from steady state within this biocontinuum can be described by an information-metric-based formulation of the process of actionable knowledge acquisition, which incorporates the axiomatic inference of Kullback-Leibler information minimization driven by survival replicator dynamics. If the mathematical expression of this process is the Lagrangian integrand for any change within the biocontinuum, then it can also be considered an action functional for the living system. In the direct method of Lyapunov, such a summarizing mathematical formulation of global system behavior, based on the driving forces of energy currents and constraints within the system, can serve as a platform for the analysis of stability. As the system evolves in time in response to biocontinuum perturbations, the summarizing function conveys information about its overall stability. This stability information portends survival and therefore has absolute existential meaning for the living system. The first derivative of the Lyapunov energy information function will have a negative trajectory toward the system's steady state if the driving force is dissipating; by contrast, system instability leading to system dissolution will have a positive trajectory. The direction and magnitude of the trajectory vector then serve as a quantifiable signature of the meaning associated with the living system's stability information, homeostasis and survival potential.
Keywords: meaning, information, Lyapunov, living systems
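The Lyapunov trajectory argument can be illustrated numerically. In the sketch below (a generic stable linear system, chosen only to show the decreasing "energy" trajectory; nothing here is specific to the biocontinuum model) the quadratic function V(x) = xᵀPx decreases along solutions of a dissipative system:

```python
import numpy as np

# A stable linear system dx/dt = A x (illustrative stand-in for a dissipative system)
A = np.array([[-1.0, 0.5],
              [0.0, -2.0]])
P = np.eye(2)  # V(x) = x.T P x; P = I is a valid Lyapunov matrix for this A

x = np.array([1.0, -1.0])
dt = 1e-3
V = [float(x @ P @ x)]
for _ in range(5000):
    x = x + dt * (A @ x)        # forward-Euler step
    V.append(float(x @ P @ x))

# For a dissipative system the Lyapunov 'energy' trajectory is negative-going
print(V[0], "->", V[-1])
```

The monotone decrease of V toward the steady state is the negative-trajectory signature described above; an unstable system (positive real eigenvalues of A) would produce an increasing V instead.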
Procedia PDF Downloads 131
274 Electricity Price Forecasting: A Comparative Analysis with Shallow-ANN and DNN
Authors: Fazıl Gökgöz, Fahrettin Filiz
Abstract:
Electricity prices have sophisticated features such as high volatility, nonlinearity and high frequency that make forecasting quite difficult. The electricity price has a volatile yet non-random character, so it is possible to identify patterns in the historical data. Intelligent decision-making requires accurate price forecasts for market traders, retailers, and generation companies. So far, many shallow-ANN (artificial neural network) models have been published in the literature and have shown adequate forecasting results. In recent years, neural networks with many hidden layers, referred to as DNNs (deep neural networks), have come into use in the machine learning community. The goal of this study is to investigate the electricity price forecasting performance of shallow-ANN and DNN models for the Turkish day-ahead electricity market. The forecasting accuracy of the models has been evaluated with publicly available data from the Turkish day-ahead electricity market. Historical load, price and temperature data are used as input variables for the models. The data set includes power consumption measurements gathered between January 2016 and December 2017 with one-hour resolution. Forecasting studies have therefore been carried out comparatively with shallow-ANN and DNN models for the Turkish electricity market over this period. The main contribution of this study is the investigation of different shallow-ANN and DNN models in the field of electricity price forecasting. All models are compared with regard to their MAE (Mean Absolute Error) and MSE (Mean Square Error) results. The DNN models give better forecasting performance than the shallow-ANNs; the best five MAE results for the DNN models are 0.346, 0.372, 0.392, 0.402 and 0.409.
Keywords: deep learning, artificial neural networks, energy price forecasting, Turkey
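A toy version of the shallow-versus-deep comparison can be sketched with scikit-learn MLPs; the synthetic load/temperature/price data, the layer sizes and the train/test split below are all assumptions for illustration, not the study's setup:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(3)

# Synthetic hourly data: price driven by load and temperature (illustrative)
n = 2000
load = rng.uniform(20.0, 40.0, n)          # assumed load range
temp = rng.uniform(-5.0, 35.0, n)          # assumed temperature range, deg C
price = 2.0 * load + 0.05 * (temp - 15.0) ** 2 + rng.normal(scale=2.0, size=n)
X = np.column_stack([load, temp])

X_tr, X_te, y_tr, y_te = X[:1600], X[1600:], price[:1600], price[1600:]

# Shallow ANN: one hidden layer; DNN: several hidden layers
shallow = make_pipeline(StandardScaler(),
                        MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                                     random_state=0)).fit(X_tr, y_tr)
deep = make_pipeline(StandardScaler(),
                     MLPRegressor(hidden_layer_sizes=(32, 32, 16), max_iter=2000,
                                  random_state=0)).fit(X_tr, y_tr)

mae_shallow = mean_absolute_error(y_te, shallow.predict(X_te))
mae_deep = mean_absolute_error(y_te, deep.predict(X_te))
print(mae_shallow, mae_deep)
```

On a toy problem this small, depth is not guaranteed to help; the study's conclusion that DNNs win applies to its much richer real market data.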
Procedia PDF Downloads 292
273 Complicating Representations of Domestic Violence Perpetration through a Qualitative Content Analysis and Socio-Ecological Approach
Authors: Charlotte Lucke
Abstract:
This study contributes to the body of literature that analyzes and complicates oversimplified and sensationalized representations of trauma and violence through a close examination and complication of representations of perpetrators of domestic violence in the mass media. The study determines the ways the media frames perpetrators of domestic violence through a qualitative content analysis and a socio-ecological approach to the perpetration of violence. While the qualitative analysis has not yet been carried out, on the basis of preliminary research this study hypothesizes that the media represents perpetrators through tropes such as the 'predator' or 'offender,' or as a demonized 'other.' It is necessary to expose and work through such stereotypes because cultivation theory demonstrates that the mass media shapes societal beliefs about and perceptions of the world. Thus, representations of domestic violence in the mass media can lead people to believe that perpetrators of violence are mere animals or criminals, and to overlook the trauma that many perpetrators have experienced. When the media represents perpetrators as pure evil, monsters, or absolute 'others,' it leaves out the complexities of what moves people to commit domestic violence. By analyzing and placing media representations of perpetrators in conversation with the socio-ecological approach to violence perpetration, this study complicates domestic violence stereotypes. The socio-ecological model allows researchers to consider the way the interplay between individuals and their families, friends, communities, and cultures can move people to act violently. Using this model, along with psychological and psychoanalytic approaches to the etiology of domestic violence, this paper argues that media stereotypes conceal the way people's experiences of trauma, along with community and cultural norms, perpetuate the cycle of systemic trauma and violence in the home.
Keywords: domestic violence, media images, representing trauma, theorising trauma
Procedia PDF Downloads 238
272 Issues of Accounting of Lease and Revenue according to International Financial Reporting Standards
Authors: Nadezhda Kvatashidze, Elena Kharabadze
Abstract:
It is broadly known that lease is a flexible means of funding enterprises. Lease reduces the risk related to access and possession of assets, as well as obtainment of funding. Therefore, it is important to refine lease accounting. The lease accounting regulations under the applicable standard (International Accounting Standards 17) make concealment of liabilities possible. As a result, the information users get inaccurate and incomprehensive information and have to resort to an additional assessment of the off-balance sheet lease liabilities. In order to address the problem, the International Financial Reporting Standards Board decided to change the approach to lease accounting. With the deficiencies of the applicable standard taken into account, the new standard (IFRS 16 ‘Leases’) aims at supplying appropriate and fair lease-related information to the users. Save certain exclusions; the lessee is obliged to recognize all the lease agreements in its financial report. The approach was determined by the fact that under the lease agreement, rights and obligations arise by way of assets and liabilities. Immediately upon conclusion of the lease agreement, the lessee takes an asset into its disposal and assumes the obligation to effect the lease-related payments in order to meet the recognition criteria defined by the Conceptual Framework for Financial Reporting. The payments are to be entered into the financial report. The new lease accounting standard secures supply of quality and comparable information to the financial information users. The International Accounting Standards Board and the US Financial Accounting Standards Board jointly developed IFRS 15: ‘Revenue from Contracts with Customers’. 
The standard establishes detailed practical criteria for revenue recognition, such as identification of the performance obligations in the contract, determination of the transaction price and its components (in particular variable consideration and other important components), and the passage of control over the asset to the customer. IFRS 15: ‘Revenue from Contracts with Customers’ is very similar to the relevant US standards and includes requirements that are more specific and consistent than those of the standards previously in place. The new standard will change recognition terms and techniques in industries such as construction, telecommunications (mobile and cable networks), licensing (media, science, franchising), real property, software, etc.
Keywords: assessment of the lease assets and liabilities, contractual liability, division of contract, identification of contracts, contract price, lease identification, lease liabilities, off-balance sheet, transaction value
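The on-balance-sheet recognition of a lease liability described above amounts to a present-value computation over the remaining lease payments. The sketch below is illustrative only: the payment schedule, discount rate and function name are assumptions for the example, and the simple fixed annuity ignores the exemptions, variable payments and remeasurements that IFRS 16 actually prescribes.

```python
# Hedged sketch: initial measurement of a lease liability under IFRS 16
# as the present value of the remaining lease payments.
# The payment amount and discount rate below are illustrative only.

def lease_liability(annual_payment: float, rate: float, years: int) -> float:
    """Present value of `years` end-of-period payments discounted at `rate`."""
    return sum(annual_payment / (1 + rate) ** t for t in range(1, years + 1))

# Example: 10,000 per year for 5 years at a 6% incremental borrowing rate.
liability = lease_liability(10_000, 0.06, 5)
print(round(liability, 2))  # 42123.64
```

Under this sketch, the lessee would initially recognize a lease liability (and, before adjustments, a right-of-use asset) of roughly 42,123.64 rather than leaving the commitment off the balance sheet.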
Procedia PDF Downloads 319
271 Film Censorship and Female Chastity: Exploring State's Discourses and Patriarchal Values in Reconstructing Chinese Film Stardom of Tang Wei
Authors: Xinchen Zhu
Abstract:
The rapid fame of the renowned female film star Tang Wei has made her a typical subject (or object) entangled with sensitive issues involving the official ideology, sexuality, and patriarchal values of contemporary China. In 2008, the official ban on Tang Wei triggered a wave of debates concerning state power and censorship, actors’ rights, sexual ethics, and feminism in the public sphere. Her ban implies that Chinese film censorship acts as a key factor in reconstructing Chinese film stardom. Following the ban, sensational media texts re-interpreting the official discourses also functioned as a crucial vehicle in reconstructing Tang’s female image. The case study of Tang’s film stardom therefore allows us to further explore how female stardom has been entangled with issues involving official ideology, female sexual ethics, and patriarchal values in contemporary China. This paper argues that Chinese female film stars shoulder the responsibility of delivering film performances that conform to the official male-dominated values. However, with the development of the Internet, the state no longer retains absolute control over the new venues. The netizens’ discussion of her ban reshaped Tang’s image as a victim and scapegoat under the unfair oppression of the official authority. Additionally, this paper argues that, like the State’s discourse, netizens’ discourse did not reject patriarchal values and in turn emphasized Tang Wei’s female chastity.
Keywords: film censorship, Chinese female film stardom, party-state’s power, national discourses, Tang Wei
Procedia PDF Downloads 169
270 Role of Hyperbaric Oxygen Therapy in Management of Diabetic Foot
Authors: Magdy Al Shourbagi
Abstract:
Diabetes mellitus is the commonest cause of neuropathy. The common pattern is a distal symmetrical sensory polyneuropathy associated with autonomic disturbances; less often, diabetes mellitus is responsible for a focal or multifocal neuropathy. The common causes of non-healing of the diabetic foot are infection and ischemia. Diabetes mellitus is associated with defective cellular and humoral immunity: in particular, decreased phagocytosis, decreased chemotaxis, impaired bacterial killing and abnormal lymphocytic function, resulting in a reduced inflammatory reaction and defective wound healing. Hyperbaric oxygen therapy is defined by the Undersea and Hyperbaric Medical Society as a treatment in which a patient intermittently breathes 100% oxygen while the treatment chamber is pressurized to a pressure greater than sea level (1 atmosphere absolute). The pressure increase may be applied in mono-place (single person) or multi-place chambers. Multi-place chambers are pressurized with air, with oxygen given via face mask or endotracheal tube, while mono-place chambers are pressurized with oxygen. Oxygen plays an important role in the physiology of wound healing, and hyperbaric oxygen therapy can raise tissue oxygen tensions to levels where wound healing can be expected. HBOT increases the killing ability of leucocytes; it is also lethal to certain anaerobic bacteria and inhibits toxin formation in many other anaerobes. Multiple anecdotal reports and studies of HBO therapy in diabetic patients indicate that HBO can be an effective adjunct therapy in the management of diabetic foot wounds and is associated with better functional outcomes.
Keywords: hyperbaric oxygen therapy, diabetic foot, neuropathy, multi-place chambers
Procedia PDF Downloads 290
269 Outcome Analysis of Surgical and Nonsurgical Treatment on Indicated Operative Chronic Subdural Hematoma: Serial Case in Cipto Mangunkusumo Hospital Indonesia
Authors: Novie Nuraini, Sari Hanifa, Yetty Ramli
Abstract:
Chronic subdural hematoma (cSDH) is a common condition after head trauma. Although the thickness of a cSDH plays an important role in the decision to perform surgery, the size limit is not absolute. In this serial case report, we evaluate three cases of cSDH with an indication for surgery on account of neurologic deficits and neuroimaging findings of subfalcine herniation of more than 0.5 cm and hematoma thickness of more than one cm. In the first case, the patient underwent hematoma evacuation; in the second and third cases, nonsurgical treatment was given because the patients and their families refused the operation. Conservative treatment consisted of bed rest and mannitol. Serial radiologic evaluation was done whenever a worsening condition was found, and radiologic examination was repeated two weeks after treatment. In this serial case report, the first and second cases had a good outcome. In the third case there was a worsening condition; this patient had comorbid type 2 diabetes mellitus, pneumonia and chronic kidney disease. Conservative treatments such as bed rest, corticosteroids, mannitol or other hyperosmolar agents give a good outcome in patients without neurologic deficits, with a small hematoma, and/or without comorbid disease. Hematoma evacuation is the best choice of treatment for cSDH with neurologic deficits; nevertheless, there are conditions in which the surgical procedure cannot be performed. Serial radiologic examination is needed after two weeks to evaluate the treatment, or earlier if there is any worsening condition.
Keywords: chronic subdural hematoma, traumatic brain injury, surgical treatment, nonsurgical treatment, outcome
Procedia PDF Downloads 332
268 Analysis of Delays during Initial Phase of Construction Projects and Mitigation Measures
Authors: Sunaitan Al Mutairi
Abstract:
A perfect start is a key factor for project completion on time. This study examined the effects of delayed mobilization of resources during the initial phases of a project. The paper mainly highlights the identification and categorization of all delays during the initial construction phase and their root cause analysis, with corrective/control measures, for Kuwait Oil Company oil and gas projects. A considerable percentage of the delays identified during project execution (from contract award to the end of the defects liability period) were attributed to mobilization/preliminary activity delays. Data analysis demonstrated a significant increase in average project delay during the last five years compared to the previous period. Contractors had delays/issues during the initial phase which resulted in slippages that progressively increased, leading to time and cost overruns. Delays/issues not mitigated on time during the initial phase had a very high impact on project completion. Data analysis of the delays over the past five years was carried out using trend charts, scatter plots, process maps, box plots, the relative importance index and Pareto charts. Construction of any project inside the Gathering Centers involves complex management skills related to workforce, materials, plant, machinery, new technologies, etc. Delay affects the completion of projects and compromises the quality, schedule and budget of project deliverables. Works executed as per plan during the initial phase and start-up duration of the construction activities resulted in only minor slippages/delays in project completion. In addition, a good working environment between client and contractor resulted in better project execution and management; in particular, contractors that were on the front foot in the execution of projects had minimal or no delays during the initial and construction periods. 
Hence, a perfect start during the initial construction phase has a positive influence on project success. Our research paper studies each type of delay, with real examples supported by statistical results, and suggests mitigation measures. Detailed analysis was carried out with all stakeholders, based on the impact and occurrence of delays, to arrive at a practical and effective outcome for mitigating the delays. The key to improvement is to have proper control measures and periodic evaluation/audit to ensure implementation of the mitigation measures. The focus of this research is to reduce the delays encountered during the initial construction phase of the project life cycle.
Keywords: construction activities delays, delay analysis for construction projects, mobilization delays, oil & gas projects delays
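The relative importance index named among the analysis tools can be computed directly from survey ratings. The formula RII = ΣW / (A × N), with W a respondent's rating of a delay cause, A the highest possible rating and N the number of respondents, is a standard form in construction-delay surveys; the ratings below are invented for illustration and are not the Kuwait Oil Company data.

```python
# Hedged sketch of the Relative Importance Index (RII) used to rank delay
# causes: RII = sum(W) / (A * N), with W the respondents' ratings on a
# 1..A scale. Causes with RII closer to 1 are more important.

def rii(ratings: list[int], highest_weight: int = 5) -> float:
    """Relative importance index of one delay cause (0 < RII <= 1)."""
    return sum(ratings) / (highest_weight * len(ratings))

# Illustrative survey: ten respondents rate 'late mobilization of resources'.
ratings = [5, 4, 5, 3, 4, 5, 4, 4, 5, 3]
print(round(rii(ratings), 2))  # 0.84
```

Computing the RII for each delay cause and sorting descending gives the ranking that a Pareto chart of the same data would visualize.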
Procedia PDF Downloads 318
267 Simplified Stress Gradient Method for Stress-Intensity Factor Determination
Authors: Jeries J. Abou-Hanna
Abstract:
Several techniques exist for determining stress-intensity factors in linear elastic fracture mechanics analysis. These techniques are based on analytical, numerical, and empirical approaches that have been well documented in the literature and in engineering handbooks. However, not all techniques share the same merit. Overly conservative results, numerical methods that require extensive computational effort, and methods requiring copious user parameters all hinder practicing engineers from efficiently evaluating stress-intensity factors. This paper investigates the prospects of reducing the complexity and the number of required variables in determining stress-intensity factors through the use of the stress gradient and a weighting function. The heart of this work resides in the understanding that fracture emanating from stress concentration locations cannot be explained by a single maximum stress value, but requires the use of a critical volume in which the crack exists. In order to assess the effectiveness of this technique, the study investigated components of different notch geometries and varying levels of stress gradient. Two forms of weighting function were employed to determine stress-intensity factors, and the results were compared to exact analytical methods. The results indicated that the “exponential” weighting function was superior to the “absolute” weighting function: an error band of ±10% was met for cases ranging from the steep stress gradient of a sharp V-notch to the less severe stress transitions of a large circular notch. The proposed method has been shown to be a worthwhile consideration.
Keywords: fracture mechanics, finite element method, stress intensity factor, stress gradient
Procedia PDF Downloads 135
266 Evaluation of Synthesis and Structure Elucidation of Some Benzimidazoles as Antimicrobial Agents
Authors: Ozlem Temiz Arpaci, Meryem Tasci, Hakan Goker
Abstract:
Benzimidazole, a structural isostere of the indole and purine nuclei that can interact with biopolymers, can be regarded as a master key scaffold. Benzimidazole compounds are therefore important fragments in medicinal chemistry because of their wide range of biological activities, including antimicrobial activity. We planned to synthesize some benzimidazole compounds as candidates for new antimicrobial drugs. In this study, we placed various heterocyclic rings at the second position and an amidine group at the fifth position of the benzimidazole ring and synthesized the compounds using a multi-step procedure. In the first step, 4-chloro-3-nitrobenzonitrile was reacted with cyclohexylamine in dimethylformamide. Imidate esters (compound 2) were then prepared with absolute ethanol saturated with dry HCl gas. These imidate esters, which were not very stable, were converted to compound 3 by passing ammonia gas through ethanol. Over a Pd/C catalyst, the nitro group was reduced to an amine group (compound 4). Finally, various aldehyde derivatives were reacted as their sodium metabisulfite addition products to give compounds 5-20. Melting points were determined on a Buchi B-540 melting point apparatus in open capillary tubes and are uncorrected. Elemental analyses were done on a Leco CHNS 932 elemental analyzer. 1H-NMR and 13C-NMR spectra were recorded on a Varian Mercury 400 MHz spectrometer using DMSO-d6, and mass spectra were acquired on a Waters Micromass ZQ using the ESI(+) method. The structures of the compounds were supported by spectral data: the 1H-NMR, 13C-NMR and mass spectra and the elemental analysis results agree with the proposed structures. Antimicrobial activity studies of the synthesized compounds are under investigation.
Keywords: benzimidazoles, synthesis, structure elucidation, antimicrobial
Procedia PDF Downloads 154
265 Forecasting Container Throughput: Using Aggregate or Terminal-Specific Data?
Authors: Gu Pang, Bartosz Gebka
Abstract:
We forecast the demand for total container throughput at Indonesia’s largest seaport, Tanjung Priok Port. We propose four univariate forecasting models: SARIMA, the additive Seasonal Holt-Winters, the multiplicative Seasonal Holt-Winters and the Vector Error Correction Model. Our aim is to provide insights into whether forecasting the total container throughput from the historical aggregated port throughput time series is superior to forecasts of the total throughput obtained by summing up the best individual terminal forecasts. We test the monthly port and individual terminal container throughput time series between 2003 and 2013. The performance of the forecasting models is evaluated based on the Mean Absolute Error and the Root Mean Squared Error. Our results show that the multiplicative Seasonal Holt-Winters model produces the most accurate forecasts of total container throughput, whereas SARIMA generates the worst in-sample model fit. The Vector Error Correction Model provides the best model fits and forecasts for individual terminals. We find that the total container throughput forecasts based on modelling the total throughput time series are consistently better than those obtained by combining the forecasts generated by terminal-specific models. The forecasts of total throughput until the end of 2018 provide an essential input to strategic decision-making on the expansion of the port’s capacity and the construction of new container terminals at Tanjung Priok Port.
Keywords: SARIMA, Seasonal Holt-Winters, Vector Error Correction Model, container throughput
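The two accuracy measures used to rank the models, Mean Absolute Error and Root Mean Squared Error, can be stated compactly. The sketch below shows only how they are computed from one model's forecasts; the throughput figures are invented for illustration and are not the Tanjung Priok series.

```python
import math

# Hedged sketch of the two forecast-accuracy measures used in the study:
# MAE  = mean of |actual - forecast|
# RMSE = sqrt(mean of (actual - forecast)^2)

def mae(actual: list[float], forecast: list[float]) -> float:
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

def rmse(actual: list[float], forecast: list[float]) -> float:
    return math.sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual))

# Invented monthly throughput (thousand TEU) and one model's forecasts.
actual   = [100.0, 120.0, 140.0, 130.0]
forecast = [110.0, 115.0, 150.0, 125.0]
print(mae(actual, forecast))            # 7.5
print(round(rmse(actual, forecast), 3)) # 7.906
```

Because RMSE squares the errors, it penalizes occasional large misses more heavily than MAE, which is why the study reports both when comparing the aggregate and terminal-specific models.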
Procedia PDF Downloads 504
264 Criminal Law and Internet of Things: Challenges and Threats
Authors: Celina Nowak
Abstract:
The development of information and communication technologies (ICT) and the consequent growth of cyberspace have become a reality of modern societies. The newest addition to this complex structure has been the Internet of Things (IoT), made possible by the appearance of smart devices. IoT creates a new dimension of the network, as communication is no longer the domain of humans alone but has also become possible between devices themselves. The possibility of communication between devices, devoid of human intervention and real-time supervision, has generated new societal and legal challenges, some of which may, and certainly will, eventually be connected to criminal law. Legislators at both the national and international level have been struggling to cope with this technologically evolving environment in order to address the new threats created by ICT. There are legal instruments on cybercrime, however imperfect and not of universal scope, some of them referring to specific types of prohibited behavior undertaken by criminals, such as money laundering or sex offences. However, criminal law seems largely unprepared for the challenges that may arise from the development of IoT. This is largely due to the fact that criminal law, at both the national and international level, is still based on the concept of perpetration of an offence by a human being. This is a traditional approach, historically and factually justified. Over time, some legal systems have developed or accepted the possibility of commission of an offence by a corporation, a legal person. This is in fact a legal fiction, as a legal person cannot commit an offence as such; it needs humans actually to behave in a certain way on its behalf. Yet legislators have come to understand that corporations have their own interests and may benefit from crime, and therefore need to be penalized. 
This realization, however, has not been welcomed by all states and still gives rise to doubts of an ontological and theoretical nature in many legal systems. For this reason, in many legislations the liability of legal persons for the commission of an offence has not been recognized as criminal responsibility. With technological progress and the growing use of IoT, discussions confined to the criminal responsibility of corporations seem rather inadequate. The world is now facing new challenges and new threats related to ‘smart’ things, which will eventually have to be addressed by legislators if they want, as they should, to keep up with the pace of technological and societal evolution. This will, however, require a re-evaluation and possibly a restructuring of the most fundamental notions of modern criminal law, such as perpetration, guilt, and participation in crime. It remains unclear at this point what norms and legal concepts will or may be established. The main goal of the research is to point out the challenges ahead of national and international legislators in this context and to attempt to formulate some indications as to the directions of change, bearing in mind the serious privacy and security threats related to the use of IoT.
Keywords: criminal law, internet of things, privacy, security threats
Procedia PDF Downloads 162
263 Evaluating Accuracy of Foetal Weight Estimation by Clinicians in Christian Medical College Hospital, India and Its Correlation to Actual Birth Weight: A Clinical Audit
Authors: Aarati Susan Mathew, Radhika Narendra Patel, Jiji Mathew
Abstract:
A retrospective study was conducted at Christian Medical College (CMC) Teaching Hospital, Vellore, India on 14th August 2014 to assess the accuracy of clinically estimated foetal weight upon labour admission. Estimating foetal weight is a crucial factor in assessing maternal and foetal complications during and after labour. The medical notes of ninety-eight postnatal women who fulfilled the inclusion criteria were studied to evaluate the correlation between the estimated foetal weight (EFW) recorded on admission and the actual birth weight (ABW) of the newborn after delivery. Data concerning maternal and foetal demographics were also noted. Accuracy was determined by the absolute percentage error and the proportion of estimates within 10% of ABW. Actual birth weights ranged from 950-4080g. A strong positive correlation between EFW and ABW (r=0.904) was noted. Term deliveries (≥40 weeks) in the normal weight range (2500-4000g) had a 59.5% estimation accuracy (n=74), compared with an estimation accuracy of 0% for pre-term deliveries (<40 weeks) (n=2). Among the term deliveries, macrosomic babies (>4000g) were underestimated by 25% (n=3) and low birthweight (LBW) babies were overestimated by 12.7% (n=9). The registrars who estimated foetal weight were accurate for babies within the normal weight range; however, the prediction of weight for macrosomic and LBW foetuses needs to improve. We have suggested the use of an amended version of Johnson’s formula for the Indian population, and a re-audit once it is implemented.
Keywords: clinical palpation, estimated foetal weight, pregnancy, India, Johnson’s formula
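The abstract does not reproduce Johnson's formula itself. One commonly quoted textbook form, given here as background since the authors' amended Indian-population version is not stated, estimates foetal weight from the symphysio-fundal height and the station of the presenting part; variants of the constant n exist, so treat the values below as an assumption rather than the audited formula.

```python
# Hedged sketch of Johnson's formula for estimated foetal weight (EFW).
# One commonly quoted version: EFW (g) = (fundal height in cm - n) * 155,
# with n = 12 when the vertex is above the ischial spines and n = 11 at or
# below them. Textbook variants of n exist; this is illustrative background,
# not the amended formula the authors propose.

def johnsons_efw(fundal_height_cm: float, vertex_above_spines: bool) -> float:
    n = 12 if vertex_above_spines else 11
    return (fundal_height_cm - n) * 155

# Illustrative case: fundal height 34 cm, vertex above the ischial spines.
print(johnsons_efw(34, vertex_above_spines=True))  # 3410.0
```

An amended version for a given population would typically adjust the constants n and 155 to fit local birth-weight data.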
Procedia PDF Downloads 363
262 Official Secrecy and Confidentiality in Tax Administration and Its Impact on Right to Access Information: Nigerian Perspectives
Authors: Kareem Adedokun
Abstract:
Official secrecy is one of the colonial vestiges which uphold the non-disclosure of essential information for public consumption. Information, though an indispensable tool in tax administration, is not to be divulged by any person in the official duty of the revenue agency. As a matter of fact, the Federal Inland Revenue Service (Establishment) Act, 2007 emphasizes secrecy and confidentiality in dealing with taxpayers’ documents, information, returns and assessments in a manner reminiscent of protecting taxpayers’ privacy in all situations, so much so that any violation attracts criminal sanction. However, Nigeria, being a democratic and egalitarian state, recently enacted the Freedom of Information Act, which ushered in openness in governance and takes away the confidentiality associated with official secrets laws. Official secrecy no doubt contradicts the philosophy of freedom of information, but maintaining a proper balance between the protected rights of taxpayers and the public interest which the revenue agency upholds is an uphill task. Adopting the doctrinal method, the author of this paper therefore probes into the real nature of the relationship between taxpayers and revenue agencies. It also interfaces official secrecy with the doctrine of freedom of information and consequently queries the retention of the non-disclosure clause under the Federal Inland Revenue Service (Establishment) Act (FIRSEA) 2007. The paper finds, among others, that the non-disclosure provision in tax statutes, particularly as provided for in FIRSEA, is not absolute; neither are the constitutional rights to and freedom of information. Unless the non-disclosure clause finds justification under a recognized exemption provided under the Freedom of Information Act, its retention is an antithesis to democratic ethos and beliefs, as it may hinder public interest and public order.
Keywords: confidentiality, information, official secrecy, tax administration
Procedia PDF Downloads 341
261 Physical Dynamics of Planet Earth and Their Implications for Global Climate Change and Mitigation: A Case Study of Sistan Plain, Balochistan Region, Southeastern Iran
Authors: Hamidoddin Yousefi, Ahmad Nikbakht
Abstract:
The Sistan Plain, situated in the Balochistan region of southeastern Iran, is renowned for its arid climatic conditions and prevailing winds that persist for approximately 120 days annually. The region faces multiple challenges, including drought susceptibility exacerbated by wind erosion, temperature fluctuations, and the influence of policies implemented by neighboring Afghanistan and Iran. This study focuses on investigating the characteristics of jet streams within the Sistan Plain and their implications for global climate change. Various models are employed to analyze convective mass fluxes, horizontal moisture transport, temporal variance, and the radiative-convective equilibrium of the atmosphere. Key considerations encompass the distribution of relative humidity, dry air, and absolute humidity. Moreover, the research aims to predict the interplay between jet streams and human activities, particularly regarding their environmental impacts and water scarcity. The investigation encompasses both local and global environmental consequences, drawing upon historical climate change data and comprehensive field research. The anticipated outcomes of this study hold substantial potential for mitigating global climate change and its associated environmental ramifications. By comprehending the dynamics of jet streams and their interconnections with human activities, effective strategies can be formulated to address water scarcity and minimize environmental degradation.
Keywords: Sistan Plain, Balochistan, Hamoun lake, climate change, jet streams, environmental impact, water scarcity, mitigation
Procedia PDF Downloads 73
260 The Structure and Composition of Plant Communities in Ajloun Forest Reserve in Jordan
Authors: Maher J. Tadros, Yaseen Ananbeh
Abstract:
The study area is located in the Ajloun Forest Reserve in the northern part of Jordan. It consists of Mediterranean hills dominated by open woodlands of oak and pistachio. The aims of the study were to investigate the positive and negative relationships between the local communities and the protected area and how they can affect long-term forest conservation. The main research objectives are to review the impact of establishing the Ajloun Forest Reserve on nature conservation and on the livelihoods of local communities around the reserve. The Ajloun Forest Reserve plays a fundamental role in the development of the Ajloun area: the existence of nature-conservation initiatives supports various socio-economic activities around the reserve that contribute towards the development of local communities. Part of this research was a survey of the impact of the Ajloun Forest Reserve on biodiversity composition; the biodiversity content, especially the vegetation, was also studied to determine the economic impacts of the reserve on its surroundings. Several methods were used to meet these objectives, including the point-centered quarter method, which involved randomly selecting 50 sampling points at the study site. The data collected from the field showed that the absolute density was 1031.24 plants per hectare. Density was highest for Quercus coccifera, with a relative density of 73.7%; this was followed by Arbutus andrachne with a relative density of 7.1%, Pistacia palaestina with a relative density of 10.5%, and Crataegus azarolus (82.5 p/ha) with a relative density of 5.1%.
Keywords: composition, density, frequency, importance value, point-centered quarter, structure, tree cover
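The density figures above can be derived from point-centered quarter data in the standard way. The sketch below assumes the classical Cottam-Curtis estimator, in which absolute density is the reciprocal of the squared mean point-to-plant distance; the distances and counts are invented for illustration and are not the Ajloun field data.

```python
# Hedged sketch of the point-centered quarter density estimators
# (Cottam & Curtis): absolute density = unit_area / mean_distance**2,
# relative density = species count / total count * 100.
# Distances and counts below are illustrative, not the Ajloun data.

def absolute_density_per_ha(distances_m: list[float]) -> float:
    """Plants per hectare from point-to-nearest-plant quarter distances (m)."""
    mean_d = sum(distances_m) / len(distances_m)
    return 10_000 / mean_d ** 2  # 10,000 m^2 per hectare

def relative_density(count: int, total: int) -> float:
    """Percentage share of one species in the total individuals sampled."""
    return 100 * count / total

# Four quarter distances (m) at one sample point, mean = 3.0 m.
print(round(absolute_density_per_ha([2.0, 4.0, 3.0, 3.0]), 1))  # 1111.1
print(relative_density(147, 200))  # 73.5
```

In a real survey, the mean distance is taken over all four quarters of all 50 sampling points before squaring, and per-species absolute densities are obtained by multiplying the total density by each relative density.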
Procedia PDF Downloads 278
259 A Review of Kinematics and Joint Load Forces in Total Knee Replacements Influencing Surgical Outcomes
Authors: Samira K. Al-Nasser, Siamak Noroozi, Roya Haratian, Adrian Harvey
Abstract:
A total knee replacement (TKR) is a surgical procedure necessary when there is severe pain and/or loss of function in the knee. Surgeons balance the load in the knee and the surrounding soft tissue by feeling the tension at different ranges of motion. This method can be unreliable and lead to early failure of the joint. The ideal kinematics and load distribution have been debated significantly, based on previous biomechanical studies of both TKRs and normal knees. Intraoperative sensors like VERASENSE and eLibra have provided a method for quantifying the load that indicates a balanced knee. A review of the literature on intraoperative sensors and the tension/stability of the knee was conducted. Studies currently debate the quantification of the load in the medial and lateral compartments specifically; however, most research reported that following a TKR the medial compartment was loaded more heavily than the lateral compartment. In several cases, these results were shown to increase the success of the surgery because they mimic the normal kinematics of the knee. In conclusion, most research agrees that an intercompartmental load differential of between 10 and 20 pounds, with the medial load higher than the lateral, and an absolute load of less than 70 pounds are ideal. However, further development of intraoperative sensors could improve the accuracy and the understanding of how load distribution affects surgical outcomes in a TKR. A reduction in early revision surgeries for TKRs would provide an improved quality of life for patients and reduce the economic burden placed on both the National Health Service (NHS) and the patient.
Keywords: intraoperative sensors, joint load forces, kinematics, load balancing, total knee replacement
Procedia PDF Downloads 136
258 Demonstration of Logical Inconsistency in the Discussion of the Problem of Evil
Authors: Mohammad Soltani Renani
Abstract:
The problem of evil is one of the heated battlegrounds between the idea of theism and its critics. Since time immemorial, and in various philosophical schools and religions, belief in an Omniscient, Omnipotent, and Absolutely Good God has been considered inconsistent with the existence of evil in the universe. Theist thinkers have generally adopted one of four ways of answering this problem: denying the existence of evil or considering it relative, the privation theory of evil, attributing evil to something other than God, or depicting an alternative picture of God. Defense or criticism of these alternative answers has given rise to an extensive and unending dispute. However, evaluating the presupposition and context upon/in which a question is raised precedes offering an answer to it. In the discussion of the problem of evil, the following point is of paramount importance for both parties, questioners and answerers alike: attributes such as knowledge, power, love, and good will can be supposed to be infinite only in the essence of the One to whom they are attributed and in the domain of potentiality; what is realized in the domain of actuality is always finite. Therefore, the infinite nature of the Divine Attributes and the realization of evil belong to two different spheres. The Divine Attributes are infinite (absolute) in the Divine Essence, but when they are created, each one becomes bounded by the others. This boundedness results from the attributes being surrounded by each other in the finite world of possibility, and it is in this limited world that evil appears. This incongruity causes the problem of evil to collapse from within: the place of the infinity of the Divine Attributes, in the words of Muslim mystics, lies in the Holiest Manifestation [Feyze Aqdas], while evil emerges in the Holy Manifestation, where the Divine Attributes become bounded by each other. 
This idea is neither a new answer to the problem of evil nor a defense of theism; rather, it reveals a logical inconsistency in the discussion of the problem of evil.
Keywords: problem of evil, infinity of divine attributes, boundedness of divine attributes, holiest manifestation, holy manifestation
Procedia PDF Downloads 146