Search results for: covering machine
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3486

2136 Probabilistic Crash Prediction and Prevention of Vehicle Crash

Authors: Lavanya Annadi, Fahimeh Jafari

Abstract:

Transportation brings immense benefits to society, but it also has its costs. These include the cost of infrastructure, personnel, and equipment, as well as the loss of life and property in road traffic accidents, travel delays due to congestion, and various indirect costs. Much research has been done to identify the factors that affect road accidents, such as road infrastructure, traffic, sociodemographic characteristics, land use, and the environment. The aim of this research is to use machine learning to predict the probability of vehicle crashes in the United States attributable to natural and structural causes, excluding behavioural causes such as overspeeding. These factors range from weather variables (weather conditions, precipitation, visibility, wind speed, wind direction, temperature, pressure, and humidity) to man-made road structure features (bumps, roundabouts, no-exit roads, turning loops, give-way junctions, etc.). Crash probabilities are divided into ten classes, and prediction is framed as a supervised multiclass classification problem. The study considers all crashes recorded across US states in data collected by the US government. To calculate the probability, the multinomial expected value was used and assigned as the crash probability classification label. We applied three classification models: multiclass logistic regression, random forest, and XGBoost. The numerical results show that XGBoost achieved a 75.2% accuracy rate, which indicates the role played by natural and structural factors in crashes. The paper also provides in-depth insights through exploratory data analysis.
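A minimal sketch of the multiclass setup described above (not the authors' code) is shown below; the input file, feature columns, and the integer-coded ten-class probability label are illustrative assumptions.

```python
# Hedged sketch of the supervised multiclass formulation described in the abstract.
# File name, feature columns, and the 0-9 "crash_prob_class" label are assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier

df = pd.read_csv("us_crashes.csv")  # hypothetical table of weather and road-structure factors
features = ["precipitation", "visibility", "wind_speed", "wind_direction", "temperature",
            "pressure", "humidity", "bump", "roundabout", "no_exit", "turning_loop"]
X, y = df[features], df["crash_prob_class"]  # y: one of ten probability classes (0-9)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

model = XGBClassifier(objective="multi:softprob", n_estimators=300,
                      max_depth=6, learning_rate=0.1)
model.fit(X_tr, y_tr)
print("test accuracy:", accuracy_score(y_te, model.predict(X_te)))
```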

Keywords: road safety, crash prediction, exploratory analysis, machine learning

Procedia PDF Downloads 113
2135 A Multilayer Perceptron Neural Network Model Optimized by Genetic Algorithm for Significant Wave Height Prediction

Authors: Luis C. Parra

Abstract:

Significant wave height prediction is an issue of great interest in the field of coastal activities because of the non-linear behavior of wave height and the resulting difficulty of prediction. This study presents a machine learning model to forecast the significant wave height recorded by the oceanographic wave-measuring buoys anchored at Mooloolaba, using Queensland Government data. Modeling was performed with a multilayer perceptron neural network optimized by a genetic algorithm (GA-MLP), with ReLU as the activation function of the MLPNN. The GA is in charge of optimizing the MLPNN hyperparameters (learning rate, hidden layers, neurons, and activation functions) and of wrapper feature selection for the window width size. Results are assessed using the Mean Square Error (MSE), Root Mean Square Error (RMSE), Mean Absolute Error (MAE), and Mean Absolute Percentage Error (MAPE). The GA-MLPNN algorithm was run with a population size of thirty individuals over eight generations to optimize the 5-step-ahead prediction, obtaining a performance of 0.00104 MSE, 0.03222 RMSE, 0.02338 MAE, and 0.71163% MAPE. The results of the analysis suggest that the MLPNN-GA model is effective in predicting significant wave height in a one-step forecast with distant time windows, presenting 0.00014 MSE, 0.01180 RMSE, 0.00912 MAE, and 0.52500% MAPE with a correlation factor of 0.99940. The GA-MLP algorithm was also compared with an ARIMA forecasting model and performed better on all performance criteria, validating the potential of this algorithm.
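A simplified, hand-rolled version of the GA-MLP idea is sketched below (thirty individuals, eight generations) purely as an illustration; the data file, the mutation scheme, and the restriction to a single hidden layer are assumptions rather than the authors' implementation.

```python
# Hedged sketch: a tiny GA tuning (hidden neurons, learning rate, window width)
# of an MLP regressor for one-step wave-height forecasting.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

series = np.loadtxt("mooloolaba_hs.txt")          # hypothetical Hs time series

def windowed(series, width):                      # wrapper-style window-width feature
    X = np.array([series[i:i + width] for i in range(len(series) - width)])
    return X, series[width:]

def fitness(genome):                              # lower MSE on a hold-out split is better
    hidden, lr, width = genome
    X, y = windowed(series, width)
    split = int(0.8 * len(X))
    model = MLPRegressor(hidden_layer_sizes=(hidden,), activation="relu",
                         learning_rate_init=lr, max_iter=500, random_state=0)
    model.fit(X[:split], y[:split])
    return mean_squared_error(y[split:], model.predict(X[split:]))

rng = np.random.default_rng(0)
pop = [(int(rng.integers(5, 50)), 10 ** rng.uniform(-4, -1), int(rng.integers(3, 24)))
       for _ in range(30)]                        # population of 30, as in the abstract

for generation in range(8):                       # eight generations
    parents = sorted(pop, key=fitness)[:10]       # keep the best third
    children = []
    for _ in range(20):
        h, lr, w = parents[rng.integers(len(parents))]
        children.append((max(5, h + int(rng.integers(-5, 6))),             # mutate neurons
                         float(np.clip(lr * rng.uniform(0.5, 1.5), 1e-4, 0.1)),  # mutate lr
                         max(3, w + int(rng.integers(-3, 4)))))             # mutate window
    pop = parents + children

best = min(pop, key=fitness)
print("best (neurons, learning rate, window width):", best)
```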

Keywords: significant wave height, machine learning optimization, multilayer perceptron neural networks, evolutionary algorithms

Procedia PDF Downloads 108
2134 Customized Temperature Sensors for Sustainable Home Appliances

Authors: Merve Yünlü, Nihat Kandemir, Aylin Ersoy

Abstract:

Temperature sensors are used in home appliances not only to monitor the basic functions of the machine but also to minimize energy consumption and ensure safe operation. In parallel with the development of smart home applications and IoT algorithms, these sensors produce important data such as the frequency of use of the machine and user preferences, and they compile data that are critical for diagnostic processes and fault detection throughout an appliance's operational lifespan. Commercially available thin-film resistive temperature sensors have a well-established manufacturing procedure that allows them to operate over a wide temperature range. However, these sensors are over-designed for white goods applications: their operating temperature range is between -70°C and 850°C, while home appliance applications only require a range between 23°C and 500°C. To ensure the operation of commercial sensors over this wide temperature range, a platinum coating of approximately 1-micron thickness is usually applied to the wafer. However, the use of platinum in the coating and the high coating thickness extend the sensor production process time and therefore increase sensor costs. In this study, an attempt was made to develop a low-cost temperature sensor design and production method that meets the technical requirements of white goods applications. For this purpose, a custom design was made, and the design parameters (length, width, trim points, and thin-film deposition thickness) were optimized using statistical methods to achieve the desired resistivity value. To develop the thin-film resistive temperature sensors, a single-side-polished sapphire wafer was used. To enhance adhesion and insulation, a 100 nm silicon dioxide layer was deposited by the inductively coupled plasma chemical vapor deposition technique. The lithography process was performed with a direct laser writer, and the lift-off process was performed after e-beam evaporation of 10 nm titanium and 280 nm platinum layers. Standard four-point probe sheet resistance measurements were taken at room temperature. Annealing at 600°C was carried out in a rapid thermal processing machine, and resistivity was measured with a probe station before and after annealing. Temperature dependence between 25-300 °C was also tested. As a result of this study, a temperature sensor has been developed that has a lower coating thickness than commercial sensors but can produce reliable data over the white goods application temperature range. A relatively simple but optimized production method has also been developed to produce this sensor.

Keywords: thin film resistive sensor, temperature sensor, household appliance, sustainability, energy efficiency

Procedia PDF Downloads 73
2133 Prognostic Value in Meningioma Patients: A Clinical-Histopathological Study

Authors: Ilham Akbar Rahman, Aflah Dhea Bariz Yasta, Iin Fadhilah Utami Tamasse, Devina Juanita

Abstract:

Meningiomas are adult brain tumors originating from the meninges covering the brain and spinal cord. The incidence in females is approximately twice that in males (2:1). This study aimed to analyze histopathological grading and clinical aspects in predicting the prognosis of meningioma patients. An observational study with a cross-sectional design was conducted on 53 meningioma patients treated at Dr. Wahidin Sudirohusodo hospital in 2016. The data were analyzed using SPSS 20.0. Of the 53 patients, 41 (77.4%) were female and 12 (22.6%) were male. Meningothelial meningioma was the most common histopathological type, found in 18 patients (43.9%). Fibroblastic meningioma was found in 8 (19.5%), while atypical meningioma and psammomatous meningioma accounted for 6 (14.6%) each. The rest were malignant meningioma and angiomatous meningioma, found in 2 (4.9%) and 1 (2.4%) patients, respectively. A significant finding was that fibroblastic meningioma predominated in males (50%), whereas meningothelial meningioma was found in the majority of females (54.8%); seizures occurred only in higher-grade meningiomas. Histopathological grade was not significantly associated with the outcome of operatively treated meningioma patients (p > 0.05). This study can inform the prognostic assessment of meningioma patients based on gender, histopathological grade, and clinical manifestation. Overall, the outcome for meningioma patients is good and promising as long as the disease is well managed.

Keywords: meningioma, prognostic value, histopathological grading, clinical manifestation

Procedia PDF Downloads 171
2132 Managing Diversity in MNCs: A Literature Review of Existing Strategic Models for Managing Diversity and a Roadmap to Transfer Them to the Subsidiaries

Authors: Debora Gottardello, Mireia Valverde Aparicio, Juan Llopis Taverner

Abstract:

Globalization has given rise to a great diversity in the composition of people in organizations. Diversity management is therefore key to create growth in today’s competitive global marketplace. This work develops a literature review related to the existing models for managing diversity covering the period from 1980 until 2014. Furthermore, it identifies limitations in previous models. More specifically, the literature review reveals that there is a lack of information about how these models can be adapted from the headquarters to the subsidiaries. Therefore, the contribution of this paper is to suggest how the models should be adapted when they are directed to host countries. Our aim is to highlight the limitations of the developed models with regards to the translation of the diversity management practices to the subsidiaries. Accordingly, a model that will enable MNCs to ensure a global strategy is suggested. Taking advantage of the potential incorporated in a culturally diverse work team should be at the top of every international company’s aims. Executives from headquarters need to use different attitudes when transferring diversity practices towards their subsidiaries. Further studies should reassess local practices of diversity management to find out how this universal management model is translated.

Keywords: culture diversity, diversity management, human resources management, MNCs, subsidiaries, workforce diversity

Procedia PDF Downloads 256
2131 The Reliability and Shape of the Force-Power-Velocity Relationship of Strength-Trained Males Using an Instrumented Leg Press Machine

Authors: Mark Ashton Newman, Richard Blagrove, Jonathan Folland

Abstract:

The force-velocity profile of an individual has been shown to influence success in ballistic movements, independent of the individual's maximal power output; therefore, effective and accurate evaluation of an individual's F-V characteristics, and not solely maximal power output, is important. The relatively narrow range of loads typically utilised during force-velocity profiling protocols, due to the difficulty of obtaining force data at high velocities, may bring into question the accuracy of the F-V slope along with predictions pertaining to the maximum force that the system can produce at zero velocity (F₀) and the theoretical maximum velocity against no load (V₀). As such, the reliability of the slope of the force-velocity profile, as well as V₀, has been shown to be relatively poor in comparison to F₀ and maximal power, and it has been recommended to assess velocity at loads closer to both F₀ and V₀. The aim of the present study was to assess the relative and absolute reliability of a novel instrumented leg press machine which enables the assessment of force and velocity data at loads equivalent to ≤ 10% of one repetition maximum (1RM) through to 1RM during a ballistic leg press movement. The reliability of maximal and mean force, velocity, and power, as well as the respective force-velocity and power-velocity relationships and the linearity of the force-velocity relationship, were evaluated. Sixteen male strength-trained individuals (23.6 ± 4.1 years; 177.1 ± 7.0 cm; 80.0 ± 10.8 kg) attended four sessions; during the initial visit, participants were familiarised with the leg press, modified to include a mounted force plate (Type SP3949, Force Logic, Berkshire, UK) and a Micro-Epsilon WDS-2500-P96 linear positional transducer (LPT) (Micro-Epsilon, Merseyside, UK). Peak isometric force (IsoMax) and a dynamic 1RM, both from a starting position of 81% leg length, were recorded for the dominant leg. Visits two to four saw the participants carry out the leg press movement at loads equivalent to ≤ 10%, 30%, 50%, 70%, and 90% 1RM. IsoMax was recorded during each testing visit prior to dynamic F-V profiling repetitions. The novel leg press machine used in the present study appears to be a reliable tool for measuring F- and V-related variables across a range of loads, including velocities closer to V₀, when compared to some of the findings within the published literature. Both linear and polynomial models demonstrated good to excellent levels of reliability for SFV and F₀, respectively, with reliability for V₀ being good using a linear model but poor using a 2nd-order polynomial model. A polynomial regression model may therefore be most appropriate when using a similar unilateral leg press setup to predict maximal force production capabilities, given the difference of only 5% between F₀ and the obtained IsoMax values, while a linear model is best suited to predicting V₀.
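For readers unfamiliar with F-V profiling, the sketch below shows how F₀, V₀, the slope (SFV), and maximal power are obtained from a linear fit of mean force against mean velocity; the force and velocity values are invented for illustration and are not the study's data.

```python
# Minimal sketch of linear F-V profiling: F = F0 + SFV * v, V0 = -F0 / SFV,
# Pmax = F0 * V0 / 4 (apex of the parabolic power-velocity relationship).
import numpy as np

velocity = np.array([0.35, 0.75, 1.10, 1.45, 1.80])        # m/s at 90% ... <=10% 1RM (assumed)
force = np.array([2450.0, 2050.0, 1700.0, 1350.0, 980.0])  # N (assumed)

sfv, f0 = np.polyfit(velocity, force, 1)   # slope (SFV) and force intercept (F0)
v0 = -f0 / sfv                             # velocity-axis intercept
p_max = f0 * v0 / 4.0

print(f"F0 = {f0:.0f} N, V0 = {v0:.2f} m/s, SFV = {sfv:.0f} N·s/m, Pmax = {p_max:.0f} W")
```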

Keywords: force-velocity, leg-press, power-velocity, profiling, reliability

Procedia PDF Downloads 60
2130 Procyclicality of Leverage: An Empirical Analysis from Turkish Banks

Authors: Emin Avcı, Çiydem Çatak

Abstract:

The recent economic crisis has shown that procyclicality, which can threaten the stability and growth of the economy, is a major problem of the financial and real sectors. The term procyclicality here refers to the cyclical behavior of banks that leads them to follow the same patterns as the real economy. In this study, leverage, which demonstrates how a bank manages its debt, is chosen as the bank-specific variable in order to observe how changes in it behave over the economic cycle. The procyclical behavior of the Turkish banking sector (commercial, participation, and development-investment banks) is examined by analyzing the relationship between leverage and asset growth. On the basis of theoretical explanations, eight different leverage ratios are utilized in eight different panel data models to demonstrate the procyclicality of Turkish banks' leverage, using monthly data covering the 2005-2014 period. It is tested whether there is an increasing (decreasing) trend in the leverage ratio of Turkish banks when there is an enlargement (contraction) in their balance sheets. The major finding of the study indicates that asset growth has a significant effect on all eight leverage ratios. In other words, the leverage of Turkish banks follows a cyclical pattern, which is in line with the earlier literature.
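One of the eight leverage-growth regressions could, for example, take a form like the sketch below; the file and column names are assumptions, and the exact specification (leverage definition, controls, estimator) follows the paper, not this sketch.

```python
# Hedged sketch of a leverage-growth vs. asset-growth panel regression with
# bank fixed effects and clustered standard errors (column names assumed).
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("turkish_banks_monthly.csv")      # hypothetical 2005-2014 monthly panel
panel = panel.sort_values(["bank", "date"])
panel["asset_growth"] = panel.groupby("bank")["total_assets"].pct_change()
panel["leverage_growth"] = panel.groupby("bank")["leverage"].pct_change()

clean = panel.dropna(subset=["asset_growth", "leverage_growth"])
model = smf.ols("leverage_growth ~ asset_growth + C(bank)", data=clean).fit(
    cov_type="cluster", cov_kwds={"groups": clean["bank"]})
print(model.params["asset_growth"], model.pvalues["asset_growth"])
```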

Keywords: banking, economic cycles, leverage, procyclicality

Procedia PDF Downloads 266
2129 Recent Developments in the Application of Deep Learning to Stock Market Prediction

Authors: Shraddha Jain Sharma, Ratnalata Gupta

Abstract:

Predicting stock movements in the financial market is both difficult and rewarding. Analysts and academics are increasingly using advanced approaches such as machine learning techniques to anticipate stock price patterns, thanks to the expanding capacity of computing and the recent advent of graphics processing units and tensor processing units. Stock market prediction is a type of time series prediction that is incredibly difficult because stock prices are influenced by a variety of financial, socioeconomic, and political factors. Furthermore, even minor mistakes in stock market price forecasts can result in significant losses for companies that employ the findings of stock market price prediction for financial analysis and investment. Soft computing techniques are increasingly being employed for stock market prediction due to their better accuracy compared with traditional statistical methodologies. The proposed research looks at the need for soft computing techniques in stock market prediction, the numerous soft computing approaches that are important to the field, past work in the area and its prominent features, and the significant problems and issues that the area involves. For constructing a predictive model, the major focus is on neural networks and fuzzy logic. The stock market is extremely unpredictable, and it is unquestionably tough to predict correctly based on certain characteristics. This study provides a complete overview of the numerous strategies investigated for high-accuracy prediction, with a focus on the most important characteristics.

Keywords: stock market prediction, artificial intelligence, artificial neural networks, fuzzy logic, accuracy, deep learning, machine learning, stock price, trading volume

Procedia PDF Downloads 93
2128 Influence of Electrode Assembly on Catalytic Activation and Deactivation of a Pt Film Immobilized H+ Conducting Solid Electrolyte in Electrocatalytic Reduction Reactions

Authors: M. A. Hasnat, M. Amirul Islam, M. A. Rashed, Jamil. Safwan, M. Mahabubul Alam

Abstract:

Symmetric (Cu–Pt|Nafion|Pt–Cu) and asymmetric (Pt|Nafion|Pt–Cu) assemblies were fabricated to study nitrate reduction processes at the cathode. The electrocatalytic nitrate reduction reactions were performed in these assemblies in order to investigate the prerequisites for enhanced catalytic activity, electrochemical cell durability, and preferable product selectivity resulting from the reduction of nitrate at the cathode. For the symmetric assembly, it was observed that Cu particles were oxidized on the anode surface under an applied potential, and the resulting copper ions migrated to the cathode surface through the Nafion membrane, where they were deposited as copper oxide. The formation of this copper oxide covering layer on the Pt–Cu cathode surface is identified as the reason for the deactivation of the cathode, which reduced nitrate reduction activity and increased nitrite selectivity. These problems were addressed and resolved with the asymmetric design of the electrocatalytic reactor, in which enhanced hydrogen evolution activates the surface by eroding the CuO overlayer as well as speeding up the slow, rate-determining hydrogenation reactions.

Keywords: membrane, nitrate, electrocatalysis, voltammetry, electrolysis

Procedia PDF Downloads 268
2127 Influence of Magnetized Water on the Split Tensile Strength of Concrete

Authors: Justine Cyril E. Nunag, Nestor B. Sabado Jr., Jienne Chester M. Tolosa

Abstract:

Concrete has high compressive strength but low tensile strength. The low tensile strength of concrete is regarded as its primary weakness, which is why it is typically reinforced with steel, a material that is resistant to tension. Even with steel, however, cracking can occur. In strengthening concrete, only a few researchers have modified the water used in the concrete mix. This study compares the split tensile strength of normal structural concrete to concrete prepared with magnetic water and a quick-setting admixture. In this context, magnetic water is defined as tap water that has undergone a magnetic treatment process to become magnetized water. To test the hypothesis that magnetized water leads to higher split tensile strength, twenty concrete specimens were made, divided into groups of five samples differentiated by the number of magnetization cycles (0, 50, 100, and 150). The split tensile strength data from the universal testing machine were then analyzed using various statistical models and tests to determine the significance of the effect of magnetized water. The result showed a moderate (+0.579) but still significant degree of correlation. The researchers also discovered that using magnetic water for 50 cycles did not result in a significant increase in the concrete's split tensile strength, which influenced the analysis of variance. These results suggest that a concrete mix containing magnetic water and a quick-setting admixture alters the typical split tensile strength of normal concrete. Magnetic water has a significant impact on concrete tensile strength, and its hardness property influenced the split tensile strength of the concrete. In addition, a higher number of cycles results in stronger magnetization of the water, and the laboratory test results show that a higher number of cycles translates to higher tensile strength.
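The kind of correlation and analysis-of-variance check reported above can be reproduced with standard tools, as in the sketch below; the strength values per cycle group are invented for illustration.

```python
# Hedged sketch of the correlation and one-way ANOVA between magnetization cycles
# and split tensile strength (MPa); all numbers below are hypothetical.
from scipy.stats import pearsonr, f_oneway

groups = {0:   [2.1, 2.2, 2.0, 2.3, 2.1],
          50:  [2.2, 2.3, 2.2, 2.4, 2.2],
          100: [2.5, 2.6, 2.4, 2.6, 2.5],
          150: [2.8, 2.7, 2.9, 2.8, 2.9]}

cycles = [c for c, vals in groups.items() for _ in vals]
strength = [v for vals in groups.values() for v in vals]

r, p_corr = pearsonr(cycles, strength)             # degree of correlation
f_stat, p_anova = f_oneway(*groups.values())       # between-group significance
print(f"r = {r:.3f} (p = {p_corr:.3f}), F = {f_stat:.2f} (p = {p_anova:.3f})")
```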

Keywords: hardness property, magnetic water, quick-setting admixture, split tensile strength, universal testing machine

Procedia PDF Downloads 146
2126 Influence of Different Rhizome Sizes and Operational Speed on the Field Capacity and Efficiency of a Three–Row Turmeric Rhizome Planter

Authors: Muogbo Chukwudi Peter, Gbabo Agidi

Abstract:

The influence of different turmeric rhizome sizes and machine operational speeds on the field capacity and efficiency of a developed prototype tractor-drawn turmeric planter was studied. This was done with a view to ascertaining how field capacity and field efficiency were affected by turmeric rhizome length and tractor operational speed. The turmeric rhizome planter consists of a trapezoidal hopper, a grooved cylindrical metering device, a rectangular frame, mild steel ground wheels, a furrow opener, a chain/sprocket drive system, a three-point linkage seed delivery tube, and a press wheel. The experiment was randomized in a factorial design with three levels of rhizome length (30, 45 and 60 mm) and operational speeds of 8, 10, and 12 km/h. About 3 kg of cleaned turmeric rhizomes was introduced into each hopper of the planter and planted on a 30 m² experimental plot. During the field evaluation of the planter, the effective field capacity, field efficiency, miss index, multiple index, and percentage rhizome bruise were evaluated. The maximum percentage bruise on the rhizomes was 30.08%. The mean effective field capacity ranged between 0.63 and 0.96 ha/h at operational speeds of 8 and 12 km/h, respectively, for the 45 mm rhizome length, and the mean field efficiency was 65.8%. The percentage rhizome bruise decreased with increasing operational speed. The highest and lowest turmeric rhizome miss indices (up to 35%) were recorded for the 30 mm rhizome length at speeds of 10 km/h and 8 km/h, respectively. The potential implication of the experimental results is to determine the optimal machine operating conditions for higher field capacity and a substantial reduction in mechanical injury (bruising) of planted turmeric rhizomes.
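For clarity, the field-capacity and field-efficiency definitions used in such evaluations follow standard agricultural-machinery practice, sketched below with assumed width and timing values rather than figures from the paper.

```python
# Worked example of the standard definitions: theoretical field capacity (ha/h) from
# speed and working width, effective field capacity from area over total time, and
# field efficiency as their ratio. Width and timing values are assumptions.
speed_kmh = 10.0          # operational speed
width_m = 0.9             # effective working width of the three-row planter (assumed)
area_ha = 0.003           # 30 m^2 experimental plot
total_time_h = 0.004      # observed time including turns and refilling (assumed)

theoretical_capacity = speed_kmh * width_m / 10.0          # ha/h
effective_capacity = area_ha / total_time_h                # ha/h
field_efficiency = 100.0 * effective_capacity / theoretical_capacity

print(f"TFC = {theoretical_capacity:.2f} ha/h, EFC = {effective_capacity:.2f} ha/h, "
      f"efficiency = {field_efficiency:.1f}%")
```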

Keywords: rhizome sizes, operational speed, field capacity, field efficiency, turmeric rhizome, planter

Procedia PDF Downloads 63
2125 Non-Invasive Data Extraction from Machine Display Units Using Video Analytics

Authors: Ravneet Kaur, Joydeep Acharya, Sudhanshu Gaur

Abstract:

Artificial Intelligence (AI) has the potential to transform manufacturing by improving shop floor processes such as production, maintenance and quality. However, industrial datasets are notoriously difficult to extract in a real-time, streaming fashion, thus negating potential AI benefits. The main example is some specialized industrial controllers that are operated by custom software, which complicates the process of connecting them to an Information Technology (IT) based data acquisition network. Security concerns may also limit direct physical access to these controllers for data acquisition. To connect the Operational Technology (OT) data stored in these controllers to an AI application in a secure, reliable and available way, we propose a novel Industrial IoT (IIoT) solution in this paper. In this solution, we demonstrate how video cameras can be installed on a factory shop floor to continuously obtain images of the controller HMIs. We propose image pre-processing to segment the HMI into regions of streaming data and regions of fixed meta-data. We then evaluate the performance of multiple Optical Character Recognition (OCR) technologies, such as Tesseract and Google Vision, in recognizing the streaming data and test them for typical factory HMIs and realistic lighting conditions. Finally, we use the meta-data to match the OCR output with the temporal, domain-dependent context of the data to improve the accuracy of the output. Our IIoT solution enables reliable and efficient data extraction, which will improve the performance of subsequent AI applications.
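A hedged sketch of the core extraction step is given below: crop an assumed streaming-data region from a captured frame, binarize it, and run OCR; the region coordinates and character whitelist are assumptions.

```python
# Hedged sketch of HMI screen reading: crop a streaming-data region, binarize it,
# and OCR the digits; coordinates, file name, and whitelist are assumptions.
import cv2
import pytesseract

frame = cv2.imread("hmi_frame.png")                 # one captured camera frame
x, y, w, h = 120, 80, 200, 60                       # streaming-data region (assumed)
roi = frame[y:y + h, x:x + w]

gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

text = pytesseract.image_to_string(
    binary, config="--psm 7 -c tessedit_char_whitelist=0123456789.")
print("OCR value:", text.strip())
```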

Keywords: human machine interface, industrial internet of things, internet of things, optical character recognition, video analytics

Procedia PDF Downloads 111
2124 Medicompills Architecture: A Mathematically Precise Tool to Reduce the Risk of Diagnosis Errors in Precise Medicine

Authors: Adriana Haulica

Abstract:

Powered by Machine Learning, Precise medicine is now tailored to use genetic and molecular profiling, with the aim of optimizing the therapeutic benefits for cohorts of patients. As the majority of Machine Learning algorithms come from heuristics, the outputs have contextual validity. This is not very restrictive in the sense that medicine itself is not an exact science. Meanwhile, the progress made in Molecular Biology, Bioinformatics, Computational Biology, and Precise Medicine, correlated with the huge amount of human biology data and the increase in computational power, opens new healthcare challenges. A more accurate diagnosis is needed, along with real-time treatments, by processing as much as possible of the available information. The purpose of this paper is to present a deeper vision for the future of Artificial Intelligence in Precise medicine. In fact, current Machine Learning algorithms use standard mathematical knowledge, mostly Euclidean metrics and standard computation rules. The loss of information arising from the classical methods prevents obtaining 100% evidence in the diagnosis process. To overcome these problems, we introduce MEDICOMPILLS, a new architectural concept tool for information processing in Precise medicine that delivers diagnoses and therapy advice. This tool processes poly-field digital resources: global knowledge related to biomedicine in a direct or indirect manner, but also technical databases, Natural Language Processing algorithms, and strong class optimization functions. As the name suggests, the heart of this tool is a compiler. The approach is completely new, tailored for omics and clinical data. Firstly, the intrinsic biological intuition is different from the well-known “a needle in a haystack” approach usually used when Machine Learning algorithms have to process differential genomic or molecular data to find biomarkers. Also, even if the input is seized from various types of data, the working engine inside MEDICOMPILLS does not search for patterns as an integrative tool. This approach deciphers the biological meaning of input data down to the metabolic and physiologic mechanisms, based on a compiler with grammars issued from bio-algebra-inspired mathematics. It translates input data into bio-semantic units with the help of contextual information, iteratively, until Bio-Logical operations can be performed on the basis of the “common denominator” rule. The rigorousness of MEDICOMPILLS comes from the structure of the contextual information on functions, built to be analogous to mathematical “proofs”. The major impact of this architecture is expressed by the high accuracy of the diagnosis. Delivered as a multiple-condition diagnosis, constituted by some main diseases along with unhealthy biological states, this format is highly suitable for therapy proposals and disease prevention. The use of the MEDICOMPILLS architecture is highly beneficial for the healthcare industry. The expectation is to generate a strategic trend in Precise medicine, making medicine more like an exact science and reducing the considerable risk of errors in diagnostics and therapies. The tool can be used by pharmaceutical laboratories for the discovery of new cures. It will also contribute to better design of clinical trials and speed them up.

Keywords: bio-semantic units, multiple conditions diagnosis, NLP, omics

Procedia PDF Downloads 70
2123 Mapping Context, Roles, and Relations for Adjudicating Robot Ethics

Authors: Adam J. Bowen

Abstract:

Should robots have rights or legal protections? Often debates concerning whether robots and AI should be afforded rights focus on conditions of personhood and the possibility of future advanced forms of AI satisfying particular intrinsic cognitive and moral attributes of rights-holding persons. Such discussions raise compelling questions about machine consciousness, autonomy, and value alignment with human interests. Although these are important theoretical concerns, especially from a future design perspective, they provide limited guidance for addressing the moral and legal standing of current and near-term AI that operate well below the cognitive and moral agency of human persons. Robots and AI are already being pressed into service in a wide range of roles, especially in healthcare and biomedical contexts. The design and large-scale implementation of robots in the context of core societal institutions like healthcare systems continues to rapidly develop. For example, we bring them into our homes, hospitals, and other care facilities to assist in care for the sick, disabled, elderly, children, or otherwise vulnerable persons. We enlist surgical robotic systems in precision tasks, albeit still human-in-the-loop technology controlled by surgeons. We also entrust them with social roles involving companionship and even assisting in intimate caregiving tasks (e.g., bathing, feeding, turning, medicine administration, monitoring, transporting). There have been advances to enable severely disabled persons to use robots to feed themselves or pilot robot avatars to work in service industries. As the applications for near-term AI increase and the roles of robots in restructuring our biomedical practices expand, we face pressing questions about the normative implications of human-robot interactions and collaborations in our collective worldmaking, as well as the moral and legal status of robots. This paper argues that robots operating in public and private spaces be afforded some protections as either moral patients or legal agents to establish prohibitions on robot abuse, misuse, and mistreatment. We already implement robots and embed them in our practices and institutions, which generates a host of human-to-machine and machine-to-machine relationships. As we interact with machines, whether in service contexts, medical assistance, or as home health companions, these robots are first encountered in relationship to us and our respective roles in the encounter (e.g., surgeon, physical or occupational therapist, recipient of care, patient’s family, healthcare professional, stakeholder). This proposal aims to outline a framework for establishing limiting factors and determining the extent of moral or legal protections for robots. In doing so, it advocates for a relational approach that emphasizes the priority of mapping the complex contextually sensitive roles played and the relations in which humans and robots stand to guide policy determinations by relevant institutions and authorities. The relational approach must also be technically informed by the intended uses of the biomedical technologies in question, Design History Files, extensive risk assessments and hazard analyses, as well as use case social impact assessments.

Keywords: biomedical robots, robot ethics, robot laws, human-robot interaction

Procedia PDF Downloads 123
2122 Cross-Cultural Pragmatics: Apology Strategies by Libyans

Authors: Ahmed Elgadri

Abstract:

In the last thirty years, studies on cross-cultural pragmatics in general, and apology strategies in particular, have focused on Western and East Asian societies. Only a small volume of research has investigated the production of speech acts by speakers of Arabic dialects. Therefore, this study investigated the apology strategies used by Libyan Arabic speakers using an online Discourse Completion Task (DCT) questionnaire. The DCT consisted of six situations covering different social contexts. The survey was written in the Libyan Arabic dialect to help generate vernacular speech as much as possible. The participants were 25 Libyan nationals, 12 females and 13 males. To gain a deeper understanding of the motivations behind the use of certain strategies, the researcher also interviewed four participants in the Libyan Arabic dialect. The results revealed a high use of IFIDs, offers of repair, and explanations. Although this might support the universality claim for speech act strategies, it was clear that cultural norms and religion significantly determined the choice of apology strategies. This led to the discovery of new culture-specific strategies, as outlined later in this paper. This study gives an insight into politeness strategies in Libyan society, and it is hoped that it will contribute to the field of cross-cultural pragmatics.

Keywords: apologies, cross-cultural pragmatics, language and culture, Libyan Arabic, politeness, pragmatics, socio-pragmatics, speech acts

Procedia PDF Downloads 151
2121 The Design and Analysis of a Novel Type High Gain Microstrip Patch Antenna System for the Satellite Communication

Authors: Shahid M. Ali, Zakiullah

Abstract:

A single-feed, compact, novel-shaped, dual-band microstrip patch antenna is proposed in this manuscript. Three triangular slots are introduced on three edges of the patch, and a small feed line on the remaining edge is used to obtain the dual-band behavior. The antenna has a compact structure, with a patch of approximately 8.5 mm × 7.96 mm × 1.905 mm, resulting in bandwidths covering 13.15 GHz to 13.72 GHz and 16.04 GHz to 16.58 GHz. A return loss (RL) of -19.00 dB is attained at the first resonant frequency of 13.61 GHz and -28.69 dB at the second resonant frequency of 16.33 GHz. The stable average peak gains observed across the lower and higher operating bands are 3.53 dB and 5.562 dB, respectively. The radiation patterns are omnidirectional with moderate gain in both operating bands. Dual-frequency operation is achieved at 13.62 GHz for the downlink and 16.33 GHz for the uplink. The low-profile, simple configuration of the proposed antenna allows straightforward fabrication and makes it suitable for satellite applications as well as for wireless communication systems.

Keywords: dual band, microstrip patch antenna, HFSS, Ku band, satellite

Procedia PDF Downloads 362
2120 Bearing Capacity of Sulphuric Acid Content Soil

Authors: R. N. Khare, J. P. Sahu, Rajesh Kumar Tamrakar

Abstract:

Tests were conducted to determine the properties of soil with varying H2SO4 content under different conditions. The soils had plasticities ranging from low to high. The unsaturated soil behavior was investigated for different conditions, covering a range of compactive efforts and water contents. The soil characteristic curves were more sensitive to changes in compaction effort than to changes in compaction water content. In this research, two types of water from Bhilai Nagar, Chhattisgarh, India (ground water: pH = 7.9, turbidity = 13 ppm, Cl = 2.1 mg/l; surface water: pH = 8.65, turbidity = 18.5, Cl = 1 mg/l) were selected and mixed with a given soil. The results show that in the presence of ground water, the particles become coarser day by day for 7 days, after which their size reduces; on the other hand, in the presence of surface water, the coarser particles disintegrate, finer particles accumulate, and the dry density also reduces. The lowest-plasticity soils retained the smallest water content and the highest-plasticity soils retained the highest water content at a specified suction. In addition, soil characteristics for soils compacted in the laboratory and in the field are still being analyzed for bearing capacity. The bearing capacity was reduced 2 to 3 times in the presence of H2SO4.

Keywords: soil compaction, H2SO4, soil water, water conditions

Procedia PDF Downloads 540
2119 Ultra-Wideband (45-50 GHz) mm-Wave Substrate Integrated Waveguide Cavity Slots Antenna for Future Satellite Communications

Authors: Najib Al-Fadhali, Huda Majid

Abstract:

In this article, a substrate integrated waveguide (SIW) cavity slot antenna was designed using a computer simulation technology software tool to address the specific design challenges for millimeter-wave communications posed by future satellite communications. Due to the symmetrical structure, a high-order mode is generated in the SIW, which yields high gain and high efficiency with a compact feed structure. The antenna has dimensions of 20 mm × 20 mm × 1.34 mm. The proposed antenna bandwidth ranges from 45 GHz to 50 GHz, covering Q-band applications such as satellite communication. Antenna efficiency is above 80% over the operational frequency range. The gain of the antenna is above 9 dB, with a peak value of 9.4 dB at 47.5 GHz. The proposed antenna is suitable for various millimeter-wave applications such as sensing, body imaging, indoor scenarios, new generations of wireless networks, and future satellite communications. The simulated results show that the SIW antenna resonates throughout the 45 to 50 GHz band, making this new antenna cover all applications within this range. The reflection coefficient is below -10 dB over most of the 45 to 50 GHz range. The compactness, integrity, reliability, and performance at various operating frequencies make the proposed antenna a good candidate for future satellite communications.

Keywords: ultra-wideband, Q-band, SIW, mm-wave, satellite communications

Procedia PDF Downloads 86
2118 A Comprehensive Study and Evaluation on Image Fashion Features Extraction

Authors: Yuanchao Sang, Zhihao Gong, Longsheng Chen, Long Chen

Abstract:

Clothing fashion represents a human's aesthetic appreciation of everyday outfits and appetite for fashion, and it reflects developments in society, humanity, and economics. However, modelling fashion by machine is extremely challenging because fashion is too abstract to be efficiently described by machines; even human beings can hardly reach a consensus about fashion. In this paper, we are dedicated to answering a fundamental fashion-related problem: what image feature best describes clothing fashion? To address this issue, we have designed and evaluated various image features, ranging from traditional low-level hand-crafted features to mid-level style-awareness features to various currently popular deep neural network-based features, which have shown state-of-the-art performance in various vision tasks. In summary, we tested the following 9 feature representations: color, texture, shape, style, convolutional neural networks (CNNs), CNNs with distance metric learning (CNNs&DML), AutoEncoder, CNNs with multiple layer combination (CNNs&MLC), and CNNs with dynamic feature clustering (CNNs&DFC). Finally, we validated the performance of these features on two publicly available datasets. Quantitative and qualitative experimental results on both intra-domain and inter-domain fashion clothing image retrieval showed that deep learning-based feature representations far outperform traditional hand-crafted feature representations. Additionally, among all deep learning-based methods, CNNs with explicit feature clustering perform best, which shows that feature clustering is essential for discriminative fashion feature representation.
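As an illustration of the CNN-based representations compared above (not the authors' exact networks), pooled activations of a pretrained backbone can serve as retrieval descriptors; the image file is hypothetical and the weights argument assumes a recent torchvision version.

```python
# Hedged sketch: extract a 2048-d descriptor from a pretrained ResNet-50 by
# replacing its classifier head with an identity mapping.
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])

backbone = models.resnet50(weights="IMAGENET1K_V1")   # assumes torchvision >= 0.13
backbone.fc = torch.nn.Identity()                     # keep the pooled feature only
backbone.eval()

img = preprocess(Image.open("outfit.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    feature = backbone(img)                           # [1, 2048] descriptor for retrieval
print(feature.shape)
```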

Keywords: convolutional neural network, feature representation, image processing, machine modelling

Procedia PDF Downloads 141
2117 Computing Machinery and Legal Intelligence: Towards a Reflexive Model for Computer Automated Decision Support in Public Administration

Authors: Jacob Livingston Slosser, Naja Holten Moller, Thomas Troels Hildebrandt, Henrik Palmer Olsen

Abstract:

In this paper, we propose a model for human-AI interaction in public administration that involves legal decision-making. Inspired by Alan Turing's test for machine intelligence, we propose a way of institutionalizing a continuous working relationship between man and machine that aims at ensuring both good legal quality and higher efficiency in decision-making processes in public administration. We also suggest that our model enhances the legitimacy of using AI in public legal decision-making. We suggest that case loads in public administration could be divided between a manual and an automated decision track. The automated decision track will be an algorithmic recommender system trained on former cases. To avoid unwanted feedback loops and biases, part of the case load will be dealt with by both a human case worker and the automated recommender system. In those cases, an experienced human case worker will have the role of an evaluator, choosing between the two decisions. This model will ensure that the algorithmic recommender system is not compromising the quality of the legal decision-making in the institution. It also enhances the legitimacy of using algorithmic decision support because it provides justification for its use by being seen as superior to human decisions when the algorithmic recommendations are preferred by experienced case workers. The paper outlines in some detail the process through which such a model could be implemented. It also addresses the important issue that legal decision-making is subject to legislative and judicial changes and that legal interpretation is context-sensitive. Both of these issues require continuous supervision of and adjustments to algorithmic recommender systems when they are used for legal decision-making purposes.

Keywords: administrative law, algorithmic decision-making, decision support, public law

Procedia PDF Downloads 218
2116 Characterizing Nanoparticles Generated from the Different Working Type and the Stack Flue during 3D Printing Process

Authors: Kai-Jui Kou, Tzu-Ling Shen, Ying-Fang Wang

Abstract:

The objectives of the present study are to characterize the nanoparticles generated by different work activities in a 3D printing room and in the stack flue during the 3D printing process. The studied laboratory (10.5 m × 7.2 m × 3.2 m), with a ventilation rate of 500 m³/h, houses a 3D metal printing machine. A direct-reading scanning mobility particle sizer (SMPS, Model 3082, TSI Inc., St. Paul, MN, USA) was used for static sampling of nanoparticle number concentrations and particle size distributions. The SMPS recorded particle number concentrations every 3 minutes over a diameter range of 11-372 nm, with the aerosol and sheath flow rates set at 0.6 and 6 L/min, respectively. Measurements were taken in the laboratory for the background, the printing process, the clearing operation, and the screening operation. We also measured nanoparticles in the 3D printing machine's stack flue to understand its emission characteristics. Results show that the nanoparticles emitted during the different operations had the same unimodal distribution, with a number median diameter (NMD) of approximately 28.3 nm to 29.6 nm. The number concentrations of nanoparticles were 2.55×10³ count/cm³ for the laboratory background, 2.19×10³ count/cm³ during the printing process, 2.29×10³ count/cm³ during the clearing process, 3.05×10³ count/cm³ during the screening process, 2.69×10³ count/cm³ for the laboratory background after the printing process, and 6.75×10³ count/cm³ outside the laboratory. We found no additional nanoparticle emissions in the room during the printing process. However, the number concentration of nanoparticles in the stack flue during printing was 1.13×10⁶ count/cm³, compared with 1.63×10⁴ count/cm³ when not printing, with NMDs of 458 nm and 29.4 nm, respectively. This confirms that the measured particles are, in theory, of a size that easily penetrates the filter during the printing process, even though the 3D printer has a high-efficiency filtration device. Therefore, it is recommended that the stack flue of the 3D printer be equipped with an appropriate dust collection device to prevent operators from being exposed to these hazardous particles.

Keywords: nanoparticle, particle emission, 3D printing, number concentration

Procedia PDF Downloads 184
2115 Automated Facial Symmetry Assessment for Orthognathic Surgery: Utilizing 3D Contour Mapping and Hyperdimensional Computing-Based Machine Learning

Authors: Wen-Chung Chiang, Lun-Jou Lo, Hsiu-Hsia Lin

Abstract:

This study aimed to improve the evaluation of facial symmetry, which is crucial for planning and assessing outcomes in orthognathic surgery (OGS). Facial symmetry plays a key role in both aesthetic and functional aspects of OGS, making its accurate evaluation essential for optimal surgical results. To address the limitations of traditional methods, a different approach was developed, combining three-dimensional (3D) facial contour mapping with hyperdimensional (HD) computing to enhance precision and efficiency in symmetry assessments. The study was conducted at Chang Gung Memorial Hospital, where data were collected from 2018 to 2023 using 3D cone beam computed tomography (CBCT), a highly detailed imaging technique. A large and comprehensive dataset was compiled, consisting of 150 normal individuals and 2,800 patients, totaling 5,750 preoperative and postoperative facial images. These data were critical for training a machine learning model designed to analyze and quantify facial symmetry. The machine learning model was trained to process 3D contour data from the CBCT images, with HD computing employed to power the facial symmetry quantification system. This combination of technologies allowed for an objective and detailed analysis of facial features, surpassing the accuracy and reliability of traditional symmetry assessments, which often rely on subjective visual evaluations by clinicians. In addition to developing the system, the researchers conducted a retrospective review of 3D CBCT data from 300 patients who had undergone OGS. The patients’ facial images were analyzed both before and after surgery to assess the clinical utility of the proposed system. The results showed that the facial symmetry algorithm achieved an overall accuracy of 82.5%, indicating its robustness in real-world clinical applications. Postoperative analysis revealed a significant improvement in facial symmetry, with an average score increase of 51%. The mean symmetry score rose from 2.53 preoperatively to 3.89 postoperatively, demonstrating the system's effectiveness in quantifying improvements after OGS. These results underscore the system's potential for providing valuable feedback to surgeons and aiding in the refinement of surgical techniques. The study also led to the development of a web-based system that automates facial symmetry assessment. This system integrates HD computing and 3D contour mapping into a user-friendly platform that allows for rapid and accurate evaluations. Clinicians can easily access this system to perform detailed symmetry assessments, making it a practical tool for clinical settings. Additionally, the system facilitates better communication between clinicians and patients by providing objective, easy-to-understand symmetry scores, which can help patients visualize the expected outcomes of their surgery. In conclusion, this study introduced a valuable and highly effective approach to facial symmetry evaluation in OGS, combining 3D contour mapping, HD computing, and machine learning. The resulting system achieved high accuracy and offers a streamlined, automated solution for clinical use. The development of the web-based platform further enhances its practicality, making it a valuable tool for improving surgical outcomes and patient satisfaction in orthognathic surgery.
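A conceptual sketch of the hyperdimensional-computing ingredient is given below: contour measurements are bound to random role hypervectors and bundled, and left/right descriptors are compared by a cosine-like similarity. The quantization, measurement values, and encoding details are illustrative assumptions, not the authors' system.

```python
# Hedged sketch of hyperdimensional encoding: bind quantized measurement hypervectors
# to role hypervectors, bundle them, and compare left/right descriptors.
import numpy as np

D = 10_000
rng = np.random.default_rng(0)
_levels = {}                                  # shared item memory for quantized values

def random_hv():
    return rng.choice([-1, 1], size=D)        # bipolar hypervector

def level_hv(value):
    key = round(value, 1)                     # toy quantization to 0.1 units
    if key not in _levels:
        _levels[key] = random_hv()
    return _levels[key]

def encode(measurements, roles):
    # element-wise binding of each value hypervector to its role, then majority bundling
    bundle = np.sum([roles[i] * level_hv(v) for i, v in enumerate(measurements)], axis=0)
    return np.sign(bundle)

roles = [random_hv() for _ in range(5)]
left = encode([3.2, 4.1, 2.8, 5.0, 3.9], roles)    # mirrored contour distances (hypothetical)
right = encode([3.2, 4.1, 2.9, 5.0, 3.7], roles)

similarity = float(left @ right) / D               # cosine-like similarity in [-1, 1]
print(f"symmetry similarity: {similarity:.2f}")
```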

Keywords: facial symmetry, orthognathic surgery, facial contour mapping, hyperdimensional computing

Procedia PDF Downloads 29
2114 Using Equipment Telemetry Data for Condition-Based Maintenance Decisions

Authors: John Q. Todd

Abstract:

Given that modern equipment can provide comprehensive health, status, and error condition data via built-in sensors, maintenance organizations have a new and valuable source of insight to take advantage of. This presentation will show what these data payloads might look like and how they can be filtered, visualized, turned into metrics, used for machine learning, and used to generate alerts for further action.
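A minimal sketch of that pipeline, assuming a CSV export of telemetry with a timestamp and a temperature column, is shown below; the column names and threshold are assumptions.

```python
# Hedged sketch: turn raw telemetry into a rolling metric and a threshold alert.
import pandas as pd

telemetry = pd.read_csv("equipment_telemetry.csv", parse_dates=["timestamp"])
telemetry = telemetry.set_index("timestamp").sort_index()

metric = telemetry["spindle_temp_c"].rolling("15min").mean()   # smoothed health metric
THRESHOLD_C = 85.0                                             # assumed service limit

alerts = metric[metric > THRESHOLD_C]
for ts, value in alerts.items():
    print(f"ALERT {ts}: rolling spindle temperature {value:.1f} C exceeds {THRESHOLD_C} C")
```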

Keywords: condition based maintenance, equipment data, metrics, alerts

Procedia PDF Downloads 190
2113 Hydroclean Smartbin Solution for Plastic Pollution Crisis

Authors: Anish Bhargava

Abstract:

By 2050, there will be more plastic than fish in our oceans. 51 trillion micro-plastics pollute our waters and contaminate the food on our plates, increasing the risk of tumours and diseases such as cancer. Our product is a solution to the ever-growing problem of plastic pollution. We call it the SmartBin. The SmartBin is a cylindrical device which will float just below the surface of the water, able to move with the aid of 4 water thrusters situated on the sides. As it floats, our SmartBin will suck water into itself and pump it out through the bottom. All waste is collected into a reusable filter including microplastics measuring down to 1.5mm. A speaker emitting sound at a frequency of 9 hertz ensures marine life stays away from the SmartBin. Featured along with our product is a smartphone app which will enable the user to designate an area for the SmartBin to cover on a satellite image. The SmartBin will then return to its start position near the shore, configured through the app. As global pressure to tackle water pollution continues to increase, environmental spending increases too. As our product provides an effective solution to this issue, we can seize the opportunity and scale our company. Our product is unparalleled. It can move at a high speed, covering a wide area rather than being restricted to one position. We target not only oceans and sea-shores, but also rivers, lakes, reservoirs and canals, as they are much easier to access and control.

Keywords: water, plastic, pollution, solution, hydroclean, smartbin, cleanup

Procedia PDF Downloads 206
2112 Predictive Semi-Empirical NOx Model for Diesel Engine

Authors: Saurabh Sharma, Yong Sun, Bruce Vernham

Abstract:

Accurate prediction of NOx emission is a continuous challenge in the field of diesel engine-out emission modeling. Performing experiments for every condition and scenario costs a significant amount of money and man-hours; therefore, a model-based development strategy has been implemented in order to solve that issue. NOx formation is highly dependent on the burned gas temperature and the O2 concentration inside the cylinder. Current empirical models are developed by calibrating parameters representing the engine operating conditions against measured NOx, which limits the prediction of purely empirical models to the region in which they have been calibrated. An alternative solution is presented in this paper, which focuses on the utilization of in-cylinder combustion parameters to form a predictive semi-empirical NOx model. The result of this work is a fast and predictive NOx model built from physical parameters and an empirical correlation. The model is developed based on steady-state data collected over the entire operating region of the engine and on a predictive combustion model developed in Gamma Technologies (GT)-Power using the Direct Injection (DI)-Pulse combustion object. In this approach, the temperatures in both the burned and unburned zones are considered during the combustion period, i.e., from Intake Valve Closing (IVC) to Exhaust Valve Opening (EVO). The oxygen concentration consumed in the burned zone and the trapped fuel mass are also considered while developing the reported model. Several statistical methods are used to construct the model, including individual machine learning methods and ensemble machine learning methods. A detailed validation of the model on multiple diesel engines is reported in this work. A substantial number of cases is tested for different engine configurations over a large span of speed and load points. Different sweeps of operating conditions, such as Exhaust Gas Recirculation (EGR), injection timing, and Variable Valve Timing (VVT), are also considered for the validation. The model shows very good predictability and robustness at both sea level and altitude conditions with different ambient conditions. The advantages of high accuracy and robustness at different operating conditions, low computational time, and the lower number of data points required for calibration establish a platform where the model-based approach can be used for engine calibration and development. Moreover, the focus of this work is towards establishing a framework for future model development for other targets such as soot, Combustion Noise Level (CNL), NO2/NOx ratio, etc.
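In the same spirit (though not the authors' model), a semi-empirical correlation can be fitted to steady-state points using an Arrhenius-type dependence on burned-zone temperature and oxygen, as sketched below; the file, column names, and functional form are assumptions.

```python
# Hedged sketch: fit NOx ~ a * exp(-E/T_burn) * O2^b to steady-state engine points.
import numpy as np
import pandas as pd
from scipy.optimize import curve_fit

data = pd.read_csv("steady_state_points.csv")        # hypothetical test/GT-Power outputs
T_burn = data["burned_zone_temp_K"].to_numpy()
o2_frac = data["burned_zone_o2_frac"].to_numpy()
nox = data["nox_ppm"].to_numpy()

def nox_model(X, a, e_over_r, b):
    T, o2 = X
    return a * np.exp(-e_over_r / T) * o2 ** b       # pre-factor, activation temperature, O2 exponent

params, _ = curve_fit(nox_model, (T_burn, o2_frac), nox,
                      p0=(1e6, 3000.0, 0.5), maxfev=10000)
print("fitted (a, E/R, b):", params)
print("predicted NOx at first point:", nox_model((T_burn[:1], o2_frac[:1]), *params)[0])
```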

Keywords: diesel engine, machine learning, NOₓ emission, semi-empirical

Procedia PDF Downloads 114
2111 Modeling of Nitrogen Solubility in Stainless Steel

Authors: Saeed Ghali, Hoda El-Faramawy, Mamdouh Eissa, Michael Mishreky

Abstract:

Scale-resistant austenitic stainless steel, X45CrNiW 18-9, has been developed, and modified steels were produced through partial and total replacement of nickel by nitrogen. These modified steels were produced in a 10 kg induction furnace under different nitrogen pressures and were cast into ingots. The produced modified stainless steels were forged, followed by air cooling. The phases of the modified stainless steels have been investigated using the Schaeffler diagram, dilatometry, and microstructure observations. Both partial and total replacement of nickel using 0.33-0.50% nitrogen are effective in producing fully austenitic stainless steels. The nitrogen contents were determined and compared with those calculated using the Institute of Metal Science (IMS) equation. The results showed large deviations between the actual nitrogen contents and the values predicted by the IMS equation. Therefore, an equation has been derived based on chemical composition, pressure, and temperature at 1600°C: [N%] = 0.0078 + 0.0406*X, where X is a function of chemical composition and nitrogen pressure. The derived equation has been used to calculate the nitrogen content of different steels using published data. The results reveal the difficulty of deriving a general equation for the prediction of nitrogen content covering different steel compositions, so it is necessary to use a narrow composition range.
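For reference, the derived correlation can be applied directly once X is known; the helper below is a plain transcription of the equation and deliberately leaves X, whose exact definition is not reproduced in the abstract, as an input.

```python
# Plain transcription of the derived correlation at 1600 °C; computing X (a function of
# chemical composition and nitrogen pressure, as defined in the paper) is left to the caller.
def nitrogen_content(x: float) -> float:
    """Predicted nitrogen content [N%] = 0.0078 + 0.0406 * X."""
    return 0.0078 + 0.0406 * x

print(nitrogen_content(2.0))  # illustrative X value only
```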

Keywords: solubility, nitrogen, stainless steel, Schaeffler

Procedia PDF Downloads 239
2110 Design and Optimization of a Small Hydraulic Propeller Turbine

Authors: Dario Barsi, Marina Ubaldi, Pietro Zunino, Robert Fink

Abstract:

A design and optimization procedure is proposed and developed to provide the geometry of a high-efficiency compact hydraulic propeller turbine for low head. For the preliminary design of the machine, classic design criteria are used, based on statistical correlations for the definition of the fundamental geometric parameters and blade shapes. These relationships are based on the fundamental design parameters (i.e., specific speed, flow coefficient, work coefficient) in order to provide a simple yet reliable procedure. Particular attention is paid, from the initial steps, to the correct conformation of the meridional channel and the correct arrangement of the blade rows. The preliminary geometry thus obtained is used as a starting point for the hydrodynamic optimization procedure, carried out using CFD calculation software coupled with a genetic algorithm that generates and updates a large database of turbine geometries. The optimization process is performed using a commercial solver for the Reynolds-averaged Navier-Stokes (RANS) equations that exploits the axial-symmetric geometry of the machine. The geometries generated within the database are therefore calculated in order to determine the corresponding overall performance. In order to speed up the optimization calculation, an artificial neural network (ANN) based on an objective function is employed. The procedure was applied to the specific case of a propeller turbine with an innovative modular design, specific to applications characterized by very low heads. The procedure is tested in order to verify its validity and its ability to automatically obtain the targeted net head and the maximum total-to-total internal efficiency.
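The surrogate step can be pictured as in the sketch below: an ANN trained on the CFD-solved geometry database predicts efficiency so that GA candidates can be pre-screened cheaply. The geometry parameters, file names, and network size are assumptions, not the authors' setup.

```python
# Hedged sketch of an ANN surrogate over a CFD geometry database used to rank
# GA candidates before running full RANS calculations.
import pandas as pd
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

db = pd.read_csv("turbine_geometry_database.csv")    # geometries already solved by CFD
geom_cols = ["hub_ratio", "blade_angle_in", "blade_angle_out", "blade_count"]
X_train, X_test, y_train, y_test = train_test_split(
    db[geom_cols], db["tt_efficiency"], test_size=0.2, random_state=0)

surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
surrogate.fit(X_train, y_train)
print("surrogate R^2 on held-out CFD runs:", surrogate.score(X_test, y_test))

# Rank a batch of new GA candidates; only the most promising go back to full CFD.
candidates = pd.read_csv("ga_candidates.csv")[geom_cols]
ranked = candidates.assign(pred_eff=surrogate.predict(candidates)).sort_values(
    "pred_eff", ascending=False)
print(ranked.head())
```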

Keywords: renewable energy conversion, hydraulic turbines, low head hydraulic energy, optimization design

Procedia PDF Downloads 151
2109 Modeling Engagement with Multimodal Multisensor Data: The Continuous Performance Test as an Objective Tool to Track Flow

Authors: Mohammad H. Taheri, David J. Brown, Nasser Sherkat

Abstract:

Engagement is one of the most important factors in determining successful outcomes and deep learning in students. Existing approaches to detect student engagement involve periodic human observations that are subject to inter-rater reliability. Our solution uses real-time multimodal multisensor data labeled by objective performance outcomes to infer the engagement of students. The study involves four students with a combined diagnosis of cerebral palsy and a learning disability who took part in a 3-month trial over 59 sessions. Multimodal multisensor data were collected while they participated in a continuous performance test. Eye gaze, electroencephalogram, body pose, and interaction data were used to create a model of student engagement through objective labeling from the continuous performance test outcomes. In order to achieve this, a type of continuous performance test is introduced, the Seek-X type. Nine features were extracted including high-level handpicked compound features. Using leave-one-out cross-validation, a series of different machine learning approaches were evaluated. Overall, the random forest classification approach achieved the best classification results. Using random forest, 93.3% classification for engagement and 42.9% accuracy for disengagement were achieved. We compared these results to outcomes from different models: AdaBoost, decision tree, k-Nearest Neighbor, naïve Bayes, neural network, and support vector machine. We showed that using a multisensor approach achieved higher accuracy than using features from any reduced set of sensors. We found that using high-level handpicked features can improve the classification accuracy in every sensor mode. Our approach is robust to both sensor fallout and occlusions. The single most important sensor feature to the classification of engagement and distraction was shown to be eye gaze. It has been shown that we can accurately predict the level of engagement of students with learning disabilities in a real-time approach that is not subject to inter-rater reliability, human observation or reliant on a single mode of sensor input. This will help teachers design interventions for a heterogeneous group of students, where teachers cannot possibly attend to each of their individual needs. Our approach can be used to identify those with the greatest learning challenges so that all students are supported to reach their full potential.
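The evaluation described above corresponds, in outline, to the sketch below: leave-one-out cross-validation of a random forest over the extracted features; the file and column names are assumptions.

```python
# Hedged sketch: leave-one-out cross-validation of a random forest classifier on
# the extracted multimodal features (feature/label column names assumed).
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

sessions = pd.read_csv("engagement_features.csv")   # one row per labelled window
feature_cols = [c for c in sessions.columns if c != "label"]
X, y = sessions[feature_cols], sessions["label"]    # label: engaged / disengaged

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
print(f"LOOCV accuracy: {scores.mean():.3f}")
```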

Keywords: affective computing in education, affect detection, continuous performance test, engagement, flow, HCI, interaction, learning disabilities, machine learning, multimodal, multisensor, physiological sensors, student engagement

Procedia PDF Downloads 95
2108 The Study on Blast Effect of Polymer Gel by Trauzl Lead Block Test and Concrete Block Test

Authors: Young-Hun Ko, Seung-Jun Kim, Khaqan Baluch, Hyung- Sik Yang

Abstract:

In this study, a polymer gel was used as the coupling material in a blast hole, and it was compared with other coupling materials such as sand, water, and air. A Trauzl lead block test and AUTODYN numerical analysis were conducted to analyze the effects of the coupling materials on the intensity of the explosion, and verification tests were conducted using concrete block tests. Emulsion explosives were used in decoupled conditions, with sand, water, and the polymer gel as the coupling materials. The lead block test and the numerical analysis showed that the expansion of the blast hole in the lead block with the polymer gel was similar to that with water, followed by the sand and air conditions. The concrete block validation test showed results similar to the Trauzl lead block test, with the explosion strength measured at 0.8 for the polymer gel, 0.7 for sand, and 0.6 for no coupling material, relative to the full-charge case (1.0).

Keywords: Trauzl lead block test, AUTODYN numerical analysis, coupling material, polymer gel, soil covering concrete block explosion test

Procedia PDF Downloads 301
2107 Impact of Working Capital Management Strategies on Firm's Value and Profitability

Authors: Jonghae Park, Daesung Kim

Abstract:

The impact of aggressive and conservative working capital strategies on the value and profitability of firms has been evaluated by applying panel data regression analysis. The control variables used in the regression models are the natural log of firm size, sales growth, and debt. We collected a panel of 13,988 companies listed on the Korean stock market covering the period 2000-2016. The major findings of this study are as follows: 1) We find a significant negative correlation between firm profitability and the number of days inventory (INV) and days accounts payable (AP); a firm's profitability can therefore be improved by reducing the number of days of inventory and days accounts payable. 2) We also find a significant positive correlation between firm profitability and the number of days accounts receivable (AR) and the cash ratio (CR); in other words, cash is associated with high corporate profitability. 3) The Tobin's Q analysis showed that only the number of days accounts receivable (AR) and the cash ratio (CR) had a significant relationship with firm value. In conclusion, companies can increase profitability by reducing INV and increasing AP, but INV and AP did not affect corporate value. In particular, it is necessary to increase CA and decrease AR in order to increase the firm's profitability and value.
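For readers less familiar with the working-capital measures referenced above, the sketch below computes them from standard financial-statement items; the file and column names are assumptions.

```python
# Hedged sketch of the working-capital measures (INV, AR, AP, CR) and the cash
# conversion cycle, computed from annual statement items (column names assumed).
import numpy as np
import pandas as pd

fin = pd.read_csv("listed_firms_2000_2016.csv")

fin["days_inventory"] = 365 * fin["inventory"] / fin["cogs"]              # INV
fin["days_receivable"] = 365 * fin["accounts_receivable"] / fin["sales"]  # AR
fin["days_payable"] = 365 * fin["accounts_payable"] / fin["cogs"]         # AP
fin["cash_ratio"] = fin["cash"] / fin["current_liabilities"]              # CR
fin["ccc"] = fin["days_inventory"] + fin["days_receivable"] - fin["days_payable"]

# Control variables used in the regressions
fin["log_size"] = np.log(fin["total_assets"])
fin["sales_growth"] = fin.groupby("firm")["sales"].pct_change()

print(fin[["days_inventory", "days_receivable", "days_payable", "cash_ratio", "ccc"]].describe())
```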

Keywords: working capital, working capital management, firm value, profitability

Procedia PDF Downloads 192