Search results for: single well model experiments of vacuum preloading technology
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 28243

26653 Model of MSD Risk Assessment at Workplace

Authors: K. Sekulová, M. Šimon

Abstract:

This article presents a model for assessing the risk of upper-extremity musculoskeletal disorders (MSDs) at the workplace. The model uses risk factors that are responsible for damage to the musculoskeletal system. Based on statistical calculations, the model can determine the level of MSD risk faced by workers exposed to these factors. It can also estimate how much the MSD risk would decrease if these risk factors were eliminated.

Keywords: ergonomics, musculoskeletal disorders, occupational diseases, risk factors

Procedia PDF Downloads 547
26652 Identification of Classes of Bilinear Time Series Models

Authors: Anthony Usoro

Abstract:

In this paper, two classes of bilinear time series model are obtained under certain conditions from the general bilinear autoregressive moving average model: the Bilinear Autoregressive (BAR) and Bilinear Moving Average (BMA) models. From the general bilinear model, the BAR model is shown to exist for q = Q = 0 (which implies j = 0), and the BMA model for p = P = 0 (which implies i = 0). These models are found useful in modelling most economic and financial data.
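As an illustration of the kind of model identified here, the following minimal Python sketch simulates a low-order bilinear autoregressive series of the form X_t = a·X_{t-1} + b·X_{t-1}·e_{t-1} + e_t; the coefficients and this specific low-order form are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_bar(n, a=0.4, b=0.3, sigma=1.0):
    """Simulate a low-order bilinear autoregressive (BAR) series:
    X_t = a*X_{t-1} + b*X_{t-1}*e_{t-1} + e_t  (illustrative coefficients)."""
    e = rng.normal(0.0, sigma, n)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = a * x[t - 1] + b * x[t - 1] * e[t - 1] + e[t]
    return x

series = simulate_bar(500)
print(series[:5])
```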

Keywords: autoregressive model, bilinear autoregressive model, bilinear moving average model, moving average model

Procedia PDF Downloads 402
26651 A Risk-Based Modeling Approach for Successful Adoption of CAATTs in Audits: An Exploratory Study Applied to Israeli Accountancy Firms

Authors: Alon Cohen, Jeffrey Kantor, Shalom Levy

Abstract:

Technology adoption models are extensively used in the literature to explore drivers and inhibitors affecting the adoption of Computer Assisted Audit Techniques and Tools (CAATTs). Further studies from recent years suggested additional factors that may affect technology adoption by CPA firms. However, the adoption of CAATTs by financial auditors differs from the adoption of technologies in other industries. This is a result of the unique characteristics of the auditing process, which are expressed in the audit risk elements and the risk-based auditing approach, as encoded in the auditing standards. Since these audit risk factors are not part of the existing models that are used to explain technology adoption, these models do not fully correspond to the specific needs and requirements of the auditing domain. The overarching objective of this qualitative research is to fill the gap in the literature that exists as a result of using generic technology adoption models. Following a pretest and based on semi-structured in-depth interviews with 16 Israeli CPA firms of different sizes, this study aims to reveal determinants related to audit risk factors that influence the adoption of CAATTs in audits and proposes a new modeling approach for the successful adoption of CAATTs. The findings emphasize several important aspects: (1) while large CPA firms have developed their own internal guidelines to assess the audit risk components, other CPA firms do not follow a formal and validated methodology to evaluate these risks; (2) large firms incorporate a variety of CAATTs, including self-developed advanced tools, whereas small and mid-sized CPA firms incorporate standard CAATTs and still need to catch up to better understand what CAATTs can offer and how they can contribute to the quality of the audit; (3) the top management of mid-sized and small CPA firms should be more proactive and better informed about CAATTs capabilities and contributions to audits; and (4) all CPA firms consider professionalism a major challenge that must be constantly managed to ensure optimal CAATTs operation. The study extends the existing knowledge of CAATTs adoption by looking at it from a risk-based auditing approach. It suggests a new model for CAATTs adoption by incorporating influencing audit risk factors that auditors should examine when considering CAATTs adoption. Since the model can be used in various audit scenarios and supports strategic, risk-based decisions, it maximizes the potential of CAATTs to improve the quality of audits. The results and insights can be useful to CPA firms, internal auditors, CAATTs developers and regulators. Moreover, they may motivate audit standard-setters to issue updated guidelines regarding CAATTs adoption in audits.

Keywords: audit risk, CAATTs, financial auditing, information technology, technology adoption models

Procedia PDF Downloads 62
26650 PDMS-Free Microfluidic Chips Fabrication and Utilisation for Pulsed Electric Fields Applications

Authors: Arunas Stirke, Neringa Bakute, Gatis Mozolevskis

Abstract:

Microfluidics is an emerging technology in the fields of biology, medicine and chemistry. A microfluidic device is also known as a ‘lab-on-a-chip’ [1]. In moving from macro- to microscale, there is unprecedented control over spatial and temporal gradients and patterns that cannot be captured in conventional Petri dishes and well plates [2]. However, there is no single standard microfluidic chip suited to all purposes – each field of study needs a specific microchip with certain geometries, inlets/outlets, channel depth and other parameters to precisely regulate the required function. Since our group studies the effect of pulsed electric fields (PEF) on cells, we have manufactured a microfluidic chip designed for high-throughput electroporation of cells. In our microchip, a cell culture chamber is divided into two parallel channels by a membrane, while electrodes for electroporation are attached to the walls of the channels. Both microchannels have their own inlet and outlet, enabling separate injection of transfection material. Our perspective is to perform electroporation of mammalian cells in two different ways: (1) with plasmid and cells injected into the same microchannel and (2) with them injected into separate microchannels. Moreover, oxygen and pH sensors are integrated in order to analyse cell viability parameters after PEF treatment.

Keywords: microfluidics, chip, fabrication, electroporation

Procedia PDF Downloads 79
26649 Analysis of Fish Preservation Methods for Traditional Fishermen Boat

Authors: Kusno Kamil, Andi Asni, Sungkono

Abstract:

According to a report of the Food and Agriculture Organization (FAO), post-harvest fish losses in Indonesia reach 30 percent of the 170 trillion rupiahs of marine fisheries reserves, so the potential loss reaches 51 trillion rupiahs (end-of-2016 data). This situation arises because traditionally caught fish are vulnerable to damage when the cold chain of preservation is disrupted. Physical and chemical changes in fish flesh progress rapidly, especially when the fish are exposed to the scorching heat in the middle of the sea, and this is exacerbated by low awareness of catch hygiene; many unclean catches that contain blood are treated without special attention and mixed with freshly caught fish, thereby accelerating fish spoilage. This background motivates research on preservation methods for the catches of traditional fishermen, aiming to find the best and most affordable method, or combination of methods, so that fishermen can extend their fishing duration without worrying that their catch will be damaged and lose economic value by the time they return to shore to sell it. This goal is pursued through experimental treatments of fresh fish catches in containers with the addition of anti-bacterial copper, a liquid smoke solution, and the use of vacuum containers. Three further treatments combined these three treatment variables with an electrically powered cooler (temperature 0~4 ᵒC). As control specimens, untreated fresh fish (placed in the open air and in the refrigerator) were also prepared for comparison over 1, 3, and 6 days. The level of freshness of the fish for each treatment was assessed by physical observation, complemented by tests of bacterial content in a trusted laboratory. The copper (Cu) content of the fish meat (suspected of having a negative impact on consumers) was also examined on the 6th day of experimentation. The results of physical observation of the test specimens (organoleptic method) showed that preservation assisted by the cooler remained better for all treatment variables. Among the specimens without cooling, the best preservation effectiveness was achieved, in order, by the addition of copper plates, the use of vacuum containers, and then liquid smoke immersion. In the case of liquid smoke, soaking for 6 days of preservation made the fish meat soft and easy to crumble, even though it did not have a bad odor. The visual observations were then complemented by measurements of the growth (or retardation) of putrefactive bacteria for each treatment within the same observation periods. Laboratory measurements showed that the lowest count of putrefactive bacteria was achieved by the treatment combining the cooler with liquid smoke (sample A+), followed by the cooler only (D+), the copper layer inside the cooler (B+), and the vacuum container inside the cooler (C+), respectively. The other treatments in open air produced a hundred times more putrefactive bacteria. In addition, the copper-layer treatment contaminated the preserved fresh fish with copper at more than a thousand times the initial amount, from 0.69 to 1241.68 µg/g.

Keywords: fish, preservation, traditional, fishermen, boat

Procedia PDF Downloads 67
26648 A Self-Coexistence Strategy for Spectrum Allocation Using Selfish and Unselfish Game Models in Cognitive Radio Networks

Authors: Noel Jeygar Robert, V. K. Vidya

Abstract:

Cognitive radio is a software-defined radio technology that allows cognitive users to operate in the vacant bands of spectrum allocated to licensed users. Cognitive radio plays a vital role in the efficient utilization of the wireless radio spectrum shared between cognitive users and licensed users without causing any interference to licensed users. Spectrum allocation followed by spectrum sharing is done in such a way that a cognitive user has to wait until spectrum holes are identified and allocated when the licensed user moves out of its allocated spectrum. In this paper, we propose a self-coexistence strategy using bargaining and Cournot game models for achieving spectrum allocation in cognitive radio networks. The game-theoretic model analyses the behaviour of cognitive users in both cooperative and non-cooperative scenarios and provides an equilibrium level of spectrum allocation. Game-theoretic models such as the bargaining game model and the Cournot game model produce a balanced distribution of spectrum resources and energy consumption. Simulation results show that both game models achieve better performance compared to other popular techniques.
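As a point of reference for the Cournot component, the following Python sketch computes the equilibrium of a textbook two-player Cournot game via best-response dynamics; the linear payoff structure and parameter values are illustrative assumptions and are not taken from the paper.

```python
# Textbook Cournot-duopoly sketch (hypothetical payoff structure, not the
# paper's formulation): two cognitive users choose how much spectrum to
# occupy; the marginal benefit of occupancy falls linearly with total demand.
def cournot_equilibrium(a=10.0, b=1.0, c=2.0, iterations=50):
    q1 = q2 = 0.0
    for _ in range(iterations):
        # Best-response dynamics: each user reacts to the other's last choice.
        q1 = max(0.0, (a - c - b * q2) / (2 * b))
        q2 = max(0.0, (a - c - b * q1) / (2 * b))
    return q1, q2

# Converges to the Nash equilibrium q1 = q2 = (a - c) / (3 * b).
print(cournot_equilibrium())
```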

Keywords: cognitive radio, game theory, bargaining game, Cournot game

Procedia PDF Downloads 290
26647 Fair Federated Learning in Wireless Communications

Authors: Shayan Mohajer Hamidi

Abstract:

Federated Learning (FL) has emerged as a promising paradigm for training machine learning models on distributed data without the need for centralized data aggregation. In the realm of wireless communications, FL has the potential to leverage the vast amounts of data generated by wireless devices to improve model performance and enable intelligent applications. However, the fairness aspect of FL in wireless communications remains largely unexplored. This abstract presents an idea for fair federated learning in wireless communications, addressing the challenges of imbalanced data distribution, privacy preservation, and resource allocation. Firstly, the proposed approach aims to tackle the issue of imbalanced data distribution in wireless networks. In typical FL scenarios, the distribution of data across wireless devices can be highly skewed, resulting in unfair model updates. To address this, we propose a weighted aggregation strategy that assigns higher importance to devices with fewer samples during the aggregation process. By incorporating fairness-aware weighting mechanisms, the proposed approach ensures that each participating device's contribution is proportional to its data distribution, thereby mitigating the impact of data imbalance on model performance. Secondly, privacy preservation is a critical concern in federated learning, especially in wireless communications where sensitive user data is involved. The proposed approach incorporates privacy-enhancing techniques, such as differential privacy, to protect user privacy during the model training process. By adding carefully calibrated noise to the gradient updates, the proposed approach ensures that the privacy of individual devices is preserved without compromising the overall model accuracy. Moreover, the approach considers the heterogeneity of devices in terms of computational capabilities and energy constraints, allowing devices to adaptively adjust the level of privacy preservation to strike a balance between privacy and utility. Thirdly, efficient resource allocation is crucial for federated learning in wireless communications, as devices operate under limited bandwidth, energy, and computational resources. The proposed approach leverages optimization techniques to allocate resources effectively among the participating devices, considering factors such as data quality, network conditions, and device capabilities. By intelligently distributing the computational load, communication bandwidth, and energy consumption, the proposed approach minimizes resource wastage and ensures a fair and efficient FL process in wireless networks. To evaluate the performance of the proposed fair federated learning approach, extensive simulations and experiments will be conducted. The experiments will involve a diverse set of wireless devices, ranging from smartphones to Internet of Things (IoT) devices, operating in various scenarios with different data distributions and network conditions. The evaluation metrics will include model accuracy, fairness measures, privacy preservation, and resource utilization. The expected outcomes of this research include improved model performance, fair allocation of resources, enhanced privacy preservation, and a better understanding of the challenges and solutions for fair federated learning in wireless communications. The proposed approach has the potential to revolutionize wireless communication systems by enabling intelligent applications while addressing fairness concerns and preserving user privacy.
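A minimal Python sketch of the fairness-aware weighted aggregation idea described above follows; the exact weighting rule (here, weights proportional to a sub-linear power of each client's sample count) is an assumption for illustration, not the specification in the abstract.

```python
import numpy as np

def fair_aggregate(client_updates, client_sizes, alpha=0.5):
    """Aggregate client model updates with fairness-aware weights.

    alpha = 1.0 reproduces plain sample-size weighting (FedAvg-style);
    alpha < 1.0 boosts the relative influence of clients with fewer samples.
    The specific rule is illustrative, not taken from the abstract.
    """
    sizes = np.asarray(client_sizes, dtype=float)
    weights = sizes ** alpha
    weights /= weights.sum()
    stacked = np.stack(client_updates)            # shape: (n_clients, n_params)
    return (weights[:, None] * stacked).sum(axis=0)

updates = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
print(fair_aggregate(updates, client_sizes=[1000, 100, 10]))
```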

Keywords: federated learning, wireless communications, fairness, imbalanced data, privacy preservation, resource allocation, differential privacy, optimization

Procedia PDF Downloads 73
26646 Knowledge Development: How New Information System Technologies Affect Knowledge Development

Authors: Yener Ekiz

Abstract:

Knowledge development is a proactive process that covers the collection, analysis, storage and distribution of information and helps to build an understanding of the environment. To transfer knowledge correctly and quickly, new and emerging information system technologies must be used. Actionable knowledge is only of value if it is understandable and usable by target users. The purpose of the paper is to show how technology eases and affects the process of knowledge development. In preparing the paper, literature review, survey and interview methodologies will be used. The hypothesis is that technology and knowledge development are inseparable and that technology will formalize the DIKW hierarchy anew. As a result, there is a huge amount of data today, and this data must be classified precisely and quickly.

Keywords: DIKW hierarchy, knowledge development, technology

Procedia PDF Downloads 435
26645 Broad Survey of Fine Root Traits to Investigate the Root Economic Spectrum Hypothesis and Plant-Fire Dynamics Worldwide

Authors: Jacob Lewis Watts, Adam F. A. Pellegrini

Abstract:

Prairies, grasslands, and forests cover an expansive portion of the world’s surface and contribute significantly to Earth’s carbon cycle. The largest driver of carbon dynamics in some of these ecosystems is fire. As the global climate changes, most fire-dominated ecosystems will experience increased fire frequency and intensity, leading to increased carbon flux into the atmosphere and soil nutrient depletion. The plant communities associated with different fire regimes are important for the reassimilation of carbon lost during fire and for soil recovery. More frequent fires promote conservative plant functional traits aboveground; however, belowground fine-root traits are poorly explored and arguably more important drivers of ecosystem function as the primary interface between the soil and plant. The root economic spectrum (RES) hypothesis describes single-dimensional covariation between important fine-root traits along a range of plant strategies from acquisitive to conservative – parallel to the well-established leaf economic spectrum (LES). However, because of the paucity of root trait data, the complex nature of the rhizosphere, and the phylogenetic conservatism of root traits, it is unknown whether the RES hypothesis accurately describes plant nutrient and water acquisition strategies. This project utilizes plants grown in common garden conditions in the Cambridge University Botanic Garden and a meta-analysis of long-term fire manipulation experiments to examine the belowground physiological traits of fire-adapted and non-fire-adapted herbaceous species to 1) test the RES hypothesis and 2) describe the effect of fire regimes on fine-root functional traits – which in turn affect carbon and nutrient cycling. A suite of morphological, chemical, and biological root traits (e.g. root diameter, specific root length, percent N, percent mycorrhizal colonization, etc.) of 50 herbaceous species were measured and tested for phylogenetic conservatism and RES dimensionality. Traits of fire-adapted and non-fire-adapted plants were compared using phylogenetic PCA techniques. Preliminary evidence suggests that phylogenetic conservatism may weaken the single-dimensionality of the RES, suggesting that there may not be a single way that plants optimize nutrient and water acquisition and storage in the complex rhizosphere; additionally, fire-adapted species are expected to be more conservative than non-fire-adapted species, which may be indicative of slower carbon cycling with increasing fire frequency and intensity.

Keywords: climate change, fire regimes, root economic spectrum, fine roots

Procedia PDF Downloads 116
26644 The New Media and Their Economic and Socio-Political Imperatives for Africa: A Study of Nigeria

Authors: Chukwukelue Uzodinma Umenyilorah

Abstract:

The advent of the New Media, as enabled by information and communication technology from the 19th through the 21st century, has no doubt affected all fronts of human existence, especially in Africa. Apart from shortening the distance between all parts of the world, technology and the new media have also succeeded in making the world a global village. Hence, it is now easy to relay live audio and visual signals across the length and breadth of the world in real time. People now contract and execute business across countries, conferences are held and ideas are shared with a simple push of a button. Likewise, political leaders and diplomats are now just a click away from reaching those important decisions that take their country’s fortunes to the next level. On the flip side, ICT and the New Media have also contributed in no small measure to aiding global terrorism and general insecurity around the world. More interesting is the fact that, as developing economies, African countries have massively embraced information technology, and this has helped them keep up with trends in the polity of other model democracies around the world. This paper is therefore designed to determine how much effect ICT and the New Media have exerted on the economic, social and political lives of Africans. Nigeria shall be used as a case in point for the purpose of this paper.

Keywords: Africa, ICT, new media, Nigeria

Procedia PDF Downloads 251
26643 Prevalence and Characteristics of Torus Palatinus among Western Indonesian Population

Authors: Raka Aldy Nugraha, Kiwah Andanni, Aditya Indra Pratama, Aswin Guntara

Abstract:

Background: Torus palatinus is a bony protuberance in the hard palate. Sex and race are considered influencing factors for the development of torus palatinus. Hence, the objective of this study was to determine the prevalence and characteristics of torus palatinus and its correlation with sex and ethnicity among the Western Indonesian population. Methods: We conducted a descriptive and analytical study employing a cross-sectional design in 274 new students of Universitas Indonesia. Data were collected by consecutive sampling through questionnaire-filling and direct oral examination. Subjects with a racial background other than indigenous Indonesian Mongoloid were excluded from this study. Data were statistically analyzed using the chi-square test for categorical variables, whereas a logistic regression model was employed to assess the correlation between the variables of interest and the prevalence of torus palatinus. Results: Torus palatinus was found in 212 subjects (77.4%), mostly small in size (< 3 mm) and single in number, with percentages of 50.5% and 90.6%, respectively. The prevalence of torus palatinus was significantly higher in women (OR 2.88; 95% CI: 1.53-5.39; p = 0.001), dominated by medium-sized and single tori. There was no significant correlation between ethnicity and the occurrence of torus palatinus among the Western Indonesian population. Conclusion: Torus palatinus was prevalent among the Western Indonesian population. It showed a significant positive correlation with sex, but not with ethnicity.

Keywords: characteristic, ethnicity, Indonesia, mongoloid, prevalence, sex, Torus palatinus

Procedia PDF Downloads 265
26642 Shape Management Method for Safety Evaluation of Bridge Based on Terrestrial Laser Scanning Using Least Squares

Authors: Gichun Cha, Dongwan Lee, Junkyeong Kim, Aoqi Zhang, Seunghee Park

Abstract:

Around the world, the construction technology of double-deck tunnels is being studied in order to respond to increasing urban traffic demands and environmental changes. Advanced countries already possess double-deck tunnel construction technology, but domestic research on it has only recently begun. Construction technologies are important, but safety evaluation of the structure is also necessary to prevent possible accidents during construction. Thus, the double-deck tunnel requires shape management of its middle slabs. The domestic country is preparing the construction of a double-deck tunnel for an alternate route and a pleasant urban environment. Shape management of double-deck tunnels has not yet been researched because it is a newly attempted technology. At present, the most similar case is the shape management of bridge structures, where the shape model is implemented using terrestrial laser scanning (TLS). Therefore, we conduct research on bridge slabs because they are structurally similar to the middle slabs of a double-deck tunnel. In this study, we develop a shape management method for bridge slabs using TLS. As the test-bed measurement site, we selected a bridge located on the Sungkyunkwan University Natural Sciences Campus. This bridge has a total length of 34 m and a vertical height of 8.7 m from the ground, and it connects Engineering Building #1 and Engineering Building #2. Point cloud data for shape management were acquired with the TLS, using the Leica ScanStation C10/C5. The maximum displacement area of the middle slabs is identified using least-squares fitting. We expect to improve the stability of double-deck tunnels through shape management of the middle slabs.
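A minimal Python sketch of the least-squares step is shown below: a reference plane is fitted to slab points from a TLS point cloud and the largest vertical deviation is reported. The workflow and data layout are assumptions for illustration, not the authors' processing chain.

```python
import numpy as np

def max_displacement(points):
    """Fit a plane z = a*x + b*y + c to TLS points by least squares and
    return the fit coefficients, the largest residual, and its location."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    A = np.column_stack([x, y, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)   # [a, b, c]
    residuals = z - A @ coeffs
    idx = np.argmax(np.abs(residuals))
    return coeffs, residuals[idx], points[idx]

# Synthetic slab: a gently tilted plane with millimetre-level noise.
rng = np.random.default_rng(1)
pts = rng.uniform(0, 10, size=(1000, 3))
pts[:, 2] = 0.01 * pts[:, 0] + rng.normal(0, 0.002, 1000)
print(max_displacement(pts)[1])
```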

Keywords: bridge slabs, least squares, safety evaluation, shape management method, terrestrial laser scanning

Procedia PDF Downloads 238
26641 Exploring the Applications of Neural Networks in the Adaptive Learning Environment

Authors: Baladitya Swaika, Rahul Khatry

Abstract:

Computer Adaptive Tests (CATs) are one of the most efficient ways of testing the cognitive abilities of students. CATs are based on Item Response Theory (IRT), in which item selection and ability estimation are carried out with statistical methods: maximum-information selection (or selection from the posterior) for items, and maximum-likelihood (ML) or maximum a posteriori (MAP) estimators for ability, respectively. This study combines both classical and Bayesian approaches to IRT to create a dataset that is then fed to a neural network, which automates the process of ability estimation; the result is compared to traditional CAT models designed using IRT. This study uses Python as the base coding language, pymc for statistical modelling of the IRT and scikit-learn for the neural network implementations. On building the models and comparing them, it is found that the neural-network-based model performs 7-10% worse than the IRT model for score estimation. Although it performs worse than the IRT model, the neural network model can be beneficially used in back-ends to reduce time complexity: the IRT model has to re-calculate the ability every time it receives a request, whereas the prediction from an already trained regressor can be obtained in a single step. This study also proposes a new kind of framework whereby the neural network model could incorporate feature sets beyond the normal IRT feature set and use a neural network’s capacity to learn unknown functions to give rise to better CAT models. Categorical features such as test type could be learnt and incorporated into IRT functions with the help of techniques like logistic regression, allowing the learning of functions and models that may not be trivial to express via equations. Such a framework, when implemented, would be highly advantageous in psychometrics and cognitive assessments. This study gives a brief overview of how neural networks can be used in adaptive testing, not only by reducing time complexity but also by being able to incorporate newer and better datasets, which would eventually lead to higher-quality testing.
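A rough Python sketch of the idea is given below: synthetic two-parameter-logistic (2PL) responses are generated for known abilities and a scikit-learn regressor is trained to map response patterns directly to ability estimates. The data layout, model sizes and the 2PL choice are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_persons, n_items = 2000, 30
theta = rng.normal(0, 1, n_persons)                # true abilities
a = rng.uniform(0.5, 2.0, n_items)                 # item discriminations
b = rng.normal(0, 1, n_items)                      # item difficulties
p = 1 / (1 + np.exp(-a * (theta[:, None] - b)))    # 2PL response probabilities
responses = (rng.uniform(size=p.shape) < p).astype(float)

# Train a regressor to predict ability directly from the response pattern.
X_tr, X_te, y_tr, y_te = train_test_split(responses, theta, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
net.fit(X_tr, y_tr)
print("ability RMSE:", np.sqrt(np.mean((net.predict(X_te) - y_te) ** 2)))
```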

Keywords: computer adaptive tests, item response theory, machine learning, neural networks

Procedia PDF Downloads 172
26640 Performance of an Optical Readout Gas Chamber for Charged Particle Track

Authors: Jing Hu, Xiaoping Ouyang

Abstract:

We develop an optical readout gas chamber, based on avalanche-induced scintillation, for tracking energetic charged particles. The gas chamber is equipped with a Single Anode Wires (SAW) structure to produce an intense electric field when the measured particles are of low yield or even single. In the presence of an intense electric field around the single anode, primary electrons, produced as the incident charged particles deposit energy along the track, are accelerated to the anode effectively and rapidly. For scintillation gases, this avalanche of electrons induces many more photons than the primary scintillation excited directly by the particle energy loss. The electric field distribution for different shapes of the SAW structure is analyzed, and finally an optimal one is used to study the optical readout performance. Using CF4 gas and its mixture with a noble gas, the results indicate that the optical readout characteristics of the chamber are attractive for imaging. Moreover, images of particle tracks, including single-particle tracks from 5.485 MeV alpha particles, are successfully acquired. The track resolution is quite good because the electrons undergo less diffusion in the intense electric field. With its simple and ingenious design, the optical readout gas chamber has a high sensitivity. Since neutrons can be converted to charged particles when scattering, this optical readout gas chamber can be applied to neutron measurement for dark matter research, fusion research, and other fields.

Keywords: optical readout, gas chamber, charged particle track, avalanche-induced scintillation, neutron measurement

Procedia PDF Downloads 265
26639 OmniDrive Model of a Holonomic Mobile Robot

Authors: Hussein Altartouri

Abstract:

In this paper, the kinematic and kinetic models of an omnidirectional holonomic mobile robot are presented. Together, the kinematic and kinetic models form the OmniDrive model. A mathematical model is therefore derived for a robot equipped with three omnidirectional wheels. This model, which takes into consideration both the kinematics and kinetics of the robot, is developed into a state-space representation. Relative analysis of the velocities and displacements is used for the kinematics of the robot. Lagrange’s approach is used in this study to derive the equation of motion. Only the drive train and the mechanical assembly of the Festo Robotino® are considered in this model. The model is developed mainly for motion control. Furthermore, the model can be used for simulation purposes in different virtual environments, not only Robotino® View. A further use of the model is in mechatronics research, with the aim of teaching and learning advanced control theories.
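As a small illustration of the kinematic part of such a model, the Python sketch below maps body velocities (vx, vy, omega) to the three wheel speeds of a generic three-omniwheel base. The wheel angles and base radius are assumed, generic values; they are not taken from the paper or from Robotino® documentation.

```python
import numpy as np

def wheel_speeds(vx, vy, omega, base_radius=0.135, wheel_angles_deg=(60, 180, 300)):
    """Inverse kinematics of a three-omniwheel holonomic base: each wheel's
    tangential speed is the body velocity projected onto the wheel's drive
    direction plus the contribution of the body rotation."""
    angles = np.deg2rad(wheel_angles_deg)
    J = np.column_stack([-np.sin(angles), np.cos(angles), np.full(3, base_radius)])
    return J @ np.array([vx, vy, omega])

# Pure sideways translation of the body: all three wheels must turn.
print(wheel_speeds(0.2, 0.0, 0.0))
```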

Keywords: mobile robot, omni-direction wheel, mathematical model, holonomic mobile robot

Procedia PDF Downloads 602
26638 An Investigation of the Integration of Synchronous Online Tools into Task-Based Language Teaching: The Example of SpeakApps

Authors: Nouf Aljohani

Abstract:

The research project described in this presentation focuses on designing and evaluating oral tasks, related to students’ needs and levels, to foster communication and negotiation of meaning for a group of female Saudi university students. The significance of the current research project lies in its contribution to determining the usefulness of synchronous technology-mediated interactive group discussion in improving different speaking strategies. It also explores how to optimize learning outcomes, expand evaluation of online learning tasks and engage students in evaluating synchronous interactive tools and tasks. The researcher used SpeakApps, a synchronous technology that allows the students to practice oral interaction outside the classroom. Such a course of action was considered necessary due to the low English proficiency among Saudi students. To the author's knowledge, the main factor behind poor speaking skills is that students do not have sufficient time to communicate outside English language classes. Further, speaking and listening course contents are not well designed to match the Saudi learning context. The methodology included designing speaking tasks to match the educational setting; a CALL framework for designing and evaluating tasks; participant involvement in evaluating these tasks in each online session; and an investigation of the factors that led to the successful implementation of Task-Based Language Teaching (TBLT) and the use of SpeakApps. The analysis and data were drawn from technology acceptance model surveys, a group interview, teachers’ and students’ weekly reflections, and discourse analysis of students’ interactions.

Keywords: CALL evaluation, synchronous technology, speaking skill, task-based language teaching

Procedia PDF Downloads 308
26637 Soft Robotic Exoskeletal Glove with Single Motor-Driven Tendon-Based Differential Drive

Authors: M. Naveed Akhter, Jawad Aslam, Omer Gillani

Abstract:

To aid and rehabilitate the increasing number of patients suffering from spinal cord injury (SCI) and stroke, a lightweight, wearable, and 3D-printable exoskeletal glove has been developed. Unlike previously developed metal or fabric-based exoskeletons, this research presents the development of a soft exoskeletal glove made of thermoplastic polyurethane (TPU). The drive mechanism consists of a single motor-driven antagonistic tendon that performs extension or flexion of the middle and index fingers. A tendon-based differential drive has been incorporated to allow grasping of irregularly shaped objects. The design features easy 3D-printability with TPU without the need for supports. The overall weight of the glove and the actuation unit is approximately 500 g. The performance of the glove was tested on a custom test bench with integrated load cells, and the grip strength was measured to be around 30 N per finger while grasping objects of irregular shape.

Keywords: 3D printable, differential drive, exoskeletal glove, rehabilitation, single motor driven

Procedia PDF Downloads 138
26636 Possible Sulfur Induced Superconductivity in Nano-Diamond

Authors: J. Mona, R. R. da Silva, C.-L. Cheng, Y. Kopelevich

Abstract:

We report on a possible occurrence of superconductivity in 5 nm particle size diamond powders treated with sulfur (S) at 500 °C for 10 hours in ~10⁻² Torr vacuum. Superconducting-like magnetization hysteresis loops M(H) have been measured up to ~50 K by means of a SQUID magnetometer (Quantum Design). Both X-ray (Θ-2Θ geometry) and Raman spectroscopy analyses revealed no impurity or additional phases. Nevertheless, the measured Raman spectra are characteristic of diamond with embedded disordered carbon and/or graphitic fragments, suggesting a link to previous reports of local or surface superconductivity in graphite– and amorphous carbon–sulfur composites.

Keywords: nanodiamond, sulfur, superconductivity, Raman spectroscopy

Procedia PDF Downloads 489
26635 Dynamics of a Reaction-Diffusion Problem Modeling Two Predators Competing for a Prey

Authors: Owolabi Kolade Matthew

Abstract:

In this work, we present both analytical and numerical studies of a dynamical model comprising a three-species system. We analyze the linear stability of stationary solutions in the one-dimensional multi-species system modeling the interactions of two predators and one prey species. The stability analysis has many implications for understanding the various spatiotemporal and chaotic behaviors of the species in the spatial domain. The analysis establishes the possibility that the three interacting species coexist harmoniously; this is achieved by combining the local and global analyses to determine the global dynamics of the system. In the presence of diffusion, a viable exponential time differencing method is applied to the multi-species nonlinear time-dependent partial differential equations to address the points and queries that may naturally arise. The scheme is described in detail and justified by a number of computational experiments.
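To illustrate the kind of scheme referred to, the Python sketch below applies a first-order exponential time differencing (ETD1) step to a single scalar reaction-diffusion equation on a periodic domain; this toy single-species example, its parameters and the Fourier-space treatment of diffusion are assumptions for illustration and are not the three-species scheme of the paper.

```python
import numpy as np

# ETD1 for u_t = diff*u_xx + u*(1 - u) on a periodic 1D domain.
N, L, diff, h = 128, 20.0, 0.1, 0.05       # grid points, domain length, diffusivity, time step
x = np.linspace(0, L, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
lin = -diff * k**2                          # linear (diffusion) operator in Fourier space
E = np.exp(h * lin)

# phi = (e^{Lh} - 1)/(Lh), with the limit value 1 at L = 0.
phi = np.ones_like(lin)
nz = lin != 0
phi[nz] = (E[nz] - 1.0) / (h * lin[nz])

u = 0.5 + 0.1 * np.cos(2 * np.pi * x / L)
for _ in range(200):
    nonlinear_hat = np.fft.fft(u * (1 - u))            # reaction term in Fourier space
    u = np.real(np.fft.ifft(E * np.fft.fft(u) + h * phi * nonlinear_hat))
print(u.min(), u.max())
```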

Keywords: asymptotically stable, coexistence, exponential time differencing method, global and local stability, predator-prey model, nonlinear, reaction-diffusion system

Procedia PDF Downloads 408
26634 Bitplanes Gray-Level Image Encryption Approach Using Arnold Transform

Authors: Ali Abdrhman M. Ukasha

Abstract:

Data security is needed in data transmission, storage, and communication. The single-step parallel contour extraction (SSPCE) method is used to create the edge map, as a key image, from the given gray-level/binary image. The XOR operation is then performed between the key image and each bit plane of the original image in order to change the image pixel values. The Arnold transform is used to change the locations of the image pixels as a scrambling process. Experiments have demonstrated that the proposed algorithm can fully encrypt a 2D gray-level image and completely reconstruct it without any distortion. It is also shown that the analyzed algorithm has very high security against attacks such as salt-and-pepper noise and JPEG compression. This proves that the gray-level image can be protected with a higher security level. The presented method is easy to implement in hardware and is suitable for multimedia protection in real-time applications such as wireless networks and mobile phone services.
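A simplified Python sketch of the two stages is shown below; the SSPCE edge-map key is replaced here by a random binary key image, and the image size and iteration count are illustrative assumptions.

```python
import numpy as np

def arnold_transform(img, iterations=1):
    """Scramble pixel positions of a square image with the Arnold cat map
    (x, y) -> ((x + y) mod n, (x + 2y) mod n)."""
    n = img.shape[0]
    out = img.copy()
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out

rng = np.random.default_rng(0)
image = rng.integers(0, 256, (64, 64), dtype=np.uint8)
key = rng.integers(0, 2, (64, 64), dtype=np.uint8)   # stand-in for the SSPCE edge-map key

# XORing the binary key with every bit plane is equivalent to XORing each
# pixel with 255 wherever the key is 1; pixel positions are then scrambled.
encrypted = arnold_transform(image ^ (key * 255), iterations=3)
print(encrypted.shape, encrypted.dtype)
```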

Keywords: SSPCE method, image compression, salt and pepper attacks, bitplane decomposition, Arnold transform, lossless image encryption

Procedia PDF Downloads 433
26633 Estimation of PM10 Concentration Using Ground Measurements and Landsat 8 OLI Satellite Image

Authors: Salah Abdul Hameed Saleh, Ghada Hasan

Abstract:

The aim of this work is to produce an empirical model for the determination of particulate matter (PM10) concentration in the atmosphere using the visible bands of a Landsat 8 OLI satellite image over Kirkuk city, Iraq. The suggested algorithm is based on the aerosol optical reflectance model. The reflectance model is a function of the optical properties of the atmosphere, which can be related to particulate concentrations. PM10 concentration measurements were collected using a Particle Mass Profiler and Counter in a Single Handheld Unit (Aerocet 531) meter simultaneously with the Landsat 8 OLI satellite image acquisition date. The PM10 measurement locations were recorded with a handheld global positioning system (GPS). The reflectance values obtained for the visible bands (coastal aerosol, blue, green and red bands) of the Landsat 8 OLI image were correlated with the in-situ measured PM10. The feasibility of the proposed algorithms was investigated based on the correlation coefficient (R) and root-mean-square error (RMSE) compared with the PM10 ground measurement data. The proposed multispectral model was chosen based on the highest correlation coefficient (R) and the lowest root-mean-square error (RMSE) with respect to the PM10 ground data. The outcomes of this research showed that the visible bands of Landsat 8 OLI are capable of estimating PM10 concentration with an acceptable level of accuracy.
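A generic Python sketch of the empirical-model step follows: ground PM10 is regressed on the reflectance of the four visible OLI bands and R and RMSE are reported. The data are synthetic and the linear form is an illustrative assumption, not the model actually selected in this work.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
# Synthetic reflectance for four visible bands: coastal aerosol, blue, green, red.
reflectance = rng.uniform(0.05, 0.30, size=(40, 4))
pm10 = 80 * reflectance[:, 1] + 120 * reflectance[:, 3] + rng.normal(0, 3, 40)

model = LinearRegression().fit(reflectance, pm10)
pred = model.predict(reflectance)
r = np.corrcoef(pm10, pred)[0, 1]
rmse = np.sqrt(mean_squared_error(pm10, pred))
print(f"R = {r:.3f}, RMSE = {rmse:.2f} ug/m3")
```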

Keywords: air pollution, PM10 concentration, Landsat 8 OLI image, reflectance, multispectral algorithms, Kirkuk area

Procedia PDF Downloads 438
26632 Multi-Level Security Measures in Cloud Computing

Authors: Shobha G. Ranjan

Abstract:

Cloud computing is an emerging, on-demand and internet-based technology. A variety of services, such as software, hardware, data storage and infrastructure, can be shared through cloud computing. This technology is highly reliable, cost effective and scalable in nature. It is essential that only authorized users access these services. Further, the time granted to access these services should be taken into account for proper accounting purposes. Currently, many organizations implement security measures in many different ways to provide the best cloud infrastructure to their clients, but these measures alone are not sufficient. This paper presents a multi-level security measures technique that is in accordance with the OSI model. Details of the proposed multi-level security measures technique are presented along with its architecture, activities, algorithms and the probability of success in breaking authentication.

Keywords: cloud computing, cloud security, integrity, multi-tenancy, security

Procedia PDF Downloads 496
26631 Examining the Design of a Scaled Audio Tactile Model for Enhancing Interpretation of Visually Impaired Visitors in Heritage Sites

Authors: A. Kavita Murugkar, B. Anurag Kashyap

Abstract:

With the Rights of Persons with Disabilities Act (RPWD Act) 2016, the Indian government has made it mandatory for all establishments, including heritage sites, to be accessible to people with disabilities. However, recent access audit surveys done under the Accessible India Campaign by the Ministry of Culture indicate that there are very few accessibility measures provided in heritage sites for people with disabilities. Though there are some measures for the mobility impaired, the surveys brought out that there are almost no provisions for people with vision impairment (PwVI), thus depriving them of the reasonable physical and intellectual access that facilitates an enjoyable experience and an enriching interpretation of a heritage site. There is a growing need to develop multisensory interpretative tools that can help PwVI perceive heritage sites in the absence of vision. The purpose of this research was to examine the usability of an audio-tactile model as a haptic and sound-based strategy for augmenting the perception and experience of PwVI in a heritage site. The first phase of the project was a multi-stage phenomenological experimental study with visually impaired users to investigate the design parameters for developing an audio-tactile model for PwVI. The findings from this phase included user preferences related to the physical design of the model, such as the size, scale, materials and details, and the information that it will carry, such as braille, audio output and tactile text. This was followed by the second phase, in which a working prototype of an audio-tactile model was designed and developed for a heritage site based on the findings from the first phase of the study. A nationally listed heritage site from the author’s city was selected for making the model. Lastly, the model was tested by visually impaired users for final refinements and validation. The prototype developed empowers people with vision impairment to navigate independently in heritage sites. Such a model, if installed in every heritage site, can serve as a technological guide for a person with vision impairment, giving information on the architecture, details, planning and scale of the buildings, the entrances, and the location of important features, lifts, staircases, and available accessible facilities. The model was constructed using 3D modeling and digital printing technology. Though designed for the Indian context, this assistive technology for the blind can be explored for wider applications across the globe. Such an accessible solution can change the otherwise “incomplete” perception of the disabled visitor, in this case a visually impaired visitor, and augment the quality of their experience in heritage sites.

Keywords: accessibility, architectural perception, audio tactile model, inclusive heritage, multi-sensory perception, visual impairment, visitor experience

Procedia PDF Downloads 104
26630 Optimization Principles of Eddy Current Separator for Mixtures with Different Particle Sizes

Authors: Cao Bin, Yuan Yi, Wang Qiang, Amor Abdelkader, Ali Reza Kamali, Diogo Montalvão

Abstract:

The study of the electrodynamic behavior of non-ferrous particles in time-varying magnetic fields is a promising area of research with wide applications, including the recycling of non-ferrous metals, mechanical transmission, and space debris. The key technology for recovering non-ferrous metals is eddy current separation (ECS), which utilizes the eddy current force and torque to separate non-ferrous metals. ECS has several advantages, such as low energy consumption, large processing capacity, and no secondary pollution, making it suitable for processing various mixtures like electronic scrap, auto shredder residue, aluminum scrap, and incineration bottom ash. Improving the separation efficiency of mixtures with different particle sizes in ECS can create significant social and economic benefits. Our previous study investigated the influence of particle size on separation efficiency by combining numerical simulations and separation experiments. Pearson correlation analysis found a strong correlation between the eddy current force in simulations and the repulsion distance in experiments, which confirmed the effectiveness of our simulation model. The interaction effects between particle size and material type, rotational speed, and magnetic pole arrangement were examined, offering valuable insights for the design and optimization of eddy current separators. The underlying mechanism behind the effect of particle size on separation efficiency was uncovered by analyzing the eddy current and field gradient. The results showed that the magnitude and distribution heterogeneity of the eddy current and magnetic field gradient increase with particle size in eddy current separation. Based on this, we further found that increasing the curvature of the magnetic field lines within particles could also increase the eddy current force, providing an optimized method for improving the separation efficiency of fine particles. By combining the results of these studies, a more systematic and comprehensive set of optimization guidelines can be proposed for mixtures with different particle size ranges. The separation efficiency of fine particles can be improved by increasing the rotational speed, the curvature of the magnetic field lines, and the electrical conductivity/density of the materials, as well as by utilizing the eddy current torque. When designing an ECS, the particle size range of the target mixture should be investigated in advance, and suitable parameters for separating the mixture can be fixed accordingly. In summary, these results can guide the design and optimization of ECS and also expand its application areas.

Keywords: eddy current separation, particle size, numerical simulation, metal recovery

Procedia PDF Downloads 84
26629 Competitive Advantages of a Firm without Fundamental Technology: A Case Study of Sony, Casio and Nintendo

Authors: Kiyohiro Yamazaki

Abstract:

The purpose of this study is to examine how a firm without fundamental technology is able to gain a competitive advantage. This paper examines three case studies: Sony in the flat-display TV industry, Casio in the digital camera industry, and Nintendo in the home game machine industry. This paper maintains that firms without fundamental technology construct two advantages: an economic advantage and an organizational advantage. The economic advantage is that the firm can select either high-tech or cheap devices from several device makers and change between alternatives cheaply and quickly. The organizational advantage is that a firm without fundamental technology is not restricted by organizational inertia and cognitive constraints, and can exercise this characteristic as a strength.

Keywords: firm without fundamental technology, economic advantage, organizational advantage, Sony, Casio, Nintendo

Procedia PDF Downloads 284
26628 A Constitutive Model for Time-Dependent Behavior of Clay

Authors: T. N. Mac, B. Shahbodaghkhan, N. Khalili

Abstract:

A new elastic-viscoplastic (EVP) constitutive model is proposed for the analysis of the time-dependent behavior of clay. The proposed model is based on bounding surface plasticity and the concept of the viscoplastic consistency framework to establish a continuous transition from plasticity to rate-dependent viscoplasticity. Unlike overstress-based models, this model meets the consistency condition in formulating the constitutive equation of the EVP model. The procedure for deriving the constitutive relationship is also presented. Simulation results and comparisons with experimental data are then presented to demonstrate the performance of the model.

Keywords: bounding surface, consistency theory, constitutive model, viscosity

Procedia PDF Downloads 488
26627 Nickel Oxide-Nitrogen-Doped Carbon (Ni/NiOx/NC) Derived from Pyrolysis of 2-Aminoterephthalic Acid for Electrocatalytic Oxidation of Ammonia

Authors: Yu-Jen Shih, Juan-Zhang Lou

Abstract:

Nitrogenous compounds, such as NH4+/NH3 and NO3-, have become important contaminants in water resources. An excessive concentration of NH3 leads to eutrophication, which poses a threat to aquatic organisms in the environment. Electrochemical oxidation has emerged as a promising water treatment technology, offering advantages such as simplicity, small-scale operation, and minimal reliance on additional chemicals. In this study, a nickel-based metal-organic framework (Ni-MOF) was synthesized using 2-aminoterephthalic acid (BDC-NH2) and nickel nitrate. The Ni-MOF was further carbonized to derive a nickel oxide and nitrogen-doped carbon composite, Ni/NiOx/NC. The nickel oxide within the 2D porous carbon texture served as the active sites for ammonia oxidation. Characterization results showed that the Ni-MOF formed hexagonal, flaky nanoparticles. With increasing carbonization temperature, the nickel ions in the organic framework re-crystallized as NiO clusters on the surfaces of the 2D carbon. The electrochemical surface area of Ni/NiOx/NC increased significantly, improving the efficiency of ammonia oxidation. The phase transition Ni(OH)2⇌NiOOH at around +0.8 V was the primary mediator of electron transfer. Batch electrolysis was conducted under constant current and constant potential modes. The electrolysis parameters included pyrolysis temperature, pH, current density, initial feed concentration, and electrode potential. The constant-current batch experiments indicated that, via carbonization at 800 °C, Ni/NiOx/NC(800) was able to decrease ammonium nitrogen from 50 mg-N/L to below 1 ppm within 4 hours at a current density of 3 mA/cm2 and pH 11, with negligible formation of oxygenated nitrogen. The constant-potential experiments confirmed that N2 selectivity was enhanced up to 90% at +0.8 V.

Keywords: electrochemical oxidation, nickel oxyhydroxide, metal-organic framework, ammonium, nitrate

Procedia PDF Downloads 57
26626 How Much the Role of Fertilizers Management and Wheat Planting Methods on Its Yield Improvement?

Authors: Ebrahim Izadi-Darbandi, Masoud Azad, Masumeh Dehghan

Abstract:

In order to study the effects of nitrogen and phosphorus management and wheat sowing method on wheat yield, two experiments were performed as factorials based on a completely randomized design with three replications at the Research Farm, Faculty of Agriculture, Ferdowsi University of Mashhad, Iran, in 2009. In the first experiment, nitrogen application rates (100 kg ha-1, 200 kg ha-1, 300 kg ha-1), phosphorus application rates (100 kg ha-1, 200 kg ha-1) and two levels of their application method (broadcast and band) were studied. The treatments of the second experiment included wheat sowing methods (single rows 30 cm apart and twin rows on 60 cm wide ridges) as main plots and nitrogen and phosphorus application methods (broadcast and band) as subplots (150 kg ha-1). In both experiments, the phosphorus and nitrogen sources for fertilization were superphosphate, applied before wheat sowing and incorporated into the soil, and urea, applied in two phases (50% pre-plant and 50% near wheat shooting), respectively. Results from the first experiment showed that the effect of fertilizer application method on wheat yield was significant (p≤0.01). Band application of phosphorus and nitrogen increased wheat biomass and seed yield by 9% and 15%, respectively, compared with broadcast application. The interaction between nitrogen and phosphorus application rates and their application methods showed that band application of fertilizers at rates of 200 kg/ha phosphorus and 300 kg/ha nitrogen was the best treatment for improving wheat yield. The second experiment also showed that the effects of wheat sowing method and fertilizer application method on wheat seed and biomass yield were significant (p≤0.01). The twin-row sowing method on 60 cm wide ridges increased biomass and seed yield by 22% and 30%, respectively, compared with single rows 30 cm apart. The interaction between wheat sowing method and fertilizer application method indicated that band application of fertilizers combined with twin-row sowing on 60 cm wide ridges was the best treatment for improving winter wheat yield. In conclusion, these results indicate that nitrogen and phosphorus management in wheat and modification of the wheat sowing method play an important role in increasing fertilizer use efficiency.

Keywords: band application, broadcast application, rate of fertilizer application, wheat seed yield, wheat biomass yield

Procedia PDF Downloads 460
26625 Bioproduction of L(+)-Lactic Acid and Purification by Ion Exchange Mechanism

Authors: Zelal Polat, Şebnem Harsa, Semra Ülkü

Abstract:

Lactic acid exists in nature in two optical forms, L(+)- and D(-)-lactic acid, and has been used in the food, leather, textile, pharmaceutical and cosmetic industries. Moreover, L(+)-lactic acid constitutes the raw material for the production of poly-L-lactic acid, which is used in biomedical applications. The aim was to recover microbially produced lactic acid from the fermentation medium efficiently and economically. Among the various downstream operations, ion exchange chromatography is highly selective and yields low-cost product recovery within a short period of time. In this project, Lactobacillus casei NRRL B-441 was used for the production of L(+)-lactic acid from whey by fermentation at pH 5.5 and 37°C, which took 12 hours. The product concentration was 50 g/l with 100% L(+)-lactic acid content. Next, a suitable resin was selected for its high sorption capacity and rapid equilibrium behavior. Dowex Marathon WBA, a weakly basic anion exchanger in OH form, reached equilibrium in 15 minutes. The batch adsorption experiments were done at approximately pH 7.0 and 30°C, and sampling was continued for 20 hours. Furthermore, the effects of temperature and pH were investigated, and their influence was found to be unimportant. All the adsorption/desorption experiments were applied to both model lactic acid and biomass-free fermentation broth. The ion exchange equilibria of lactic acid and of L(+)-lactic acid in fermentation broth on Dowex Marathon WBA were described by the Langmuir isotherm. The maximum exchange capacity (qm) was 0.25 g La/g wet resin for model lactic acid and 0.04 g La/g wet resin for fermentation broth. The equilibrium loading and exchange efficiency of L(+)-lactic acid in fermentation broth were reduced as a result of competition by other ionic species. The competing ions inhibit the binding of L(+)-lactic acid to the free sites of the ion exchanger. Moreover, column operations were applied to recover the adsorbed lactic acid from the ion exchanger. 2.0 M HCl was a suitable eluting agent for recovering the bound L(+)-lactic acid at a flow rate of 1 ml/min at ambient temperature. About 95% of the bound L(+)-lactic acid was recovered from Dowex Marathon WBA. Equilibrium was reached within 15 minutes. The aim of this project was to investigate the purification of L(+)-lactic acid from fermentation broth by the ion exchange method. Additional goals were to investigate the end-product purity, to obtain new data on the adsorption/desorption behaviour of lactic acid, and to assess the applicability of the system for industrial use.
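To make the Langmuir description concrete, the Python sketch below fits the isotherm q = qm·K·C / (1 + K·C) to a set of equilibrium points; the data values and initial guesses are synthetic illustrations, not the measurements reported here (only the reported qm of 0.25 g La/g wet resin is echoed in the initial guess).

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(C, qm, K):
    """Langmuir isotherm: equilibrium loading q as a function of
    solution concentration C, with capacity qm and affinity K."""
    return qm * K * C / (1 + K * C)

# Synthetic equilibrium data (for illustration only).
C = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 40.0])       # g/L in solution
q = np.array([0.05, 0.09, 0.15, 0.19, 0.22, 0.24])    # g La / g wet resin

params, _ = curve_fit(langmuir, C, q, p0=[0.25, 0.1])
print("qm = %.3f g/g, K = %.3f L/g" % tuple(params))
```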

Keywords: fermentation, ion exchange, lactic acid, purification, whey

Procedia PDF Downloads 500
26624 Algorithm Optimization to Sort in Parallel by Decreasing the Number of the Processors in SIMD (Single Instruction Multiple Data) Systems

Authors: Ali Hosseini

Abstract:

Parallelization is a mechanism for decreasing the time necessary to execute programs. Sorting is one of the important operations used in different systems, in that the proper functioning of many algorithms and operations depends on sorted data. The CRCW_SORT algorithm sorts ‘N’ elements in O(1) time on SIMD (Single Instruction Multiple Data) computers with n^2/2 - n/2 processors. In this article, by presenting a mechanism that divides the input string around a hinge (pivot) element into two smaller strings, the number of processors needed to sort ‘N’ elements in O(1) time is decreased to n^2/8 - n/4 in the best case; with this mechanism, the best case occurs when the hinge element is the middle (median) element and the worst case when it is the minimum. The findings from assessing the proposed algorithm against other methods, in terms of the data sets and the number of processors, indicate that the proposed algorithm uses fewer processors during execution than the other methods.
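The Python sketch below reproduces only the processor-count arithmetic behind these figures (it is not a SIMD implementation): all-pairs enumeration sorting needs n(n-1)/2 = n^2/2 - n/2 processors, while, under the assumption that each half of size n/2 is handled with the same processor pool after a median split, (n/2)(n/2 - 1)/2 = n^2/8 - n/4 processors suffice in the best case.

```python
def processors_full(n):
    """Processors for all-pairs enumeration sort of n elements: n(n-1)/2."""
    return n * (n - 1) // 2

def processors_split_best_case(n):
    """Best-case count after splitting around the median: each half has n/2
    elements, so (n/2)(n/2 - 1)/2 processors per half (assuming the same
    processor pool serves both halves)."""
    half = n // 2
    return half * (half - 1) // 2

for n in (8, 16, 100):
    print(n, processors_full(n), processors_split_best_case(n))
```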

Keywords: CRCW, SIMD (Single Instruction Multiple Data) computers, parallel computers, number of the processors

Procedia PDF Downloads 304