Search results for: Mohd Tariq
19 Green Lean TQM Human Resource Management Practices in Malaysian Automotive Companies
Authors: Noor Azlina Mohd Salleh, Salmiah Kasolang, Ahmed Jaffar
Abstract:
The Green Lean Total Quality Management (LTQM) Human Resource Management (HRM) System comprises HRM and Environmental Management System (EMS) practices integrated with TQM and Lean Manufacturing (LM) principles. HRM is essential, especially in dealing with poorly motivated and less productive employees. The ultimate goal of this system is total human resource development: employees who are motivated and able to apply their creativity as part of a Green Lean TQM organization. A survey questionnaire was developed, distributed to 30 highly active automotive vendors in Malaysia, and analyzed using Minitab v16 and SPSS v17. It was found that companies practicing Green LTQM HRM have generated more revenue and have R&D capability. However, the number of years since a company's establishment does not affect its openness to adopting new initiatives that can improve operational effectiveness. The importance of training, communication and rewards for employees was also highlighted. The Green LTQM HRM practices framework established in this study will hopefully give preliminary insight, especially to companies still looking for a system that can improve productivity through human resource management. This preliminary study combined four award practices, ISO/TS16949, Toyota Production System SAEJ4000, the MAJAICO Lean Production System and EMS, focusing on highly active companies that have been involved in the MAJAICO Program and the Proton Vendor Development Program. Future studies can examine the status in other industries as well as case studies pertaining to this system.
Keywords: Automotive Industry, Lean Manufacturing, Operational Engineering Management, Total Quality Management, Environmental Management System.
18 Effectiveness of a Malaysian Workplace Intervention Study on Physical Activity Levels
Authors: M. Z. Bin Mohd Ghazali, N. C. Wilson, A. F. Bin Ahmad Fuad, M. A. H. B. Musa, M. U. Mohamad Sani, F. Zulkifli, M. S. Zainal Abidin
Abstract:
Physical activity levels are low in Malaysia, and this study was undertaken to determine whether a four-week work-based intervention program would be effective in changing physical activity levels. The study was conducted in a Malaysian Government Department and had three stages: baseline data collection, a four-week intervention, and two-month post-intervention data collection. During the intervention and two-month post-intervention phases, physical activity levels (determined by a pedometer) and basic health profiles (BMI, abdominal obesity, blood pressure) were measured. Staff (58 males, 47 females) with an average age of 33 years completed baseline data collection. Pedometer steps averaged 7,102 steps/day at baseline, although male step counts were significantly higher than female counts (7,861 vs. 6,114). Health profiles were poor: over 50% were overweight/obese (males 66%, females 40%); hypertension (males 23%, females 6%); excess waist circumference (males 52%, females 17%). While 86 staff participated in the intervention, only 49 regularly reported their steps. There was a significant increase (17%) in average daily steps from 8,965 (week 1) to 10,436 (week 4). Unfortunately, participation in the intervention program was avoided by the less healthy staff. Two months after the intervention there was no significant difference in average steps/day, despite the fact that 89% of staff reported that they planned to make long-term changes to their lifestyle. An unexpected average increase of 2 kg in body weight occurred in participants, although this was less than the 5.6 kg in non-participants. A number of recommendations are made for future interventions, including the conclusion that pedometers were a useful tool and popular with participants.
Keywords: Pedometers, walking, health, intervention.
17 Analysis of Driver Point of Regard Determinations with Eye-Gesture Templates Using Receiver Operating Characteristic
Authors: Siti Nor Hafizah binti Mohd Zaid, Mohamed Abdel-Maguid, Abdel-Hamid Soliman
Abstract:
An Advanced Driver Assistance System (ADAS) is a computer system on board a vehicle which is used to reduce the risk of vehicular accidents by monitoring factors relating to the driver, vehicle and environment and taking some action when a risk is identified. Much work has been done on assessing vehicle and environmental state, but there is still comparatively little published work that tackles the problem of driver state. Visual attention is one such driver state. In fact, some researchers claim that lack of attention is the main cause of accidents, as factors such as fatigue, alcohol or drug use, distraction and speeding all impair the driver's capacity to pay attention to the vehicle and road conditions [1]. This seems to imply that the main cause of accidents is inappropriate driver behaviour in cases where the driver is not giving full attention while driving. The work presented in this paper proposes an ADAS which uses an image-based template matching algorithm to detect whether a driver is failing to observe particular windscreen cells. This is achieved by dividing the windscreen into 24 uniform cells (4 rows of 6 columns) and matching video images of the driver's left eye with eye-gesture templates drawn from images of the driver looking at the centre of each windscreen cell. The main contribution of this paper is to assess the accuracy of this approach using Receiver Operating Characteristic analysis. The results of our evaluation give a sensitivity value of 84.3% and a specificity value of 85.0% for the eye-gesture template approach, indicating that it may be useful for driver point of regard determinations.
Keywords: Advanced Driver Assistance Systems, Eye-Tracking, Hazard Detection.
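As a point of reference for the ROC figures reported above, the following sketch shows how sensitivity and specificity would be computed from binary template-matching decisions; the ground-truth and prediction lists are illustrative placeholders, not data from the study.

```python
# Minimal sketch: sensitivity and specificity from binary template-matching decisions.
# The ground-truth/prediction arrays below are illustrative placeholders only.

def sensitivity_specificity(y_true, y_pred):
    """y_true/y_pred: lists of 0/1 flags (1 = driver looked at the target cell)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    return sensitivity, specificity

if __name__ == "__main__":
    truth = [1, 1, 0, 0, 1, 0, 1, 0]   # placeholder ground truth
    match = [1, 0, 0, 0, 1, 1, 1, 0]   # placeholder template-matching output
    print(sensitivity_specificity(truth, match))
```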
16 Dengue Disease Mapping with Standardized Morbidity Ratio and Poisson-gamma Model: An Analysis of Dengue Disease in Perak, Malaysia
Authors: N. A. Samat, S. H. Mohd Imam Ma’arof
Abstract:
Dengue disease is an infectious vector-borne viral disease that is commonly found in tropical and sub-tropical regions around the world, especially in urban and semi-urban areas, including Malaysia. There is currently no available vaccine or chemotherapy for the prevention or treatment of dengue disease. Therefore, prevention and treatment of the disease depend on vector surveillance and control measures. Disease risk mapping has been recognized as an important tool in prevention and control strategies for diseases. The choice of statistical model used for relative risk estimation is important, as a good model will subsequently produce a good disease risk map. Therefore, the aim of this study is to estimate the relative risk for dengue disease based initially on the most common statistic used in disease mapping, the Standardized Morbidity Ratio (SMR), and on one of the earliest applications of Bayesian methodology, the Poisson-gamma model. This paper begins by providing a review of the SMR method, which we then apply to dengue data of Perak, Malaysia. We then fit an extension of the SMR method, the Poisson-gamma model. Both results are displayed and compared using graphs, tables and maps. The results of the analysis show that the latter method gives better relative risk estimates than the SMR. The Poisson-gamma model has been demonstrated to overcome the problem of the SMR when there are no observed dengue cases in certain regions. However, covariate adjustment in this model is difficult and it does not allow for spatial correlation between risks in adjacent areas. The drawbacks of this model have motivated many researchers to propose other alternative methods for estimating the risk.
Keywords: Dengue disease, Disease mapping, Standardized Morbidity Ratio, Poisson-gamma model, Relative risk.
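For reference, a compact statement of the two estimators discussed above, written in a standard parameterization (gamma prior with shape α and rate ν); the paper's exact prior specification may differ:

```latex
% Standardized Morbidity Ratio for region i (O_i observed cases, E_i expected cases):
\mathrm{SMR}_i = \frac{O_i}{E_i}

% Poisson-gamma model: O_i \sim \mathrm{Poisson}(E_i\,\theta_i), \quad \theta_i \sim \mathrm{Gamma}(\alpha,\nu)
% Posterior mean relative risk (remains finite even when O_i = 0):
\hat{\theta}_i = \frac{\alpha + O_i}{\nu + E_i}
```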
15 Body Composition Index Predict Children’s Motor Skills Proficiency
Authors: Sarina Md Yusof, Suhana Aiman, Mohd Khairi Zawi, Hosni Hasan, Azila Azreen Md Radzi
Abstract:
Failure to master motor skills during childhood has been seen as a detrimental factor for children's physical activity. Lack of motor skills proficiency tends to reduce children's competency and confidence to participate in physical activity. As a consequence of less participation in physical activity, children tend to become overweight or obese. It has been suggested that children who master motor skill proficiency will be more involved in physical activity, thus preventing them from becoming overweight. Obesity has become a serious childhood health issue worldwide. Previous studies have found that children who were overweight and obese were generally less active; however, these studies focused on one gender. This study aims to compare the motor skill proficiency of underweight, normal-weight, overweight and obese young boys as well as to determine the relationship between motor skills proficiency and body composition. A total of 112 boys aged between 8 and 10 years participated in this study. Participants were assigned to four groups (underweight, normal-weight, overweight and obese) using the BMI-for-age percentile chart for children. The Bruininks-Oseretsky Test, Second Edition, Short Form was administered to assess their motor skill proficiency. Meanwhile, body composition was determined by skinfold thickness measurement. Results indicated that underweight and normal-weight children were superior in motor skills proficiency compared with overweight and obese children (p < 0.05). A significant strong inverse correlation between motor skills proficiency and body composition (r = -0.849) is noted. The findings of this study could be explained by the non-contributory mass carried by overweight and obese children, which leads to biomechanical movement inefficiency that is detrimental to motor skills proficiency. It can be concluded that motor skills proficiency is inversely correlated with body composition.
Keywords: Motor skills proficiency, body composition, obesity.
14 Validation on 3D Surface Roughness Algorithm for Measuring Roughness of Psoriasis Lesion
Authors: M.H. Ahmad Fadzil, Esa Prakasa, Hurriyatul Fitriyah, Hermawan Nugroho, Azura Mohd Affandi, S.H. Hussein
Abstract:
Psoriasis is a widespread skin disease affecting up to 2% of the population, with plaque psoriasis accounting for about 80% of cases. It can be identified as a red lesion, and for higher severities the lesion is usually covered with rough scale. Psoriasis Area Severity Index (PASI) scoring is the gold standard method for measuring psoriasis severity. Scaliness is one of the PASI parameters that needs to be quantified in PASI scoring. The surface roughness of a lesion can be used as a scaliness feature, since the scale on the lesion surface makes the lesion rougher. The dermatologist usually assesses the severity through their tactile sense, and therefore direct contact between doctor and patient is required. The problem is that the doctor may not assess the lesion objectively. In this paper, a digital image analysis technique is developed to objectively determine the scaliness of the psoriasis lesion and provide the PASI scaliness score. The psoriasis lesion is modelled by a rough surface. The rough surface is created by superimposing a smooth average (curved) surface with a triangular waveform. For roughness determination, a polynomial surface fitting is used to estimate the average surface, followed by a subtraction between the rough and average surfaces to give the elevation surface (surface deviations). The roughness index is calculated by applying the average roughness equation to the height map matrix. The roughness algorithm has been tested on 444 lesion models. From the roughness validation results, only 6 models could not be accepted (percentage error greater than 10%). These errors occur due to the scanned image quality. The roughness algorithm was also validated for roughness measurement on abrasive papers with flat surfaces. The Pearson's correlation coefficient between the grade value (G) of the abrasive paper and Ra is -0.9488, which shows a strong relation between G and Ra. The algorithm needs to be improved by surface filtering, especially to overcome a problem with noisy data.
Keywords: psoriasis, roughness algorithm, polynomial surface fitting.
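A minimal sketch of the elevation and average-roughness steps described above, assuming the height map is available as a 2-D NumPy array; the polynomial order and synthetic surface are illustrative choices, not the paper's actual settings.

```python
import numpy as np

def average_roughness(height_map, order=2):
    """Fit a polynomial 'average' surface, subtract it, and return Ra.

    height_map : 2-D array of surface heights (e.g. from a 3-D scan).
    order      : order of the fitted polynomial surface (illustrative choice).
    """
    rows, cols = height_map.shape
    y, x = np.mgrid[0:rows, 0:cols]
    x, y, z = x.ravel(), y.ravel(), height_map.ravel()

    # Design matrix of polynomial terms x^i * y^j with i + j <= order.
    terms = [(x ** i) * (y ** j)
             for i in range(order + 1)
             for j in range(order + 1 - i)]
    A = np.column_stack(terms)

    # Least-squares fit of the smooth average surface.
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    average_surface = A @ coeffs

    # Elevation surface (deviations) and average roughness Ra.
    deviations = z - average_surface
    return np.mean(np.abs(deviations))

if __name__ == "__main__":
    # Synthetic rough surface: smooth bowl plus a triangular ripple (placeholder data).
    yy, xx = np.mgrid[0:64, 0:64]
    smooth = 0.001 * (xx - 32) ** 2 + 0.001 * (yy - 32) ** 2
    ripple = 0.5 * np.abs(((xx % 8) - 4) / 4.0)   # triangular waveform
    print("Ra =", average_roughness(smooth + ripple))
```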
13 Cold Flow Investigation of Primary Zone Characteristics in Combustor Utilizing Axial Air Swirler
Authors: Yehia A. Eldrainy, Mohammad Nazri Mohd. Jaafar, Tholudin Mat Lazim
Abstract:
This paper presents a cold flow simulation study of a small gas turbine combustor performed using a laboratory-scale test rig. The main objective of this investigation is to obtain physical insight into the main vortex, which is responsible for the efficient mixing of fuel and air. Such models are necessary for predictions and optimization of real gas turbine combustors. The air swirler can control combustor performance by assisting in the fuel-air mixing process and by producing a recirculation region which can act as a flame holder and influence residence time. Thus, proper selection of a swirler is needed to enhance combustor performance and to reduce NOx emissions. Three different axial air swirlers were used based on their vane angles, i.e., 30°, 45°, and 60°. Three-dimensional, viscous, turbulent, isothermal flow characteristics of the combustor model operating at room temperature were simulated via a Reynolds-Averaged Navier-Stokes (RANS) code. The model geometry was created using a solid modelling tool, and the meshing was done using the GAMBIT preprocessing package. Finally, the solution and analysis were carried out in the FLUENT solver. This serves to demonstrate the capability of the code for the design and analysis of a real combustor. The effects of the swirlers and of mass flow rate were examined. Details of the complex flow structure, such as vortices and recirculation zones, were obtained by the simulation model. The computational model predicts a major recirculation zone in the central region immediately downstream of the fuel nozzle and a second recirculation zone in the upstream corner of the combustion chamber. It is also shown that changes in swirler angle have significant effects on the combustor flow field as well as pressure losses.
Keywords: cold flow, numerical simulation, combustor, turbulence, axial swirler.
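For context on how the vane angle enters the aerodynamics, a commonly used geometric swirl number for a flat-vane annular axial swirler with vane angle θ, hub diameter D_h and swirler diameter D_s is given below; this textbook relation is not taken from the paper itself.

```latex
S_N = \frac{2}{3}\left[\frac{1-\left(D_h/D_s\right)^{3}}{1-\left(D_h/D_s\right)^{2}}\right]\tan\theta
```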
12 Classification of Extreme Ground-Level Ozone Based on Generalized Extreme Value Model for Air Monitoring Station
Authors: Siti Aisyah Zakaria, Nor Azrita Mohd Amin, Noor Fadhilah Ahmad Radi, Nasrul Hamidin
Abstract:
Higher ground-level ozone (GLO) concentrations adversely affect human health, vegetation and activities in the ecosystem. In Malaysia, most analyses of GLO concentration are carried out using the average value, which refers to the centre of the distribution, to make a prediction or estimation. However, analysis which focuses on the higher or extreme values of GLO concentration is rarely explored. Hence, the objectives of this study are to classify the tail behaviour of GLO using the generalized extreme value (GEV) distribution and to estimate the return level using the corresponding model (Gumbel, Weibull, or Frechet) of the GEV distribution. The results show that the Weibull distribution, which is also known as a short-tailed distribution and considered to have less extreme behaviour, is the best-fitted distribution for four selected air monitoring stations in Peninsular Malaysia, namely Larkin, Pelabuhan Kelang, Shah Alam, and Tanjung Malim, while the Gumbel distribution, which is considered a medium-tailed distribution, is the best-fitted distribution for the Nilai station. The return level of GLO concentration at the Shah Alam station is comparatively higher than at the other stations. Overall, return levels increase with increasing return periods, but the increment depends on the type of tail of the fitted GEV distribution. We conduct this study by using the maximum likelihood estimation (MLE) method to estimate the parameters at the selected stations in Peninsular Malaysia. Next, the validation of the fitted block maxima series against the GEV distribution is performed using a probability plot, a quantile plot and a likelihood ratio test. Profile likelihood confidence intervals are used to verify the type of GEV distribution. These results are important as a guide for early notification of future extreme ozone events.
Keywords: Extreme value theory, generalized extreme value distribution, ground-level ozone, return level.
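A minimal sketch of the block-maxima fitting and return-level estimation described above, using SciPy's genextreme (whose shape parameter c equals -ξ in the usual GEV convention); the synthetic maxima and return periods are illustrative only.

```python
import numpy as np
from scipy import stats

def fit_gev_and_return_levels(block_maxima, return_periods=(10, 50, 100)):
    """Fit a GEV distribution by MLE and compute return levels.

    block_maxima   : 1-D array of block maxima (e.g. monthly maxima of ozone).
    return_periods : return periods expressed in the same block units.
    """
    # MLE fit; note scipy's shape c = -xi relative to the usual GEV xi.
    c, loc, scale = stats.genextreme.fit(block_maxima)

    # Return level z_T is the (1 - 1/T) quantile of the fitted distribution.
    levels = {T: stats.genextreme.ppf(1.0 - 1.0 / T, c, loc=loc, scale=scale)
              for T in return_periods}
    return (c, loc, scale), levels

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Illustrative synthetic maxima resembling a short-tailed (Weibull-type) fit.
    maxima = stats.genextreme.rvs(0.2, loc=80, scale=10, size=120, random_state=rng)
    params, levels = fit_gev_and_return_levels(maxima)
    print("shape, loc, scale:", params)
    print("return levels:", levels)
```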
11 Development of Manufacturing Simulation Model for Semiconductor Fabrication
Authors: Syahril Ridzuan Ab Rahim, Ibrahim Ahmad, Mohd Azizi Chik, Ahmad Zafir Md. Rejab, and U. Hashim
Abstract:
This research presents the development of a simulation model for WIP management in semiconductor fabrication. Manufacturing simulation modeling is needed for productivity optimization analysis due to the complexity of the process flows, in which more than 35 percent of the processing steps re-enter the same equipment more than 15 times. Furthermore, semiconductor fabrication is required to produce a high product mix, with total processing steps varying from 300 to 800 and cycle times between 30 and 70 days. Besides the complexity, the expensive wafer cost, which can potentially impact the company's profit margin once a due date is missed, is another motivation to explore simulation-based analysis. In this paper, the simulation model is developed using the existing commercial software platform AutoSched AP, with customized integration with the Manufacturing Execution System (MES) and Advanced Productivity Family (APF) for the data collection used to configure the model parameters and data sources. Model parameters such as processing step cycle time, equipment performance, handling time and operator efficiency are collected through this customization. Once the parameters are validated, a few customizations are made before the model is executed. The accuracy of the simulation model is validated against the actual output per day for all equipment. The comparison of the simulation results against actual output achieved 95 percent accuracy over 30 days. The model was later used to perform various what-if analyses to understand the impacts on cycle time and overall output. By using this simulation model, a complex manufacturing environment like a semiconductor fabrication plant (fab) now has an alternative source of validation for any new-requirement impact analysis.
Keywords: Advanced Productivity Family (APF), Complementary Metal Oxide Semiconductor (CMOS), Manufacturing Execution Systems (MES), Work In Progress (WIP).
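As a rough illustration of the validation step described above, the sketch below compares simulated and actual daily output and reports a mean accuracy over 30 days; the figures are placeholders, not data from the study.

```python
import random

def daily_accuracy(simulated, actual):
    """Per-day accuracy = 1 - |simulated - actual| / actual, floored at zero."""
    return [max(0.0, 1.0 - abs(s - a) / a) for s, a in zip(simulated, actual)]

if __name__ == "__main__":
    random.seed(1)
    # Placeholder 30-day outputs (wafers/day); not data from the study.
    actual = [1000 + random.randint(-50, 50) for _ in range(30)]
    simulated = [a + random.randint(-60, 60) for a in actual]
    acc = daily_accuracy(simulated, actual)
    print("mean 30-day accuracy: %.1f%%" % (100 * sum(acc) / len(acc)))
```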
10 A Comparative Study on Biochar from Slow Pyrolysis of Corn Cob and Cassava Wastes
Authors: Adilah Shariff, Nurhidayah Mohamed Noor, Alexander Lau, Muhammad Azwan Mohd Ali
Abstract:
Biomass such as corn and cassava wastes, if left to decay, will release significant quantities of greenhouse gases (GHG), including carbon dioxide and methane. These biomass wastes can be converted into biochar via a thermochemical process such as slow pyrolysis. This approach can reduce the biomass wastes as well as preserve their carbon content. Biochar has the potential to be used as a carbon sequester and soil amendment. The first aim of this study is to investigate the characteristics of corn cob, cassava stem, and cassava rhizome in order to identify their potential as pyrolysis feedstocks for biochar production. This was achieved by using proximate and elemental analyses as well as calorific value and lignocellulosic determination. The second objective is to investigate the effect of pyrolysis temperature on the biochar produced. A fixed-bed slow pyrolysis reactor was used to pyrolyze the corn cob, cassava stem, and cassava rhizome. The pyrolysis temperatures were varied between 400 °C and 600 °C, while the heating rate and the holding time were fixed at 5 °C/min and 1 hour, respectively. Corn cob, cassava stem, and cassava rhizome were found to be suitable feedstocks for the pyrolysis process because they contained a high percentage of volatile matter, more than 80 mf wt.%. All three feedstocks contained low nitrogen and sulphur contents of less than 1 mf wt.%. Therefore, during the pyrolysis process, the feedstocks give off very low rates of GHG such as nitrogen oxides and sulphur oxides. Independent of the type of biomass, the percentage of biochar yield is inversely proportional to the pyrolysis temperature. The highest biochar yield at each studied temperature is from the slow pyrolysis of cassava rhizome, as this feedstock contained the highest percentage of ash compared with the other two feedstocks. The percentage of fixed carbon in all the biochars increased as the pyrolysis temperature increased. Increasing the pyrolysis temperature from 400 °C to 600 °C increased the fixed carbon of the corn cob biochar, cassava stem biochar and cassava rhizome biochar by 26.35%, 10.98%, and 6.20%, respectively. Irrespective of the pyrolysis temperature, all the biochars produced were found to contain more than 60 mf wt.% fixed carbon, much higher than their feedstocks.
Keywords: Biochar, biomass, cassava wastes, corn cob, pyrolysis.
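For reference, the biochar yield on a feedstock-mass basis is conventionally computed as follows (a standard definition, not quoted from the paper):

```latex
\text{Biochar yield (wt.\%)} = \frac{m_{\text{biochar}}}{m_{\text{dry feedstock}}} \times 100
```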
9 Time Domain and Frequency Domain Analyses of Measured Metocean Data for Malaysian Waters
Authors: Duong Vannak, Mohd Shahir Liew, Guo Zheng Yew
Abstract:
Data on wave height and wind speed were collected from three existing oil fields in the South China Sea – offshore Peninsular Malaysia, Sarawak and Sabah regions. Extreme values and other significant data were employed for analysis. The data were recorded from 1999 until 2008. The results show that offshore structures are susceptible to unacceptable motions initiated by wind and waves, with the worst structural impacts caused by extreme wave heights. To protect offshore structures from damage, there is a need to quantify descriptive statistics and determine the spectral envelope of wind speed and wave height, and to ascertain the frequency content of each spectrum for offshore structures in the South China Sea shallow waters using the measured time series. The results indicate that the process is nonstationary; it is converted to a stationary process by first-differencing the time series. For the descriptive statistical analysis, both wind speed and wave height have a significant influence on the offshore structure during the northeast monsoon, with a high mean wind speed of 13.5195 knots (σ = 6.3566 knots) and a high mean wave height of 2.3597 m (σ = 0.8690 m). Through observation of the spectra, there is no clear dominant peak and the peaks fluctuate randomly. Each wind speed spectrum and wave height spectrum has its own identifiable pattern. The wind speed spectrum tends to grow gradually at the lower frequency range, increasing until it doubles at the higher frequency range, with a mean peak frequency range of 0.4104 Hz to 0.4721 Hz, while the wave height spectrum tends to grow drastically at the low frequency range, then fluctuates and decreases slightly at the high frequency range, with a mean peak frequency range of 0.2911 Hz to 0.3425 Hz.
Keywords: Metocean, Offshore Engineering, Time Series, Descriptive Statistics, Autospectral Density Function, Wind, Wave.
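A minimal sketch of the stationarity and autospectral-density steps described above, assuming an evenly sampled record; the sampling rate and synthetic series are illustrative, not the measured metocean data.

```python
import numpy as np
from scipy import signal

def autospectrum_of_differenced_series(x, fs=1.0):
    """First-difference a time series to remove nonstationarity, then
    estimate its autospectral density with Welch's method.

    x  : 1-D array of measurements (e.g. wind speed in knots).
    fs : sampling frequency in Hz (illustrative value).
    """
    dx = np.diff(x)                      # first differencing
    freqs, psd = signal.welch(dx, fs=fs, nperseg=min(256, len(dx)))
    peak_freq = freqs[np.argmax(psd)]    # frequency of the spectral peak
    return freqs, psd, peak_freq

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.arange(4096)
    # Synthetic drifting series with an embedded oscillation (placeholder data).
    series = 0.002 * t + np.sin(2 * np.pi * 0.35 * t) + rng.normal(0, 0.5, t.size)
    _, _, peak = autospectrum_of_differenced_series(series, fs=1.0)
    print("peak frequency (Hz):", peak)
```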
8 Analysis of Combustion, Performance and Emission Characteristics of Turbocharged LHR Extended Expansion DI Diesel Engine
Authors: Mohd.F.Shabir, P. Tamilporai, B. Rajendra Prasath
Abstract:
The fundamental aim of the extended expansion concept is to achieve higher work output, which in turn leads to higher thermal efficiency. This concept is compatible with the application of a turbocharger and an LHR engine. The Low Heat Rejection (LHR) engine was developed by coating the piston crown, the inside of the cylinder head together with the valves, and the cylinder liner with a partially stabilized zirconia coating of 0.5 mm thickness. Extended expansion in diesel engines is termed the Miller cycle, in which the expansion ratio is made larger than the effective compression ratio by modifying the inlet cam for late inlet valve closing. The specific fuel consumption is reduced to an appreciable level and the thermal efficiency of the extended expansion turbocharged LHR engine is improved. In this work, a thermodynamic model was formulated and developed to simulate the LHR-based extended expansion turbocharged direct injection diesel engine. It includes a gas flow model, a heat transfer model, and a two-zone combustion model. The gas exchange model is modified to incorporate the Miller cycle by delaying the inlet valve closing timing, which resulted in a considerable improvement in the thermal efficiency of turbocharged LHR engines. The heat transfer model calculates the convective and radiative heat transfer between the gas and the wall, taking into account the combustion chamber surface temperature swings. Using the two-zone combustion model, the combustion parameters and the chemical equilibrium compositions were determined. The chemical equilibrium compositions were used to calculate the nitric oxide formation rate by assuming a modified Zeldovich mechanism. The accuracy of this model is scrutinized against actual test results from the engine. The factors which affect thermal efficiency and exhaust emissions were deduced and their influences were discussed. In the final analysis, it is seen that there is excellent agreement in all of these evaluations.
Keywords: Low Heat Rejection, Miller cycle.
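For reference, the extended Zeldovich mechanism commonly assumed for thermal NO formation consists of the following reactions (a standard formulation; the paper's modified rate constants are not reproduced here):

```latex
\mathrm{O} + \mathrm{N}_2 \rightleftharpoons \mathrm{NO} + \mathrm{N} \\
\mathrm{N} + \mathrm{O}_2 \rightleftharpoons \mathrm{NO} + \mathrm{O} \\
\mathrm{N} + \mathrm{OH} \rightleftharpoons \mathrm{NO} + \mathrm{H}
```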
7 A Comparative Understanding of Critical Problems Faced by Pakistani and Indian Transportation Industry
Authors: Saleh Abduallah Saleh, Mohammad Basir Bin Saud, Mohd Azwardi Md Isa
Abstract:
It is very important for a developing nation to develop its infrastructure as a prime priority, because infrastructure, particularly roads and transportation, functions as the blood of the system. Almost 1.1 billion people share the travel and transportation industry in India. On the other hand, the Pakistani transportation industry is also extensive, serving about 170 million transportation users. The Indian and Pakistani bus industries, specifically, are well connected within and between urban and rural areas. The transportation industry is substantially helping the economic upliftment of both countries. Due to high economic instability, unemployment and poverty rates, both countries' governments are serious about, and committed to, boosting their economies. They believe that any form of transportation development would play a vital role in the development of land and infrastructure, which could indirectly support many other industries' development, such as tourism, freighting and shipping businesses, just to mention a few. However, it seems that their previous transportation planning has failed to meet the fast-growing demand. Both countries are now looking for long-term and economical solutions, because demand keeps rising over time and reacting to other key economic drivers. A content analysis method and a case study approach are used in this paper, and secondary data from the bureaus of statistics are used for the case analysis. The paper focuses on the mobility concerns of lower- and middle-income people in India and Pakistan. The paper aims to highlight the weaknesses, opportunities and limitations that result from transportation being a low-priority industry for government, which makes the public in either country suffer. The paper concludes that the main issue is slow, inappropriate and unfavourable decisions which are not in favour of long-term national economic development and the public interest. The paper also recommends future research avenues for public and private transportation, which continuously fails to meet public expectations.
Keywords: Bus transportation industries, transportation demand, government parallel initiatives, road and traffic congestions.
6 Positivity Rate of Person under Surveillance among Institut Jantung Negara’s Patients with Various COVID-19 Vaccination Status in the First Quarter of 2022, Malaysia
Authors: M. Izzat Md. Nor, N. Jaffar, N. Zaitulakma Md. Zain, N. Izyanti Mohd Suppian, S. Balakrishnan, G. Kandavello
Abstract:
During the Coronavirus disease (COVID-19) pandemic, Malaysia focused on building herd immunity by introducing vaccination programs into the community. Hospital Standard Operating Procedures (SOP) were developed to prevent inpatient transmission. In this study, we focus on the positivity rate of inpatient Persons Under Surveillance (PUS) becoming COVID-19 positive and compare it to the national rate, and we examine the outcomes of patients who became COVID-19 positive in relation to their vaccination status. This is a retrospective observational study carried out from 1 January until 30 March 2022 in Institut Jantung Negara (IJN). There were 5,255 patients admitted during the period of this study. A pre-admission Polymerase Chain Reaction (PCR) swab was done for all patients. Patients with a positive PCR on pre-admission screening were excluded. Patients who had exposure to COVID-19-positive staff or patients during hospitalization were defined as PUS and were quarantined and monitored for potential COVID-19 infection. Their frequency and risk of exposure (WHO definition) were recorded. On the final day of quarantine, a second PCR swab was performed on PUS patients who exhibited clinical deterioration, whether or not they exhibited COVID-19 symptoms. The severity of COVID-19 infection was defined as category 1 to 5A. All patients' vaccination status was recorded, and they were divided into three groups: fully immunised, partially immunised, and unvaccinated. We analysed the positivity rate of PUS patients becoming COVID-19 positive, the outcomes, and the correlation with vaccination status. Of the 492 inpatient PUS, only 13 became positive, giving a positivity rate of 2.6%. Eight (62%) had multiple exposures. The majority, 8/13 (72.7%), had high-risk exposure, and the remaining 5 had medium-risk exposure. Four (30.8%) were boosted, 7 (53.8%) were fully vaccinated, and 2 (15.4%) were partially vaccinated or unvaccinated. Eight patients were in categories 1-2, whilst 38% were in categories 3-5. Vaccination status did not correlate with COVID-19 category (p = 0.641). One (7.7%) patient died due to COVID-19 complications and sepsis. Within the first quarter of 2022, our institution's positivity rate (2.6%) was significantly lower than the country's (14.4%). High-risk exposure and multiple exposures to positive COVID-19 cases increased the risk of PUS becoming COVID-19 positive, irrespective of their vaccination status.
Keywords: COVID-19, boosted, high risk, Malaysia, quarantine, vaccination status.
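The reported positivity rate follows directly from the counts above:

```latex
\text{Positivity rate} = \frac{13}{492} \times 100\% \approx 2.6\%
```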
5 Adaptive WiFi Fingerprinting for Location Approximation
Authors: Mohd Fikri Azli bin Abdullah, Khairul Anwar bin Kamarul Hatta, Esther Jeganathan
Abstract:
WiFi has become an essential technology that is widely used nowadays. It is popular due to its convenience for use with mobile devices, especially among Internet users worldwide who rely on WiFi connections. There are many location-based services available nowadays which use Wireless Fidelity (WiFi) signal fingerprinting. A common example that is gaining popularity in this era is Foursquare. In this work, the WiFi signal is used to estimate the user's or client's location. Similar to GPS, the fingerprinting method needs a floor plan to increase the accuracy of location estimation. Still, inconsistent WiFi signals make the estimates differ at different time intervals. Therefore, an adaptive method is needed to obtain the most accurate signal at all times. WiFi signals are heavily distorted by external factors such as physical objects, radio frequency interference, electrical interference, and environmental factors, to name a few. Due to these factors, this work reduces the signal noise and performs estimation using the Nearest Neighbour method based on past signal activity, increasing the accuracy to more than 80%. The repository further increases the accuracy by using Artificial Neural Network (ANN) pattern matching. The repository acts as the server supporting the client-side application's decisions. Numerous previous works have adopted methods of collecting signal strengths into a repository over the years, but most were static. In this work, solutions for how the adaptive method matches the received signal to the data in the repository are proposed. With the said approach, location estimation can be done more accurately. The adaptive update allows the latest location fingerprint to be stored in the repository. Furthermore, any redundant location fingerprints are removed and only the updated version of the fingerprint is stored in the repository. How the user's location can be estimated is described further in the proposed solution section. After some study of previous works, it was found that the Artificial Neural Network is the most feasible method to deploy for updating the repository and making it adaptive. The Artificial Neural Network's function is to pattern-match the WiFi signal against the existing data available in the repository.
Keywords: Adaptive Repository, Artificial Neural Network, Location Estimation, Nearest Neighbour Euclidean Distance, WiFi RSSI Fingerprinting.
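A minimal sketch of nearest-neighbour fingerprint matching on RSSI vectors, as described above; the fingerprint repository, access-point names and readings are placeholders, not the authors' data.

```python
import math

# Placeholder fingerprint repository: location label -> mean RSSI (dBm) per access point.
FINGERPRINTS = {
    "lobby":   {"ap1": -45, "ap2": -70, "ap3": -80},
    "lab_101": {"ap1": -65, "ap2": -50, "ap3": -75},
    "cafe":    {"ap1": -80, "ap2": -72, "ap3": -48},
}

def euclidean_distance(sample, fingerprint):
    """Euclidean distance in RSSI space over the access points common to both."""
    aps = set(sample) & set(fingerprint)
    return math.sqrt(sum((sample[ap] - fingerprint[ap]) ** 2 for ap in aps))

def nearest_neighbour(sample):
    """Return the repository location whose fingerprint is closest to the sample."""
    return min(FINGERPRINTS, key=lambda loc: euclidean_distance(sample, FINGERPRINTS[loc]))

if __name__ == "__main__":
    reading = {"ap1": -62, "ap2": -53, "ap3": -77}   # one noisy client-side scan
    print("estimated location:", nearest_neighbour(reading))
```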
4 A POX Controller Module to Collect Web Traffic Statistics in SDN Environment
Authors: Wisam H. Muragaa, Kamaruzzaman Seman, Mohd Fadzli Marhusin
Abstract:
Software Defined Networking (SDN) is a new networking paradigm. It is designed to facilitate the way the network is managed, measured, debugged and controlled dynamically, and to make it suitable for modern applications. Generally, measurement methods can be divided into two categories: active and passive methods. The active measurement method injects test packets into the network in order to monitor their behaviour (the ping tool is an example), while the passive measurement method monitors the traffic for the purpose of deriving measurement values. Both active and passive measurement methods are useful for the collection of traffic statistics and the monitoring of network traffic. Although there has been work focusing on measuring traffic statistics in an SDN environment, it was only meant for measuring packet and byte rates for non-web traffic. In this study, a feasible method is designed to measure the number of packets and bytes in a certain time, and to facilitate obtaining statistics for both web traffic and non-web traffic. Web traffic refers to HTTP requests at the application layer, while non-web traffic refers to ICMP and TCP requests. Thus, this work is more comprehensive than previous works. With a module developed on the POX OpenFlow controller, information is collected from each active flow in the OpenFlow switch and presented on the Command Line Interface (CLI) and in the Wireshark interface. Statistics displayed on the CLI and in Wireshark include the protocol type, the number of bytes and the number of packets, among others. Besides, this module shows the number of flows added to the switch whenever traffic is generated from and to hosts, in the same statistics list. In order to carry out this work effectively, our Python module sends a statistics request message to the switch, requesting its current port and flow statistics every five seconds, while the switch replies with the required information in a statistics reply message. Thus, the POX controller is notified of and updated with any changes that happen in the entire network within a very short time. Therefore, the aim of this study is to prepare a list of the important statistics elements collected from the whole network, to be used in further research, particularly research dealing with the detection of network attacks, such as Distributed Denial of Service (DDoS), that cause a sudden rise in the number of packets and bytes.
Keywords: Mininet, OpenFlow, POX controller, SDN.
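A minimal sketch of a POX module along the lines described above, requesting flow statistics every five seconds and splitting counters into web (TCP port 80) and non-web traffic; it follows the standard POX flow-statistics pattern and is an illustrative outline, not the authors' actual module.

```python
# Minimal sketch of a POX module (save e.g. as ext/web_stats.py and launch with
# ./pox.py web_stats). Names and the port-80 rule are illustrative, not the authors' code.
from pox.core import core
from pox.lib.recoco import Timer
import pox.openflow.libopenflow_01 as of

log = core.getLogger()

def _request_stats():
    # Ask every connected switch for its current flow and port statistics.
    for connection in core.openflow._connections.values():
        connection.send(of.ofp_stats_request(body=of.ofp_flow_stats_request()))
        connection.send(of.ofp_stats_request(body=of.ofp_port_stats_request()))

def _handle_flow_stats(event):
    # Split counters into web traffic (TCP port 80) and everything else.
    web_packets = web_bytes = other_packets = other_bytes = 0
    for f in event.stats:
        if f.match.tp_src == 80 or f.match.tp_dst == 80:
            web_packets += f.packet_count
            web_bytes += f.byte_count
        else:
            other_packets += f.packet_count
            other_bytes += f.byte_count
    log.info("web: %s pkts / %s bytes | non-web: %s pkts / %s bytes (%s flows)",
             web_packets, web_bytes, other_packets, other_bytes, len(event.stats))

def launch():
    core.openflow.addListenerByName("FlowStatsReceived", _handle_flow_stats)
    Timer(5, _request_stats, recurring=True)   # poll every five seconds
```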
3 Association between Single Nucleotide Polymorphism of Calpain1 Gene and Meat Tenderness Traits in Different Genotypes of Chicken: Malaysian Native and Commercial Broiler Line
Authors: Abtehal Y. Anaas, Mohd. Nazmi Bin Abd. Manap
Abstract:
Meat tenderness is one of the most important factors affecting consumers' assessment of meat quality. Variation in meat tenderness is genetically controlled and varies among breeds, and it is also influenced by environmental factors that can affect its development during rigor mortis and postmortem. The final postmortem meat tenderization relies on the extent of proteolysis of myofibrillar proteins caused by the endogenous activity of the proteolytic calpain system. This calpain system includes different calcium-dependent cysteine proteases and an inhibitor, calpastatin. It is widely accepted that in farm animals, including chickens, the μ-calpain gene (CAPN1) is a physiological candidate gene for meat tenderness. This study aimed to identify the association of single nucleotide polymorphism (SNP) markers in the CAPN1 gene with the tenderness of chicken breast meat from two breed crosses, a Malaysian native chicken and a commercial broiler line. Ten five-month-old native chickens and ten 42-day-old commercial broilers were collected from the local market, and breast muscles were removed two hours after slaughter, packed separately in plastic bags and kept at -20ºC for 24 h. The tenderness phenotype of all chicken breast meat was determined by Warner-Bratzler Shear Force (WBSF). Thawing and cooking losses were also measured in the same breast samples before the WBSF determination. The polymerase chain reaction (PCR) was used to identify the previously reported C7198A and G9950A SNPs in the CAPN1 gene and assess their associations with meat tenderness in the two breeds. The broiler breast meat showed lower shear force values and lower thawing loss rates than the native chicken meat (p < 0.05), whereas they were similar in the rates of cooking loss. The study confirms some previous results that the markers CAPN1 C7198A and G9950A were not significantly associated with the variation in meat tenderness in chickens. Therefore, further study is needed to confirm the functional molecular mechanism of these SNPs and evaluate their associations in different chicken populations.
Keywords: CAPNl, chicken, meat tenderness, meat quality, SNPs.
2 Detailed Sensitive Detection of Impurities in Waste Engine Oils Using Laser Induced Breakdown Spectroscopy, Rotating Disk Electrode Optical Emission Spectroscopy and Surface Plasmon Resonance
Authors: Cherry Dhiman, Ayushi Paliwal, Mohd. Shahid Khan, M. N. Reddy, Vinay Gupta, Monika Tomar
Abstract:
Laser-based high-resolution spectroscopic techniques, namely Laser Induced Breakdown Spectroscopy (LIBS), Rotating Disk Electrode Optical Emission Spectroscopy (RDE-OES) and Surface Plasmon Resonance (SPR), have been used for the compositional and degradation analysis of used engine oils. Engine oils are mainly composed of aliphatic and aromatic compounds, and their soot contains hazardous components in the form of fine, coarse and ultrafine particles consisting of wear metal elements. Such coarse particulate matter (PM) and toxic elements are extremely dangerous to human health and can cause respiratory and genetic disorders. The combustible soot from thermal power plants, industry, aircraft, ships and vehicles can lead to environmental and climate destabilization. It contributes to global pollution of land, water and air and to global warming. The detection of such toxicants by elemental analysis is a very serious issue for the waste management of various organic and inorganic hydrocarbons and radioactive waste elements. In view of these points, the current study on used engine oils was performed. The fundamental characterization of the engine oils was conducted by measuring the water content and kinematic viscosity, which provide a crude analysis of the degradation of the used engine oil samples. The quantitative and qualitative microscopic analysis was performed using the RDE-OES technique, which confirmed the presence of elemental impurities (Pb, Al, Cu, Si, Fe, Cr, Na and Ba lines) in the used waste engine oil samples at a few ppm. The presence of such elemental impurities was confirmed by LIBS spectral analysis at various atomic transition lines. The recorded transition lines of Pb confirm that the maximum degradation was found in used engine oil samples no. 3 and 4. Apart from the basic tests, the dielectric constants and refractive indices of the engine oils were calculated via SPR analysis.
Keywords: Laser induced breakdown spectroscopy, rotating disk electrode optical emission spectroscopy, surface plasmon resonance, ICCD spectrometer, Nd:YAG laser, engine oil.
1 Solid State Drive End to End Reliability Prediction, Characterization and Control
Authors: Mohd Azman Abdul Latif, Erwan Basiron
Abstract:
A flaw or drift from expected operational performance in one component (NAND, PMIC, controller, DRAM, etc.) may affect the reliability of the entire Solid State Drive (SSD) system. Therefore, it is important to ensure the required quality of each individual component through qualification testing specified using standards or user requirements. Qualification testing is time-consuming and comes at a substantial cost for product manufacturers. A highly technical team drawn from all the key stakeholders embarks on reliability prediction from the beginning of new product development, identifies critical-to-reliability parameters, performs full-blown characterization to embed margin into product reliability, and establishes control to ensure that product reliability is sustainable in mass production. This paper discusses a comprehensive development framework covering the SSD end to end, from design to assembly, in-line inspection and in-line testing, which makes it possible to predict and validate product reliability at an early stage of new product development. During the design stage, the SSD goes through intense reliability margin investigation with a focus on assembly process attributes, process equipment control, in-process metrology and the forward-looking product roadmap. Once these pillars are completed, the next step is to perform process characterization and build up the reliability prediction model. Next, for the design validation process, reliability prediction, specifically a solder joint simulator, is established. The SSD is stratified into non-operating and operating tests with a focus on solder joint reliability and connectivity/component latent failures, addressed by prevention through design intervention and containment through the Temperature Cycle Test (TCT). Some of the SSDs are subjected to physical solder joint analysis, namely Dye and Pry (DP) and cross-section analysis. The results are fed back to the simulation team for any corrective actions required to further improve the design. Once the SSD is validated and proven to work, the monitor phase is implemented, whereby Design for Assembly (DFA) rules are updated. At this stage, the design changes, process and equipment parameters are in control. Predictable product reliability early in product development enables on-time sample qualification delivery to the customer, optimizes product development validation and development resources, and avoids forced late investment to patch end-of-life product failures. Understanding the critical-to-reliability parameters earlier allows focus on increasing the product margin, which will increase customer confidence in product reliability.
Keywords: e2e reliability prediction, SSD, TCT, Solder Joint Reliability, NUDD, connectivity issues, qualifications, characterization and control.