Search results for: sustainable supply chain performance
475 Soybean Seed Composition Prediction From Standing Crops Using Planet Scope Satellite Imagery and Machine Learning
Authors: Supria Sarkar, Vasit Sagan, Sourav Bhadra, Meghnath Pokharel, Felix B. Fritschi
Abstract:
Soybeans and their derivatives are important agricultural commodities around the world because of their wide applicability in human food, animal feed, biofuel, and various industries. However, the significance of soybean production depends on the quality of the soybean seeds rather than the yield alone. Seed composition depends widely on plant physiological properties, aerobic and anaerobic environmental conditions, nutrient content, and plant phenological characteristics, which can be captured by remote sensing datasets with high temporal resolution. PlanetScope (PS) satellite images have high potential for capturing sequential information on crop growth because of their frequent revisits throughout the world. In this study, we estimated soybean seed composition while the plants were still in the field by utilizing PS satellite images and different machine learning algorithms. Several experimental fields were established with varying genotypes, and different seed composition traits were measured from the samples as ground truth data. The PS images were processed to extract 462 hand-crafted vegetative and textural features. Four machine learning algorithms, i.e., partial least squares regression (PLSR), random forest regression (RFR), gradient boosting machine (GBM), and support vector regression (SVR), and two recurrent neural network architectures, i.e., long short-term memory (LSTM) and gated recurrent unit (GRU), were used in this study to predict the oil, protein, sucrose, ash, starch, and fiber content of soybean seed samples. The GRU and LSTM architectures had two separate branches, one for vegetative features and the other for texture features, which were later concatenated to predict seed composition. The results show that sucrose, ash, protein, and oil yielded comparable prediction results. The machine learning algorithm that best predicted each of the six seed composition traits differed. GRU worked well for oil (R²: 0.53) and protein (R²: 0.36), whereas SVR and PLSR showed the best results for sucrose (R²: 0.74) and ash (R²: 0.60), respectively. Although RFR and GBM provided comparable performance, these models tended to overfit severely. Among the features, vegetative features were found to be more important than texture features. It is suggested to use many vegetation indices for machine learning training and to select the best ones using feature selection methods. Overall, the study reveals the feasibility and efficiency of PS images and machine learning for plot-level seed composition estimation. However, special care should be given to designing the plot size in the experiments to avoid mixed-pixel issues.
Keywords: agriculture, computer vision, data science, geospatial technology
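For illustration only (not the authors' code), a minimal sketch of a two-branch recurrent regressor of the kind described above, with one GRU branch for vegetative time series and one for texture time series concatenated before a regression head; input shapes, layer sizes, and training settings are assumptions.

```python
# Minimal sketch of a two-branch GRU regressor for one seed trait.
# Shapes, layer sizes and the optimizer are illustrative assumptions.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_timesteps, n_veg, n_tex = 12, 20, 10   # assumed dimensions

veg_in = keras.Input(shape=(n_timesteps, n_veg), name="vegetative")
tex_in = keras.Input(shape=(n_timesteps, n_tex), name="texture")

veg_branch = layers.GRU(32)(veg_in)      # summarise vegetative feature series
tex_branch = layers.GRU(16)(tex_in)      # summarise texture feature series

merged = layers.Concatenate()([veg_branch, tex_branch])
merged = layers.Dense(32, activation="relu")(merged)
out = layers.Dense(1, name="seed_trait")(merged)   # e.g. oil or protein content

model = keras.Model([veg_in, tex_in], out)
model.compile(optimizer="adam", loss="mse")

# Dummy data just to show the expected input/output shapes.
X_veg = np.random.rand(8, n_timesteps, n_veg)
X_tex = np.random.rand(8, n_timesteps, n_tex)
y = np.random.rand(8, 1)
model.fit([X_veg, X_tex], y, epochs=1, verbose=0)
```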
Procedia PDF Downloads 138
474 CSR Communication Strategies: Stakeholder and Institutional Theories Perspective
Authors: Stephanie Gracelyn Rahaman, Chew Yin Teng, Manjit Singh Sandhu
Abstract:
Corporate scandals have made stakeholders apprehensive of large companies and led them to expect greater transparency in CSR matters. However, companies find it challenging to communicate CSR strategically to the intended stakeholders and, in the process, may fall short of maximizing their CSR efforts. Given that stakeholders have the ability to reward good companies or to take legal action against, or boycott, corporate brands that do not act in a socially responsible manner, companies must create a shared understanding of their CSR activities. As a result, communication has become a strategy for many companies to demonstrate CSR engagement and to minimize stakeholder skepticism. The main objective of this research is to examine the types of CSR communication strategies and the predictors that guide them. Employing Morsing and Schultz’s guide on CSR communication strategies, the study integrates stakeholder and institutional theory to develop a conceptual framework. The conceptual framework hypothesized that stakeholder (instrumental and normative) and institutional (regulatory environment, nature of business, mimetic intention, CSR focus, and corporate objectives) dimensions would drive CSR communication strategies. Preliminary findings from semi-structured interviews in Malaysia are consistent with the conceptual model in that stakeholder and institutional expectations guide CSR communication strategies. The findings show that most companies use two-way communication strategies. Companies that identified employees, the public, or customers as key stakeholders have started to embrace social media to stay in sync with new trends of communication, especially with Gen Y, which is their priority audience. Some companies creatively use multiple communication channels because they recognize that different stakeholders favor different channels. It therefore appears that companies use two-way communication strategies to complement the perceived limitations of one-way strategies, as some companies prefer a more interactive platform to engage stakeholders strategically in CSR communication. In addition to stakeholders, institutional expectations also play a vital role in influencing CSR communication. Due to industry peer pressure and corporate objectives (attracting international investors and customers), companies may be more driven to excel in social performance. For these reasons, companies tend to go beyond the basic mandatory requirements, excel in CSR activities, and be known as companies that champion CSR. In conclusion, companies use more two-way than one-way communication, and they use a combination of one- and two-way communication to target different stakeholders, as a result of stakeholder and institutional dimensions. Finally, in order to find out whether the conceptual framework actually fits the Malaysian context, companies’ responses regarding the expected organizational outcomes of communicating CSR were gathered from the interview transcripts. Thereafter, the findings are presented to show some of the key organizational outcomes (visibility and brand recognition, portraying a responsible image, attracting prospective employees, positive word of mouth, etc.) that companies in Malaysia expect from CSR communication. Based on these findings, the conceptual framework has been refined to include the newly identified organizational outcomes.
Keywords: CSR communication, CSR communication strategies, stakeholder theory, institutional theory, conceptual framework, Malaysia
Procedia PDF Downloads 290
473 Religious Capital and Entrepreneurial Behavior in Small Businesses: The Importance of Entrepreneurial Creativity
Authors: Waleed Omri
Abstract:
With the growth of the small business sector in emerging markets, developing a better understanding of what drives 'day-to-day' entrepreneurial activities has become an important issue for academics and practitioners. Innovation, as an entrepreneurial behavior, revolves around individuals who creatively engage in new organizational efforts. In a similar vein, innovation behaviors and processes at the level of the individual organizational member are central to any corporate entrepreneurship strategy. Despite the broadly acknowledged importance of entrepreneurship and innovation at the individual level in the establishment of successful ventures, the literature lacks evidence on how entrepreneurs can effectively harness their skills and knowledge in the workplace. The existing literature illustrates that religion can affect the day-to-day work behavior of entrepreneurs, managers, and employees. Religious beliefs and practices could affect daily entrepreneurial activities by fostering mental abilities and traits such as creativity, intelligence, and self-efficacy. In the present study, we define religious capital as a set of personal and intangible resources, skills, and competencies that emanate from an individual’s religious values, beliefs, practices, and experiences and may be used to increase the quality of economic activities. Religious beliefs and practices give individuals religious satisfaction, which can lead them to perform better in the workplace. In addition, religious ethics and practices have been linked to various positive employee outcomes in terms of organizational change, job satisfaction, and entrepreneurial intensity. As investigations of their consequences beyond direct task performance are still scarce, we explore whether religious capital plays a role in entrepreneurs’ innovative behavior. In sum, this study explores the determinants of individual entrepreneurial behavior by investigating the relationship between religious capital and entrepreneurs’ innovative behavior in the context of small businesses. To further explain and clarify the link between religious capital and innovative behavior, the present study proposes a model to examine the mediating role of entrepreneurial creativity. We use both Islamic work ethics (IWE) and Islamic religious practices (IRP) to measure Islamic religious capital. We use structural equation modeling with a robust maximum likelihood estimation to analyze data gathered from 289 Tunisian small businesses and to explore the relationships among the above-described variables. In line with the theory of planned behavior, only religious work ethics is found to increase the innovative behavior of small businesses’ owner-managers. Our findings also clearly demonstrate that the connection between religious capital-related variables and innovative behavior is better understood if the influence of entrepreneurial creativity, as a mediating variable of the aforementioned relationship, is taken into account. By incorporating both religious capital and entrepreneurial creativity into the analysis of innovative behavior, this study provides several important practical implications for promoting the innovation process in small businesses.
Keywords: entrepreneurial behavior, small business, religion, creativity
Procedia PDF Downloads 245
472 Study of Chemical State Analysis of Rubidium Compounds in Lα, Lβ₁, Lβ₃,₄ and Lγ₂,₃ X-Ray Emission Lines with Wavelength Dispersive X-Ray Fluorescence Spectrometer
Authors: Harpreet Singh Kainth
Abstract:
Rubidium salts have been commonly used as an electrolyte to improve the cycle efficiency of Li-ion batteries. In recent years, they have been implemented on a larger scale for further technological advances to improve the rate performance and cyclability of such batteries. X-ray absorption spectroscopy (XAS) is a powerful tool for obtaining information on the electronic structure, including the chemical state, of the active materials used in batteries. However, this technique is not well suited for industrial applications because it needs a synchrotron X-ray source and special sample cells for in-situ measurements. In contrast, the conventional wavelength dispersive X-ray fluorescence (WDXRF) spectrometer is a nondestructive technique used to study the chemical shift in all transitions (K, L, M, ...) and does not require any special sample pre-preparation. In the present work, the fluorescent Lα, Lβ₁, Lβ₃,₄ and Lγ₂,₃ X-ray spectra of rubidium in different chemical forms (Rb₂CO₃, RbCl, RbBr, and RbI) have been measured for the first time with a high-resolution wavelength dispersive X-ray fluorescence (WDXRF) spectrometer (Model: S8 TIGER, Bruker, Germany), equipped with an Rh-anode X-ray tube (4 kW, 60 kV and 170 mA). In the ₃₇Rb compounds, the measured energy shifts are in the range of (-0.45 to -1.71) eV for the Lα X-ray peak, (0.02 to 0.21) eV for Lβ₁, (0.04 to 0.21) eV for Lβ₃, (0.15 to 0.43) eV for Lβ₄ and (0.22 to 0.75) eV for the Lγ₂,₃ X-ray emission lines. The chemical shifts in the rubidium compounds have been measured by taking Rb₂CO₃ as the standard reference. A Voigt function is used to determine the central peak position for all compounds. Both positive and negative shifts have been observed in the L-shell emission lines: in the Lα X-ray emission lines, all compounds show a negative shift, while in the Lβ₁, Lβ₃,₄, and Lγ₂,₃ X-ray emission lines, all compounds show a positive shift. These shifts correspond to increases or decreases in the X-ray emission energies. It appears that the ligands attached to the central metal atom attract or repel electrons towards or away from the parent nucleus; this pulling and pushing character affects the central peak position of the compounds, which causes a chemical shift. To understand the chemical effect in more detail, factors such as electronegativity, line intensity ratio, effective charge and bond length, which govern the chemical state analysis of the rubidium compounds, were examined. The effective charge has been calculated using the Suchet and Pauling methods, while the line intensity ratio has been calculated from the area under the relevant emission peak. In the present work, it has been observed that electronegativity, effective charge and the intensity ratios (Lβ₁/Lα, Lβ₃,₄/Lα and Lγ₂,₃/Lα) are inversely proportional to the chemical shift (RbCl > RbBr > RbI), while the bond length is directly proportional to the chemical shift (RbI > RbBr > RbCl).
Keywords: chemical shift in L emission lines, bond length, electronegativity, effective charge, intensity ratio, rubidium compounds, WDXRF spectrometer
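As an illustration of the peak-position step described above (not the authors' code), a minimal sketch of fitting a Voigt profile to an emission line and taking the chemical shift as the centroid difference relative to the Rb₂CO₃ reference; the synthetic spectra, line energy and initial guesses are assumptions.

```python
# Minimal sketch: extract a peak centroid via a Voigt fit, then compute the
# chemical shift relative to a reference spectrum. Data are synthetic.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import voigt_profile

def voigt(e, amp, centre, sigma, gamma, baseline):
    """Voigt line shape: Gaussian width sigma, Lorentzian half-width gamma."""
    return amp * voigt_profile(e - centre, sigma, gamma) + baseline

def peak_centre(energy, counts, guess_centre):
    p0 = [counts.max(), guess_centre, 0.5, 0.5, counts.min()]
    popt, _ = curve_fit(voigt, energy, counts, p0=p0)
    return popt[1]                       # fitted central peak position (eV)

# Synthetic spectra around the Rb L-alpha line (~1694 eV, approximate).
energy = np.linspace(1690.0, 1698.0, 400)
ref = voigt(energy, 1e4, 1694.00, 0.6, 0.4, 50.0)     # Rb2CO3 reference
sample = voigt(energy, 9e3, 1693.55, 0.6, 0.4, 50.0)  # e.g. RbCl (assumed)

shift = peak_centre(energy, sample, 1694.0) - peak_centre(energy, ref, 1694.0)
print(f"chemical shift: {shift:+.2f} eV")
```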
Procedia PDF Downloads 508
471 Digital Twins: Towards an Overarching Framework for the Built Environment
Authors: Astrid Bagireanu, Julio Bros-Williamson, Mila Duncheva, John Currie
Abstract:
Digital Twins (DTs) have entered the built environment from more established industries like aviation and manufacturing, although there has never been a common goal for utilising DTs at scale. Defined as the cyber-physical integration of data between an asset and its virtual counterpart, the DT has been discussed in the literature mainly from an operational standpoint, in addition to monitoring the performance of a built asset. However, this has never been translated into how DTs should be implemented in a project and what responsibilities each project stakeholder holds in the realisation of a DT. What is needed is an approach to translate these requirements into actionable DT dimensions. This paper presents the foundation for an overarching framework specific to the built environment. For the purposes of this research, the widely used UK Royal Institute of British Architects (RIBA) Plan of Work 2020 is used as a basis for itemising project stages. The RIBA Plan of Work consists of eight stages designed to inform the definition, briefing, design, coordination, construction, handover, and use of a built asset. Similar project stages are utilised in other countries; therefore, the recommendations from the interviews presented in this paper are applicable internationally. At the same time, there is not a single mainstream software resource that leverages DT capabilities. This ambiguity meets an unparalleled ambition from governments and industries worldwide to achieve a national grid of interconnected DTs. For the construction industry to access these benefits, it needs a defined starting point. This research aims to provide a comprehensive understanding of the potential applications and ramifications of DTs in the context of the built environment. This paper is an integral part of a larger research project aimed at developing a conceptual framework for the Architecture, Engineering, and Construction (AEC) sector following a conventional project timeline. Therefore, this paper plays a pivotal role in providing practical insights and a tangible foundation for developing a stage-by-stage approach to assimilating the potential of DTs within the built environment. First, the research focuses on a review of the relevant literature, albeit acknowledging the inherent constraint of the limited sources available. Secondly, a qualitative study compiling the views of 14 DT experts is presented, concluding with an inductive analysis of the interview findings that ultimately highlights the barriers and strengths of DTs in the context of framework development. As parallel developments aim to progress net-zero-centred design and improve project efficiencies across the built environment, the limited resources available to support DTs should be leveraged to propel the industry into its digitalisation era, in which AEC stakeholders have a fundamental role in understanding this from the earliest stages of a project.
Keywords: digital twins, decision-making, design, net-zero, built environment
Procedia PDF Downloads 124
470 Effects of Dietary Polyunsaturated Fatty Acids and Beta Glucan on Maturity, Immunity and Fry Quality of Pabdah Catfish, Ompok pabda
Authors: Zakir Hossain, Md. Saddam Hossain
Abstract:
A nutritionally balanced diet and the selection of appropriate species are important criteria in aquaculture. The present study was conducted to evaluate the effects of a diet containing polyunsaturated fatty acids (PUFAs) and beta glucan on the growth performance, feed utilization, maturation, immunity, and early embryonic and larval development of the endangered Pabdah catfish, Ompok pabda. In this study, squid-extracted lipids and mushroom powder were used as the sources of PUFAs and beta glucan, respectively, and two isonitrogenous diets were formulated, a basal or control (CON) diet and a treated (PBG) diet, each maintaining a 30% protein level. During the study period, the physicochemical conditions of the water, such as temperature, pH, and dissolved oxygen (DO), were similar in each cistern at 26.5±2 °C, 7.4±0.2, and 6.7±0.5 ppm, respectively. The results showed that final mean body weight, final mean length gain, food conversion ratio (FCR), specific growth rate (SGR), food conversion efficiency (%), hepatosomatic index (HSI), kidney index (KI), and viscerosomatic index (VSI) were significantly (P<0.01 and P<0.05) higher in fish fed the PBG diet than in fish fed the CON diet. The length-weight relationship and relative condition factor (K) of O. pabda were significantly (P<0.05) affected by the PBG diet. The gonadosomatic index (GSI), sperm viability, blood serum calcium ion concentrations (Ca²⁺), and vitellogenin level, used as indicators of fish maturation, were significantly (P<0.05) higher in fish fed the PBG diet than in fish fed the CON diet. During the spawning season, lipid granules and normal morphological structure were observed in the liver of the treated fish, whereas fewer lipid granules were observed in the liver of the control group. Immunity and stress resistance-related parameters such as hematological indices, antioxidant activity, lysozyme level, respiratory burst activity, blood reactive oxygen species (ROS), complement activity (ACH50 assay), specific IgM, brain AChE, and plasma PGOT and PGPT enzyme activity were significantly (P<0.01 and P<0.05) higher in fish fed the PBG diet than in fish fed the CON diet. The fecundity, fertilization rate (92.23±2.69%), hatching rate (87.43±2.17%) and survival (76.62±0.82%) of offspring were significantly higher (P˂0.05) with the PBG diet than with the control. Consequently, early embryonic and larval development was better in the PBG-treated group than in the control. Therefore, the present study showed that the experimental diet enriched with polyunsaturated fatty acids (PUFAs) and beta glucan was more effective and achieved better growth, feed utilization, maturation, immunity, and spawning performance in O. pabda.
Keywords: polyunsaturated fatty acids, beta glucan, maturity, immunity, catfish
Procedia PDF Downloads 12
469 Pre-Cooling Strategies for the Refueling of Hydrogen Cylinders in Vehicular Transport
Authors: C. Hall, J. Ramos, V. Ramasamy
Abstract:
Hydrocarbon-based fuel vehicles are a major contributor to air pollution due to the harmful emissions they produce, leading to a demand for cleaner fuel types. A leader in this pursuit is hydrogen, whose application in vehicles produces zero harmful emissions, the only by-product being water. To compete with the performance of conventional vehicles, hydrogen gas must be stored on board in cylinders at high pressures (35–70 MPa) and have a short refueling duration (approximately 3 minutes). However, the fast filling of hydrogen cylinders causes a significant rise in temperature due to the combination of the negative Joule-Thomson effect and the compression of the gas. This can lead to structural failure, and therefore a maximum allowable internal temperature of 85°C has been imposed by the International Organization for Standardization. The technological solution to the issue of rapid temperature rise during the refueling process is to decrease the temperature of the gas entering the cylinder. Pre-cooling of the gas uses a heat exchanger and requires energy for its operation. Thus, it is imperative to determine the least amount of energy input required to lower the gas temperature, for cost savings. A validated universal thermodynamic model is used to identify an energy-efficient pre-cooling strategy. The model requires negligible computational time and is applied to previously validated experimental cases to optimize pre-cooling requirements. The pre-cooling characteristics include its location within the refueling timeline and its duration. A constant pressure-ramp rate is imposed to eliminate the effects of rapid changes in mass flow rate. A pre-cooled gas temperature of -40°C is applied, which is the lowest allowable temperature. The heat exchanger is assumed to be ideal, with no energy losses. The refueling of the cylinders is modeled with the pre-cooling split into ten-percent time intervals. Furthermore, varying burst durations are applied in both the early and late stages of the refueling procedure. The model shows that pre-cooling in the later stages of the refueling process is more energy efficient than early pre-cooling. In addition, the efficiency of pre-cooling towards the end of the refueling process is independent of the pressure profile at the inlet. This leads to the hypothesis that pre-cooled gas should be applied as late as possible in the refueling timeline and at very low temperatures. The model showed a 31% reduction in energy demand, whilst achieving the same final gas temperature, for a refueling scenario in which pre-cooling was applied towards the end of the process. The identification of the most energy-efficient refueling approaches, whilst adhering to the safety guidelines, is imperative to reducing the operating cost of hydrogen refueling stations. Heat exchangers are energy-intensive, and thus reducing the energy requirement would lead to a cost reduction. This investigation shows that pre-cooling should be applied as late as possible and for short durations.
Keywords: cylinder, hydrogen, pre-cooling, refueling, thermodynamic model
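For illustration only (this is not the validated thermodynamic model used in the study), a minimal sketch of comparing the cooling duty of an ideal pre-cooler for an early versus a late pre-cooling window, under a constant-specific-heat hydrogen approximation; the mass-flow profile and all numbers are assumptions.

```python
# Minimal sketch: cooling energy of an ideal pre-cooler over a chosen
# pre-cooling window during a 3-minute fill. Profile and values assumed.
import numpy as np

CP_H2 = 14300.0          # J/(kg K), rough constant-pressure specific heat
T_AMBIENT = 25.0         # degC, gas temperature without pre-cooling
T_PRECOOL = -40.0        # degC, lowest allowable pre-cooled temperature

t = np.linspace(0.0, 180.0, 1801)         # fill time, s
dt = t[1] - t[0]
mdot = 0.010 * np.exp(-t / 120.0)         # assumed decaying mass flow, kg/s

def precooling_energy(t_start, t_end):
    """Cooling energy (J) when pre-cooling is active on [t_start, t_end]."""
    active = (t >= t_start) & (t <= t_end)
    duty = mdot * CP_H2 * (T_AMBIENT - T_PRECOOL) * active   # W
    return np.sum(duty) * dt

early = precooling_energy(0.0, 54.0)      # first 30 % of the fill
late = precooling_energy(126.0, 180.0)    # last 30 % of the fill
print(f"early window: {early/1e3:.0f} kJ, late window: {late/1e3:.0f} kJ")
```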
Procedia PDF Downloads 99
468 Balanced Score Card a Tool to Improve Naac Accreditation – a Case Study in Indian Higher Education
Authors: CA Kishore S. Peshori
Abstract:
Introduction: India, a country with vast diversity and a huge population, is expected to have the largest young population by 2020. Higher education has been, and will always be, the basic requirement for turning a developing nation into a developed one. To improve any system, it needs to be benchmarked, and various tools exist for benchmarking such systems. Education in India is delivered by universities, which are mainly funded by the government. These universities, in turn, set up colleges for delivering education, which are again funded mainly by the government. Recently, however, autonomy has also been given to universities and colleges, and foreign universities are waiting to enter Indian boundaries. With a large number of universities and colleges, it has become more and more necessary to measure these institutes for benchmarking, and various tools have been proposed for measuring them. In India, college assessments have been made compulsory by the UGC, and NAAC has been officially recognised as the accreditation agency. The NAAC assessment is based on seven criteria, namely: 1. Curricular assessments, 2. Teaching, learning and evaluation, 3. Research, consultancy and extension, 4. Infrastructure and learning resources, 5. Student support and progression, 6. Governance, leadership and management, 7. Innovation and best practices. NAAC tries to benchmark the institution for the identification, sustainability, dissemination and adaptation of best practices. It grades the institution according to these seven criteria, and the funding of the institution is based on these grades. Many colleges are struggling to get the best grades, but they have not come across a systematic tool to achieve these results. The Balanced Scorecard (BSC) developed by Kaplan has been a successful tool for corporates to develop best practices so as to increase their financial performance and to retain and increase their customers, growing the organization to the next level. It is time to test this tool for an educational institute. Methodology: The paper tries to develop a prototype for a college based on secondary data. Once a prototype is developed, the researcher, based on a questionnaire, will try to test this tool for successful implementation. The success of this research will depend on the implementation of the BSC in an institute and on the improvement of its grading due to this implementation. Limitation of time is a major constraint in this research, as the NAAC cycle takes a minimum of four years for accreditation and reaccreditation; the methodology will therefore limit itself to secondary data and a questionnaire to be circulated to colleges along with the prototype BSC model. Conclusion: The BSC is a successful tool for enhancing the growth of an organization, and educational institutes are no exception. The BSC will only have to be realigned to suit the NAAC criteria. Once this prototype is developed, its success can be tested only through implementation, but this research paper will be the first step towards developing this tool and will also initiate that success by developing a questionnaire and gathering and evaluating the responses in order to move to the next level of actual implementation.
Keywords: balanced scorecard, benchmarking, NAAC, UGC
Procedia PDF Downloads 274
467 Processes Controlling Release of Phosphorus (P) from Catchment Soils and the Relationship between Total Phosphorus (TP) and Humic Substances (HS) in Scottish Loch Waters
Authors: Xiaoyun Hui, Fiona Gentle, Clemens Engelke, Margaret C. Graham
Abstract:
Although past work has shown that phosphorus (P), an important nutrient, may form complexes with aqueous humic substances (HS), the principal component of natural organic matter, the nature of such interactions is poorly understood. Humic complexation may not only enhance P concentrations but may also change its bioavailability within such waters and, in addition, influence its transport within catchment settings. This project examines the relationships and associations of P, HS, and iron (Fe) in Loch Meadie, Sutherland, North Scotland, a mesohumic freshwater loch which has been assessed as being at reference condition with respect to P. The aim is to identify characteristic spectroscopic parameters which can enhance the performance of the model currently used to predict reference-condition TP levels for highly coloured Scottish lochs under the Water Framework Directive. In addition to Loch Meadie, samples from other reference-condition lochs in north Scotland and Shetland were analysed. Including different types of reference-condition lochs (clear water, mesohumic and polyhumic water) allowed the relationship between total phosphorus (TP) and HS to be explored more fully. The pH, [TP], [Fe], UV/Vis absorbance/spectra, [TOC] and [DOC] of the loch water samples were obtained using accredited methods. Loch waters were neutral to slightly acidic/alkaline (pH 6-8). [TP] in loch waters was lower than 50 µg L⁻¹, and in Loch Meadie waters was typically <10 µg L⁻¹. [Fe] in loch waters was mainly <0.6 mg L⁻¹, but for some loch water samples [Fe] was in the range 1.0-1.8 mg L⁻¹, and there was a positive correlation with [TOC] (R²=0.61). Lochs were classified as clear water, mesohumic or polyhumic based on water colour; the ranges of colour values of the sampled lochs in each category were 0.2–0.3, 0.2–0.5 and 0.5–0.8 a.u. (10 mm pathlength), respectively. There was also a strong positive correlation between [DOC] and water colour (R²=0.84). The UV/Vis spectra (200-700 nm) of the water samples were featureless, with only a slight “shoulder” observed in the 270–290 nm region. Ultrafiltration was then used to separate colloidal and truly dissolved components from the loch waters and, since it contained the majority of the aqueous P and Fe, the colloidal component was fractionated by gel filtration chromatography. Gel filtration chromatographic fractionation of the colloids revealed two brown-coloured bands which had distinctive UV/Vis spectral features. The first eluting band contained larger and more aromatic HS molecules than the second band, and in addition, both P and Fe were primarily associated with the larger, more aromatic HS. This result demonstrated that P is able to form complexes with Fe-rich components of HS, and thus provided a scientific basis for the significant correlation between [Fe] and [TP] shown by previous monitoring data on reference-condition lochs from the Scottish Environment Protection Agency (SEPA). The distinctive features of the HS will be used as the basis for an improved spectroscopic tool.
Keywords: total phosphorus, humic substances, Scottish loch water, WFD model
Procedia PDF Downloads 546
466 The Misuse of Free Cash and Earnings Management: An Analysis of the Extent to Which Board Tenure Mitigates Earnings Management
Authors: Michael McCann
Abstract:
Managerial theories propose that, in joint stock companies, executives may be tempted to waste excess free cash on unprofitable projects in order to keep control of resources. To conceal their projects' poor performance, they may seek to engage in earnings management. On the one hand, managers may manipulate earnings upwards in order to post ‘good’ performances and safeguard their position. On the other, since managers' pursuit of unrewarding investments is likely to lead to low long-term profitability, managers may use negative accruals to reduce the current year’s earnings, smoothing earnings over time in order to conceal the negative effects. Agency models argue that boards of directors are delegated by shareholders to ensure that companies are governed properly. Part of that responsibility is ensuring the reliability of financial information. Analyses of the impact of board characteristics, particularly board independence, on the misuse of free cash flow and earnings management find conflicting evidence. However, existing characterizations of board independence do not account for such directors gaining firm-specific knowledge over time, which influences their monitoring ability. Further, there is little analysis of the influence of the relative experience of independent directors and executives on decisions surrounding the use of free cash. This paper contributes to the literature on the heterogeneous characteristics of boards by investigating the influence of independent director tenure on earnings management, as well as the relative tenures of independent directors and Chief Executives. A balanced panel dataset comprising 51 companies across 11 annual periods from 2005 to 2015 is used for the analysis. In each annual period, firms were classified as conducting earnings management if their discretionary accruals were in the bottom quartile (downwards) or the top quartile (upwards) of the distributed values for the sample. Logistic regressions were conducted to determine the marginal impact of independent board tenure and a number of control variables on the probability of conducting earnings management. The findings indicate that both absolute and relative measures of board independence and experience do not have a significant impact on the likelihood of earnings management. It is the level of free cash flow which is the major influence on the probability of earnings management: higher free cash flow increases the probability of earnings management significantly. The research also investigates whether board monitoring of earnings management is contingent on the level of free cash flow. However, the results suggest that board monitoring is not amplified when free cash flow is higher. This suggests that the extent of earnings management in companies is determined by a range of company-, industry- and situation-specific factors.
Keywords: corporate governance, boards of directors, agency theory, earnings management
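As an illustration of the classification and regression steps described above (not the authors' code), a minimal sketch that flags firm-years whose discretionary accruals fall in the top or bottom quartile and fits a logistic regression of that flag on free cash flow, tenure measures and controls; the file name and column names are assumptions.

```python
# Minimal sketch: quartile-based earnings-management flag + logit model.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("panel.csv")            # hypothetical balanced panel, 2005-2015

q1 = df["discretionary_accruals"].quantile(0.25)
q3 = df["discretionary_accruals"].quantile(0.75)
df["earnings_mgmt"] = ((df["discretionary_accruals"] <= q1) |
                       (df["discretionary_accruals"] >= q3)).astype(int)

X = sm.add_constant(df[["free_cash_flow", "indep_director_tenure",
                        "relative_tenure", "firm_size", "leverage"]])
logit = sm.Logit(df["earnings_mgmt"], X).fit()
print(logit.summary())
print(logit.get_margeff().summary())     # marginal effects of each regressor
```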
Procedia PDF Downloads 236
465 Monitoring and Evaluation of Web-Services Quality and Medium-Term Impact on E-Government Agencies' Efficiency
Authors: A. F. Huseynov, N. T. Mardanov, J. Y. Nakhchivanski
Abstract:
This practical research is aimed at improving the management quality and efficiency of public administration agencies providing e-services. The monitoring system developed will provide a continuous review of the websites' compliance with the selected indicators, their evaluation based on those indicators, and a ranking of services according to the quality criteria. The responsible departments in the government agencies were surveyed; the questionnaire covers issues of management and feedback, the e-services provided, and the application of information systems. By analyzing the main affecting factors and barriers, recommendations will be given that lead to the relevant decisions to strengthen the state agencies' competencies for the management and provision of their services. Component 1: E-services monitoring system. Three separate monitoring activities are proposed to be executed in parallel: (1) continuous tracing of e-government sites using a built-in web-monitoring program; this program generates several quantitative values which are mainly related to the technical characteristics and performance of the websites; (2) expert assessment of e-government sites in accordance with two general criteria. Criterion 1: technical quality of the site. Criterion 2: usability/accessibility (load, see, use). Each high-level criterion is in turn subdivided into several sub-criteria, such as the fonts and the color of the background (is it readable?), W3C coding standards, availability of robots.txt and the site map, the search engine, the feedback/contact mechanisms, and the security mechanisms; (3) an online survey of the users/citizens, a small group of questions embedded in the e-service websites. The questionnaires comprise information concerning navigation, the users’ experience with the website (whether it was positive or negative), etc. Automated monitoring of websites on its own cannot capture the whole evaluation process and should therefore be seen as a complement to the experts' manual web evaluations. All of the separate results were integrated to provide the complete evaluation picture. Component 2: Assessment of the agencies'/departments' efficiency in providing e-government services. The relevant indicators to evaluate the efficiency and effectiveness of e-services were identified; the survey was conducted in all the governmental organizations (ministries, committees and agencies) that provide electronic services for citizens or businesses; the quantitative and qualitative measures cover the following areas of activity: e-governance, e-services, feedback from the users, and the information systems at the agencies’ disposal. Main results: 1. The software program and the set of indicators for website evaluation have been developed, and the results of pilot monitoring have been presented. 2. The (internal) efficiency of the e-government agencies has been evaluated based on the survey results, with practical recommendations related to human potential, the information systems used and the e-services provided.
Keywords: e-government, web-sites monitoring, survey, internal efficiency
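For illustration only (not the monitoring system itself), a minimal sketch of the kind of automated checks described above: response status, load time, and the availability of robots.txt and a site map for a list of sites; the URLs are placeholders.

```python
# Minimal sketch of automated per-site checks used as monitoring indicators.
import requests

SITES = ["https://example-ministry.gov.az", "https://example-agency.gov.az"]

def check_site(base_url):
    result = {"url": base_url}
    resp = requests.get(base_url, timeout=10)
    result["status"] = resp.status_code
    result["load_seconds"] = resp.elapsed.total_seconds()
    for probe in ("robots.txt", "sitemap.xml"):
        r = requests.get(f"{base_url}/{probe}", timeout=10)
        result[probe] = (r.status_code == 200)
    return result

for site in SITES:
    try:
        print(check_site(site))
    except requests.RequestException as exc:
        print(f"{site}: unreachable ({exc})")
```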
Procedia PDF Downloads 305
464 An Investigation on the Sandwich Panels with Flexible and Toughened Adhesives under Flexural Loading
Authors: Emre Kara, Şura Karakuzu, Ahmet Fatih Geylan, Metehan Demir, Kadir Koç, Halil Aykul
Abstract:
The selection of materials in the design of sandwich structures is a crucial aspect because of the positive or negative influence of the base materials on the mechanical properties of the entire panel. The literature shows that the selection of the skin and core materials plays a very important role in the behavior of the sandwich. Besides this, the use of the correct adhesive can make the whole structure show better mechanical results and behavior. The sandwich structures investigated in this study were therefore obtained from the combination of an aluminum foam core and three different glass fiber reinforced polymer (GFRP) skins, using two different commercial adhesives based on flexible polyurethane and toughened epoxy. Static and dynamic tests had already been applied to sandwiches with the different types of adhesives. In the present work, static three-point bending tests were performed on sandwiches having an aluminum foam core with a thickness of 15 mm, skins made of three different types of fabric ([0°/90°] cross-ply E-glass biaxial stitched, [0°/90°] cross-ply E-glass woven and [0°/90°] cross-ply S-glass woven, all with the same thickness of 1.75 mm) and two different commercial adhesives (flexible polyurethane based and toughened epoxy based), at different support span distances (L = 55, 70, 80, 125 mm), with the aim of analyzing their flexural performance. The skins used in the study were produced via the Vacuum Assisted Resin Transfer Molding (VARTM) technique and were easily bonded onto the aluminum foam core with the flexible and toughened adhesives under very low pressure, using a press machine with alignment tabs matching the total thickness of the whole panel. The main results of the flexural loading are the force-displacement curves obtained from the bending tests, the peak force values, the absorbed energy, the collapse mechanisms, the adhesion quality, and the effect of the support span length and adhesive type. The experimental results showed that the sandwiches with the epoxy-based toughened adhesive and skins made of S-glass woven fabric exhibited the best adhesion quality and mechanical properties. The sandwiches with the toughened adhesive exhibited higher peak force and energy absorption values compared to the sandwiches with the flexible adhesive. The core shear mode occurred through the thickness of the core in the sandwiches with the flexible polyurethane-based adhesive, while the same mode took place along the length of the core in the sandwiches with the toughened epoxy-based adhesive. The use of these sandwich structures can lead to a weight reduction of transport vehicles while providing adequate structural strength under operating conditions.
Keywords: adhesive and adhesion, aluminum foam, bending, collapse mechanisms
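As an illustration of two of the reported bending metrics (not the authors' analysis scripts), a minimal sketch that extracts the peak force and the absorbed energy, the latter taken as the area under the force-displacement curve; the file name and column names are assumptions.

```python
# Minimal sketch: peak force and absorbed energy from a bending test export.
import pandas as pd
from scipy.integrate import trapezoid

curve = pd.read_csv("three_point_bending.csv")     # hypothetical test export
force = curve["force_N"].to_numpy()                # N
displacement = curve["displacement_mm"].to_numpy() / 1000.0   # m

peak_force = force.max()
absorbed_energy = trapezoid(force, displacement)   # J, area under the curve

print(f"peak force: {peak_force:.0f} N, absorbed energy: {absorbed_energy:.1f} J")
```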
Procedia PDF Downloads 329
463 Determination of 1-Deoxynojirimycin and Phytochemical Profile from Mulberry Leaves Cultivated in Indonesia
Authors: Yasinta Ratna Esti Wulandari, Vivitri Dewi Prasasty, Adrianus Rio, Cindy Geniola
Abstract:
Mulberry is a plant that is widely cultivated around the world, mostly for the silk industry. In recent years, studies have shown that mulberry leaves have an anti-diabetic effect, which mostly comes from the compound known as 1-deoxynojirimycin (DNJ). DNJ is a very potent α-glucosidase inhibitor: it decreases the degradation rate of carbohydrates in the digestive tract, leading to slower glucose absorption and reducing the post-prandial glucose level significantly. Mulberry leaves are also known as the best source of DNJ. The DNJ in mulberry leaves has therefore received considerable attention, because of the increasing number of diabetic patients and the growing awareness of more natural treatments for diabetes. The DNJ content in mulberry leaves varies depending on the mulberry species, the leaf's age, and the plant's growth environment. A few of the mulberry varieties cultivated in Indonesia are Morus alba var. kanva-2, M. alba var. multicaulis, M. bombycis var. lembang, and M. cathayana. The lack of data concerning the phytochemicals contained in Indonesian mulberry leaves is restraining their use in the medicinal field. The aim of this study is to make full use of mulberry leaves cultivated in Indonesia as a medicinal herb in the local, national, or global community by determining the DNJ and other phytochemical contents in them. This study used eight leaf samples, namely the young leaves and mature leaves of Morus alba var. kanva-2, M. alba var. multicaulis, M. bombycis var. lembang, and M. cathayana. The DNJ content was analyzed using reverse-phase high performance liquid chromatography (HPLC). The stationary phase was a silica C18 column and the mobile phase was acetonitrile:acetic acid 0.1% 1:1, with an elution rate of 1 mL/min. Prior to HPLC analysis, the samples were derivatized with FMOC to ensure that the DNJ was detectable by a VWD detector at 254 nm. Results showed that the DNJ content in the samples ranged from 0.07 to 2.90 mg DNJ/g leaves, with the highest content found in M. cathayana mature leaves (2.90 ± 0.57 mg DNJ/g leaves). All of the mature leaf samples were also found to contain a higher amount of DNJ than their respective young leaf samples. The phytochemicals in the leaf samples were tested using qualitative tests. Results showed that all eight leaf samples contain alkaloids, phenolics, flavonoids, tannins, and terpenes. The presence of these phytochemicals contributes to the therapeutic effect of mulberry leaves. Pyrolysis-gas chromatography-mass spectrometry (Py-GC-MS) analysis was also performed on the eight samples to determine their phytochemical content quantitatively. The pyrolysis temperature was set at 400 °C, with a capillary column Phase Rtx-5MS 60 × 0.25 mm ID as the stationary phase and helium gas as the mobile phase. A few of the terpenes found are known to have anticancer and antimicrobial properties. From all the results, the four mulberry varieties cultivated in Indonesia contain DNJ in their leaves, as well as various phytochemicals such as alkaloids, phenolics, flavonoids, tannins, and terpenes, which are beneficial to our health.
Keywords: Morus, 1-deoxynojirimycin, HPLC, Py-GC-MS
Procedia PDF Downloads 331
462 Influence of Glass Plates Different Boundary Conditions on Human Impact Resistance
Authors: Alberto Sanchidrián, José A. Parra, Jesús Alonso, Julián Pecharromán, Antonia Pacios, Consuelo Huerta
Abstract:
Glass is a commonly used material in building; there is no unique design solution, as plates with different numbers of layers and interlayers may be used. In most façades, a security glazing has to be used according to its performance in the pendulum impact test. The European Standard EN 12600 establishes an impact test procedure for the classification, from the point of view of human safety, of flat plates of different thicknesses, using a pendulum with two tires and a 50 kg mass that impacts the plate from different heights. However, this test does not replicate the actual dimensions and boundary conditions used in building configurations, and so the real stress distribution is not determined with this test. The influence of different boundary conditions, such as the ones employed on construction sites, is not well taken into account when testing the behaviour of safety glazing, and there is no detailed procedure and criteria to determine the glass resistance against human impact. To reproduce the actual boundary conditions on site, when needed, the pendulum test is arranged to be used "in situ", with no account taken of load control or stiffness and without a standard procedure. The fracture stress of small and large glass plates fits a Weibull distribution with quite a large dispersion, so conservative values are adopted for the admissible fracture stress under static loads. In fact, tests performed for human impact give a fracture strength two or three times higher, and often without total fracture of the glass plate. Newer standards, such as DIN 18008-4, state an admissible fracture stress 2.5 times higher than the one used for static and wind loads. Two working areas are now open: a) to define a standard for the ‘in situ’ test; b) to prepare a laboratory procedure that allows testing with a more realistic stress distribution. To work on both research lines, a laboratory that allows the testing of medium-size specimens with different boundary conditions has been developed. A special steel frame allows the stiffness of the glass support substructure to be reproduced, including a rigid condition used as a reference. The dynamic behaviour of the glass plate and its support substructure has been characterized with finite element models updated with modal test results. In addition, a new portable impact machine is being used to obtain sufficient force and direction control during the impact test. An impact energy of 100 J is used. To avoid problems with broken glass plates, the tests have been carried out using an aluminium plate of 1000 mm x 700 mm size and 10 mm thickness supported on four sides; three different substructure stiffness conditions are used. Detailed control of the dynamic stiffness and the behaviour of the plate is achieved with modal tests. The repeatability of the test and the reproducibility of the results prove that a procedure to control both the stiffness of the plate and the impact level is necessary.
Keywords: glass plates, human impact test, modal test, plate boundary conditions
Procedia PDF Downloads 308
461 Promoting Class Cooperation-Competition (Coo-Petition) and Empowerment to Graduating Architecture Students through a Holistic Planning Approach in Their Thesis Proposals
Authors: Felicisimo Azagra Tejuco Jr.
Abstract:
Mentoring architecture thesis students is a very critical and exhausting task for both the adviser and the advisee. It poses the challenge of resource and time management for the candidate while demanding the best professional guidance from the mentor. The University of Santo Tomas (Manila, Philippines) is Asia's oldest university, and among its notable programs is its Architecture curriculum. Presently, the five-year Architecture program requires ten semesters of academic coursework, and the last three semesters are devoted to each graduating Architecture student's thesis proposal and defense. The thesis proposal is developed and submitted for approval in the subject Research Methods for Architecture (RMA); data gathering and initial schemes are conducted in Architectural Design (AD) 9, and the proposal is finalized and defended in AD 10. In recent years, before the pandemic, the graduating classes maintained an average of 300 candidates, and students are encouraged to explore any topic of interest or relevance. Since 2019-2020, one thesis class has used a community planning approach in mentoring the class. Compared to other sections, the first meeting of RMA has been allocated to a visioning exercise and an assessment of the class's strengths, weaknesses, opportunities, and threats (SWOT). Here, the group's work activities have been fine-tuned to address some identified concerns while still being aligned with the academic calendar. Occasional peer critiques complement the class lectures, and the course ends with the approval of each student's proposal. The final year, or last two semesters, of the graduating class is focused on the approved proposal. Compared to the other classes, the 18 weeks of the first semester consist of regular consultations, complemented by lectures from the adviser or guest speakers. Through remote peer consultations, the mentor made the most of each meeting in groups of three to five, encouraging constructive criticism among the class. At the end of the first semester, mock presentations to an external jury are conducted to check the design outputs for improvement. The final semester is spent mostly on the finalization of the plans, with feedback from the previous semester expected to be integrated into the final outputs. Before the final deliberations, at least two technical rehearsals were conducted per group. Regardless of the outcome, an assessment of each student's performance is held as a class, and personal realizations and observations are encouraged. Through online surveys, interviews, and focused group discussions with former students, the effectiveness of the mentoring strategies was reviewed and evaluated. Initial feedback highlighted the relevance of setting a positive tone for the course, constructive criticism from peers and experts, and consciousness of deadlines as essential elements of an effective semester.
Keywords: cooperation, competition, student empowerment, class vision
Procedia PDF Downloads 79
460 Efficiency of Virtual Reality Exercises with Nintendo Wii System on Balance and Independence in Motor Functions in Hemiparetic Patients: A Randomized Controlled Study
Authors: Ayça Utkan Karasu, Elif Balevi Batur, Gülçin Kaymak Karataş
Abstract:
The aim of this study was to examine the efficiency of virtual reality exercises with the Nintendo Wii system on balance and independence in motor functions. This randomized controlled, assessor-blinded study included 23 stroke inpatients with hemiparesis, all within 12 months post-stroke. Patients were randomly assigned to a control group (n=11) or an experimental group (n=12) via the block randomization method. The control group participated in a conventional balance rehabilitation programme. The study group received a four-week balance training programme five times per week, with a session duration of 20 minutes, in addition to the conventional balance rehabilitation programme. Balance was assessed by the Berg Balance Scale, the functional reach test, the timed up and go test, the postural assessment scale for stroke, and the static balance index. In addition, the displacement of centre of pressure sway and the centre of pressure displacement during weight shifting were calculated with the Emed-SX system. Independence in motor functions was assessed by the Functional Independence Measure (FIM) ambulation and FIM transfer subscales. The outcome measures were evaluated at baseline, at the 4th week (post-treatment), and at the 8th week (follow-up). Repeated measures analysis of variance was performed for each of the outcome measures. A significant group-by-time interaction was detected in the scores of the Berg Balance Scale, the functional reach test, eyes-open anteroposterior and mediolateral centre of pressure sway distance, eyes-closed anteroposterior centre of pressure sway distance, and the centre of pressure displacement during weight shifting to the affected side, to the unaffected side, and in total (p < 0.05). The time effect was statistically significant in the scores of the Berg Balance Scale, the functional reach test, the timed up and go test, the postural assessment scale for stroke, the static balance index, eyes-open anteroposterior and mediolateral centre of pressure sway distance, eyes-closed mediolateral centre of pressure sway distance, the centre of pressure displacement during weight shifting to the affected side, and the Functional Independence Measure ambulation and transfer scores (p < 0.05). Virtual reality exercises with the Nintendo Wii system, combined with a conventional balance rehabilitation programme, enhance balance performance and independence in motor functions in stroke patients.
Keywords: balance, hemiplegia, stroke rehabilitation, virtual reality
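For illustration only (the study itself describes a repeated measures ANOVA, not this code), a minimal sketch of a group-by-time analysis for one outcome using the pingouin package; the long-format file and column names are assumptions.

```python
# Minimal sketch: mixed (group x time) ANOVA for one balance outcome.
import pandas as pd
import pingouin as pg

scores = pd.read_csv("berg_balance_long.csv")
# Expected long-format columns: patient_id, group ("control"/"wii"),
#                               time ("baseline"/"week4"/"week8"), berg_score

aov = pg.mixed_anova(data=scores, dv="berg_score", within="time",
                     subject="patient_id", between="group")
print(aov)   # rows for the group effect, the time effect and the interaction
```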
Procedia PDF Downloads 221
459 Use of Progressive Feedback for Improving Team Skills and Fair Marking of Group Tasks
Authors: Shaleeza Sohail
Abstract:
Self and peer evaluations are among the main components of almost all group assignments and projects in higher education institutes. These evaluations provide students with an opportunity to better understand the learning outcomes of the assignment and/or project. A number of online systems have been developed for this purpose that provide automated assessment and feedback on students’ contributions in a group environment based on self and peer evaluations. All these systems lack a progressive aspect to these assessments and this feedback, which is the most crucial factor for ongoing improvement and lifelong learning. In addition, a number of assignments and projects are designed in a manner in which smaller or initial assessment components lead to a final assignment or project. In such cases, the evaluation and feedback may provide students with an insight into their performance as a group member for a particular component after the submission. Ideally, it should also create an opportunity to improve for the next assessment component. The Self and Peer Progressive Assessment and Feedback System encourages students to perform better in the next assessment by providing a comparative analysis of the individual’s contribution score on an ongoing basis. Hence, students see the change in their own contribution scores during the complete project based on the smaller assessment components. A Self-Assessment Factor is calculated as an indicator of how close the student's perception of their own contribution is to the contribution perceived by the other members of the group. A Peer-Assessment Factor is calculated to compare the perceived contribution of one student with the average value for the group. The system also provides a Group Coherence Factor, which shows collectively how group members contribute to the final submission. This feedback is provided to students and teachers to visualize the consistency of members’ contributions as perceived by the group. Teachers can use these factors to judge the individual contributions of the group members to the combined tasks and allocate marks/grades accordingly. This factor is shown to students of all groups undertaking the same assessment, so the group members can comparatively analyze the efficiency of their group against other groups. The system provides flexibility for instructors to generate their own customized criteria for self and peer evaluations based on the requirements of the assignment. Students evaluate their own and other group members’ contributions on a scale from significantly higher to significantly lower. Preliminary testing of the prototype system was done with a set of predefined cases to explicitly show the relation of the system's feedback factors to the case studies. The results show that such progressive feedback to students can be used to motivate self-improvement and enhanced team skills. The comparative group coherence can promote a better understanding of group dynamics in order to improve team unity and a fair division of team tasks.
Keywords: effective group work, improvement of team skills, progressive feedback, self and peer assessment system
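The abstract does not give the exact formulas behind the three factors, so the following is only a minimal sketch of how such factors could be computed from a matrix of contribution ratings; the formulas, scale and example ratings are assumptions.

```python
# Minimal sketch: illustrative versions of the three feedback factors.
# ratings[i][j] = score that student i gives to student j (including i == j).
import numpy as np

ratings = np.array([            # hypothetical 4-member group, scale 1-5
    [4, 3, 5, 2],
    [4, 4, 5, 3],
    [5, 3, 4, 2],
    [4, 3, 5, 3],
], dtype=float)

self_scores = np.diag(ratings)                 # each student's self-rating
peer_only = ratings.copy()
np.fill_diagonal(peer_only, np.nan)
peer_means = np.nanmean(peer_only, axis=0)     # how the others rate each student

# Self-Assessment Factor: closeness of the self-rating to the peers' view (0..1).
saf = 1.0 - np.abs(self_scores - peer_means) / ratings.max()

# Peer-Assessment Factor: each student's peer rating relative to the group mean.
paf = peer_means / peer_means.mean()

# Group Coherence Factor: 1 minus the relative spread of peer ratings.
gcf = 1.0 - peer_means.std() / peer_means.mean()

print("SAF:", saf.round(2), "PAF:", paf.round(2), "GCF:", round(gcf, 2))
```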
Procedia PDF Downloads 191
458 Effect of 12 Weeks Pedometer-Based Workplace Program on Inflammation and Arterial Stiffness in Young Men with Cardiovascular Risks
Authors: Norsuhana Omar, Amilia Aminuddina Zaiton Zakaria, Raifana Rosa Mohamad Sattar, Kalaivani Chellappan, Mohd Alauddin Mohd Ali, Norizam Salamt, Zanariyah Asmawi, Norliza Saari, Aini Farzana Zulkefli, Nor Anita Megat Mohd. Nordin
Abstract:
Inflammation plays an important role in the pathogenesis of vascular dysfunction leading to arterial stiffness. Pulse wave velocity (PWV) and the augmentation index (AI), as tools for the assessment of vascular damage, are widely used and have been shown to predict cardiovascular disease (CVD). C-reactive protein (CRP) is a marker of inflammation. Several studies have noted that regular exercise is associated with reduced arterial stiffness. The lack of exercise among Malaysians and the increasing CVD morbidity and mortality among young men are of concern, and in Malaysia data on workplace exercise interventions are scarce. A programme was designed to enable subjects to increase their level of walking as part of their daily work routine, self-monitored using pedometers. The aim of this study was to evaluate the reduction of inflammation, by measuring CRP, and the improvement in arterial stiffness, measured by carotid-femoral PWV (PWVCF) and AI. A total of 70 young men (20-40 years) who were sedentary, achieving fewer than 5,000 steps/day in casual walking, and had two or more cardiovascular risk factors were recruited at the Institute of Vocational Skills for Youth (IKBN Hulu Langat). Subjects were randomly assigned to a control group (CG) (n=34; no change in walking) and a pedometer group (PG) (n=36; minimum target: 8,000 steps/day). CRP was measured using an immunological method, while PWVCF and AI were measured using a Vicorder. All parameters were measured at baseline and after 12 weeks. Data analysis was conducted using the Statistical Package for the Social Sciences Version 22 (SPSS Inc., Chicago, IL, USA). At post-intervention, the CG step counts were similar (4983 ± 366 vs 5697 ± 407 steps/day). The PG increased their step count from 4996 ± 805 to 10,128 ± 511 steps/day (P<0.001). The PG showed significant improvements in anthropometric variables and lipids (time and group effect p<0.001). For the vascular assessment, the PG showed significant decreases (time and group effect p<0.001) in PWV (7.21 ± 0.83 to 6.42 ± 0.89 m/s), AI (11.88 ± 6.25 to 8.83 ± 3.7 %) and CRP (pre = 2.28 ± 3.09, post = 1.08 ± 1.37 mg/L). However, no changes were seen in the CG. In conclusion, a pedometer-based walking programme may be an effective strategy for promoting increased daily physical activity, which reduces cardiovascular risk markers and thus improves cardiovascular health in terms of inflammation and arterial stiffness. This community intervention for health maintenance has the potential to establish walking as an exercise and to adopt a vascular fitness index as the performance measuring tool.
Keywords: arterial stiffness, exercise, inflammation, pedometer
Procedia PDF Downloads 354
457 DeepNIC: a Method to Transform Each Tabular Variable into an Independent Image Analyzable by Basic CNNs
Authors: Nguyen J. M., Lucas G., Ruan S., Digonnet H., Antonioli D.
Abstract:
Introduction: Deep Learning (DL) is a very powerful tool for analyzing image data, but for tabular data it cannot compete with machine learning methods like XGBoost. The research question becomes: can tabular data be transformed into images that can be analyzed by simple CNNs (Convolutional Neural Networks)? Will DL then become the absolute tool for data classification? All current solutions consist in repositioning the variables in a 2D matrix using their correlation proximity, thereby obtaining an image whose pixels are the variables. We implement a technology, DeepNIC, that offers the possibility of obtaining an image for each variable, which can be analyzed by simple CNNs. Material and method: The 'ROP' (Regression OPtimized) model is a binary and atypical decision tree whose nodes are managed by a new artificial neuron, the Neurop. By positioning an artificial neuron in each node of the decision tree, it is possible to make an adjustment on a theoretically infinite number of variables at each node. From this new decision tree whose nodes are artificial neurons, we created the concept of a 'Random Forest of Perfect Trees' (RFPT), which departs from Breiman's concepts by assembling very large numbers of small trees with no classification errors. From the results of the RFPT, we developed a family of 10 statistical information criteria, the Nguyen Information Criteria (NICs), which evaluate the predictive quality of a variable in three dimensions: performance, complexity and multiplicity of solutions. A NIC is a probability that can be transformed into a grey level. The value of a NIC depends essentially on two hyperparameters used in the Neurops. By varying these two hyperparameters, we obtain a 2D matrix of probabilities for each NIC. We can combine these 10 NICs with the functions AND, OR, and XOR. The total number of combinations is greater than 100,000. In total, we obtain for each variable an image of at least 1166x1167 pixels. The intensity of the pixels is proportional to the probability of the associated NIC, and the color depends on the associated NIC. This image actually contains considerable information about the ability of the variable to predict Y, depending on the presence or absence of other variables. A basic CNN model was trained for supervised classification. Results: The first results are impressive. Using the public GSE22513 data (an omic data set of markers of taxane sensitivity in breast cancer), DeepNIC outperformed other statistical methods, including XGBoost. We still need to generalize the comparison on several databases. Conclusion: The ability to transform any tabular variable into an image offers the possibility of merging image and tabular information in the same format. This opens up great perspectives in the analysis of metadata. Keywords: tabular data, CNNs, NICs, DeepNICs, random forest of perfect trees, classification
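A minimal sketch of the final step described above (mapping per-variable grids of NIC probabilities to grey levels and classifying the resulting images with a basic CNN) is shown below. The ROP/RFPT machinery that produces the NIC values is not reproduced; the grid size, the random placeholder data and the network layout are assumptions for illustration only.

```python
# Hedged sketch: grey-level images built from placeholder grids of NIC
# probabilities, classified by a deliberately simple CNN. The image size and
# data are assumed; the paper's images are at least 1166x1167 pixels.
import numpy as np
import tensorflow as tf

H, W = 64, 64
n_samples, n_classes = 200, 2

def nic_grid_to_image(nic_probs):
    """Map an (H, W) array of NIC probabilities (0..1) to grey levels (0..255)."""
    return (np.clip(nic_probs, 0.0, 1.0) * 255).astype(np.uint8)

rng = np.random.default_rng(0)
nic_probs = rng.random((n_samples, H, W))            # placeholder NIC outputs
grey = np.stack([nic_grid_to_image(p) for p in nic_probs])
images = (grey.astype(np.float32) / 255.0)[..., None]  # CNN input, 1 channel
labels = rng.integers(0, n_classes, n_samples)

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(H, W, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(images, labels, epochs=2, batch_size=32, verbose=0)
```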
Procedia PDF Downloads 128
456 Association of Body Composition Parameters with Lower Limb Strength and Upper Limb Functional Capacity in Quilombola Remnants
Authors: Leonardo Costa Pereira, Frederico Santos Santana, Mauro Karnikowski, Luís Sinésio Silva Neto, Aline Oliveira Gomes, Marisete Peralta Safons, Margô Gomes De Oliveira Karnikowski
Abstract:
In Brazil, projections of population aging follow world projections: the birth rate tends to be surpassed by the mortality rate around the year 2045. Historically, the population of Brazilian blacks suffered for several centuries from the oppression of dominant classes. One group of blacks in particular stands out in relation to territorial, historical and social aspects; for centuries they have isolated themselves in small communities in order to maintain their freedom and culture. The isolation of the Quilombola communities generated socioeconomic effects, as well as effects on the health of these communities. Thus, the objective of the present study is to verify the association of body composition parameters with lower and upper limb strength and functional capacity in Quilombola remnants. The research was approved by the ethics committee (1,771,159). Anthropometric evaluations of hip and waist circumference, body mass and height were performed. To assess body composition, the relationship between stature and body mass (BM) was used to generate the body mass index (BMI), and a dual-energy X-ray absorptiometry (DEXA) test was performed. The Timed Up and Go (TUG) test was used to evaluate functional capacity, and a one-repetition maximum test (1RM) for knee extension and handgrip (HG) was applied for strength analysis. Statistical analysis was performed using the statistical package SPSS 22.0. The Shapiro-Wilk normality test was performed, and Pearson or Spearman correlation tests were adopted as appropriate. The results show that the sample (n = 18) was composed of 66.7% female individuals with a mean age of 66.07 ± 8.95 years. The sample's body fat percentage (%BF) (35.65 ± 10.73) exceeds the recommendations for the age group, as do the anthropometric parameters of hip (90.91 ± 8.44 cm) and waist circumference (80.37 ± 17.5 cm). The relationship between height (1.55 ± 0.1 m) and body mass (63.44 ± 11.25 kg) generated a BMI of 24.16 ± 7.09 kg/m2, which is considered normal. TUG performance was 10.71 ± 1.85 s. In the 1RM test, 46.67 ± 13.06 kg were obtained, and in the HG test, 23.93 ± 7.96 kgf. The correlation analyses were characterized by a high frequency of significant correlations for the height, dominant arm mass (DAM), %BF, 1RM and HG variables. In addition, correlations were observed between HG and BM (r = 0.67, p = 0.005), height (r = 0.51, p = 0.004) and DAM (r = 0.55, p = 0.026). The strength of the lower limbs correlates with BM (r = 0.69, p = 0.003), height (r = 0.62, p = 0.01) and DAM (r = 0.772, p = 0.001). We can therefore conclude that the simple spatial relationship between mass and height is not the only influence on predictive parameters of strength or functionality; verification of body composition is also important. For this population, height seems to be a good predictor of strength and body composition. Keywords: African Continental Ancestry Group, body composition, functional capacity, strength
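The statistical workflow described above (BMI from mass and height, a normality check, then Pearson or Spearman correlation) can be sketched as follows; the numbers are hypothetical and do not reproduce the study data.

```python
# Hedged sketch of the analysis pipeline: BMI computation, then Pearson or
# Spearman correlation chosen after a Shapiro-Wilk normality check.
# The values below are hypothetical, not the study data.
import numpy as np
from scipy import stats

body_mass_kg = np.array([58.2, 71.5, 63.0, 60.4, 75.1, 55.9])
height_m     = np.array([1.52, 1.60, 1.55, 1.50, 1.63, 1.48])
handgrip_kgf = np.array([21.0, 30.5, 24.2, 22.8, 33.1, 18.7])

bmi = body_mass_kg / height_m**2        # kg/m^2

def correlate(x, y, alpha=0.05):
    """Pearson if both variables look normal (Shapiro-Wilk), else Spearman."""
    normal = (stats.shapiro(x).pvalue > alpha) and (stats.shapiro(y).pvalue > alpha)
    if normal:
        r, p = stats.pearsonr(x, y)
        return "Pearson", r, p
    r, p = stats.spearmanr(x, y)
    return "Spearman", r, p

print("BMI:", np.round(bmi, 2))
print(correlate(body_mass_kg, handgrip_kgf))   # e.g. body mass vs handgrip strength
```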
Procedia PDF Downloads 276
455 The Symbolic Power of the IMF: Looking through Argentina’s New Period of Indebtedness
Authors: German Ricci
Abstract:
The research aims to analyse the symbolic power of the International Monetary Fund (IMF) in its relationship with a borrowing country, drawing upon Pierre Bourdieu's field theory. This theory of power, typical of constructivist structuralism, has been little used in international relations. Selecting this perspective thus offers a new understanding of how the IMF's power operates and is structured. The IMF conducts periodic economic reviews in which its staff evaluate the government's performance. It also offers loans of last resort when private external credit is not accessible. This relationship generates great expectations among financial agents because the IMF's statements indicate whether or not the nation-state has the capacity to meet its payment obligations. Therefore, it is argued that the IMF is a legitimate actor for financial agents concerned about a government facing an economic crisis, both because of its immediate economic contribution through loans and because of its promotion of adjustment programs, which help guarantee the payment of the external debt. This legitimacy implies a symbolic power relationship in addition to the already known economic power relationship. Obtaining the IMF's consent implies that the government partially puts its political-economic decisions into play, since monetary policy must be agreed upon with the Fund. This has consequences at the local level. First, it implies that the debtor state must establish a daily relationship with the Fund. This everyday interaction influences how officials and policymakers internalize the meaning of political management. On the other hand, if the government gains access to the IMF's seal of approval, the state will again be in a position to re-enter the financial market and take on new debt to service its external debt. This means that private creditors increase their chances of collecting the debt and, again, grant credit. Thus, it is argued that the borrowing country submits to the relationship with the IMF in search of the latter's economic and symbolic capital. Access to this symbolic capital has objective and subjective repercussions at the national level that might tend to reproduce the relevance of the financial market and legitimize the IMF's intervention during economic crises. The paper takes Argentina as its case study, given its historical relationship with the IMF and the relevance of the current indebtedness period, which remains largely unexplored. Argentina's economy is characterized by recurrent financial crises, and it is the country to which the Fund has lent the most in its entire history, surpassing the second, Egypt, by more than three times. In addition, Argentina is currently the country that owes the most to the Fund, after receiving the largest loan ever granted by the IMF in 2018 and a new agreement in 2022. While this historically strong association with the Fund culminated in the most acute economic and social crisis in the country's contemporary history, producing an unprecedented political and institutional crisis in 2001, Argentina still recognized the IMF as the only way out during economic crises. Keywords: IMF, field theory, symbolic power, Argentina, Bourdieu
Procedia PDF Downloads 71
454 Carbon Footprint of Educational Establishments: The Case of the University of Alicante
Authors: Maria R. Mula-Molina, Juan A. Ferriz-Papi
Abstract:
Environmental concerns are increasingly obtaining higher priority in the sustainability agenda of educational establishments. This is important not only for their environmental performance as organizations in their own right, but also to present a model for their students. On the other hand, universities play an important role in research and innovative solutions for measuring, analyzing and reducing the environmental impacts of different activities. The assessment and decision-making process during the activity of educational establishments is linked to the application of robust indicators. The carbon footprint is a developing sustainability indicator that helps in understanding the direct impact on climate change, but it is not easy to implement: its complexity comes from the large number of factors to consider, such as different uses taking place at the same time (research, lecturing, administration), different users (students, staff) or different levels of activity (lecturing, exam or holiday periods). The aim of this research is to develop a simplified methodology for calculating and comparing carbon emissions per user on a university campus, considering the two main aspects of carbon accounting: building operations and transport. Different methodologies applied on other Spanish university campuses are analyzed and compared to obtain a final proposal to be developed in this type of establishment. First, the building operation calculation considers the different uses and energy sources consumed. Second, for the transport calculation, the different users and working hours are considered separately, as well as their origin and travel preferences. For every transport mode, a different conversion factor is used depending on the carbon emissions produced. The final result is obtained as an average of the carbon emissions produced per user. A case study is applied to the University of Alicante campus in San Vicente del Raspeig (Spain), where the carbon footprint is calculated. While building operation consumption is known per building and per month, this is not the case for transport: only one survey about users' transport habits was carried out, in 2009/2010, so no evolution of results can be shown in this case. In addition, building operations are not split per use, as building services are not monitored separately. These results are analyzed in depth considering all factors and limitations, and they are compared to estimates for other campuses. Finally, the application of the presented methodology is also studied. The recommendations concluded from this study aim to enhance carbon emission monitoring and control; a Carbon Action Plan is then a primary solution to be developed. On the other hand, the application developed on the University of Alicante campus can not only further enhance the methodology itself, but also make adoption by other educational establishments more readily possible, with a considerable degree of flexibility to cater for their specific requirements. Keywords: building operations, built environment, carbon footprint, climate change, transport
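A simplified per-user carbon accounting of the kind described above can be sketched as follows; all consumption figures and emission factors are illustrative placeholders, not University of Alicante data.

```python
# Hedged sketch of the simplified per-user carbon accounting described above:
# building operations plus commuting, divided by the number of campus users.
# All consumption figures and emission factors are illustrative placeholders.

ELECTRICITY_FACTOR = 0.30   # kg CO2e per kWh (assumed grid factor)
GAS_FACTOR = 0.20           # kg CO2e per kWh of natural gas (assumed)
TRANSPORT_FACTORS = {       # kg CO2e per passenger-km (assumed)
    "car": 0.19, "bus": 0.09, "motorbike": 0.11, "walk_or_bike": 0.0,
}

def building_emissions(electricity_kwh, gas_kwh):
    return electricity_kwh * ELECTRICITY_FACTOR + gas_kwh * GAS_FACTOR

def transport_emissions(commutes):
    """commutes: list of (mode, one-way km, total trips per year for that user group)."""
    return sum(TRANSPORT_FACTORS[mode] * km * 2 * trips
               for mode, km, trips in commutes)

def footprint_per_user(electricity_kwh, gas_kwh, commutes, n_users):
    total = building_emissions(electricity_kwh, gas_kwh) + transport_emissions(commutes)
    return total / n_users

# Hypothetical annual figures for a campus of 30,000 users
commutes = [("car", 12, 160 * 9000), ("bus", 10, 160 * 15000),
            ("walk_or_bike", 2, 160 * 6000)]
print(round(footprint_per_user(18_000_000, 4_000_000, commutes, 30_000), 1),
      "kg CO2e per user per year")
```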
Procedia PDF Downloads 297
453 Hygro-Thermal Modelling of Timber Decks
Authors: Stefania Fortino, Petr Hradil, Timo Avikainen
Abstract:
Timber bridges have an excellent environmental performance, are economical, are relatively easy to build and can have a long service life. However, the durability of these bridges is the main problem because of their exposure to outdoor climate conditions. The moisture content accumulated in wood over long periods, in combination with certain temperatures, may create conditions suitable for timber decay. In addition, moisture content variations affect the structural integrity, serviceability and loading capacity of timber bridges. Therefore, monitoring the moisture content in wood is important for the durability of the material but also for the whole superstructure. The measurements obtained by the usual sensor-based techniques provide hygro-thermal data only at specific locations of the wood components. In this context, monitoring can be assisted by numerical modelling to obtain more information on the hygro-thermal response of the bridges. This work presents a hygro-thermal model based on a multi-phase moisture transport theory to predict the distribution of moisture content, relative humidity and temperature in wood. Below the fibre saturation point, the multi-phase theory simulates three phenomena in cellular wood during moisture transfer, i.e., the diffusion of water vapour in the pores, the sorption of bound water and the diffusion of bound water in the cell walls. In the multi-phase model, the two water phases are separated, and the coupling between them is defined through a sorption rate. Furthermore, an average between the temperature-dependent adsorption and desorption isotherms is used. In previous works by some of the authors, this approach was found to be very suitable for studying the moisture transport in uncoated and coated stress-laminated timber decks. Compared to previous works, the hygro-thermal fluxes on the external surfaces include the influence of the absorbed solar radiation over time; consequently, the temperatures on the surfaces exposed to the sun are higher, which affects the whole hygro-thermal response of the timber component. The multi-phase model, implemented in a user subroutine of the Abaqus FEM code, provides the distribution of the moisture content, the temperature and the relative humidity in a volume of the timber deck. As a case study, hygro-thermal data in wood are collected from the ongoing monitoring of the stress-laminated timber deck of the Tapiola Bridge in Finland, based on integrated humidity-temperature sensors, and the numerical results are found to be in good agreement with the measurements. The proposed model, used to assist the monitoring, can contribute to reducing the maintenance costs of bridges, as well as the cost of instrumentation, and to increasing safety. Keywords: moisture content, multi-phase models, solar radiation, timber decks, FEM
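The structure of such a two-phase formulation (bound water and water vapour exchanging mass through a sorption rate) can be illustrated with a very simple 1D explicit finite-difference sketch; the governing coefficients, the isotherm and the boundary values below are placeholders and do not represent the calibrated model implemented in the Abaqus user subroutine.

```python
# Hedged sketch of the structure of a two-phase moisture transport model:
# bound water (c_b) diffusing in the cell walls and water vapour (c_v)
# diffusing in the pores, coupled through a sorption rate. Coefficients,
# isotherm and boundary values are illustrative placeholders only.
import numpy as np

nx, L = 50, 0.1                 # grid points, thickness of the lamination [m]
dx = L / (nx - 1)
dt = 1.0                        # time step [s]
D_b, D_v = 2e-11, 1e-6          # assumed diffusivities [m^2/s]
k_sorp = 1e-5                   # assumed sorption rate coefficient [1/s]

c_b = np.full(nx, 0.12)         # bound water content (kg/kg, assumed initial)
c_v = np.full(nx, 0.008)        # vapour concentration (arbitrary units)

def c_b_equilibrium(c_v_local):
    """Placeholder sorption isotherm: equilibrium bound water for a vapour level."""
    return 0.12 + 5.0 * (c_v_local - 0.008)

def laplacian(u):
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    return lap

for _ in range(10_000):                             # explicit time stepping
    s = k_sorp * (c_b_equilibrium(c_v) - c_b)       # sorption source term
    c_b += dt * (D_b * laplacian(c_b) + s)
    c_v += dt * (D_v * laplacian(c_v) - s)
    c_v[0] = c_v[-1] = 0.010                        # assumed boundary vapour level

print("mean bound water content:", float(c_b.mean()))
```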
Procedia PDF Downloads 176
452 Assessing the Impact of Physical Inactivity on Dialysis Adequacy and Functional Health in Peritoneal Dialysis Patients
Authors: Mohammad Ali Tabibi, Farzad Nazemi, Nasrin Salimian
Abstract:
Background: Peritoneal dialysis (PD) is a prevalent renal replacement therapy for patients with end-stage renal disease. Despite its benefits, PD patients often experience reduced physical activity and physical function, which can negatively impact dialysis adequacy and overall health outcomes. Despite the known benefits of maintaining physical activity in chronic disease management, the specific interplay between physical inactivity, physical function, and dialysis adequacy in PD patients remains underexplored. Understanding this relationship is essential for developing targeted interventions to enhance patient care and outcomes in this vulnerable population. This study aims to assess the impact of physical inactivity on dialysis adequacy and functional health in PD patients. Methods: This cross-sectional study included 135 peritoneal dialysis patients from multiple dialysis centers. Physical inactivity was measured using the International Physical Activity Questionnaire (IPAQ), while physical function was assessed using the Short Physical Performance Battery (SPPB). Dialysis adequacy was evaluated using the Kt/V ratio. Additional variables such as demographic data, comorbidities, and laboratory parameters were collected to control for potential confounders. Statistical analyses were performed to determine the relationships between physical inactivity, physical function, and dialysis adequacy. Results: The study cohort comprised 70 males and 65 females with a mean age of 55.4 ± 13.2 years. A significant proportion of the patients (65%) were categorized as physically inactive based on IPAQ scores. Inactive patients demonstrated significantly lower SPPB scores (mean 6.2 ± 2.1) compared to their more active counterparts (mean 8.5 ± 1.8, p < 0.001). Dialysis adequacy, as measured by Kt/V, was found to be suboptimal (Kt/V < 1.7) in 48% of the patients. There was a significant positive correlation between physical function scores and Kt/V values (r = 0.45, p < 0.01), indicating that better physical function is associated with higher dialysis adequacy, and a significant negative correlation between physical inactivity and physical function (r = -0.55, p < 0.01). Additionally, physically inactive patients had lower Kt/V ratios than their active counterparts (1.3 ± 0.3 vs. 1.8 ± 0.4, p < 0.05). Multivariate regression analysis revealed that physical inactivity was an independent predictor of reduced dialysis adequacy (β = -0.32, p < 0.01) and poorer physical function (β = -0.41, p < 0.01) after adjusting for age, sex, comorbidities, and dialysis vintage. Conclusion: This study underscores the critical role of physical activity and physical function in maintaining adequate dialysis in peritoneal dialysis patients. The findings suggest that interventions aimed at increasing physical activity and improving physical function may enhance dialysis adequacy and overall health outcomes in this population. Further research is warranted to explore the mechanisms underlying these associations and to develop and evaluate exercise programs tailored for PD patients. Keywords: inactivity, physical function, peritoneal dialysis, dialysis adequacy
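The covariate-adjusted regression reported above can be sketched as follows; the data are synthetic and the coefficients are chosen only to mimic the reported direction of the association, not its magnitude.

```python
# Hedged sketch of a covariate-adjusted regression: Kt/V regressed on physical
# inactivity while adjusting for age, sex, comorbidities and dialysis vintage.
# The data are synthetic placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 135
df = pd.DataFrame({
    "inactive": rng.integers(0, 2, n),        # 1 = physically inactive
    "age": rng.normal(55, 13, n),
    "male": rng.integers(0, 2, n),
    "comorbidities": rng.integers(0, 4, n),
    "vintage_months": rng.integers(3, 72, n),
})
# Synthetic outcome built so that inactivity lowers Kt/V (reported direction)
df["ktv"] = (1.8 - 0.4 * df["inactive"] - 0.003 * (df["age"] - 55)
             + rng.normal(0, 0.25, n))

model = smf.ols("ktv ~ inactive + age + male + comorbidities + vintage_months",
                data=df).fit()
print(model.params["inactive"], model.pvalues["inactive"])
```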
Procedia PDF Downloads 36
451 An Automated Magnetic Dispersive Solid-Phase Extraction Method for Detection of Cocaine in Human Urine
Authors: Feiyu Yang, Chunfang Ni, Rong Wang, Yun Zou, Wenbin Liu, Chenggong Zhang, Fenjin Sun, Chun Wang
Abstract:
Cocaine is the most frequently used illegal drug globally, with the global annual prevalence of cocaine use ranging from 0.3% to 0.4% of the adult population aged 15-64 years. The growing trend of cocaine abuse and drug crime is a great concern; urine testing has therefore become an important noninvasive sampling method, as cocaine and its metabolites (COCs) are usually present in urine at high concentrations and with relatively long detection windows. However, direct analysis of urine samples is not feasible because the complex urine matrix often causes low sensitivity and selectivity in the determination. On the other hand, the presence of low doses of analytes in urine makes an extraction and pretreatment step important before determination. Especially in group drug-taking cases, the pretreatment step becomes more tedious and time-consuming. Developing a sensitive, rapid and high-throughput method for the detection of COCs in the human body is therefore indispensable for law enforcement officers, treatment specialists and health officials. In this work, a new automated magnetic dispersive solid-phase extraction (MDSPE) sampling method followed by high performance liquid chromatography-mass spectrometry (HPLC-MS) was developed for the quantitative enrichment of COCs from human urine, using prepared magnetic nanoparticles as adsorbents. The nanoparticles were prepared by silanizing magnetic Fe3O4 nanoparticles and modifying them with divinyl benzene and vinyl pyrrolidone, which gives them the ability to specifically adsorb COCs. This kind of magnetic particle facilitated the pretreatment steps through electromagnetically controlled extraction to achieve full automation. The proposed device significantly improved sample preparation efficiency, processing 32 samples in one batch within 40 min. Optimization of the preparation procedure for the magnetic nanoparticles was explored, and the performance of the magnetic nanoparticles was characterized by scanning electron microscopy, vibrating sample magnetometry and infrared spectroscopy. Several analytical parameters were studied, including the amount of particles, adsorption time, elution solvent, and extraction and desorption kinetics, and the proposed method was verified. The limits of detection for cocaine and its metabolites were 0.09-1.1 ng·mL-1, with recoveries ranging from 75.1 to 105.7%. Compared to traditional sampling methods, this method is time-saving and environmentally friendly. It was confirmed that the proposed automated method is a highly effective approach for trace analysis of cocaine and its metabolites in human urine. Keywords: automatic magnetic dispersive solid-phase extraction, cocaine detection, magnetic nanoparticles, urine sample testing
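Two routine figures of merit mentioned above, spike recovery and a calibration-based limit of detection, are conventionally computed as sketched below; the calibration points and the spiked concentration are hypothetical.

```python
# Hedged sketch of two routine figures of merit: recovery of a spiked sample
# and a calibration-based limit of detection (LOD, commonly 3.3*sigma/slope).
# The numbers are hypothetical, not the study's calibration data.
import numpy as np

def recovery_percent(measured_ng_ml, spiked_ng_ml):
    return measured_ng_ml / spiked_ng_ml * 100.0

# Hypothetical calibration of peak area vs concentration (ng/mL)
conc = np.array([0.5, 1, 2, 5, 10, 20])
area = np.array([110, 215, 430, 1090, 2150, 4310])
slope, intercept = np.polyfit(conc, area, 1)
residual_sd = np.std(area - (slope * conc + intercept), ddof=2)
lod = 3.3 * residual_sd / slope

print(f"recovery: {recovery_percent(8.4, 10.0):.1f} %")
print(f"LOD: {lod:.2f} ng/mL")
```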
Procedia PDF Downloads 204
450 Effect of Supplementation of Hay with Noug Seed Cake (Guizotia abyssinica), Wheat Bran and Their Mixtures on Feed Utilization, Digestibility and Live Weight Change in Farta Sheep
Authors: Fentie Bishaw Wagayie
Abstract:
This study was carried out with the objective of studying the response of Farta sheep, in terms of feed intake and live weight change, when fed hay supplemented with noug seed cake (NSC), wheat bran (WB), and their mixtures. A 7-day digestibility trial and a 90-day feeding trial were conducted using 25 intact male Farta sheep with a mean initial live weight of 16.83 ± 0.169 kg. The experimental animals were arranged randomly into five blocks based on initial live weight, and the five treatments were assigned randomly to each animal in a block. The five dietary treatments comprised grass hay fed ad libitum (T1), grass hay ad libitum + 300 g DM WB (T2), grass hay ad libitum + 300 g DM of a 67% WB: 33% NSC mixture (T3), grass hay ad libitum + 300 g DM of a 67% NSC: 33% WB mixture (T4), and 300 g DM/head/day NSC (T5). Common salt and water were offered ad libitum. The supplements were offered twice daily at 0800 and 1600 hours, and the experimental sheep were kept in individual pens. Supplementation with NSC, WB, and their mixtures significantly increased total dry matter (DM) intake (665.84-788 g/head/day, p < 0.01) and crude protein (CP) intake (p < 0.001). Unsupplemented sheep consumed significantly more (p < 0.01) grass hay DM (540.5 g/head/day) than the supplemented treatments (365.8-488 g/head/day), except T2. Among supplemented sheep, T5 had a significantly higher (p < 0.001) CP intake (99.98 g/head/day) than the others (85.52-90.2 g/head/day). Supplementation significantly improved (p < 0.001) the digestibility of CP (66.61-78.9%), but there was no significant effect (p > 0.05) on DM, OM, NDF, and ADF digestibility between the supplemented and control treatments. The very low CP digestibility (11.55%) observed for the basal diet (grass hay) indicated that feeding grass hay alone could not provide nutrients even for the maintenance requirement of growing sheep. Significant final live weight and daily live weight gains (p < 0.001) in the range of 70.11-82.44 g/head/day were observed in supplemented Farta sheep, whereas unsupplemented sheep lost 9.11 g/head/day. Numerically, among the supplemented treatments, sheep supplemented with the higher proportion of NSC in T4 (201 g NSC + 99 g WB) gained more weight than the rest, though the difference was not statistically significant (p > 0.05). The absence of a statistical difference in daily body weight gain among the supplemented sheep indicated that supplementation with NSC, WB, and their mixtures had a similar potential to provide nutrients. Generally, supplementation of the basal grass hay diet with NSC, WB, and their mixtures improved feed conversion ratio, total DM intake, CP intake, and CP digestibility, and it also improved growth performance, with a similar trend for all supplemented Farta sheep over the control group. Therefore, from a biological point of view, to attain the required slaughter body weight within a short growing period, sheep producers can use all the supplement types depending on their local availability, in the order of priority T4, T5, T3, and T2. However, based on partial budget analysis, supplementation with 300 g DM/head/day NSC (T5) could be recommended as profitable for producers with no capital limitation, whereas T4 supplementation (201 g NSC + 99 g WB DM/day) is recommended when there is capital scarcity. Keywords: weight gain, supplement, Farta sheep, hay as basal diet
Procedia PDF Downloads 63
449 Melt-Electrospun Polypropylene Fabrics Functionalized with TiO2 Nanoparticles for Effective Photocatalytic Decolorization
Authors: Z. Karahaliloğlu, C. Hacker, M. Demirbilek, G. Seide, E. B. Denkbaş, T. Gries
Abstract:
Currently, the textile industry plays an important role in the world's economy, especially in developing countries. Dyes and pigments used in the textile industry are significant pollutants; most of them are azo dyes that have a chromophore (-N=N-) in their structure. There are many methods for the removal of dyes from wastewater, such as chemical coagulation, flocculation, precipitation and ozonation, but these methods have numerous disadvantages, and alternative methods are needed for wastewater decolorization. Titanium-mediated photodegradation is generally used because of the non-toxic, insoluble, inexpensive, and highly reactive properties of the titanium dioxide (TiO2) semiconductor. Melt electrospinning is an attractive manufacturing process for thin fiber production from polypropylene (PP). PP fibers have been widely used in filtration due to their unique properties such as hydrophobicity, good mechanical strength, chemical resistance and low-cost production. In this study, we aimed to investigate the effect of titanium nanoparticle localization and amine modification on dye degradation, and the applicability of the prepared chemically activated composite and pristine fabrics for a novel treatment of dyeing wastewater was evaluated. A photocatalyst material was prepared from titanium dioxide nanoparticles (nTi) and PP by a melt-electrospinning technique, and the electrospinning parameters of the pristine PP and PP/nTi nanocomposite fabrics were optimized. Before functionalization with nTi, the surface of the fabrics was activated by a technique using glutaraldehyde (GA) and polyethyleneimine to promote dye degradation. Pristine PP and PP/nTi nanocomposite melt-electrospun fabrics were characterized using scanning electron microscopy (SEM) and X-ray photoelectron spectroscopy (XPS). Methyl orange (MO) was used as a model compound for the decolorization experiments. The photocatalytic performance of the nTi-loaded pristine and nanocomposite melt-electrospun filters was investigated by varying the initial dye concentration (10, 20, 40 mg/L). nTi-PP composite fabrics were successfully processed into a uniform, fibrous network of beadless fibers with diameters of 800 ± 0.4 nm. The process parameters were determined as a voltage of 30 kV, a working distance of 5 cm, thermocouple and hot-coil temperatures of 260-300 ºC and a flow rate of 0.07 mL/h. SEM results indicated that the TiO2 nanoparticles were deposited uniformly on the nanofibers, and XPS results confirmed the presence of titanium nanoparticles and the generation of amine groups after modification. According to the photocatalytic decolorization test results, the nTi-loaded GA-treated pristine and nTi-PP nanocomposite fabric filters have superior properties, with over 90% decolorization efficiency for the GA-treated pristine and nTi-PP composite PP fabrics. In this work, surface-functionalized nTi melt-electrospun fabrics from PP were prepared as photocatalysts for wastewater treatment. The results show that melt-electrospun, nTi-loaded, GA-treated composite or pristine PP fabrics have great potential for use as photocatalytic filters for the decolorization of wastewater and thus require further investigation. Keywords: titanium oxide nanoparticles, polypropylene, melt-electrospinning
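Decolorization efficiency is conventionally computed from the initial and residual dye concentrations (or, assuming Beer-Lambert linearity, from absorbance readings), as sketched below with hypothetical values.

```python
# Hedged sketch: decolorization efficiency as conventionally computed from
# initial (C0) and residual (Ct) dye concentrations. Values are hypothetical.
def decolorization_efficiency(c0, ct):
    """Percent of dye removed after irradiation time t."""
    return (c0 - ct) / c0 * 100.0

for c0, ct in [(10, 0.6), (20, 1.5), (40, 4.2)]:   # mg/L, hypothetical readings
    print(f"C0 = {c0} mg/L -> {decolorization_efficiency(c0, ct):.1f} % removed")
```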
Procedia PDF Downloads 267
448 A Machine Learning Approach for Assessment of Tremor: A Neurological Movement Disorder
Authors: Rajesh Ranjan, Marimuthu Palaniswami, A. A. Hashmi
Abstract:
With the changing lifestyle and environment around us, the prevalence of critical and incurable diseases has increased. One such condition is neurological disorder, which is rampant among the older population and is increasing at an alarming rate. Most neurological disorder patients suffer from some movement disorder affecting the movement of their body parts. Tremor is the most common movement disorder prevalent in such patients; it affects the upper or lower limbs or both extremities. Tremor symptoms are commonly visible in Parkinson's disease patients, and tremor can also occur on its own as pure (essential) tremor. Patients suffering from tremor face enormous trouble in performing daily activities and often need a caretaker for assistance. In clinics, the assessment of tremor is done through manual clinical rating tasks such as the Unified Parkinson's Disease Rating Scale, which is time-consuming and cumbersome. Neurologists have also reported a challenge in differentiating a Parkinsonian tremor from a pure tremor, which is essential for providing an accurate diagnosis. Therefore, there is a need to develop a monitoring and assistive tool for tremor patients that continuously checks their health condition and coordinates with clinicians and caretakers for early diagnosis and assistance in performing daily activities. In our research, we focus on developing a system for automatic classification of tremor which can accurately differentiate pure tremor from Parkinsonian tremor using a wearable accelerometer-based device, so that an adequate diagnosis can be provided to the correct patient. A study was conducted in a neuro-clinic to assess the upper wrist movement of patients suffering from pure (essential) tremor (ET) and Parkinsonian tremor (PT) using a wearable accelerometer-based device. Four tasks were designed in accordance with the Unified Parkinson's Disease motor rating scale, which is used to assess rest, postural, intentional and action tremor in such patients. Various features, such as time-frequency domain, wavelet-based and fast Fourier transform based cross-correlation features, were extracted from the tri-axial signal and used as the input feature vector space for different supervised and unsupervised learning tools for quantification of the severity of tremor. A minimum covariance maximum correlation energy comparison index was also developed and used as an input feature for various classification tools for distinguishing the PT and ET tremor types. An automatic system for efficient classification of tremor was developed using these feature extraction methods, and superior performance was achieved using K-nearest neighbors and support vector machine classifiers. Keywords: machine learning approach for neurological disorder assessment, automatic classification of tremor types, feature extraction method for tremor classification, neurological movement disorder, parkinsonian tremor, essential tremor
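A sketch of such a pipeline (simple frequency-domain features from tri-axial accelerometer windows, followed by KNN and SVM classifiers) is given below; the sampling rate, window length, feature set and synthetic signals are assumptions and do not reproduce the study's exact feature definitions.

```python
# Hedged sketch of the classification pipeline: frequency-domain features from
# tri-axial accelerometer windows, then KNN and SVM classifiers. Sampling rate,
# window length, features and data are illustrative assumptions.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

FS = 100  # assumed sampling rate [Hz]

def window_features(window):
    """window: (n_samples, 3) tri-axial acceleration."""
    feats = []
    for axis in range(3):
        x = window[:, axis] - window[:, axis].mean()
        spectrum = np.abs(np.fft.rfft(x))
        freqs = np.fft.rfftfreq(len(x), d=1.0 / FS)
        band = (freqs >= 3) & (freqs <= 12)           # typical tremor band
        feats += [x.std(),
                  spectrum[band].sum(),                # tremor-band energy
                  freqs[np.argmax(spectrum[1:]) + 1]]  # dominant frequency
    return np.array(feats)

# Synthetic stand-in data: 4-second windows for two classes (ET vs PT)
rng = np.random.default_rng(0)
t = np.arange(4 * FS) / FS
def synth(freq):   # noisy sinusoidal tremor on three axes
    return np.stack([np.sin(2 * np.pi * freq * t) + 0.3 * rng.standard_normal(t.size)
                     for _ in range(3)], axis=1)
X = np.array([window_features(synth(f)) for f in ([6] * 40 + [9] * 40)])
y = np.array([0] * 40 + [1] * 40)

for clf in (KNeighborsClassifier(n_neighbors=5), SVC(kernel="rbf", C=1.0)):
    pipe = make_pipeline(StandardScaler(), clf)
    print(type(clf).__name__, cross_val_score(pipe, X, y, cv=5).mean())
```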
Procedia PDF Downloads 154
447 Quality by Design in the Optimization of a Fast HPLC Method for Quantification of Hydroxychloroquine Sulfate
Authors: Pedro J. Rolim-Neto, Leslie R. M. Ferraz, Fabiana L. A. Santos, Pablo A. Ferreira, Ricardo T. L. Maia-Jr., Magaly A. M. Lyra, Danilo A F. Fonte, Salvana P. M. Costa, Amanda C. Q. M. Vieira, Larissa A. Rolim
Abstract:
Initially developed as an antimalarial agent, hydroxychloroquine (HCQ) sulfate is often used as a slow-acting antirheumatic drug in the treatment of disorders of connective tissue. The United States Pharmacopeia (USP) 37 provides a reversed-phase HPLC method for the quantification of HCQ. However, this method was not reproducible, producing asymmetric peaks over a long analysis time. The asymmetry of the peak may cause an incorrect calculation of the concentration of the sample, and the analysis time is unacceptable, especially in the routine of a pharmaceutical industry. The aim of this study was to develop a fast, easy and efficient method for the quantification of HCQ sulfate by High Performance Liquid Chromatography (HPLC) based on the Quality by Design (QbD) methodology. The method was optimized in terms of peak symmetry using a response surface approach for the Design of Experiments (DoE) and the tailing factor (TF) as the indicator for the Design Space (DS). The reference method used was that described in USP 37 for the quantification of the drug. For the optimized method, a 3³ factorial design was proposed, based on QbD concepts, and the DS was built with the TF in a range between 0.98 and 1.2 in order to identify the ideal analytical conditions. Changes were made to the composition of the USP mobile phase (USP-MP): USP-MP: methanol (90:10 v/v, 80:20 v/v and 70:30 v/v), to the flow rate (0.8, 1.0 and 1.2 mL.min-1) and to the oven temperature (30, 35, and 40 ºC). The USP method allowed the quantification of the drug but required a long analysis time (40-50 minutes). In addition, it uses a high flow rate (1.5 mL.min-1), which increases the consumption of expensive HPLC-grade solvents. The main problem observed was the TF value (1.8), which would be acceptable if the drug were not a racemic mixture, since co-elution of the isomers can result in unreliable peak integration. Therefore, optimization was proposed in order to reduce the analysis time, aiming at a better peak resolution and TF. For the optimized method, analysis of the response surface plot made it possible to confirm the ideal analytical conditions: 45 ºC, 0.8 mL.min-1 and 80:20 USP-MP: methanol. The optimized HPLC method enabled the quantification of HCQ sulfate with a high-resolution peak, showing a TF value of 1.17. This promotes good co-elution of the HCQ isomers, ensuring accurate quantification of the raw material as a racemic mixture. The optimized method also proved to be approximately 18 times faster than the reference method, using a lower flow rate and thereby further reducing solvent consumption and, consequently, the cost of analysis. Thus, an analytical method for the quantification of HCQ sulfate was optimized using the QbD methodology. This method proved to be faster and more efficient than the USP method regarding retention time and, especially, peak resolution. The higher resolution of the chromatographic peaks supports the implementation of the method for the quantification of the drug as a racemic mixture, without requiring separation of the isomers. Keywords: analytical method, hydroxychloroquine sulfate, quality by design, surface area graphic
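Screening a 3³ factorial design against a tailing-factor window of the kind described above can be sketched as follows; the response model for TF is a made-up placeholder, since in practice TF values come from the chromatographic runs or from a model fitted to them.

```python
# Hedged sketch: enumerating a 3^3 factorial design and keeping the points
# whose (placeholder) tailing factor falls inside the design-space window.
from itertools import product

methanol_pct = [10, 20, 30]        # % methanol in the mobile phase
flow_ml_min  = [0.8, 1.0, 1.2]     # mL/min
oven_temp_c  = [30, 35, 40]        # deg C

def predicted_tf(meth, flow, temp):
    """Placeholder response model for the tailing factor (illustrative only)."""
    return 1.8 - 0.02 * meth - 0.3 * (flow - 0.8) - 0.01 * (temp - 30)

design_space = [(m, f, tmp)
                for m, f, tmp in product(methanol_pct, flow_ml_min, oven_temp_c)
                if 0.98 <= predicted_tf(m, f, tmp) <= 1.2]

for point in design_space:
    print("within TF window:", point)
```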
Procedia PDF Downloads 639
446 Comparison of Equivalent Linear and Non-Linear Site Response Model Performance in Kathmandu Valley
Authors: Sajana Suwal, Ganesh R. Nhemafuki
Abstract:
Evaluation of the ground response under earthquake shaking is crucial in geotechnical earthquake engineering. Damage due to seismic excitation is mainly correlated with local geological and geotechnical conditions. It is evident from past earthquakes (e.g. 1906 San Francisco, USA; 1923 Kanto, Japan) that the local geology has a strong influence on the amplitude and duration of ground motions. Since then, significant studies have been conducted on ground motion amplification, revealing the importance of the influence of local geology on the ground response. Observations from damaging earthquakes (e.g. Niigata and San Francisco, 1964; Irpinia, 1980; Mexico, 1985; Kobe, 1995; L'Aquila, 2009) revealed that non-uniform damage patterns, particularly in soft fluvio-lacustrine deposits, are due to the local amplification of seismic ground motion. Non-uniform damage patterns were also observed in the Kathmandu Valley during the 1934 Bihar-Nepal earthquake and the recent 2015 Gorkha earthquake, seemingly due to the modification of earthquake ground motion parameters. In this study, site effects resulting from the amplification of soft soil in Kathmandu are presented. A large amount of subsoil data was collected and used to define an appropriate subsoil model for the Kathmandu Valley. A comparative study of one-dimensional total-stress equivalent linear and non-linear site response is performed using four strong ground motions for six sites in the Kathmandu Valley. In general, one-dimensional (1D) site-response analysis involves exciting a soil profile using the horizontal component of a ground motion and calculating the response at individual soil layers. In the present study, both equivalent linear and non-linear site response analyses were conducted using the computer program DEEPSOIL. The results show that there is no significant deviation between the equivalent linear and non-linear site response models until the maximum strain reaches 0.06-0.1%. Overall, it is clearly observed from the results that the non-linear site response model performs better than the equivalent linear model. However, the significant deviation between the two models results from other influencing factors, such as the assumptions made in 1D site response analysis, the lack of accurate values of shear wave velocity and the non-linear properties of the soil deposit. The results are also presented in terms of amplification factors, which are predicted to be around four times higher for the non-linear analysis than for the equivalent linear analysis. Hence, the non-linear behaviour of the soil highlights the urgent need to study the dynamic characteristics of the soft soil deposits and to derive site-specific design spectra for the Kathmandu Valley in order to build structures resilient to future damaging earthquakes. Keywords: deep soil, equivalent linear analysis, non-linear analysis, site response
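A spectral amplification factor of the kind reported above is typically obtained as the ratio of the surface to the input Fourier amplitude spectra; the sketch below uses synthetic signals as placeholders for time histories exported from a 1D site-response run, and smoothing and exact definitions vary between studies.

```python
# Hedged sketch: spectral amplification factor as the ratio of smoothed
# Fourier amplitude spectra of the surface and input (rock) motions.
# The signals below are synthetic placeholders, not DEEPSOIL output.
import numpy as np

dt = 0.005                                   # time step of the records [s]
rng = np.random.default_rng(0)
t = np.arange(0, 40, dt)
input_acc   = 0.1 * rng.standard_normal(t.size)                          # rock motion
surface_acc = np.convolve(input_acc, np.ones(8) / 8, mode="same") * 3.0  # "amplified"

def smoothed_fas(acc, n_smooth=20):
    """Fourier amplitude spectrum with a simple moving-average smoothing."""
    fas = np.abs(np.fft.rfft(acc)) * dt
    kernel = np.ones(n_smooth) / n_smooth
    return np.convolve(fas, kernel, mode="same")

freqs = np.fft.rfftfreq(t.size, d=dt)
amplification = smoothed_fas(surface_acc) / smoothed_fas(input_acc)

band = (freqs > 0.5) & (freqs < 10.0)        # engineering frequency band
print("peak amplification in 0.5-10 Hz:", float(amplification[band].max()))
```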
Procedia PDF Downloads 292