Information Needs and Seeking Behaviour of Postgraduate Students of Kohat University of Science and Technology, Khyber Pakhtunkhwa, Pakistan
Authors: Saeed Ullah Jan, Muhammad Ali, Misbah Ullah Awan
Abstract:
Purpose: This study investigated the information needs, seeking behaviour, and hurdles to information seeking of postgraduate students of Kohat University of Science and Technology (KUST), Khyber Pakhtunkhwa. It focused on the information requirements of the postgraduate students of the university, the patterns they use for seeking information, and the difficulties they face while seeking information. Design/Methodology/Approach: This study used a quantitative approach, adopting a survey questionnaire method for data collection. The population of this study was composed of M.Phil. and Ph.D. students of 2019 and 2020 in the faculties of Physical and Numerical Sciences, Chemical and Pharmaceutical Sciences, Biological Sciences, and Social Sciences of KUST. The sample size was 260, and students were selected randomly. The study response rate was 77%, and data were analyzed through SPSS (version 22). Key findings: The study revealed that most students' information needs were for study and research activities, new knowledge, and career development. To fulfill these needs, the scholars used various sources and resources. The sources most commonly used were journal articles, textbooks, and research projects. For information seeking, students often preferred books. The other factors that played an essential role in selecting material were topical relevance, novelty, colleagues' recommendations, and the publisher's reputation. Most of the students thought that book exhibitions, an open access system in the library, and the display of new arrivals could enhance students' information seeking. The main problem they faced while seeking information was a shortage of printed information resources. Overall, they wanted more facilities, enhancement of the library collection, and better services. Delimitations of the study: This study did not include 1) BS and M.Sc.
students of KUST, or 2) the colleges and institutions affiliated with KUST; 3) it was delimited only to the postgraduate students of KUST. Practical implication(s): The findings of the study may motivate the policymakers and authorities of KUST to restructure information literacy programs to fulfill the scholars' information needs, and may inform policymakers of the difficulties faced by scholars during information seeking. Contribution to the knowledge: No significant work has previously been done on students' information needs and seeking behaviour at KUST. The study analyzed the information needs and seeking behaviour of postgraduate students, brought a clear picture of both, and addressed the problems scholars face during the seeking process. Keywords: information needs, information-seeking behaviours, postgraduate students, university libraries, Kohat University of Science and Technology, Khyber Pakhtunkhwa, Pakistan
Oral Health of Tobacco Chewers: A Cross-Sectional Study in Karachi, Pakistan
Authors: Warsi A. Ibrahim, Qureshi A. Ambrina, Younus M. Anjum
Abstract:
Introduction: Oral lesions related to commercially available smokeless tobacco (ST) products, such as Pan, Gutka, Mahwa, and Naswar, are considered a serious challenge for dental health care providers in Pakistan. A majority of Pakistan's labouring population consumes ST, and public transporters and drivers are no exception. It was necessary to identify individuals in this particular population group and screen their oral health and early signs of pre-cancerous lesions so that appropriate preventive measures could be taken to reduce the burden on health providers. Aim of study: To estimate the prevalence of ST consumption and perceptions of its use, and to evaluate the oral health status of public drivers in Karachi. Materials & methods: A cross-sectional survey was conducted over a duration of 2 months, using convenience sampling. A sample (n = 615) of public drivers (age > 18 years) was gathered from across Karachi. A structured proforma was used to record the socio-demographics, addiction profile, perceptions of use, and oral health status (oral lesions, oral submucous fibrosis, and dental caries) of study participants. Data were entered and analyzed using SPSS version 16.0, with descriptive statistics only. Results: The prevalence of ST consumption among the study participants was 92.5%. Of these, almost 70% suffered from some form of oral lesion. Four major types of ST consumption were observed, of which Gutka chewing accounted for 60% of the oral lesions showing early signs of oral cancer. In addition, the occurrence of oral submucous fibrosis (OSF) was found to be high, at 54.8%. Overall dental caries status was also high: on average, 5 teeth per individual were decayed, missing, or filled, deviating from the WHO reference criterion (mean < 3). The study also indicated that public drivers relied on oral tobacco consumption because they believed it helps them 'improve consciousness' (p-value < 0.01, chi-square test).
Multivariate analysis showed a higher prevalence of smokeless tobacco use among highway drivers than local drivers (A.O.R.: 2.82 [0.83-9.61], p-value < 0.01). Conclusion: Smokeless tobacco (ST) consumption has a direct effect on oral health; the type of ST and the duration of consumption are factors directly related to the severity. Moreover, Gutka may be considered to have the most harmful effects on oral health, potentially leading to oral cancer and affecting an individual's quality of life. Specific preventive programs must be undertaken to reduce the consumption of Gutka among public transporters and drivers. Keywords: smokeless tobacco, oral lesions, drivers, public transporters
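The adjusted odds ratio quoted above (A.O.R.: 2.82) compares the odds of ST use between highway and local drivers after adjustment. As a minimal illustration of the underlying arithmetic, a crude (unadjusted) odds ratio and its 95% confidence interval can be computed from a 2x2 table; the counts below are hypothetical, not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, (lo, hi)

# Hypothetical counts: 80/100 highway drivers vs 60/100 local drivers use ST
or_, (lo, hi) = odds_ratio_ci(80, 20, 60, 40)
print(round(or_, 2))  # 2.67
```

An adjusted odds ratio, as reported in the abstract, would instead come from a multivariable (e.g. logistic regression) model controlling for covariates.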
Development of Risk Index and Corporate Governance Index: An Application on Indian PSUs
Authors: M. V. Shivaani, P. K. Jain, Surendra S. Yadav
Abstract:
Public Sector Undertakings (PSUs), being government-owned organizations, have commitments to the economic and social wellbeing of society; this commitment needs to be reflected in their risk-taking, decision-making, and governance structures. Therefore, the primary objective of the study is to suggest measures that may lead to improvement in the performance of PSUs. To achieve this objective, two normative frameworks (one relating to risk levels and the other to governance structure) are put forth. The risk index is based on nine risks, such as solvency risk, liquidity risk, and accounting risk, each scored on a scale of 1 to 5. The governance index is based on eleven variables, such as board independence, diversity, and the presence of a risk management committee, each also scored on a scale of 1 to 5. The sample consists of 39 PSUs that featured in the Nifty 500 index, and the study covers a 10-year period from April 1, 2005 to March 31, 2015. Return on assets (ROA) and return on equity (ROE) have been used as proxies for firm performance. The control variables used in the model include the age, growth rate, and size of the firm. A dummy variable has also been used to factor in the effects of recession. Given the panel nature of the data and the possibility of endogeneity, dynamic panel data generalized method of moments (Diff-GMM) regression has been used. It is worth noting that the corporate governance index is positively related to both ROA and ROE, indicating that with improvement in governance structure, PSUs tend to perform better. Considering the components of the CGI, it may be suggested that PSUs (i) ensure adequate representation of women on the board, (ii) appoint a Chief Risk Officer, and (iii) constitute a risk management committee. The results also indicate a negative association between the risk index and returns.
These results not only validate the framework used to develop the risk index but also provide a yardstick for PSUs to benchmark their risk-taking if they want to maximize their ROA and ROE. While constructing the CGI, certain non-compliances were observed, even in terms of mandatory requirements such as the proportion of independent directors. Such infringements call for stringent penal provisions and better monitoring of PSUs. Further, if the Securities and Exchange Board of India (SEBI) and the Ministry of Corporate Affairs (MCA) bring about such reforms in the PSUs and mandate adherence to the normative frameworks put forth in the study, PSUs may have more effective and efficient decision-making, lower risks, and hassle-free management, all ultimately leading to better ROA and ROE. Keywords: PSU, risk governance, Diff-GMM, firm performance, risk index
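As a sketch of how such composite indices work, the snippet below rescales component scores from a 1-5 scale to [0, 1] and averages them with equal weights; the component values and the equal-weighting scheme are illustrative assumptions, not the authors' exact construction:

```python
def composite_index(scores, min_score=1, max_score=5):
    """Equally weighted composite of component scores, each on a
    min_score..max_score scale, rescaled so the index lies in [0, 1]."""
    rescaled = [(s - min_score) / (max_score - min_score) for s in scores]
    return sum(rescaled) / len(rescaled)

# Hypothetical risk-index components (solvency, liquidity, accounting, ...)
risk_scores = [2, 3, 1, 4, 2, 3, 2, 5, 3]
print(round(composite_index(risk_scores), 3))  # 0.444
```

The same function would serve for an eleven-component governance index; unequal weights, if the authors used them, would replace the simple mean with a weighted one.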
A Review of Atomization Mechanisms Used for Spray Flash Evaporation: Their Effectiveness and Proposal of Rotary Bell Atomizer for Flashing Application
Authors: Murad A. Channa, Mehdi Khiadani, Yasir Al-Abdeli
Abstract:
Considering the severity of water scarcity around the world and its widening at an alarming rate, practical improvements in desalination techniques need to be engineered urgently. Atomization is a major aspect of the flashing phenomenon, yet it has received comparatively little attention until now. There is a need to test efficient methods of atomization for the flashing process. Flash evaporation, alongside reverse osmosis, is a commercially mature desalination technique, commonly known in its staged form as Multi-Stage Flash (MSF). Even though reverse osmosis is widely practical, it is not as economical or sustainable as flash evaporation. However, flash evaporation has its drawbacks as well, such as lower water production efficiency relative to its power and time consumption. Flash evaporation is simply the instant boiling of a subcooled liquid introduced as droplets into a chamber maintained at sub-atmospheric pressure. The reduced pressure lowers the boiling point of the liquid droplets far below their actual temperature, so the now-superheated droplets partially vaporize, their excess sensible heat supplying the latent heat of vaporization; the vapor is collected and condensed back into an impurity-free liquid in a condenser. Atomization is the main difference between pool and spray flash evaporation. Atomization is the heart of the flash evaporation process, as it increases the evaporating surface area per drop atomized. Atomization can be categorized into many levels depending on drop size, which in turn becomes crucial for increasing the droplet density (drop count) per given flow rate. This review comprehensively summarizes selected results from earlier works to date on the methods of atomization and their effectiveness on the evaporation rate.
In addition, the reviewers propose using centrifugal atomization for the flashing application, which brings several advantages, viz. ultra-fine droplets, uniform droplet density, and a swirling spray geometry with kinetically more energetic sprays in flight. Finally, several challenges of using a rotary bell atomizer (RBA) and RBA sprays inside the chamber have been identified, which will be explored in detail. A schematic of rotary bell atomizer (RBA) integration with the chamber has been designed. This powerful centrifugal atomization has the potential to increase potable water production in commercial multi-stage flash evaporators, where it would be particularly advantageous. Keywords: atomization, desalination, flash evaporation, rotary bell atomizer
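The sensible-to-latent heat balance that drives flashing can be written down compactly. Assuming an equilibrium flash with constant liquid properties (a textbook simplification, not a relation taken from the works reviewed), the vaporized mass fraction x of a droplet entering at temperature T_0 into a chamber at pressure P_c is:

```latex
x \approx \frac{c_p \,\bigl(T_0 - T_{\mathrm{sat}}(P_c)\bigr)}{h_{fg}}
```

where c_p is the liquid specific heat, T_sat(P_c) the saturation temperature at chamber pressure, and h_fg the latent heat of vaporization. The difference T_0 - T_sat(P_c) is the superheat that atomization helps release quickly by maximizing the surface area per unit volume.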
The Role of Risk Attitudes and Networks on the Migration Decision: Empirical Evidence from the United States
Authors: Tamanna Rimi
Abstract:
A large body of literature has discussed the determinants of the migration decision. However, the potential role of individual risk attitudes in the migration decision has so far been overlooked. The migration literature has studied how the expected income differential influences migration flows for a risk-neutral individual. However, migration also takes place when there is no expected income differential, or even when the variability of income appears lower than in the current location. This migration puzzle motivates a recent trend in the literature that analyzes how attitudes towards risk influence the decision to migrate. However, the significance of risk attitudes for the migration decision has been addressed mostly from a theoretical perspective in the mainstream migration literature. Labor market outcomes and the overall economy are largely influenced by migration in many countries; therefore, attitudes towards risk as a determinant of migration should receive more attention in empirical studies. To the author's best knowledge, this is the first study to examine the relationship between relative risk aversion and the migration decision in the US. This paper considers movement across the United States as the measure of migration. In addition, this paper explores the network effect on the migration decision, arising from the increasing size of one's own ethnic group in a source location, and how attitudes towards risk vary with the network effect. Two ethnic groups (i.e., Asian and Hispanic) have been considered in this regard. For the empirical estimation, this paper uses two sources of data: 1) U.S. census data for social, economic, and health research, 2010 (IPUMS), and 2) the University of Michigan Health and Retirement Study, 2010 (HRS). In order to measure relative risk aversion, this study uses the 'Two Sample Two-Stage Instrumental Variable (TS2SIV)' technique.
This is a method similar to Angrist's (1990) and Angrist and Krueger's (1992) 'Two Sample Instrumental Variable (TSIV)' technique. Using a probit model, the empirical investigation yields the following results: (i) risk attitude has a significantly large impact on the migration decision, with more risk-averse people less likely to migrate; (ii) the impact of risk attitude on migration varies with other demographic characteristics, such as age and sex; (iii) people living among a higher concentration of households of their own ethnicity in a particular place are expected to migrate less from their current place; (iv) the effect of risk attitudes on migration varies with the network effect. The overall findings of this paper, relating risk attitude, the migration decision, and the network effect, can be a significant contribution towards addressing the gap between migration theory and empirical study in the migration literature. Keywords: migration, network effect, risk attitude, U.S. market
Method for Controlling the Groundwater Polluted by the Surface Waters through Injection Wells
Authors: Victorita Radulescu
Abstract:
Introduction: The optimum exploitation of agricultural land in the presence of an aquifer polluted by surface sources requires close monitoring of the groundwater level, both in periods of intense irrigation and in the absence of irrigation, in times of drought. Currently in Romania, in the south part of the country, the Baragan area, many agricultural lands are confronted with the risk of groundwater pollution in the absence of systematic irrigation, correlated with climate change. Basic methods: The non-steady flow of groundwater in an aquifer can be described by Boussinesq's partial differential equation. The finite element method was applied to the porous medium for the water mass balance equation. Through the proper structure of the initial and boundary conditions, the flow in drainage or injection well systems may be modeled, according to the period of irrigation or prolonged drought. The boundary conditions consist of the groundwater levels required at the margins of the analyzed area, in conformity with the reality of the pollutant emissaries, following the double-step method. Major findings/results: The drainage condition is equivalent to operating regimes on two or three rows of wells with negative (extraction) flow rates, so as to ensure the pollutant transport, modeled with variable flow in groups of two adjacent nodes. In order to obtain a water table level in accordance with the real constraints, it is necessary, for example, to restrict its top level below an imposed value at each node. The objective function consists of a sum of the absolute values of the differences of the infiltration flow rates, increased by a large penalty factor when positive values of pollutant occur. Under these conditions, a balanced structure of the pollutant concentration is maintained in the groundwater. The parameters modified during the optimization process are the spatial coordinates of the wells and the drainage flows through them.
Conclusions: The presented calculation scheme was applied to an area having a cross-section of 50 km between two emissaries with various levels of altitude and different values of pollution. The input data were correlated with measurements made in situ, such as the level of the bedrock, the grain size of the field, the slope, etc. This method of calculation can also be extended to determine the variation of the groundwater in the aquifer following flood wave propagation in the emissaries. Keywords: environmental protection, infiltration, numerical modeling, pollutant transport through soils
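For reference, a standard two-dimensional form of the Boussinesq equation for unconfined groundwater flow mentioned above is (the abstract does not give the exact notation, so the symbols here follow common usage):

```latex
S_y \frac{\partial h}{\partial t}
= \frac{\partial}{\partial x}\!\left(K\,h\,\frac{\partial h}{\partial x}\right)
+ \frac{\partial}{\partial y}\!\left(K\,h\,\frac{\partial h}{\partial y}\right)
+ W
```

where h(x, y, t) is the water-table elevation above the impervious base, K the hydraulic conductivity, S_y the specific yield, and W a source/sink term (recharge from irrigation, or extraction/injection at the wells).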
Inbreeding Study Using Runs of Homozygosity in Nelore Beef Cattle
Authors: Priscila A. Bernardes, Marcos E. Buzanskas, Luciana C. A. Regitano, Ricardo V. Ventura, Danisio P. Munari
Abstract:
The best linear unbiased predictor (BLUP) is a method commonly used in the genetic evaluations of breeding programs. However, this approach can lead to higher inbreeding coefficients in the population due to the intensive use of a few bulls with higher genetic potential, usually presenting some degree of relatedness. High levels of inbreeding are associated with low genetic viability, fertility, and performance for some economically important traits and should therefore be constantly monitored. Unreliable pedigree data can also lead to misleading results. Genomic information (i.e., single nucleotide polymorphisms, SNPs) is a useful tool to estimate the inbreeding coefficient. Runs of homozygosity have been used to evaluate homozygous segments inherited due to direct or collateral inbreeding and allow inferring the population's selection history. This study aimed to evaluate runs of homozygosity (ROH) and inbreeding in a population of Nelore beef cattle. A total of 814 animals were genotyped with the Illumina BovineHD BeadChip, and quality control was carried out excluding SNPs located in non-autosomal regions or with unknown position, SNPs with a Hardy-Weinberg equilibrium p-value lower than 10⁻⁵ or a call rate lower than 0.98, and samples with a call rate lower than 0.90. After quality control, 809 animals and 509,107 SNPs remained for analyses. For the ROH analysis, the PLINK software was used, considering segments with at least 50 SNPs and a minimum length of 1 Mb in each animal. The inbreeding coefficient was calculated as the ratio between the sum of all ROH sizes and the size of the whole genome (2,548,724 kb). A total of 25,711 ROH were observed, presenting mean, median, minimum, and maximum lengths of 3.34 Mb, 2 Mb, 1 Mb, and 80.8 Mb, respectively. The number of SNPs present in ROH segments varied from 50 to 14,954. The largest total ROH extension was observed in one animal, which presented a summed length of 634 Mb (24.88% of the genome).
Four bulls were among the 10 animals with the longest extension of ROH, presenting 11% of ROH with lengths greater than 10 Mb. Segments longer than 10 Mb indicate recent inbreeding; therefore, the results indicate an intensive use of few sires in the studied data. The distribution of ROH along the chromosomes showed that chromosomes 5 and 6 presented a large number of segments when compared to other chromosomes. The mean, median, minimum, and maximum inbreeding coefficients were 5.84%, 5.40%, 0.00%, and 24.88%, respectively. Although the mean inbreeding was considered low, the ROH indicate a recent and intensive use of few sires, which should be avoided for the genetic progress of the breed. Keywords: autozygosity, Bos taurus indicus, genomic information, single nucleotide polymorphism
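The genomic inbreeding coefficient described above, often denoted F_ROH, is simply the summed ROH length divided by the genome length; a minimal sketch (the genome size is the study's reported figure, while the per-animal segment lengths below are hypothetical):

```python
def f_roh(roh_lengths_kb, genome_size_kb=2_548_724):
    """ROH-based inbreeding coefficient: summed ROH length over
    the total genome length covered by the SNP panel (in kb)."""
    return sum(roh_lengths_kb) / genome_size_kb

# Hypothetical ROH segments (kb) detected for one animal
segments = [1_200, 3_400, 2_100, 80_800]
print(f"F_ROH = {f_roh(segments):.4f}")  # F_ROH = 0.0343
```

The animal with 634 Mb (634,000 kb) of summed ROH would give F_ROH of about 0.2488, matching the maximum inbreeding coefficient reported.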
Seal and Heal Miracle Ointment: Effects of Cryopreserved and Lyophilized Amniotic Membrane on Experimentally Induced Diabetic Balb/C Mice
Authors: Elizalde D. Bana
Abstract:
Healing restores continuity and form through cell replication, hence conserving structural integrity. In response to the worldwide pressing problem of chronic wounds in the healthcare delivery system, the researcher aims to provide an effective intervention to preserve the structural integrity of the person. The wound healing effects of cryopreserved and lyophilized amniotic membrane (AM) of a term fetus, embedded into two (2) concentrations (1.5% and 1.0%) of an absorption-based ointment, were evaluated in vivo using a 1 cm x 1 cm excision wound healing model. The total protein concentration was determined by the Biuret and Bradford methods, which are based on UV-visible spectroscopy. The percentage of protein present in 9.5 mg (total sample mass) of amniotic membrane ranged between 14.46 and 14.77% with the Bradford method, and slightly lower, between 13.78 and 13.80%, with the Biuret method. The Bradford method evidently showed higher sensitivity for proteins than the Biuret test. Overall, the amniotic membrane is composed principally of proteins, whose healing abilities a copious body of literature has substantially demonstrated. Subsequently, an area of skin tissue 1 cm by 1 cm was excised to its full thickness from the dorsolateral aspect of the isogenic mice, and the ointment formulation was applied twice a day in its two (2) concentrations to both the diabetic and non-diabetic groups. The wounds of each animal were left undressed, and their areas were measured every other day by a standard measurement formula on days 2, 4, 6, 8, 10, 12, and 14. By the 14th day, the ointment containing 1.5% of AM in the absorption-based ointment, applied to the non-diabetic and diabetic groups, showed 100% healing.
The wound areas in the animals treated with the standard antibiotic, mupirocin ointment (Brand X), also showed 100% healing by the 14th day but with traces of scars, indicating that AM prepared by cryopreservation and lyophilization, at that given concentration, had a better wound healing property than the standard antibiotic. Four (4) multivariate tests were used, which showed a significant interaction between days and treatments, meaning that the ointments prepared in two differing concentrations and applied to the different groups of mice had a significant effect on the percentage of contraction over time. Furthermore, the evaluations of its effectiveness in wound healing were all significant, although to differing degrees. It was observed that the higher the concentration of amniotic membrane, the more effective the results. Keywords: wounds, healing, amniotic membrane ointments, biomedical, stem cell
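The "standard measurement formula" for wound contraction referred to above is conventionally the percentage reduction of the initial wound area; a minimal sketch under that assumption (the day-by-day areas are hypothetical, not the study's measurements):

```python
def percent_contraction(initial_area, current_area):
    """Percentage wound contraction relative to the initial area."""
    return 100.0 * (initial_area - current_area) / initial_area

# Hypothetical areas (mm^2) for an initially 1 cm x 1 cm (100 mm^2) wound
areas_by_day = {2: 92.0, 6: 61.0, 10: 24.0, 14: 0.0}
for day, area in sorted(areas_by_day.items()):
    print(f"day {day}: {percent_contraction(100.0, area):.1f}% contracted")
```

A value of 100% corresponds to the fully healed wounds reported on day 14.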
Analysis of Determinants of Growth of Small and Medium Enterprises in Kwara State, Nigeria
Authors: Hussaini Tunde Subairu
Abstract:
Small and Medium Enterprises (SMEs) serve as catalysts for employment generation, national growth, poverty reduction, and economic development in developing and developed countries. However, in Nigeria, despite a plethora of government policies and stimulus schemes directed at SMEs, the sector is still characterized by a high rate of failure and discontinuity. This study therefore investigated owners'/managers' profiles, firm characteristics, and external factors as possible determinants of SME growth, based on selected SMEs in Kwara State. Primary data were sourced from 200 SME respondents registered with the National Association of Small and Medium Enterprises (NASMES) in the Kwara State Central Senatorial District. Multiple Regression Analysis (MRA) was used to analyze the relationship between the dependent and independent variables, and pairwise correlation was employed to examine the relationships among the independent variables. Analysis of Variance (ANOVA) was employed to test the overall significance of the model. The findings revealed an F-statistic of 420.45 with a p-value of 0.000, which is significant. The values of R² and adjusted R², 0.9643 and 0.9620 respectively, suggest that 96 percent of the variation in employment growth is explained by the explanatory variables.
The level of technical and managerial education has a t-value of 24.14 and p-value of 0.001; length of the manager's/owner's experience in a similar trade, a t-value of 21.37 and p-value of 0.001; age of the manager/owner, a t-value of 42.98 and p-value of 0.001; firm age, a t-value of 25.91 and p-value of 0.001; number of firms in a cluster, a t-value of 7.20 and p-value of 0.001; access to formal finance, a t-value of 5.56 and p-value of 0.001; firm technology innovation, a t-value of 25.32 and p-value of 0.01; institutional support, a t-value of 18.89 and p-value of 0.01; globalization, a t-value of 9.78 and p-value of 0.01; and infrastructure, a t-value of 10.75 and p-value of 0.01. The results also indicate that initial size has a t-value of -1.71 and p-value of 0.090, which is consistent with Gibrat's Law. The study concluded that owners'/managers' profiles, firm-specific characteristics, and external factors substantially influenced the employment growth of SMEs in the study area. Policy should therefore enhance the human capital development of SME owners/managers and strengthen the fiscal policy thrust through the tariff regime to minimize the adverse effects of globalization. Governments at all levels must radically support SME growth, enhance institutional support, and significantly upgrade key infrastructure such as roads, rail, telecommunications, water, and power. Keywords: external factors, firm-specific characteristics, owner/manager profile, small and medium enterprises
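The t-values, R², and adjusted R² reported above come from an ordinary least squares fit; a minimal sketch of how coefficient t-values and adjusted R² are derived from a design matrix, using synthetic data rather than the study's:

```python
import numpy as np

def ols_summary(X, y):
    """OLS fit: coefficients, t-values, R^2 and adjusted R^2.
    X must already contain an intercept column."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (n - k)              # residual variance
    cov = s2 * np.linalg.inv(X.T @ X)         # coefficient covariance
    t_values = beta / np.sqrt(np.diag(cov))
    r2 = 1 - (resid @ resid) / ((y - y.mean()) ** 2).sum()
    adj_r2 = 1 - (1 - r2) * (n - 1) / (n - k)
    return beta, t_values, r2, adj_r2

# Synthetic example: intercept plus two regressors, known true coefficients
rng = np.random.default_rng(42)
X = np.column_stack([np.ones(200), rng.normal(size=(200, 2))])
y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(scale=0.5, size=200)
beta, t_values, r2, adj_r2 = ols_summary(X, y)
print(np.round(beta, 2), round(adj_r2, 3))
```

Adjusted R² penalizes R² for the number of regressors k, which is why it is slightly below R² (0.9620 vs 0.9643 in the study).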
Preliminary Study of the Hydrothermal Polymetallic Ore Deposit at the Karancs Mountain, North-East Hungary
Authors: Eszter Kulcsar, Agnes Takacs, Gabriella B. Kiss, Peter Prakfalvi
Abstract:
The Karancs Mountain is part of the Miocene Inner Carpathian Volcanic Belt and is located in N-NE Hungary, along the Hungarian-Slovakian border. The 14 Ma old andesitic-dacitic units are surrounded by Oligocene sedimentary units (sandstone, siltstone). The host rocks of the mineralisation are siliceous and/or argillaceous volcanic units, quartz veins, hydrothermal breccia, and strongly silicified vuggy rocks found in the variously altered volcanic units. The hydrothermal breccia consists of highly silicified vuggy quartz clasts in a quartz matrix. The hydrothermal alteration of the host units shows structural control at the deeper levels. The main ore minerals are galena, pyrite, marcasite, sphalerite, hematite, magnetite, arsenopyrite, anglesite, and argentite. The mineralisation was first mentioned in 1944, and the first exploration took place in the area between 1961 and 1962. The first ore geological studies were performed between 1984 and 1985. The exploration programme was limited to surface sampling; no drilling programme was performed. Petrographical and preliminary fluid inclusion studies were performed on calcite samples from a galena-bearing vein. Despite the early discovery of the mineralisation, no detailed description is available; thus its size, characteristics, and origin have remained unknown. The aim of this study is to examine the mineralisation, describe its characteristics in detail, and test the possible gold content of the various quartz veins and breccias. Finally, we also investigate the potential relation of the hydrothermal mineralisation to surrounding mineralisations of similar age (e.g. the W-Mátra Mountains in Hungary, and Banska Bystrica and Banska Stiavnica in Slovakia) in order to place the mineralisation within the volcanic-hydrothermal evolution of the Miocene Inner Carpathian Belt.
As first steps, the study includes field mapping; traditional petrological and ore microscopy; X-ray diffraction analysis; and SEM-EDS and EMPA studies on ore minerals, to obtain mineral chemical information. Fluid inclusion petrography, microthermometry, and micro-Raman spectroscopy studies are also planned on quartz-hosted inclusions to investigate the physical and chemical properties of the ore-forming fluid. Keywords: epithermal, Karancs Mountain, Hungary, Miocene Inner Carpathian Volcanic Belt, polymetallic ore deposit
Linkage Disequilibrium and Haplotype Blocks Study from Two High-Density Panels and a Combined Panel in Nelore Beef Cattle
Authors: Priscila A. Bernardes, Marcos E. Buzanskas, Luciana C. A. Regitano, Ricardo V. Ventura, Danisio P. Munari
Abstract:
Genotype imputation has been used to reduce genomic selection costs. In order to increase haplotype detection accuracy in methods that consider linkage disequilibrium, another approach could be used, such as combining genotype data from different panels. Therefore, this study aimed to evaluate the linkage disequilibrium and haplotype blocks in two high-density panels, before and after imputation to a combined panel, in Nelore beef cattle. A total of 814 animals were genotyped with the Illumina BovineHD BeadChip (IHD), of which 93 animals (23 bulls and 70 progenies) were also genotyped with the Affymetrix Axiom Genome-Wide BOS 1 Array Plate (AHD). After quality control, 809 IHD animals (509,107 SNPs) and 93 AHD animals (427,875 SNPs) remained for analyses. The combined genotype panel (CP) was constructed by merging both panels after quality control, resulting in 880,336 SNPs. Imputation analysis was conducted using the software FImpute v.2.2b. The reference (CP) and target (IHD) populations consisted of 23 bulls and 786 animals, respectively. The linkage disequilibrium and haplotype block studies were carried out for IHD, AHD, and the imputed CP. Two linkage disequilibrium measures were considered: the squared correlation coefficient between alleles at two loci (r²) and |D'|. Both measures were calculated using the software PLINK. The haplotype blocks were estimated using the software Haploview. The r² measure presented a different decay when compared to |D'|, wherein AHD and IHD had almost the same decay. For r², even with possible overestimation due to the small sample size for AHD (93 animals), the IHD presented higher values when compared to AHD at shorter distances, but with increasing distance both panels presented similar values. The r² measure is influenced by the minor allele frequencies of the pair of SNPs, which can explain the observed difference between the r² and |D'| decays.
As a combination of the Illumina and Affymetrix panels, the CP presented a decay equivalent to the mean of the two. The numbers of haplotype blocks detected for IHD, AHD, and CP were 84,529, 63,967, and 140,336, respectively. The IHD blocks had a mean length of 137.70 ± 219.05 kb, the AHD blocks of 102.10 ± 155.47 kb, and the CP blocks of 107.10 ± 169.14 kb. The majority of the haplotype blocks in these three panels were composed of fewer than 10 SNPs, with only 3,882 (IHD), 193 (AHD), and 8,462 (CP) haplotype blocks composed of 10 SNPs or more. There was an increase in the number of chromosomes covered with long haplotypes when the CP was used, as well as an increase in haplotype coverage for the short chromosomes (23-29), which can contribute to studies that explore haplotype blocks. In general, using the CP could be an alternative to increase the density and number of haplotype blocks, increasing the probability of obtaining a marker close to a quantitative trait locus of interest. Keywords: Bos taurus indicus, decay, genotype imputation, single nucleotide polymorphism
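The two LD statistics compared above can be computed directly from haplotype and allele frequencies at a pair of biallelic loci; a minimal sketch (the frequencies below are made up for illustration, not taken from the study):

```python
def ld_measures(p_ab, p_a, p_b):
    """r^2 and |D'| for two biallelic loci, given the frequency of
    haplotype AB (p_ab) and of alleles A (p_a) and B (p_b)."""
    d = p_ab - p_a * p_b                      # raw disequilibrium D
    r2 = d * d / (p_a * (1 - p_a) * p_b * (1 - p_b))
    # |D'| normalizes D by its maximum attainable value given allele freqs
    if d >= 0:
        d_max = min(p_a * (1 - p_b), (1 - p_a) * p_b)
    else:
        d_max = min(p_a * p_b, (1 - p_a) * (1 - p_b))
    return r2, abs(d) / d_max

r2, d_prime = ld_measures(p_ab=0.4, p_a=0.5, p_b=0.6)
print(round(r2, 3), round(d_prime, 3))  # 0.167 0.5
```

Because r² is scaled by the allele-frequency variances while |D'| is scaled by the attainable maximum of D, r² is sensitive to minor allele frequency differences between the SNP pair, consistent with the different decays observed above.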
Procedia PDF Downloads 280
650 Patient Satisfaction Measurement Using Face-Q for Non-Incisional Double-Eyelid Blepharoplasty with Modified Single-Knot Continuous Buried Suture Technique
Authors: Kwei Huan Liw, Sashi B. Darshan
Abstract:
Background: Double-eyelid surgery has become one of the most sought-after aesthetic procedures among Asians. Many surgeons perform surgical blepharoplasty and various other methods of non-incisional blepharoplasty. Face-Q is a validated method of measuring patient satisfaction for facial aesthetic procedures. Here we analyzed the overall eye satisfaction score, the appraisal of upper eyelids score, and the adverse effects on eyes score. Methods: 274 patients (548 eyes), aged 18 to 40 years, were recruited from 2015-2018. Each patient underwent non-incisional double-eyelid blepharoplasty using a single-knotted continuous buried suture. Three to five stab incisions were made, depending on the upper eyelid size. A needle loaded with 7-0 nylon was passed from the lateral-most wound through the dermis and the conjunctiva in an alternating fashion into the remaining stab wounds. The suture was then tunneled back laterally in the deeper dermis and knotted securely with the suture end, and the knot was buried within the orbicularis oculi muscle. Each patient completed the Face-Q questionnaire before the procedure and 2 weeks post-procedure. The results are described as the percentage of the maximum achievable score. Patients were reviewed after 12 to 18 months to assess the long-term outcome. Results: The overall eye satisfaction score demonstrated a high level of post-operative satisfaction (97.85%), compared to 27.32% pre-operatively. The appraisal of upper eyelids score showed drastic improvement in perception post-operatively (95.31%, versus 21.44% pre-operatively). The adverse effects on eyes score showed a very low post-operative complication rate (0.4%). The long-term follow-up revealed 6 cases that had developed asymmetrical folds. Only 1 patient agreed to revision surgery; the other 5 patients were still satisfied with the outcome and declined revision. None of the cases had loosening of knots.
Conclusion: The modified single-knot continuous buried suture technique is a simple and non-invasive method to create aesthetically pleasing non-surgical double eyelids with long-lasting results. Proper patient selection is crucial, and good surgical technique is required to achieve a desirable outcome.
Keywords: blepharoplasty, double-eyelid, face-Q, non-incisional
Procedia PDF Downloads 120
649 Effect of Fast Fashion on Urban Indian Consumer
Authors: Neha Dimri, Varsha Gupta
Abstract:
Purpose: The Fast Fashion trend promotes consumption of low-cost, high-fashion garments at a rapid rate. Frequent changes in fashion trends result in higher disposability of Fast Fashion products. To cater to the Fast Fashion appetite of the present-day consumer, fashion giants have ramped up production of garments, imposing a massive strain on the planet's natural resources. Ethical issues related to cheaper methods of production are also of concern. India, with its large consumer base, has a major role to play in the proliferation of the Fast Fashion trend. This paper attempts to study the effect of Fast Fashion trends on the Indian consumer's behaviour and to ascertain the consumer's awareness of the detrimental effect these trends have on the environment. Design/methodology/approach: The survey was conducted using a questionnaire targeted at a set of urban Indian consumers of varied age, profession, and socioeconomic background. Trends regarding frequency of purchase, expenditure on clothing, disposal methods, and awareness of environmental issues were analyzed using the obtained data. Findings: The results indicate that the urban Indian consumer has a strong affinity for Fast Fashion trends but is largely unaware of their detrimental effect on the environment and the strain they place on natural resources. Research limitations/implications: The survey sample comprised only one hundred consumers and could be expanded for a better estimate of trends. The sampled consumers were also mostly urban; a large share of Indian fashion consumers reside in small towns and could be included in future surveys. Practical implications: As the true cost of Fast Fashion in environmental and ethical terms is being realized worldwide, a big market like India cannot remain isolated from this phenomenon. Globally, there has been an increase in demand for ethically produced clothing.
It is imperative that the Indian consumer be made aware of the unsustainable nature of Fast Fashion so that they can contribute towards the conservation of natural resources and the ethical production of garments. Originality/value: The research attempts to ascertain the consumption pattern of the Indian fashion consumer and their awareness of the true cost and consequences of Fast Fashion. The inferences may be used by fashion brands to apply 'Green Marketing' and 'Social Marketing' techniques, making the Indian consumer more aware of sustainable fashion and positioning their own products as 'Sustainable, Green and Ethical'.
Keywords: consumption, disposable, fast fashion, Indian consumer
Procedia PDF Downloads 311
648 Exploring the Vocabulary and Grammar Advantage of US American over British English Speakers at Age 2;0
Authors: Janine Just, Kerstin Meints
Abstract:
The research aims to compare vocabulary size and grammatical development between US American English- and British English-speaking children at age 2;0. As there is evidence that precocious children with large vocabularies develop grammar skills earlier than their typically developing peers, we investigated whether this also holds true across varieties of English. Thus, if US American children start to produce words earlier than their British counterparts, US children could also be at an advantage in the early developmental stages of acquiring grammar. This research employs a British English adaptation of the MacArthur-Bates CDI Words and Sentences (Lincoln Toddler CDI) to compare vocabulary and grammar scores with the updated US Toddler CDI norms. First, the Lincoln TCDI was assessed for its concurrent validity with the Preschool Language Scale (PLS-5 UK), which showed high correlations between the tests for the vocabulary and grammar subscales. The frequency of the Toddler CDI's words was also compared using American and British English corpora of adult spoken and written language. A paired-samples t-test found a significant difference in word frequency between the British and the American CDI, demonstrating that the TCDI's words were indeed of higher frequency in British English. We then compared language and grammar scores between US (N = 135) and British children (N = 96). A two-way between-groups ANOVA examined whether the two samples differed in terms of SES (i.e. maternal education) by investigating the impact of SES and country on vocabulary and sentence complexity. The two samples did not differ in terms of maternal education, as the interaction effects between SES and country were not significant. In most cases, scores were not significantly different between US and British children, for example, for overall word production and most grammatical subscales (i.e.
use of words, overregularizations, complex sentences, word combinations). However, in-depth analysis showed that US children were significantly better than British children at using some noun categories (i.e. people, objects, places) and several categories marking early grammatical development (i.e. pronouns, prepositions, quantifiers, helping words), although the effect sizes were small. Significant differences for grammar were found for irregular word forms and progressive tense suffixes, where US children were more advanced; here too, the effect sizes were small. In sum, while differences in vocabulary and grammar ability exist, favouring US children, effect sizes were small. It can be concluded that most British children are 'catching up' with their US American peers at age 2;0. Implications of this research will be discussed.
Keywords: first language acquisition, grammar, parent report instrument, vocabulary
Procedia PDF Downloads 283
647 Personality Characteristics, Managerial Skills and Career Preference
Authors: Dinesh Kumar Srivastava
Abstract:
After the liberalization of the economy, technical education has seen rapid growth in India. A large number of institutions offer various engineering and management programmes. Every year, many students complete B.Tech/M.Tech and MBA programmes at different institutes and universities in India and search for jobs in industry. A large number of companies visit educational institutes for campus placements; these companies are interested in hiring competent managers. Most students show preference for jobs from reputed companies and jobs offering high compensation. In this context, this study was conducted to understand the career preferences of postgraduate students and junior executives. Personality characteristics influence work life as well as personal life. In the last two decades, the five-factor model of personality has been found to be a valid predictor of job performance and job satisfaction, an approach that has received support from studies conducted in different countries. It includes neuroticism, extraversion, openness to experience, agreeableness, and conscientiousness. Similarly, three social needs, namely achievement, affiliation, and power, influence motivation and performance in certain job functions. Both approaches have been considered in the study. The objective of the study was, first, to analyse the relationship between personality characteristics and the career preferences of students and executives, and secondly, to analyse the relationship between personality characteristics and the skills of students. Three managerial skills, namely conceptual, human, and technical, have been considered in the study. The sample size was 266, including postgraduate students and junior executives. Respondents had completed a BE/B.Tech/MBA programme. Three dimensions of career preference, namely identity, variety, and security, and the three managerial skills were considered as dependent variables.
The results indicated that neuroticism was not related to any dimension of career preference. Extraversion was not related to identity, variety, or security, but was positively related to the three skills. Openness to experience was positively related to the skills. Conscientiousness was positively related to variety and to the three skills. Similarly, the relationship between social needs and career preference was examined using correlation. The results indicated that need for achievement was positively related to variety, identity, and security, as well as to the managerial skills. Need for affiliation was positively related to the three dimensions of career preference as well as to the managerial skills, as was need for power. Social needs appear to be a stronger predictor of career preference and managerial skills than the Big Five traits. The findings have implications for the selection process in industry.
Keywords: big five traits, career preference, personality, social needs
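The correlational analysis referred to above can be sketched from the definition of Pearson's r (the scores below are hypothetical illustrations, not the study's data):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two score lists,
    e.g. a personality-trait scale and a managerial-skill scale."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(var_x * var_y)

# hypothetical extraversion scores vs. human-skill scores for five respondents
extraversion = [3.2, 4.1, 2.8, 4.5, 3.9]
human_skill = [3.0, 4.3, 2.9, 4.4, 3.7]
r = pearson_r(extraversion, human_skill)  # positive r indicates a positive relation
```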
Procedia PDF Downloads 273
646 Imputation of Incomplete Large-Scale Monitoring Count Data via Penalized Estimation
Authors: Mohamed Dakki, Genevieve Robin, Marie Suet, Abdeljebbar Qninba, Mohamed A. El Agbani, Asmâa Ouassou, Rhimou El Hamoumi, Hichem Azafzaf, Sami Rebah, Claudia Feltrup-Azafzaf, Nafouel Hamouda, Wed a.L. Ibrahim, Hosni H. Asran, Amr A. Elhady, Haitham Ibrahim, Khaled Etayeb, Essam Bouras, Almokhtar Saied, Ashrof Glidan, Bakar M. Habib, Mohamed S. Sayoud, Nadjiba Bendjedda, Laura Dami, Clemence Deschamps, Elie Gaget, Jean-Yves Mondain-Monval, Pierre Defos Du Rau
Abstract:
In biodiversity monitoring, large datasets are becoming more and more widely available and are increasingly used globally to estimate species trends and conservation status. These large-scale datasets challenge existing statistical analysis methods, many of which are not adapted to their size, incompleteness, and heterogeneity. The development of scalable methods to impute missing data in incomplete large-scale monitoring datasets is crucial to balance sampling in time or space and thus better inform conservation policies. We developed a new method based on penalized Poisson models to impute and analyse incomplete monitoring data in a large-scale framework. The method allows parameterization of (a) space and time factors, (b) the main effects of predictor covariates, as well as (c) space-time interactions. It also benefits from robust statistical and computational capability in large-scale settings. The method was tested extensively on both simulated and real-life waterbird data, with the findings revealing that it outperforms six existing methods in terms of missing-data imputation errors. Applying the method to 16 waterbird species, we estimated their long-term trends for the first time at the entire North African scale, a region where monitoring data suffer from many gaps in space and time series. This new approach opens promising perspectives to increase the accuracy of species-abundance trend estimations. We made it freely available in the R package 'lori' (https://CRAN.R-project.org/package=lori) and recommend its use for large-scale count data, particularly in citizen science monitoring programmes.
Keywords: biodiversity monitoring, high-dimensional statistics, incomplete count data, missing data imputation, waterbird trends in North-Africa
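To illustrate the core idea in miniature, consider only the space and time factors (item (a) above): missing site-by-year counts can be imputed from a main-effects Poisson model fitted by alternating closed-form updates. This toy sketch is not the 'lori' package's penalized estimator, which additionally fits covariates, interactions, and a regularization penalty:

```python
def impute_counts(y, n_sites, n_years, iters=500):
    """Impute missing cells of a site-by-year count table with a
    main-effects Poisson model, log E[y_ij] = a_i + b_j.
    `y` maps observed (site, year) pairs to counts; the effects are
    fitted with alternating closed-form maximum-likelihood updates."""
    a = [1.0] * n_sites  # site effects on the exp scale
    b = [1.0] * n_years  # year effects on the exp scale
    for _ in range(iters):
        for i in range(n_sites):
            cols = [j for j in range(n_years) if (i, j) in y]
            a[i] = sum(y[(i, j)] for j in cols) / sum(b[j] for j in cols)
        for j in range(n_years):
            rows = [i for i in range(n_sites) if (i, j) in y]
            b[j] = sum(y[(i, j)] for i in rows) / sum(a[i] for i in rows)
    # impute every unobserved cell with its fitted Poisson mean
    return {(i, j): a[i] * b[j]
            for i in range(n_sites) for j in range(n_years)
            if (i, j) not in y}

# toy example: one unsurveyed site-year in a 2 x 2 count table
counts = {(0, 0): 3, (0, 1): 4, (1, 0): 6}
imputed = impute_counts(counts, n_sites=2, n_years=2)
```

On this multiplicative toy table, the fitted mean for the missing cell converges to 6 × 4 / 3 = 8.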
Procedia PDF Downloads 156
645 The Good Form of a Sustainable Creative Learning City Based on “The Theory of a Good City Form” by Kevin Lynch
Authors: Fatemeh Moosavi, Tumelo Franck Nkoshwane
Abstract:
Peter Drucker, the renowned management guru, once said, “The best way to predict the future is to create it.” Drucker is also the man who placed human capital as the most vital resource of any institution. As such, any institution bent on creating a better future requires competent human capital, able to execute efficiently and effectively the objectives a society aspires to. Technology today is accelerating the rate at which many societies transition to knowledge-based societies. In this accelerated paradigm, it is imperative that those in leadership establish a platform capable of sustaining the planned future: intellectual capital. The capitalist economy of the future will be sustained not just by dollars and cents, but by individuals who possess the creativity to enterprise, innovate, and create wealth from ideas. Cities of the future must have this premise at the heart of their plans if the objective of designing sustainable and liveable future cities is to be realised. The knowledge economy, now transitioning to the creative economy, requires cities of the future to be 'gardens' of inspiration, places where knowledge, creativity, and innovation can thrive, as these instruments are becoming critical assets for creating wealth in the new economic system. Developing nations must accept that learning is a lifelong process that requires keeping abreast of change, and should invest in teaching people how to keep learning. The need to continuously update one's knowledge turns these cities into vibrant societies, where new ideas create knowledge and in turn enrich the quality of life of the residents. Cities of the future must have among their objectives the ability to motivate their citizens to learn, share knowledge, evaluate that knowledge, and use it to create wealth for a just society.
The functional factors suggested by Kevin Lynch (vitality, meaning/sense, adaptability, access, control, and monitoring) should form the basis on which policy makers and urban designers base their plans for future cities. The authors of this paper believe that developing nations should establish 'creative economy clusters': cities where creative industries drive the need for constant new knowledge, creating sustainable learning creative cities. Obviously, the form, shape, and size of these districts should be cognisant of the environmental, cultural, and economic characteristics of each locale. Gaborone city in the Republic of Botswana is presented as the case study for this paper.
Keywords: learning city, sustainable creative city, creative industry, good city form
Procedia PDF Downloads 310
644 DNA Hypomethylating Agents Induced Histone Acetylation Changes in Leukemia
Authors: Sridhar A. Malkaram, Tamer E. Fandy
Abstract:
Purpose: 5-Azacytidine (5AC) and decitabine (DC) are DNA hypomethylating agents. We recently demonstrated that both drugs increase the enzymatic activity of the histone deacetylase SIRT6. Accordingly, we compared the genome-wide changes in H3K9 acetylation induced by both drugs in leukemia cells. Description of Methods & Materials: Mononuclear cells from the bone marrow of six de-identified naive acute myeloid leukemia (AML) patients were cultured with 500 nM of either DC or 5AC for 72 h, followed by ChIP-Seq analysis using a ChIP-validated acetylated-H3K9 (H3K9ac) antibody. ChIP-Seq libraries were prepared from treated and untreated cells using the SMARTer ThruPLEX DNA-seq kit (Takara Bio, USA) according to the manufacturer's instructions. Libraries were purified and size-selected with AMPure XP beads at a 1:1 (v/v) ratio. All libraries were pooled prior to sequencing on an Illumina HiSeq 1500. The dual-indexed single-read Rapid Run was performed with 1x120 cycles at a 5 pM final concentration of the library pool. Sequence reads with average Phred quality < 20 or length < 35 bp, PCR duplicates, and reads aligning to blacklisted regions of the genome were filtered out using Trim Galore v0.4.4 and cutadapt v1.18. Reads were aligned to the reference human genome (hg38) using Bowtie v2.3.4.1 in end-to-end alignment mode. H3K9ac-enriched (peak) regions were identified using diffReps v1.55.4, with input samples used for background correction. The statistical significance of differential peak counts was assessed with a negative binomial test using all individuals as replicates. Data & Results: The data from the six patients showed significant (Padj < 0.05) acetylation changes at 925 loci after 5AC treatment versus 182 loci after DC treatment. Both drugs induced H3K9 acetylation changes at different chromosomal regions, including promoters, coding exons, introns, and distal intergenic regions.
Ten common genes showed H3K9 acetylation changes with both drugs. Approximately 84% of the genes showed an H3K9 acetylation decrease with 5AC versus only 54% with DC. Figures 1 and 2 show the heatmaps for the top 100 genes and the 99 genes showing an H3K9 acetylation decrease after 5AC treatment and DC treatment, respectively. Conclusion: Despite the similarity in hypomethylating activity and chemical structure, the effects of the two drugs on H3K9 acetylation were significantly different, with more changes observed after 5AC treatment than after DC. The impact of these changes on gene expression and on the clinical efficacy of these drugs requires further investigation.
Keywords: DNA methylation, leukemia, decitabine, 5-Azacytidine, epigenetics
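The read-filtering criteria in the methods (handled in the study by Trim Galore and cutadapt) amount to simple per-read checks; a simplified stand-alone sketch, not those tools' implementation:

```python
def mean_phred(quality_string, offset=33):
    """Average Phred score of a read, decoded from its FASTQ quality string
    (Sanger/Illumina 1.8+ encoding, ASCII offset 33)."""
    return sum(ord(ch) - offset for ch in quality_string) / len(quality_string)

def keep_read(sequence, quality_string, min_length=35, min_quality=20.0):
    """Apply the two filters described in the text: discard reads shorter
    than 35 bp or with average Phred quality below 20."""
    return len(sequence) >= min_length and mean_phred(quality_string) >= min_quality
```

For example, a 35 bp read whose quality string is all 'I' (Phred 40) passes, while a 34 bp read fails on length alone.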
Procedia PDF Downloads 148
643 Participatory Cartography for Disaster Reduction in Progreso, Yucatan, Mexico
Authors: Gustavo Cruz-Bello
Abstract:
Progreso is a coastal community in Yucatan, Mexico, highly exposed to floods produced by severe storms and tropical cyclones. A participatory cartography approach was conducted to help reduce flood disasters and assess social vulnerability within the community. The first step was to engage local authorities in risk management to facilitate the process. Two workshops were conducted. In the first, a poster-size printed high-spatial-resolution satellite image of the town was used to gather information from the participants: eight women and seven men, among them construction workers, students, government employees, and fishermen, aged between 23 and 58 years. For the first task, participants were asked to locate emblematic places on the image to familiarize themselves with it. Then, they were asked to locate areas that get flooded and the buildings they use as refuges, to list the actions they usually take to reduce vulnerability, and to collectively come up with others that might reduce disasters. The spatial information generated at the workshops was digitized and integrated into a GIS environment. A printed version of the map was reviewed by local risk-management experts, who validated the feasibility of the proposed actions. In the second workshop, we brought the information back to the community for feedback. Additionally, a survey was applied to one household per block in the community to obtain socioeconomic, prevention, and adaptation data. The information generated from the workshops was contrasted, through t and chi-squared tests, with the survey data in order to test the hypothesis that poorer or less educated people are less prepared to face floods (more vulnerable) and live near or among areas of higher flood presence. Results showed that a great majority of people in the community are aware of the hazard and are prepared to face it.
However, there was no consistent relationship between regularly flooded areas and people's average years of education, household services, or house modifications made against heavy rains. The participatory cartography intervention made participants aware of their vulnerability and led them to reflect collectively on actions that can reduce flood disasters. Participants also considered that the final map could be used as a communication and negotiation instrument with NGOs and government authorities. It was not found that poorer and less educated people are located in areas with a higher presence of floods.
Keywords: climate change, floods, Mexico, participatory mapping, social vulnerability
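The chi-squared test used to contrast workshop and survey data follows directly from its definition; a minimal sketch (the contingency table shown is hypothetical, not the study's data):

```python
def chi_squared(table):
    """Pearson chi-squared statistic for an r x c contingency table,
    e.g. education level (rows) against flood exposure (columns)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# hypothetical 2 x 2 table: low/high education vs. flooded/not flooded
stat = chi_squared([[10, 20], [20, 10]])
```

The statistic would then be compared against the chi-squared distribution with (r - 1)(c - 1) degrees of freedom.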
Procedia PDF Downloads 113
642 Advanced Techniques in Semiconductor Defect Detection: An Overview of Current Technologies and Future Trends
Authors: Zheng Yuxun
Abstract:
This review critically assesses the advancements and prospective developments in defect detection methodologies within the semiconductor industry, an essential domain that significantly affects the operational efficiency and reliability of electronic components. As semiconductor devices continue to decrease in size and increase in complexity, the precision and efficacy of defect detection strategies become increasingly critical. Tracing the evolution from traditional manual inspections to advanced technologies employing automated vision systems, artificial intelligence (AI), and machine learning (ML), the paper highlights the significance of precise defect detection in semiconductor manufacturing. It discusses various defect types, such as crystallographic errors, surface anomalies, and chemical impurities, which profoundly influence the functionality and durability of semiconductor devices and thus underscore the necessity for their precise identification. The narrative then turns to the technological evolution in defect detection, depicting a shift from rudimentary methods like optical microscopy and basic electronic tests to more sophisticated techniques, including electron microscopy, X-ray imaging, and infrared spectroscopy. The incorporation of AI and ML marks a pivotal advancement towards more adaptive, accurate, and expedited defect detection mechanisms. The paper addresses current challenges, particularly the constraints imposed by the diminutive scale of contemporary semiconductor devices, the elevated costs of advanced imaging technologies, and the demand for processing speeds that align with mass-production standards. A critical gap is identified between the capabilities of existing technologies and the industry's requirements, especially concerning scalability and processing velocity.
Future research directions are proposed to bridge these gaps, suggesting enhancements in the computational efficiency of AI algorithms, the development of novel materials to improve imaging contrast in defect detection, and the seamless integration of these systems into semiconductor production lines. By offering a synthesis of existing technologies and forecasting upcoming trends, this review aims to foster the dialogue and development of more effective defect detection methods, thereby facilitating the production of more dependable and robust semiconductor devices. This thorough analysis not only elucidates the current technological landscape but also paves the way for forthcoming innovations in semiconductor defect detection.
Keywords: semiconductor defect detection, artificial intelligence in semiconductor manufacturing, machine learning applications, technological evolution in defect analysis
Procedia PDF Downloads 51
641 Heat Transfer Performance of a Small Cold Plate with Uni-Directional Porous Copper for Cooling Power Electronics
Authors: K. Yuki, R. Tsuji, K. Takai, S. Aramaki, R. Kibushi, N. Unno, K. Suzuki
Abstract:
A small cold plate with uni-directional porous copper is proposed for cooling power electronics such as an on-vehicle inverter with heat generation of approximately 500 W/cm². The uni-directional porous copper, with its pores oriented perpendicular to the heat transfer surface, is soldered to a grooved heat transfer surface. This structure enables the cooling liquid to evaporate in the pores of the porous copper and the vapor to discharge through the grooves. In order to minimize the cold plate, a double-flow-channel concept is introduced for its design. The cold plate consists of a base plate, a spacer, and a vapor-discharging plate, 12 mm in total thickness. The base plate has multiple nozzles of 1.0 mm diameter for the liquid supply and 4 slits of 2.0 mm width for vapor discharge, and is attached onto the top surface of the porous copper plate, which is 20 mm in diameter and 5.0 mm thick. The pore size is 0.36 mm and the porosity is 36%. The cooling liquid flows into the porous copper as an impinging jet from the multiple nozzles, and the vapor generated in the pores is discharged through the grooves and the vapor slits out of the cold plate. The heated test section consists of the cold plate described above and a copper heat transfer block with 6 cartridge heaters. The cross section of the block is reduced in order to increase the heat flux. The top surface of the block is the grooved heat transfer surface, 10 mm in diameter, to which the porous copper is soldered. The grooves are fabricated like latticework, with a width of 1.0 mm and a depth of 0.5 mm. By embedding three thermocouples in the cylindrical part of the heat transfer block, the temperature of the heat transfer surface and the heat flux are extrapolated at steady state. In this experiment, the flow rate is 0.5 L/min, the flow velocity at each nozzle is 0.27 m/s, and the liquid inlet temperature is 60 °C.
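The extrapolation from the three embedded thermocouples follows one-dimensional steady conduction (Fourier's law): fit a line through the readings, take the heat flux from the gradient, and read the surface temperature at x = 0. The thermocouple positions and copper conductivity below are hypothetical, not the paper's values:

```python
def linear_fit(x, t):
    """Least-squares line through thermocouple readings (x in m, T in degC)."""
    n = len(x)
    mx, mt = sum(x) / n, sum(t) / n
    slope = (sum((xi - mx) * (ti - mt) for xi, ti in zip(x, t))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, mt - slope * mx

def surface_state(x, t, k=390.0):
    """Extrapolate to the heat transfer surface at x = 0:
    returns heat flux q = -k dT/dx (W/m^2) and surface temperature (degC).
    k is an assumed thermal conductivity for copper."""
    slope, intercept = linear_fit(x, t)
    return -k * slope, intercept

# hypothetical readings: thermocouples 5, 10, 15 mm below the surface
q, t_surface = surface_state([0.005, 0.010, 0.015], [130.0, 120.0, 110.0])
```

With these numbers the fitted gradient is -2000 K/m, giving q = 780 kW/m² (78 W/cm²) toward the surface and an extrapolated surface temperature of 140 °C.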
The experimental results prove that, in the single-phase heat transfer regime, the heat transfer performance of the cold plate with the uni-directional porous copper is 2.1 times higher than that without it, though the pressure loss also becomes higher. In the two-phase heat transfer regime, the critical heat flux increases by approximately 35% with the uni-directional porous copper, compared with the CHF of the multiple impinging jet flow alone. In addition, we confirmed that these heat transfer data were much higher than those of an ordinary single impinging jet flow. These data prove the high potential of the cold plate with uni-directional porous copper from the viewpoint of both heat transfer performance and energy saving.
Keywords: cooling, cold plate, uni-porous media, heat transfer
Procedia PDF Downloads 295
640 Flow Field Optimization for Proton Exchange Membrane Fuel Cells
Authors: Xiao-Dong Wang, Wei-Mon Yan
Abstract:
The flow field design in the bipolar plates affects the performance of the proton exchange membrane (PEM) fuel cell. This work adopted a combined optimization procedure, including a simplified conjugate-gradient method and a completely three-dimensional, two-phase, non-isothermal fuel cell model, to search for the optimal flow field design for a single serpentine fuel cell of size 9×9 mm with five channels. For the direct solution, the two-fluid method was adopted to incorporate heat effects using energy equations for the entire cell. The model assumes that the system is steady, the inlet reactants are ideal gases, the flow is laminar, and the porous layers such as the diffusion layer, catalyst layer, and PEM are isotropic. The model includes continuity, momentum, and species equations for gaseous species; liquid water transport equations in the channels, gas diffusion layers, and catalyst layers; a water transport equation in the membrane; and electron and proton transport equations. The Butler-Volmer equation was used to describe the electrochemical reactions in the catalyst layers. The cell output power density Pcell is maximized subject to an optimal set of channel heights, H1-H5, and channel widths, W2-W5. The basic case, with all channel heights and widths set at 1 mm, yields Pcell = 7260 W/m². The optimal design displays a tapered characteristic for channels 1, 3, and 4, and a diverging characteristic in height for channels 2 and 5, producing Pcell = 8894 W/m², an increase of about 22.5%. The reduced heights of channels 2-4 significantly increase the sub-rib convection, effectively removing liquid water and enhancing oxygen transport in the gas diffusion layer. The final diverging channel minimizes the leakage of fuel to the outlet via sub-rib convection from channel 4 to channel 5. A near-optimal design that avoids a large loss in cell performance while being easily manufactured was also tested.
The use of a straight final channel of 0.1 mm height led to a 7.37% power loss, while the design with all channel widths set to 1 mm and the optimal channel heights obtained above yields only a 1.68% loss of current density. The presence of a final, diverging channel has a greater impact on cell performance than the fine adjustment of channel width under the simulation conditions studied herein.
Keywords: optimization, flow field design, simplified conjugate-gradient method, serpentine flow field, sub-rib convection
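A simplified conjugate-gradient loop of the kind referenced can be sketched as follows, here driven by a hypothetical quadratic surrogate of negative power density over the five channel heights; the actual study couples the search to the 3-D two-phase fuel cell model, not a closed-form objective:

```python
def gradient(f, x, eps=1e-6):
    """Central-difference gradient, used when no analytic gradient exists."""
    g = []
    for i in range(len(x)):
        up, down = x[:], x[:]
        up[i] += eps
        down[i] -= eps
        g.append((f(up) - f(down)) / (2 * eps))
    return g

def conjugate_gradient(f, x, iters=50):
    """Simplified Fletcher-Reeves conjugate gradient with a backtracking
    line search (halve the step until sufficient decrease is reached)."""
    g = gradient(f, x)
    d = [-gi for gi in g]
    for _ in range(iters):
        fx, step = f(x), 1.0
        slope = sum(gi * di for gi, di in zip(g, d))  # directional derivative
        while (step > 1e-12 and
               f([xi + step * di for xi, di in zip(x, d)])
               > fx + 1e-4 * step * slope):
            step *= 0.5
        x = [xi + step * di for xi, di in zip(x, d)]
        g_new = gradient(f, x)
        beta = sum(gi * gi for gi in g_new) / max(sum(gi * gi for gi in g), 1e-30)
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        g = g_new
    return x

# hypothetical surrogate: best "channel heights" (mm) at (0.5, 1.0, 0.8, 0.6, 1.2)
target = [0.5, 1.0, 0.8, 0.6, 1.2]
neg_power = lambda h: sum((hi - ti) ** 2 for hi, ti in zip(h, target))
best_heights = conjugate_gradient(neg_power, [1.0] * 5)
```

In the combined procedure, each objective evaluation would instead be one full run of the direct fuel cell model, which is why the number of gradient evaluations matters.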
Procedia PDF Downloads 296
639 Study of Phase Separation Behavior in Flexible Polyurethane Foam
Authors: El Hatka Hicham, Hafidi Youssef, Saghiri Khalid, Ittobane Najim
Abstract:
Flexible polyurethane foam (FPUF) is a low-density cellular material generally used as a cushioning material in many applications such as furniture, bedding, and packaging. It is commercially produced in a continuous process, where a reactive mixture of foam chemicals is poured onto a moving conveyor. FPUFs are produced by catalytically balancing the two reactions involved: the blowing reaction (isocyanate-water) and the gelation reaction (isocyanate-polyol). The microstructure of FPUF is generally composed of soft phases (polyol phases) and rigid domains that separate into two domains of different sizes: rigid polyurea microdomains and macrodomains (larger aggregates). The morphological features of FPUF are strongly influenced by the phase-separation morphology, which plays a key role in determining the global FPUF properties. This phase-separated morphology results from a thermodynamic incompatibility between soft segments derived from aliphatic polyether and hard segments derived from the commonly used aromatic isocyanate. In order to improve the properties of FPUF against the different stresses this material faces during use, we report in this work a study of the phase-separation phenomenon in FPUF examined using SAXS, WAXS, and FTIR. With these techniques we studied the effect of water, isocyanates, and alkaline chlorides on the phase-separation behavior. SAXS was used to study the morphology of the separated microphases, WAXS to examine the nature of the hard-segment packing, and FTIR to investigate the hydrogen-bonding characteristics of the materials studied. The prepared foams were shown to have different levels of urea-phase connectivity: increasing the water content in the FPUF formulation increases the amount of urea formed and consequently the size of the urea aggregates.
Alkali chlorides (NaCl, KCl, and LiCl) incorporated into the FPUF formulations were shown to prevent hydrogen bond formation and subsequently alter the rigid domains. FPUFs prepared with different isocyanate structures showed that urea aggregates are difficult to form in foams prepared with an asymmetric diisocyanate, while they form more easily in foams prepared with symmetric and aliphatic diisocyanates.
Keywords: flexible polyurethane foam, hard segments, phase separation, soft segments
Procedia PDF Downloads 163
638 Study of Open Spaces in Urban Residential Clusters in India
Authors: Renuka G. Oka
Abstract:
From chowks to streets to verandahs to courtyards, residential open spaces are very significantly placed in the traditional urban neighborhoods of India. At various levels of intersection, the open spaces, with their attributes like juxtaposition with the built fabric, scale, climate sensitivity and response, multi-functionality, etc., reflect and respond to the patterns of human interaction. These spaces also tend to be quite well utilized. On the other hand, it is common to see imbalanced utilization of open spaces in recently planned residential clusters. This may be due to a lack of surrounding activity generators, poor locations, excess provision, or improper incorporation of the aforementioned design attributes. These casual observations suggest the necessity of a systematic study of current residential open spaces. The exploratory study thus attempts to draw lessons through a structured inspection of residential open spaces to understand the effective environment as revealed through their use patterns. Here, residential open spaces are considered in a wider sense, incorporating all the un-built fabric around; they thus include both use spaces and access spaces. For the study, open spaces in ten exemplary housing clusters/societies built across India during the last ten years are studied. A threefold inquiry is attempted. The first relates to identifying and determining the effects of various physical functions like space organization, size, hierarchy, thermal and optical comfort, etc. on the performance of residential open spaces. The second sets out to understand socio-cultural variations in the values, lifestyles, and beliefs that determine users' activity choices and behavioral preferences for the respective residential open spaces. The third further observes the application of these research findings to the design process to derive meaningful and qualitative design advice.
The study also emphasizes developing a suitable framework of analysis and carving out appropriate methods and approaches to probe these aspects of the inquiry. Given this emphasis, a considerable portion of the research details the conceptual framework for the study, supported by an in-depth review of the available literature. The findings are worked into design solutions that integrate the open space systems with the overall design process for residential clusters. The open spaces in residential areas present great complexities, both in terms of their use patterns and the determinants of their functional responses. The broad aim of the study is, therefore, to arrive at a reconsideration of the standards and qualitative parameters used by designers, on the basis of a more substantial inquiry into the use patterns of open spaces in residential areas.
Keywords: open spaces, physical and social determinants, residential clusters, use patterns
Procedia PDF Downloads 148
637 Adaptation of Hough Transform Algorithm for Text Document Skew Angle Detection
Authors: Kayode A. Olaniyi, Olabanji F. Omotoye, Adeola A. Ogunleye
Abstract:
The skew detection and correction form an important part of digital document analysis. This is because uncompensated skew can deteriorate document features and can complicate further document image processing steps. Efficient text document analysis and digitization can rarely be achieved when a document is skewed even at a small angle. Once a document has been digitized through the scanning system and binarized, document skew correction is required before further image analysis. Research efforts have been made in this area, with algorithms developed to eliminate document skew. Skew angle correction algorithms can be compared on performance criteria: the most important are the accuracy of skew angle detection, the range of detectable skew angles, processing speed, computational complexity, and, consequently, the memory space used. The standard Hough Transform has successfully been applied to text document skew angle estimation. However, its accuracy depends largely on how fine the angular step size is; finer steps consume more time and memory, especially where the number of pixels is considerably large. Whenever the Hough transform is used, there is always a tradeoff between accuracy and speed, so a more efficient solution is needed that optimizes space as well as time. In this paper, an improved Hough transform (HT) technique that optimizes space as well as time to robustly detect document skew is presented. The modified Hough Transform algorithm resolves the conflict among memory space, running time, and accuracy. Our algorithm starts by estimating the angle to zero decimal places using the standard Hough Transform, achieving minimal running time and space but limited accuracy.
Then, to increase accuracy, if the angle estimated by the basic Hough algorithm is x degrees, we rerun the basic algorithm over a narrow range around x degrees with an accuracy of one decimal place. The same process is iterated until the desired level of accuracy is achieved. The skew estimation and correction procedure for text images is implemented in MATLAB. The memory space and processing time are also tabulated, assuming skew angles between 0° and 45°. The simulation results, demonstrated in MATLAB, show the high performance of our algorithm, with less computational time and memory space used in detecting document skew for a variety of documents with different levels of complexity.
Keywords: hough-transform, skew-detection, skew-angle, skew-correction, text-document
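The coarse-to-fine angle search described in this abstract can be sketched as follows. This is a minimal NumPy illustration of the iterative refinement idea, not the authors' MATLAB implementation; the function names `hough_score` and `estimate_skew`, the bin count, and the scoring heuristic are assumptions made for the sketch.

```python
import numpy as np

def hough_score(points, theta_deg):
    # Project foreground pixels onto the normal of candidate text lines
    # tilted by theta; at the true skew angle the pixels of each text line
    # collapse into a few rho bins, giving a sharply peaked histogram.
    theta = np.radians(theta_deg)
    rho = points[:, 1] * np.cos(theta) - points[:, 0] * np.sin(theta)
    hist, _ = np.histogram(rho, bins=200)
    return float(np.sum(hist.astype(float) ** 2))  # peaked -> high score

def estimate_skew(points, lo=-45.0, hi=45.0, levels=3):
    # Coarse-to-fine search: whole-degree steps first (zero decimal places),
    # then rerun over a narrow window around the winner with a 10x finer
    # step at each level, mirroring the paper's iterative refinement idea.
    step = 1.0
    best = 0.0
    for _ in range(levels):
        angles = np.arange(lo, hi + step / 2, step)
        scores = [hough_score(points, a) for a in angles]
        best = float(angles[int(np.argmax(scores))])
        lo, hi = best - step, best + step  # narrow around the winner
        step /= 10.0
    return best
```

Each level evaluates only a small set of angles, so memory and running time stay bounded while the final precision is 10^-(levels-1) degrees, which is the trade-off the abstract describes.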
Procedia PDF Downloads 159
636 Analysis of Sea Waves Characteristics and Assessment of Potential Wave Power in Egyptian Mediterranean Waters
Authors: Ahmed A. El-Gindy, Elham S. El-Nashar, Abdallah Nafaa, Sameh El-Kafrawy
Abstract:
The generation of energy from the sea has become one of the most attractive options, since it is a clean source and friendly to the environment. Egypt has long shores along the Mediterranean, with important cities that need energy resources and with significant wave energy. No detailed studies have been done on the wave energy distribution in Egyptian waters. The objective of this paper is to assess the wave power available in Egyptian waters for the choice of the most suitable devices to be used in this area. This paper deals with the characteristics and power of offshore waves in Egyptian waters. Since field observations of waves are infrequent and technically demanding, the European Centre for Medium-Range Weather Forecasts (ECMWF) ERA-Interim reanalysis data for the Mediterranean, with a relatively coarse grid size of 0.75 degrees, are used in the present study for a preliminary assessment of sea wave characteristics and power. The data cover the period from 2012 to 2014 and comprise significant wave height (swh), mean wave period (mwp), and wave direction at six-hourly intervals, at seven chosen stations and at grid points covering Egyptian waters. The wave power (wp) formula was used to calculate the energy flux. Descriptive statistical analysis included monthly means and standard deviations of swh, mwp, and wp. Percentiles of wave heights and their corresponding power are computed as a tool for choosing the technology best suited to each site. Surfer software is used to show the spatial distributions of wp. The analysis of data at the seven chosen stations determined the potential wp off important Egyptian cities. Offshore of Al Saloum and Marsa Matruh, the highest wp occurred in January and February (16.93-18.05) ± (18.08-22.12) kW/m, while the lowest occurred in June and October (1.49-1.69) ± (1.45-1.74) kW/m.
In front of Alexandria and Rashid, the highest wp occurred in January and February (16.93-18.05) ± (18.08-22.12) kW/m, while the lowest occurred in June and September (1.29-2.01) ± (1.31-1.83) kW/m. In front of Damietta and Port Said, the highest wp occurred in February (14.29-17.61) ± (21.61-27.10) kW/m and the lowest occurred in June (0.94-0.96) ± (0.71-0.72) kW/m. In winter, the percentage probabilities of waves higher than 0.8 m were, at Al Saloum and Marsa Matruh, (76.56-80.33) ± (11.62-12.05); at Alexandria and Rashid, (73.67-74.79) ± (16.21-18.59); and at Damietta and Port Said, (66.28-68.69) ± (17.88-17.90). In spring, the probabilities were, at Al Saloum and Marsa Matruh, (48.17-50.92) ± (5.79-6.56); at Alexandria and Rashid, (39.38-43.59) ± (9.06-9.34); and at Damietta and Port Said, (31.59-33.61) ± (10.72-11.25). In summer, the probabilities were, at Al Saloum and Marsa Matruh, (57.70-66.67) ± (4.87-6.83); at Alexandria and Rashid, (59.96-65.13) ± (9.14-9.35); and at Damietta and Port Said, (46.38-49.28) ± (10.89-11.47). In autumn, the probabilities were, at Al Saloum and Marsa Matruh, (58.75-59.56) ± (2.55-5.84); at Alexandria and Rashid, (47.78-52.13) ± (3.11-7.08); and at Damietta and Port Said, (41.16-42.52) ± (7.52-8.34).
Keywords: distribution of sea waves energy, Egyptian Mediterranean waters, waves characteristics, waves power
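The abstract does not state which wave power formula was applied; a common choice for this kind of preliminary assessment is the deep-water energy-flux formula, sketched below under that assumption (the function name and constants are illustrative, and using the mean wave period mwp in place of the energy period is a simplification).

```python
import math

RHO = 1025.0  # sea-water density, kg/m^3 (assumed value)
G = 9.81      # gravitational acceleration, m/s^2

def wave_power_kw_per_m(swh_m, period_s):
    # Deep-water wave energy flux per metre of wave crest:
    # P = rho * g^2 * H^2 * T / (64 * pi), returned in kW/m,
    # with H the significant wave height (m) and T the wave period (s).
    return RHO * G**2 * swh_m**2 * period_s / (64.0 * math.pi) / 1000.0
```

With these constants the formula reduces to roughly 0.49 H² T kW/m, so a 2 m, 6 s sea state carries on the order of 12 kW per metre of crest, consistent in magnitude with the monthly means reported above.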
Procedia PDF Downloads 191
635 Unveiling Comorbidities in Irritable Bowel Syndrome: A UK BioBank Study utilizing Supervised Machine Learning
Authors: Uswah Ahmad Khan, Muhammad Moazam Fraz, Humayoon Shafique Satti, Qasim Aziz
Abstract:
Approximately 10-14% of the global population experiences a functional disorder known as irritable bowel syndrome (IBS). The disorder is defined by persistent abdominal pain and an irregular bowel pattern. IBS significantly impairs work productivity and disrupts patients' daily lives and activities. Although IBS is widespread, there is still an incomplete understanding of its underlying pathophysiology. This study aims to help characterize the phenotype of IBS patients by differentiating the comorbidities found in IBS patients from those in non-IBS patients using machine learning algorithms. We extracted samples coding for IBS from the UK BioBank cohort and randomly selected patients without a code for IBS to create a total sample size of 18,000. We selected the codes for comorbidities of these cases from 2 years before and after their IBS diagnosis and compared them to the comorbidities in the non-IBS cohort. Machine learning models, including Decision Trees, Gradient Boosting, Support Vector Machine (SVM), AdaBoost, Logistic Regression, and XGBoost, were employed to assess their accuracy in predicting IBS. The most accurate model was then chosen to identify the features associated with IBS; in our case, we used XGBoost feature importance as the feature selection method. We applied different models to the top 10% of features, which numbered 50. The Gradient Boosting, Logistic Regression, and XGBoost algorithms predicted a diagnosis of IBS with optimal accuracies of 71.08%, 71.43%, and 71.53%, respectively. The comorbidities most closely associated with IBS included gut diseases (haemorrhoids, diverticular diseases), atopic conditions (asthma), and psychiatric comorbidities (depressive episodes or disorder, anxiety).
This finding emphasizes the need for a comprehensive approach when evaluating the phenotype of IBS, suggesting the possibility of identifying new subsets of IBS rather than relying solely on the conventional classification based on stool type. Additionally, our study demonstrates the potential of machine learning algorithms in predicting the development of IBS based on comorbidities, which may enhance diagnosis and facilitate better management of modifiable risk factors for IBS. Further research is necessary to confirm our findings and establish cause and effect. Alternative feature selection methods and even larger and more diverse datasets may lead to more accurate classification models. Despite these limitations, our findings highlight the effectiveness of Logistic Regression and XGBoost in predicting IBS diagnosis.
Keywords: comorbidities, disease association, irritable bowel syndrome (IBS), predictive analytics
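The pipeline described in this abstract (tree-based feature importance, then refitting simpler models on the top 10% of features) can be sketched roughly as below. This is an illustrative reconstruction on synthetic binary comorbidity indicators, not the authors' code: scikit-learn's GradientBoostingClassifier stands in for XGBoost, and the function name, split sizes, and planted informative columns are assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def rank_and_refit(X, y, top_frac=0.10, seed=0):
    # 1) Fit a boosted-tree model on binary comorbidity indicators,
    # 2) keep the top fraction of features by impurity importance,
    # 3) refit a simpler classifier on that subset and report test accuracy.
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, random_state=seed, stratify=y)
    gb = GradientBoostingClassifier(random_state=seed).fit(X_tr, y_tr)
    k = max(1, int(top_frac * X.shape[1]))
    top = np.argsort(gb.feature_importances_)[::-1][:k]
    lr = LogisticRegression(max_iter=1000).fit(X_tr[:, top], y_tr)
    acc = accuracy_score(y_te, lr.predict(X_te[:, top]))
    return top, acc
```

On real UK BioBank codes the feature matrix would be a patients-by-diagnosis-codes indicator table; the structure of the selection step is the same.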
Procedia PDF Downloads 118
634 The Development of Noctiluca scintillans Algal Bloom in Coastal Waters of Muscat, Sultanate of Oman
Authors: Aysha Al Sha'aibi
Abstract:
Algal blooms of the dinoflagellate species Noctiluca scintillans have become frequent events in Omani waters. The current study aims at elucidating the abundance, size variation, and feeding mechanism of this species during the winter bloom. An attempt was made to relate the observed biological parameters of the Noctiluca population to environmental factors. Field studies spanned the period from December 2014 to April 2015. Samples were collected from Bandar Rawdah (Muscat region) with Bongo nets, twice per week, from the surface and the integrated upper mixed layer. The measured environmental variables were temperature, salinity, dissolved oxygen, chlorophyll a, turbidity, nitrite, phosphate, wind speed, and rainfall. During the winter bloom (December 2014 through February 2015), the abundance exhibited its highest concentration on 17 February (640.24×10⁶ cells L⁻¹ in oblique samples and 83.9×10³ cells L⁻¹ in surface samples), with a subsequent decline up to the end of April. The average number of food vacuoles inside Noctiluca cells was 1.5 per cell; the percentage of feeding Noctiluca in the entire population varied from 0.01% to 0.03%. Both the surface area of the Noctiluca symbionts (Pedinomonas noctilucae) and the cell diameter were maximal in December. In oblique samples, the highest average cell diameter and symbiont surface area were 751.7 µm and 179.2×10³ µm², respectively; in surface samples, they were 760 µm and 284.05×10³ µm², respectively. No significant correlations were detected between Noctiluca's biological parameters and environmental variables, except between cell diameter and chlorophyll a and between symbiont surface area and chlorophyll a. These high correlations with chlorophyll a are attributed to the endosymbiotic alga Pedinomonas noctilucae, since green Noctiluca enhances chlorophyll during a bloom.
All correlations among biological parameters were significant; they are perhaps among the major factors mediating the high growth rates that generate millions of cells per liter in a short time. The results of this study provide a useful background for a deeper understanding of the development of coastal algal blooms of Noctiluca scintillans. Moreover, the results could be used in different applications related to the marine environment.
Keywords: abundance, feeding activities, Noctiluca scintillans, Oman
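The correlation screening reported in this abstract (e.g., cell diameter against chlorophyll a) is presumably a standard pairwise correlation; a minimal sketch, assuming the Pearson coefficient was used, is shown below (the function name is illustrative, and the abstract does not state the exact statistic).

```python
import numpy as np

def pearson_r(x, y):
    # Pearson correlation coefficient between two series of observations,
    # e.g. Noctiluca cell diameter vs. chlorophyll a concentration per sampling date.
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc * yc).sum() / np.sqrt((xc**2).sum() * (yc**2).sum()))
```

A significance test on r (e.g., via the t distribution) would decide which of the pairwise correlations count as "significant" in the sense used above.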
Procedia PDF Downloads 435
633 Consumer Reactions to Hospitality Social Robots Across Cultures
Authors: Lisa C. Wan
Abstract:
To address customers’ safety concerns, more and more hospitality companies are using service robots to provide contactless services. For many companies, the switch from human employees to service robots to lower the contagion risk during and after the pandemic may be permanent. The market size for hospitality service robots is estimated to reach US$3,083 million by 2030, registering a CAGR of 25.5% from 2021 to 2030. While service robots may effectively reduce interpersonal contact and health risk, they also eliminate the social interactions desired by customers. A recent survey revealed that more than 60% of Americans felt lonely during the pandemic. People who are traveling can also feel isolated when they are at a hotel far away from home. It is therefore important for hospitality companies to understand whether and how social robots can remedy social connections deprived not only by a pandemic but also by a trip away from home in the post-pandemic future. This study complements the extant hospitality literature on service robots by examining how service robots can forge social connections with customers. The service robots we are concerned with are those that can interact and communicate with humans; we broadly refer to them as social robots. We define a social robot as one that is equipped with interaction capabilities: it can either be one that directly interacts with the consumer or one through which the consumer can interact with other humans. Drawing on theories of mind perception, we propose that service robots can foster social connectedness and increase the perceived social competence of the robot, but that these effects will vary across cultures. By applying theories of mind perception and cultural dimensions to the hospitality setting, this study shows that service robots equipped with a social connection function receive a more favorable evaluation from consumers and enhance their intention to visit a hotel.
This more favorable reaction to social robots is stronger for collectivists (i.e., Asians) than for individualists (i.e., Westerners). To our knowledge, this is among the first studies to investigate the impact of culture on consumer reactions to social robots in the hospitality and tourism context. Moreover, this research extends the literature by examining whether people imbue non-human entities (i.e., telepresence social robots) with social competence. Because social robots that foster social connection with humans are still rare in hospitality and tourism, this is an underexplored research area. Our study is the first to propose that, just like their human counterparts who possess relevant social skills, social robots' interaction capabilities (e.g., telepresence robots) are used to infer social competence. More studies will be conducted to examine consumer reactions to humanoid (vs. non-humanoid) robots in hospitality settings to generalize our research findings.
Keywords: service robots, COVID-19, social connection, cultures
Procedia PDF Downloads 103
632 A Model of the Universe without Expansion of Space
Authors: Jia-Chao Wang
Abstract:
A model of the universe without invoking space expansion is proposed to explain the observed redshift-distance relation and the cosmic microwave background radiation (CMB). The main hypothesized feature of the model is that photons traveling in space interact with the CMB photon gas. This interaction causes the photons to gradually lose energy through dissipation and, therefore, experience redshift. The interaction also causes some of the photons to be scattered off their track toward an observer and, therefore, results in beam intensity attenuation. As observed, the CMB exists everywhere in space, and its photon density is relatively high (about 410 per cm³). The small average energy of the CMB photons (about 6.3×10⁻⁴ eV) can reduce the energies of traveling photons gradually and will not alter their momenta drastically as in, for example, Compton scattering, so as to totally blur the images of distant objects. An object moving through a thermalized photon gas, such as the CMB, experiences a drag: the object sees a blueshifted photon gas along the direction of motion and a redshifted one in the opposite direction. An example of this effect is the observed CMB dipole: the Earth travels at about 368 km/s relative to the CMB (the Local Group at about 600 km/s). In the all-sky map from the COBE satellite, radiation in the Earth's direction of motion appears 0.35 mK hotter than the average temperature, 2.725 K, while radiation on the opposite side of the sky is 0.35 mK colder. The pressure of a thermalized photon gas is given by Pγ = Eγ/3 = αT⁴/3, where Eγ is the energy density of the photon gas and α is the radiation constant. The observed CMB dipole therefore implies a pressure difference between the two sides of the Earth and results in a CMB drag on the Earth. By plugging in suitable estimates of the quantities involved, such as the cross section of the Earth and the temperatures on the two sides, this drag can be estimated to be tiny.
But for a photon traveling at the speed of light, 300,000 km/s, the drag can be significant. In the present model, for the dissipation part, it is assumed that a photon traveling from a distant object toward an observer has an effective interaction cross section pushing against the pressure of the CMB photon gas. For the attenuation part, the coefficient of the typical attenuation equation is used as a parameter. The values of these two parameters are determined by fitting the 748 distance-modulus (µ) versus redshift (z) data points compiled from 643 supernova and 105 γ-ray burst observations, with z values up to 8.1. The fit is as good as that obtained from the lambda cold dark matter (ΛCDM) model using online cosmological calculators and Planck 2015 results. The model can be used to interpret Hubble's constant, Olbers' paradox, the origin and blackbody nature of the CMB radiation, the broadening of supernova light curves, and the size of the observable universe.
Keywords: CMB as the lowest energy state, model of the universe, origin of CMB in a static universe, photon-CMB photon gas interaction
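The photon-gas pressure formula quoted in this abstract, Pγ = Eγ/3 = aT⁴/3 with a = 4σ/c the radiation constant, can be evaluated numerically to check the "tiny drag" claim; the sketch below uses the abstract's own numbers (T = 2.725 K, dipole ±0.35 mK), with the function names chosen for illustration.

```python
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
C = 2.99792458e8         # speed of light, m/s
A_RAD = 4.0 * SIGMA / C  # radiation constant a = 4*sigma/c, J m^-3 K^-4

def photon_gas_pressure(T):
    # Pressure of a thermalised photon gas: P = E/3 = a*T^4/3 (pascals).
    return A_RAD * T**4 / 3.0

def cmb_dipole_pressure_difference(T=2.725, dT=0.35e-3):
    # Difference between the pressures of the blueshifted (T + dT) and
    # redshifted (T - dT) hemispheres seen by a body moving through the CMB.
    return photon_gas_pressure(T + dT) - photon_gas_pressure(T - dT)
```

This gives P(2.725 K) on the order of 10⁻¹⁴ Pa and a dipole pressure difference of roughly 10⁻¹⁷ Pa, consistent with the abstract's statement that the resulting drag on a body like the Earth is tiny.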
Procedia PDF Downloads 134